FreeTrack Forum


freetrack feedback
C14 #1 29/06/2012 - 09h04

Class : Apprenti
Posts : 7
Registered on : 29/06/2012

Off line

Hi,

I'm new to the freetrack community.
First of all, thank you for this program and the detailed instructions!
I'd like to give you some feedback from the perspective of a first time user.

Story:
After watching some videos on youtube about trackir and freetrack,
I decided to try out freetrack.
Reading a bit through the posts in the hardware section, I decided to go for the PS3 Eye and a USB-powered SFH485P point model.
As I did not have any camera film lying around, I used a floppy disk as a daylight filter and could see the points quite well with the camera's IR filter still in place (I figured out I have the 'good' model of the PS3 Eye with the removable IR filter). However, what is good for the human eye is not good for a simple thresholding algorithm, and FreeTrack was not able to track the points reliably.
So I removed the IR filter. Unfortunately, this filter also affects the focus, so the image gets blurry when you simply remove it.
To compensate, one has to file off some plastic at the bottom of the lens assembly to make it shorter (not easy to get right; I took off too much and had to compensate again with a foam sheet).
So after some hours, I finally had my IR filter removed and a refocused image.
Then I struggled with the stability of FreeTrack. First, it crashed all the time with CL-Eye Driver 5.0.1.0528. I then installed the newer 5.1.1.0176 version, but FreeTrack still crashes roughly one time in two with access violations.
Then came the calibration, which took some time and is still not really perfect, but FreeTrack finally works reasonably well.

Question:
What is the effect of the controls in the orientation tab in the Cam settings?
The relation between the point model frame and the head frame should, imho, be completely determined by the translation vector and by pressing the 'center' button, which should automatically determine the rotation with the help of that translation.

Feature Suggestions for the software:
- point extraction:
The simple thresholding needs an unnecessarily high signal-to-noise ratio.
It would be better to have a short calibration phase where the user selects the points to track in the image; afterwards, thresholding would only be done in a region around the last known point positions.
Optionally, the threshold could be adapted dynamically for each region, so that exactly one blob gets detected per region.
At the very least, restricting the point search to regions around the last positions should be very easy to implement and could help a lot (a rough sketch of what I mean is below).
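Something like this, in hypothetical Python/numpy pseudocode (the function and parameter names are made up, this is not FreeTrack code):

import numpy as np

def track_points(frame, last_positions, window=20, threshold=200):
    # Look for each blob only inside a window around its last known position.
    # frame: 2-D grayscale image; last_positions: list of (x, y) tuples.
    new_positions = []
    h, w = frame.shape
    for (x, y) in last_positions:
        # clamp the search window to the image borders
        x0, x1 = max(0, int(x) - window), min(w, int(x) + window)
        y0, y1 = max(0, int(y) - window), min(h, int(y) + window)
        region = frame[y0:y1, x0:x1]
        # optional per-region adaptive threshold: a fixed offset below the
        # brightest pixel in the window, so only the dominant blob survives
        local_thr = max(threshold, int(region.max()) - 30)
        ys, xs = np.nonzero(region >= local_thr)
        if len(xs) == 0:
            new_positions.append((x, y))   # blob lost, keep the old position
            continue
        weights = region[ys, xs].astype(float)
        # intensity-weighted centroid of the pixels above the threshold
        cx = x0 + np.average(xs, weights=weights)
        cy = y0 + np.average(ys, weights=weights)
        new_positions.append((cx, cy))
    return new_positions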

- calibration:
The manual calibration of the translation is cumbersome, and since it changes every time one puts the headphones on in a slightly different way, it should be done automatically.
There should be a short calibration phase, like the joystick calibration in old games. For the point model <-> head translation, the user should be asked to look around without moving their shoulders.
You can then calculate the model translation vector that minimizes the movement of the head frame.
If needed, I could help with the math here; a rough sketch follows below.
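To make the idea concrete, here is a sketch of the least-squares fit in Python/numpy (my own illustration; R_k and p_k stand for the measured model rotation and position in camera coordinates for frame k, collected while the user rotates the head around a roughly fixed pivot):

import numpy as np

def fit_model_offset(rotations, positions):
    # Solve  p_k + R_k @ t ~= c  for the model-to-pivot offset t (in the model
    # frame) and the fixed pivot position c (in the camera frame).
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for k, (R, p) in enumerate(zip(rotations, positions)):
        A[3*k:3*k+3, 0:3] = R            # coefficients of the unknown t
        A[3*k:3*k+3, 3:6] = -np.eye(3)   # coefficients of the unknown c
        b[3*k:3*k+3] = -np.asarray(p)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    t, c = x[:3], x[3:]
    return t, c

The residual of the fit would also tell you how well the user actually kept the pivot fixed during the calibration phase.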

Comparison to other tracking solutions:
I also tried out FaceTrackNoIR.
pro:
+ no point model, no calibration, no IR -> saves you a lot of hours
+ more stable
con:
- severe lag due to the underlying resource-intensive face-processing library and filtering
- less accurate
- fewer features
Gian92 #2 29/06/2012 - 15h09

Class : Habitué
Posts : 92
Registered on : 08/04/2012

Off line

Hi and welcome ^^

C14 @ 29/06/2012 - 11h04 said:

What is the effect of the controls in the orientation tab in the Cam settings?


From the FreeTrack help documentation:

Often the camera is positioned on top of a monitor and pitched down at an angle towards the head position; without accounting for the pitch angle, forward-back movement can cause some vertical movement and vice versa. To address this, the orientation of the camera can be accounted for so that the direction of virtual head translation corresponds with reality.

Yaw: Positive anti-clockwise looking down from above

Pitch: Positive up.

Roll: Positive clockwise looking at camera.
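
In practice this just means rotating the measured translation by the camera orientation before interpreting it. A rough illustration in Python (my own sketch, not actual FreeTrack code; the signs depend on the convention):

import numpy as np

def undo_camera_pitch(translation_cam, pitch_deg):
    # Rotate a translation measured in the pitched-down camera frame back
    # towards an upright frame, so that moving towards the camera does not
    # leak into vertical movement. Only pitch is shown; yaw and roll are
    # handled the same way.
    a = np.radians(pitch_deg)
    R = np.array([[1, 0, 0],
                  [0, np.cos(a), -np.sin(a)],
                  [0, np.sin(a),  np.cos(a)]])  # rotation about the camera x axis
    return R @ np.asarray(translation_cam)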


_______________________

C14 @ 29/06/2012 - 11h04 said:

- calibration:
The manual calibration of the translation is cumbersome, and since it changes every time one puts the headphones on in a slightly different way, it should be done automatically.


What kind of "calibration" do you do? You just need to centre the tracking: Shift + F12 (default). Of course, as long as you entered the right model dimensions.

About the noise, you should use a better filtering solution; furthermore, did you try to adjust the tracking point minimum and maximum diameters?

Even TrackIR sometimes has problems under certain ambient brightness conditions. Just flick through their forums.
“Ordem e progresso” - Brazilian flag
C14 #3 01/07/2012 - 08h33

Class : Apprenti
Posts : 7
Registered on : 29/06/2012

Off line

Gian92 @ 29/06/2012 - 17h09 said:

Hi and welcome ^^

C14 @ 29/06/2012 - 11h04 said:

What is the effect of the controls in the orientation tab in the Cam settings?


From the FreeTrack help documentation...


Ok, that makes sense; I forgot to check the help on this :rolleyes:


Gian92 @ 29/06/2012 - 17h09 said:


What kind of "calibration" do you do? You just need to centre the tracking: Shift + F12 (default). Of course, as long as you entered the right model dimensions.


I mean the model position in the model tab.

Gian92 @ 29/06/2012 - 17h09 said:


About the noise, you should use a better filtering solution; furthermore, did you try to adjust the tracking point minimum and maximum diameters?


Yes, I tried out practically every setting. I admit that the image had quite a lot of sensor noise due to the high gain setting needed with the strong filter.
But by taking the temporal information into account, i.e. looking for points only in a small region around the last point positions, I think it should have worked.
Gian92 #4 01/07/2012 - 10h08

Class : Habitué
Posts : 92
Registered on : 08/04/2012

Off line

Gain? You have to set the gain value to minimum, of course. What you should increase is the exposure; also set the white balance to auto.

I don't have any problems with the noise, and I'm not even using filters (although I'm using other objective lenses, I admit).

Moreover, get another infrared-passing filter: the floppy disk magnetic layer solution is terrible. The best home-made filter is made of two or more overlapping transparent acrylic layers, green and red. However, two or three (or more) exposed and developed photographic film layers will do the job as well.

And once you've configured the model position settings, it's fine. You don't need to constantly adjust them; the differences are negligible.

About a hypothetical auto-calibration for the model position, I guess it would be impractical since it would require the position of your head and, as a consequence, it would need built-in facial recognition (unless you stick some LEDs on your face). And since the purpose of FreeTrack is precisely to recognise the position of your head, this "calibration" would be redundant: recognising the position of your head in order to calibrate the model designed to recognise the position of your head (and there's already FaceTrackNoIR, which works without LEDs).

Auto-thresholding and dynamic zooming, even if feasible, could introduce errors that would defeat their purpose, since the only criterion they could be based on is movement (I'm considering the worst-case scenario: same blob size). So at the very least this hypothetical solution would need small movements to recognise the model pattern (and during the non-recognition period, or when the wrong blobs are picked up, the tracking would stutter). The dynamic zooming would be useless in any case, though.

FreeTrack only requires some tuning to function properly, don't be lazy  ;D
Edited by Gian92 on 01/07/2012 at 11h03.
“Ordem e progresso” - Brazilian flag
C14 #5 01/07/2012 - 12h22

Class : Apprenti
Posts : 7
Registered on : 29/06/2012

Off line

Gian92 @ 01/07/2012 - 12h08 said:

Gain? You have to set the gain value to minimum, of course. What you should increase is the exposure and set white balance to auto.

I don't have any problems with the noise, and I'm not even using filters (although I'm using other objective lenses, I admit).


Right, of course I also had the exposure at maximum.
Probably the floppy disk was too strong.
It's no longer a problem now, since with the IR filter removed it works very well.
Judging from the camera image, I just had the feeling that it should also have worked without removing the filter (then I would still have a working normal webcam).


About a hypothetical auto-calibration for the model position, I guess it would be impractical since it would require the position of your head and, as a consequence, it would need built-in facial recognition (unless you stick some LEDs on your face).


No, it is possible to calculate the position of the model by asking the user to look around without translating the head (i.e. fixed shoulders and neck).

Can someone tell me how I can get the transformation from camera to model in homogeneous coordinates at runtime, as easily as possible?
Then I would try to write a small external calibration program.
(I don't like Pascal :) )
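For reference, by "transformation in homogeneous coordinates" I mean assembling a 4x4 matrix from the yaw/pitch/roll angles and the x/y/z translation reported each frame. A sketch (my own illustration; the rotation order and signs would have to be checked against what FreeTrack actually uses):

import numpy as np

def homogeneous_from_pose(yaw, pitch, roll, x, y, z):
    # Build a 4x4 camera-to-model transform from Euler angles (in radians)
    # and a translation, assuming yaw about Y, pitch about X, roll about Z.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Ry @ Rx @ Rz      # combined rotation
    T[:3, 3] = [x, y, z]          # translation column
    return T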


Auto-threshold and dynamic zooming, even if feasible, could determine errors that would defeat their purpose since the only criterion upon which they could base is movement (I'm considering the worst case scenario: same blob size). So at least this hypothetical solution would need small movements to recognise the model pattern (and whilst in the non-recognition period or when wrong blobs are picked-up the tracking results staggered). The dynamic zooming would be useless in any case, though.


I don't understand what you are saying.
You need an initial position of the blobs, yes. They could either be marked by hand on the camera image, or the strongest blobs could be taken, so you need a good initial position of your point model (for example close to the camera, so that the blobs get brighter).
After the initial positions have been found, the search for the current blobs would only be done in a region around each blob's last position, which limits the speed of the point model movement; but at 30-60 fps, you can't move your model very far in one frame.
Gian92 #6 01/07/2012 - 12h58

Class : Habitué
Posts : 92
Registered on : 08/04/2012

Off line

C14 @ 01/07/2012 - 14h22 said:



About a hypothetical auto-calibration for the model position, I guess it would be impractical since it would require the position of your head and, as a consequence, it would need built-in facial recognition (unless you stick some LEDs on your face).


No, it is possible to calculate the position of the model by asking the user to look around without translating the head (i.e. fixed shoulders and neck).



Well, first off you need to move your neck to rotate your head, and you could also translate it while having your shoulders fixed.

Yep, you would get the centre of the sphere when rotating your head without translation, but again, the gain would be negligible. Furthermore, you would compensate instinctively and imperceptibly if there were any discrepancies. The main and most useful (and perhaps sufficient) calibration is already present: tracking centring.

There are bigger problems, such as the sensitivity/smoothness/lag trade-off (mitigated by a higher camera resolution, but still present at 640x480).
The panacea would be an adaptive algorithm which takes speed into account, for instance; roughly the idea sketched below.
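Something along these lines, applied per axis to the raw pose values (just a sketch of the idea, not FreeTrack's actual filter; the parameter names are made up):

def adaptive_smooth(prev, new, dt, min_alpha=0.05, max_alpha=0.9, speed_scale=0.5):
    # Exponential smoothing whose blend factor grows with speed: heavy
    # smoothing (low jitter) when nearly still, little smoothing (low lag)
    # when the head moves fast.
    speed = abs(new - prev) / dt
    alpha = min(max_alpha, min_alpha + speed_scale * speed)
    return prev + alpha * (new - prev)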

C14 @ 01/07/2012 - 14h22 said:


Auto-thresholding and dynamic zooming, even if feasible, could introduce errors that would defeat their purpose, since the only criterion they could be based on is movement (I'm considering the worst-case scenario: same blob size). So at the very least this hypothetical solution would need small movements to recognise the model pattern (and during the non-recognition period, or when the wrong blobs are picked up, the tracking would stutter). The dynamic zooming would be useless in any case, though.


I don't understand what you are saying.
You need an initial position of the blobs, yes. They could either be marked by hand on the camera image, or the strongest blobs could be taken, so you need a good initial position of your point model (for example close to the camera, so that the blobs get brighter).
After the initial positions have been found, the search for the current blobs would only be done in a region around each blob's last position, which limits the speed of the point model movement; but at 30-60 fps, you can't move your model very far in one frame.



And what if, as happens quite often, an LED blob and an unwanted blob of the same size intersect?

Anyway, talk to the developers if you can. I'm just giving you my humble point of view, but I think the FreeTrack project has been abandoned.

I too have observations and advice, but this forum is almost lifeless, and the developers no longer maintain this program, from what I can see.
Edited by Gian92 on 01/07/2012 at 13h15.
“Ordem e progresso” - Brazilian flag
