Head and Gaze Tracking
Head and Gaze Tracking is the title of my dissertation thesis, which I can easily summarize as a three-part work: the first part is eye tracking, then head pose estimation, and finally the integration of both into a fully working gaze tracking application.
I've just finished the first part, the eye tracking. My application is so far capable of estimating the center of the eyes of a single user in a live video stream captured from a webcam. To accomplish that I use a Haar detector to extract the face of the user. Then, using some heuristic knowledge, I segment the face into regions in which, again using a Haar detector, I get the right and the left eye. After some manipulation of the image, the result is an approximate iris for each eye.

At first I tried to obtain the center of the iris using the Hough transform, but it proved to be computationally expensive and did not give good results in time. If I could afford a long processing time per image, maybe the Hough transform would work fine, but this was not the case. So I switched to a simpler approach: since I already have a good approximation of the iris, I can, after some adjustments, obtain a binary image of it and compute its center of mass; the result is a very good approximation of the center of the iris. By visual analysis, this method has proven to be quick and quite accurate at estimating the center of the eyes.

There are drawbacks, of course. Because I'm using a laptop webcam, the distance of the user should not be greater than 80 cm to 1 m, and at that distance the eyes in the picture are only a dozen pixels wide, so the center of the eye can't be obtained precisely. And because I'm getting only an approximation of the center, I can't yet determine what kind of error I will get when estimating the gaze. The goal is to stay below one degree of error (as in commercial products), which means an error of about 1.7 cm at a distance of 1 m.
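The center-of-mass step can be sketched in a few lines. This is a minimal illustration in NumPy, not my actual implementation: it assumes the iris has already been segmented into a binary image where non-zero pixels belong to the iris (in practice something like OpenCV's `cv2.moments` would give the same centroid).

```python
import numpy as np

def iris_center(binary_iris):
    """Estimate the iris center as the center of mass (centroid)
    of a binary image whose non-zero pixels belong to the iris."""
    ys, xs = np.nonzero(binary_iris)
    if len(xs) == 0:
        return None  # no iris pixels found in this region
    return xs.mean(), ys.mean()  # (x, y) in image coordinates

# Synthetic stand-in for a segmented iris: a 5x5 white block
# whose true center is at x=12, y=8 in a 20x30 image.
img = np.zeros((20, 30), dtype=np.uint8)
img[6:11, 10:15] = 255
cx, cy = iris_center(img)  # cx == 12.0, cy == 8.0
```

Averaging all iris pixels is what makes the estimate robust at low resolutions: even when the iris is only a dozen pixels across, the centroid is a sub-pixel quantity.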
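The 1.7 cm figure follows from simple trigonometry: an angular gaze error of θ at viewing distance d corresponds to an on-screen offset of d·tan(θ). A quick check of that arithmetic (the distance and angle here are just the target values quoted above):

```python
import math

distance_cm = 100.0  # assumed viewing distance of 1 m
error_deg = 1.0      # target angular accuracy, one degree
offset_cm = distance_cm * math.tan(math.radians(error_deg))
# offset_cm is about 1.75 cm, i.e. roughly the 1.7 cm quoted above
```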