As part of the performer’s interface, an application for eye-movement and gaze calculation has been developed. Images of the performer’s eye are captured by the video grabber, the relative iris movement is detected, and the gaze location, i.e. the look-at point on the console screen, is calculated. The only assumptions are that the eye never leaves a certain portion of the captured image and that the performer ‘tunes’ the system by initially looking at predefined extremal locations of the screen grid.
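The calibration step can be illustrated with a minimal sketch. It assumes the simplest possible model, a purely linear map fitted from iris centres recorded while the performer fixates two extremal screen corners; the function names and the linear model are illustrative assumptions, not the authors’ implementation.

```python
# Hypothetical sketch of the 'tuning' step: fit a linear map from iris
# position (in image pixels) to normalized screen coordinates, using iris
# centres recorded at two extremal corner fixations. A purely linear model
# is an assumption made here for illustration.

def calibrate(top_left, bottom_right):
    """Return a function mapping an iris centre (pixels) to normalized
    screen coordinates in [0, 1] x [0, 1], given the iris centres observed
    while fixating the top-left and bottom-right screen corners."""
    (x0, y0), (x1, y1) = top_left, bottom_right

    def to_screen(iris):
        ix, iy = iris
        sx = (ix - x0) / (x1 - x0)
        sy = (iy - y0) / (y1 - y0)
        # Clamp: the eye is assumed never to leave the calibrated region.
        return (min(max(sx, 0.0), 1.0), min(max(sy, 0.0), 1.0))

    return to_screen

# Example: an iris centre midway between the calibration points maps to
# the centre of the screen.
gaze = calibrate((100.0, 80.0), (140.0, 104.0))
print(gaze((120.0, 92.0)))  # → (0.5, 0.5)
```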
The entire system thus consists of a video camera, video-grabbing hardware, and a fast algorithm that converts the eye/iris configuration into console-screen X and Y coordinates. So far we have achieved reliable detection of the performer’s gaze at a 9 × 5 screen-grid resolution, with allowance for slight head movements. The maximum rate at which we can report the X and Y coordinates to the rest of the system is about 20 fps.
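The final quantization onto the 9 × 5 grid can be sketched as follows; the function name and the uniform-cell assumption are illustrative, not taken from the original system.

```python
# Hypothetical sketch: quantize a normalized gaze estimate (sx, sy) in
# [0, 1] x [0, 1] into a cell of the 9 x 5 console screen grid.
# Uniform cell sizes are an assumption made here for illustration.

def grid_cell(sx, sy, cols=9, rows=5):
    """Return the (column, row) grid cell, 0-based, for a normalized gaze."""
    col = min(int(sx * cols), cols - 1)  # clamp sx == 1.0 into the last column
    row = min(int(sy * rows), rows - 1)  # clamp sy == 1.0 into the last row
    return col, row

print(grid_cell(0.0, 0.0))  # → (0, 0)   top-left cell
print(grid_cell(1.0, 1.0))  # → (8, 4)   bottom-right cell
print(grid_cell(0.5, 0.5))  # → (4, 2)   centre cell
```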
The tracking system can easily be extended to a higher screen-grid resolution if the video-capturing system provides higher-quality images. As such, the entire procedure could eventually be fully automated.
Laboratory for computer structures and systems
Faculty of Computer and Information Science
University of Ljubljana