Eye-Tracking Interface: Gamers’ Looks Can Kill

Stephen Vickers, of De Montfort University, UK, is working on this fascinating research project to allow eye-tracking control of real-time 3D games. There are lots of reasons why this kind of research is relevant nowadays. Personally, I like two: first, these solutions can give disabled people the opportunity to participate in 3D online communities on equal terms with able-bodied people; second, combined with other input modalities, this technology can further extend what able-bodied people can do.

While we are beginning to build a solid understanding of eye-tracking interaction in 2D interfaces, few attempts have been made in 3D, largely because of the greater complexity of the interaction. In virtual worlds we need a large suite of commands to move a character or avatar, change the viewpoint of the scene, and manipulate objects. On top of that, we need an extra set of commands to communicate with other players.

The software developed by Vickers and collaborators tackles this complexity by splitting the commands across several input modes, so that only a subset of commands is active at any time. Glancing momentarily off-screen in a particular direction switches between these modes (e.g., to a mode that rotates the avatar or viewpoint). The software also lets the user define gaze gestures for commands that do not belong to any particular mode, such as turning eye-gaze control off altogether, to avoid unintentional selections.
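To make the idea concrete, here is a minimal sketch of glance-based mode switching, not Vickers' actual implementation: the mode names, the direction-to-mode mapping, and the glance-duration threshold are all illustrative assumptions. A gaze sample that stays past a screen edge long enough is treated as a deliberate glance and switches the active command mode.

```python
import time
from enum import Enum, auto

class Mode(Enum):
    LOCOMOTION = auto()    # gaze steers the avatar (hypothetical mode)
    CAMERA = auto()        # gaze rotates the viewpoint (hypothetical mode)
    MANIPULATION = auto()  # gaze selects and moves objects (hypothetical mode)

SCREEN_W, SCREEN_H = 1920, 1080
GLANCE_TIME = 0.25  # assumed: seconds the gaze must stay off-screen to count

# Assumed mapping from off-screen glance direction to target mode.
GLANCE_MODES = {
    "left": Mode.LOCOMOTION,
    "right": Mode.CAMERA,
    "top": Mode.MANIPULATION,
}

def offscreen_direction(x, y):
    """Return which screen edge the gaze point has crossed, or None."""
    if x < 0:
        return "left"
    if x > SCREEN_W:
        return "right"
    if y < 0:
        return "top"
    if y > SCREEN_H:
        return "bottom"
    return None

class GazeModeSwitcher:
    def __init__(self):
        self.mode = Mode.LOCOMOTION
        self._glance_dir = None
        self._glance_start = None

    def update(self, x, y, now=None):
        """Feed one gaze sample; switch mode after a sustained off-screen glance."""
        if now is None:
            now = time.monotonic()
        direction = offscreen_direction(x, y)
        if direction is None:
            # Gaze is back on-screen: reset any glance in progress.
            self._glance_dir = None
            return self.mode
        if direction != self._glance_dir:
            # New glance direction: start timing it.
            self._glance_dir, self._glance_start = direction, now
        elif now - self._glance_start >= GLANCE_TIME and direction in GLANCE_MODES:
            self.mode = GLANCE_MODES[direction]
        return self.mode
```

The sketch handles only single off-screen glances; the gaze gestures mentioned above, such as a sequence of fixations that disables gaze control entirely, would need a separate recognizer layered on top of the same stream of gaze samples.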

Finally, this work was presented by Howell Istance at the Eye Tracking Research & Applications Symposium 2008 in Savannah, US, where I chaired the session on Gaze Interfaces.
