J. Ou, L. M. Oh, S. R. Fussell, T. Blum, and J. Yang. Analyzing and predicting focus of attention in remote collaborative tasks. In Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI ’05), pages 116–123, Trento, Italy, October 4–6 2005. ACM Press. [pdf]
The core argument of this work is that face-to-face collaboration leads to better performance than remote collaboration. One reason for this difference, according to the authors, is that remote settings offer fewer visual cues to help peers situate their conversation. Since bandwidth is limited, a solution might reside in automatically controlling the participants’ point of view using gaze information from both sides.
Previous work reported in the paper shows that people look at intended objects before making conversational references to them. However, a major problem with gaze-based interfaces is the difficulty of interpreting eye-movement patterns, due both to unconscious eye movements such as saccades and to gaze-tracking failures.
To assess this idea they divided the workspace into three areas: the pieces bay, in which pieces are stored; the work area, in which the worker has to construct the puzzle; and the target solution, which showed how the puzzle should be constructed. The authors performed a statistical analysis of the eye movements in the collaborative phase and, based on this, constructed a Markov model that successfully predicted over half of the gaze sequences across conditions.
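To make the approach concrete, here is a minimal sketch of a first-order Markov model over the three workspace areas. The area names and the toy gaze sequences are illustrative assumptions, not the authors' actual data or implementation; the idea is simply to estimate transition probabilities between areas from observed fixation sequences and predict the most likely next area.

```python
from collections import defaultdict

# The three areas from the paper; the identifiers are my own labels.
AREAS = ["pieces_bay", "workspace", "target"]

def train_markov(sequences):
    """Estimate transition probabilities P(next area | current area)
    from a list of gaze-area sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    model = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        model[cur] = {area: c / total for area, c in nxts.items()}
    return model

def predict_next(model, current):
    """Return the most probable next gaze area given the current one."""
    return max(model[current], key=model[current].get)

# Toy sequences: the worker glances at the target, picks a piece from
# the bay, then returns to the workspace.
sequences = [
    ["target", "pieces_bay", "workspace", "workspace", "target"],
    ["pieces_bay", "workspace", "target", "pieces_bay", "workspace"],
]
model = train_markov(sequences)
print(predict_next(model, "pieces_bay"))  # prints "workspace"
```

A model like this can only exploit first-order structure (which area tends to follow which), which is plausibly why it captures roughly half of the gaze sequences rather than all of them.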