With the rapid advancement of VR head-mounted displays, researchers have been exploring interaction techniques that allow users to interact easily with virtual environments.

Research has emphasized the use of the head and eyes, since we instinctively interact with objects in our line of sight. When looking at a target, the head and the eyes rotate in coordination to keep the target within central vision; however, how much and how fast each rotates has not been quantified.

Prior work has shown that head pointing is more accurate but slower than gaze pointing. A better understanding of head movement relative to gaze would enable us to combine the two techniques into a pointing method that is both fast and accurate. This work first investigated head-eye coordination during target acquisition in a VR environment. We then exploited the nature of this coordination and implemented an algorithm that combines the speed of eye gaze with the precision of head rotation. Finally, we compared the performance of the resulting Gaze+Head technique with head pointing and eye pointing in terms of speed and accuracy.
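To make the idea of combining gaze speed with head precision concrete, the sketch below shows one possible way such a Gaze+Head cursor could be driven: the cursor snaps to the gaze point for fast, coarse acquisition, and incremental head rotation refines the final position. This is not the implementation described in this work; all class names, the yaw/pitch representation, and the snap threshold are illustrative assumptions.

```python
from dataclasses import dataclass
import math


@dataclass
class Ray:
    """A pointing direction given as yaw/pitch angles in degrees (assumed representation)."""
    yaw: float
    pitch: float


def angular_distance(a: Ray, b: Ray) -> float:
    """Approximate angular separation between two rays, in degrees."""
    return math.hypot(a.yaw - b.yaw, a.pitch - b.pitch)


class GazeHeadPointer:
    """Cursor driven by gaze for large jumps and by head motion for refinement (hypothetical sketch)."""

    def __init__(self, snap_threshold_deg: float = 5.0):
        # If gaze leaves this radius around the cursor, re-snap to the gaze point
        # (fast coarse phase); otherwise let head rotation nudge the cursor
        # (precise refinement phase). The threshold value is an assumption.
        self.snap_threshold = snap_threshold_deg
        self.cursor = Ray(0.0, 0.0)
        self._prev_head = None

    def update(self, gaze: Ray, head: Ray) -> Ray:
        if angular_distance(gaze, self.cursor) > self.snap_threshold:
            # Coarse phase: gaze has moved to a new region, so jump the cursor there.
            self.cursor = Ray(gaze.yaw, gaze.pitch)
        elif self._prev_head is not None:
            # Refinement phase: apply the incremental head rotation since the last frame.
            self.cursor = Ray(
                self.cursor.yaw + (head.yaw - self._prev_head.yaw),
                self.cursor.pitch + (head.pitch - self._prev_head.pitch),
            )
        self._prev_head = Ray(head.yaw, head.pitch)
        return self.cursor
```

In such a scheme, the key design choice is the hand-off between the two phases: gaze handles the large, fast movement toward the target region, while small head rotations provide the fine control that gaze alone lacks.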

The results demonstrated that our technique is fast, low-effort, and highly accurate, making it a more favorable pointing method than the traditional head-pointing and gaze-pointing techniques.