Discussion in 'Article Discussion' started by bit-tech, 12 Sep 2018.
The actual hardware side of head/gaze tracking is a solved problem (i.e. solved at various price points depending on the tracking fidelity and latency required). The hard part is taking a head pose or gaze target and trying to infer intent from that extremely limited information without false positives, the bane of many an "I'll just make a stare-to-select gaze UI!" 'easy' project.
At first glance this appears to be a run-of-the-mill stare-to-select UI without any sort of input rejection (potential options include simultaneous gaze and head tracking, disabling selection when the gaze is not on the screen, which is a perennial issue for head-motion UIs), but documentation is... sparse, and I don't own an iPhone to test with directly.
Input rejection is normally handled by tweaking the dwell time [i.e. how long you need to stare at the thing before it registers as an input], so to cancel an input you simply look away or at something else.
[Based on limited dalliances with Windows 10's eye control gubbinz].
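The dwell-and-cancel behaviour described above can be sketched as a tiny state machine, roughly: a target fires only after continuous gaze for the dwell time, and looking away or at anything else resets the timer. A minimal sketch, assuming per-frame gaze samples; the class and parameter names are illustrative, not any real API:

```python
import time

class DwellSelector:
    """Minimal dwell-to-select logic (hypothetical sketch).

    A target fires only after the gaze has stayed on it continuously
    for dwell_time seconds. Looking away (target=None) or at a
    different target resets the timer, which is how cancellation works.
    """

    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time  # seconds of continuous gaze required
        self.current = None           # target currently being dwelt on
        self.started = None           # timestamp the current dwell began

    def update(self, target, now=None):
        """Feed one gaze sample; returns the target if it just fired, else None."""
        now = time.monotonic() if now is None else now
        if target != self.current:
            # Gaze moved: cancel any in-progress dwell and start over.
            self.current = target
            self.started = now if target is not None else None
            return None
        if target is not None and now - self.started >= self.dwell_time:
            self.started = now  # re-arm so staring doesn't fire every frame
            return target
        return None
```

Tuning `dwell_time` is exactly the trade-off mentioned: shorter values feel responsive but produce more accidental selections; longer values reject noise but are fatiguing to hold.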
IIRC the problems come in when a user's condition causes involuntary eye movement, or when the tracking isn't so hot with certain eye colours [or glasses wearers].
Which is great when you're watching the device for input. The problem is all those times when the head axis (or even the gaze axis, because most eye trackers can pick up vergence but not accommodation, so they cannot be sure of the true gaze target when the user defocusses between very near and far objects) crosses and then dwells on a control target without the user even realising. This is more common with the head axis because your eyes can move independently of it.
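One way to reject those accidental head-axis dwells is the gating idea floated earlier in the thread: only let the head pointer select anything while the eye tracker reports an on-screen gaze point. A minimal sketch under that assumption; all names and the simple screen-bounds test are illustrative:

```python
def head_pointer_enabled(gaze_point, screen_w, screen_h, margin=0.0):
    """Gate head-pointer selection on the gaze being on-screen (sketch).

    If the tracker has lost the eyes entirely (gaze_point is None), or
    the reported gaze lands outside the screen bounds (optionally shrunk
    by a margin), selection is suppressed. This rejects the 'head axis
    sweeps across a button while the user looks elsewhere' false positive.
    """
    if gaze_point is None:  # tracker lost the eyes entirely
        return False
    x, y = gaze_point
    return (margin <= x <= screen_w - margin and
            margin <= y <= screen_h - margin)
```

This only helps when gaze and head pose are tracked simultaneously; a head-only system has no second signal to gate on, which is why head-only UIs lean so heavily on dwell time alone.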