About me

I am a PhD student at Aarhus University, working in Human-Computer Interaction (HCI) with a focus on the use of eye-tracking in Augmented Reality (AR).

Research interests: Eye-tracking, Augmented Reality, Virtual Reality, Interaction Techniques, Study Design.

Publications

AUIT – the Adaptive User Interfaces Toolkit for Designing XR Applications

Joao Marcelo Evangelista Belo, Mathias Nørhede Lystbæk, Anna Maria Feit, Ken Pfeuffer, Peter Kán, Antti Oulasvirta, and Kaj Grønbæk

Adaptive user interfaces can improve experiences in Extended Reality (XR) applications by adapting interface elements according to the user’s context. Although extensive work explores different adaptation policies, XR creators often struggle with their implementation, which involves laborious manual scripting. The few available tools are underdeveloped for realistic XR settings where it is often necessary to consider conflicting aspects that affect an adaptation. We fill this gap by presenting AUIT, a toolkit that facilitates the design of optimization-based adaptation policies. AUIT allows creators to flexibly combine policies that address common objectives in XR applications, such as element reachability, visibility, and consistency. Instead of using rules or scripts, specifying adaptation policies via adaptation objectives simplifies the design process and enables creative exploration of adaptations. After creators decide which adaptation objectives to use, a multi-objective solver finds appropriate adaptations in real-time. A study showed that AUIT allowed creators of XR applications to quickly and easily create high-quality adaptations.
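To give a flavor of objective-based adaptation as described above, here is a minimal, purely illustrative sketch (not AUIT's actual API): candidate placements for a UI element are scored by a weighted sum of hypothetical reachability, visibility, and consistency costs, and the lowest-cost placement is chosen. All function and parameter names are assumptions for illustration.

    import numpy as np

    def reachability(pos, hand):
        # Hypothetical cost: penalize placements far from the user's hand.
        return np.linalg.norm(pos - hand)

    def visibility(pos, head, gaze_dir):
        # Hypothetical cost: penalize placements far from the view direction.
        to_pos = pos - head
        to_pos = to_pos / np.linalg.norm(to_pos)
        return 1.0 - float(np.dot(to_pos, gaze_dir))

    def consistency(pos, previous_pos):
        # Hypothetical cost: penalize large jumps from the previous position.
        return np.linalg.norm(pos - previous_pos)

    def solve(candidates, weights, context):
        # Pick the candidate placement with the lowest weighted total cost.
        costs = [
            weights[0] * reachability(c, context["hand"])
            + weights[1] * visibility(c, context["head"], context["gaze_dir"])
            + weights[2] * consistency(c, context["previous"])
            for c in candidates
        ]
        return candidates[int(np.argmin(costs))]

In this spirit, a creator only combines and weights objectives; the solver searches the candidate space, rather than the creator scripting adaptation rules by hand.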

Exploring Gaze for Assisting Freehand Selection-based Text Entry in AR

Mathias N. Lystbæk, Ken Pfeuffer, Jens Emil Grønbæk, and Hans Gellersen

With eye-tracking increasingly available in Augmented Reality, we explore how gaze can be used to assist freehand gestural text entry. Here the eyes are often coordinated with manual input across the spatial positions of the keys. Inspired by this, we investigate gaze-assisted selection-based text entry through the concept of spatial alignment of both modalities. Users can enter text by aligning both gaze and manual pointer at each key, as a novel alternative to existing dwell-time or explicit manual triggers. We present a text entry user study comparing two such alignment techniques to a gaze-only and a manual-only baseline. The results show that one alignment technique reduces physical finger movement by more than half compared to standard in-air finger typing, and is faster and exhibits less perceived eye fatigue than an eyes-only dwell-time technique. We discuss trade-offs between uni- and multimodal text entry techniques, pointing to novel ways to integrate eye movements to facilitate virtual text entry.
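As a rough illustration of the alignment idea (not the paper's implementation), the sketch below selects a key only when the gaze point and the manual pointer fall on the same key, replacing dwell time or an explicit confirmation gesture. The keyboard layout representation and all names are assumptions.

    def key_at(point, keys):
        # `keys` is a hypothetical layout: dicts with 'char', 'min' and 'max'
        # corners of each key's rectangle in the same 2D space as `point`.
        x, y = point
        for key in keys:
            (x0, y0), (x1, y1) = key["min"], key["max"]
            if x0 <= x <= x1 and y0 <= y <= y1:
                return key
        return None

    def aligned_key(gaze_point, finger_point, keys):
        # Spatial alignment of gaze and manual pointer on the same key acts as
        # the selection trigger, instead of dwell time or a pinch/tap gesture.
        gaze_key = key_at(gaze_point, keys)
        finger_key = key_at(finger_point, keys)
        if gaze_key is not None and gaze_key is finger_key:
            return gaze_key
        return None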

Gaze-Hand Alignment: Combining Eye Gaze and Mid-Air Pointing for Interacting with Menus in Augmented Reality

Mathias N. Lystbæk, Peter Rosenberg, Ken Pfeuffer, Jens Emil Grønbæk, and Hans Gellersen

Gaze and freehand gestures suit Augmented Reality as users can interact with objects at a distance without the need for a separate input device. We propose Gaze-Hand Alignment as a novel multimodal selection principle, defined by concurrent use of both gaze and hand for pointing and alignment of their input on an object as selection trigger. Gaze naturally precedes manual action and is leveraged for pre-selection, and manual crossing of a pre-selected target completes the selection. We demonstrate the principle in two novel techniques, Gaze&Finger for input by direct alignment of hand and finger raised into the line of sight, and Gaze&Hand for input by indirect alignment of a cursor with relative hand movement. In a menu selection experiment, we evaluate the techniques in comparison with Gaze&Pinch and a hands-only baseline. The study showed the gaze-assisted techniques to outperform hands-only input, and gave insight into trade-offs in combining gaze with direct or indirect, and spatial or semantic freehand gestures.
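A minimal sketch of the selection principle, assuming simple ray-casts against menu items (illustrative only, not the techniques' actual implementation): gaze pre-selects the item it currently hits, and the selection completes when the hand ray or fingertip crosses that same item.

    def gaze_hand_alignment(gaze_hit, hand_hit, preselected):
        # gaze_hit / hand_hit: the menu item currently hit by each modality,
        # or None. Returns (selected_item_or_None, updated_preselection).
        if gaze_hit is not None:
            # Gaze naturally arrives first and pre-selects the target.
            preselected = gaze_hit
        if preselected is not None and hand_hit is preselected:
            # Manual crossing of the pre-selected item triggers the selection.
            return preselected, None
        return None, preselected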

Challenges of XR Transitional Interfaces in Industry 4.0

Joao Marcelo Evangelista Belo, Tiare Feuchtner, Chiwoong Hwang, Rasmus Skovhus Lunding, Mathias Nørhede Lystbæk, Ken Pfeuffer, and Troels Ammitsbøl Rasmussen

Past work has demonstrated how different Reality-Virtuality classes can address the requirements posed by Industry 4.0 scenarios. For example, a remote expert assists an on-site worker in a troubleshooting task by viewing a video of the workspace on a computer screen, but at times switches to a VR headset to take advantage of spatial deixis and body language. However, little attention has been paid to the question of how to transition between multiple classes. Ideally, the benefit of making a transition should outweigh the transition cost. User support for Reality-Virtuality transitions can advance the integration of XR in current industrial work processes – particularly in scenarios from the manufacturing industry, where worker safety concerns, efficiency, error reduction, and adhering to company policies are critical success factors. Therefore, in this position paper, we discuss three scenarios from the manufacturing industry that involve transitional interfaces. Based on these, we propose design considerations and reflect on challenges for seamless transitional interfaces.
