Oldenburg Computer Science Series

Univ.-Prof. Dr. Susanne Boll,
Univ.-Prof. Dr. Sebastian Lehnhoff (Eds.)

Uwe Grünefeld

Visual Cues for Locating Out-of-View Objects in Mixed Reality

In the past decade, Mixed Reality has emerged as a promising technology for supporting users with everyday tasks. It allows users to alter perceived reality by blending it with virtual content. Mixed Reality can thus overlay perceived reality with visual cues that empower users to locate relevant objects in the environment, regardless of whether those objects are in view. This approach is well suited to various scenarios, such as a traffic encounter in which a car driver overlooks a cyclist, or the docking of large container vessels, during which the person in charge must monitor several assisting tugboats. In such situations, visual cues that help users locate relevant out-of-view objects can improve situational awareness and help to avoid fatal consequences.

In the presented work, we follow the “Research through Design” methodology to develop and evaluate visual cues in Mixed Reality that empower users to locate out-of-view objects. Initially, we analyzed three different scenarios, conducting an ethnographic study, an accident analysis, and literature reviews. Thereafter, inspired by these scenarios, we reviewed the relevant background and related work to derive three research questions that are studied in depth: (RQ1) To what extent can existing off-screen visualization techniques be adapted to cue the direction to out-of-view objects in Mixed Reality? (RQ2) How can Mixed Reality devices with small fields of view be extended to present directional cues to out-of-view objects? (RQ3) In what way can the directions and distances of moving and non-moving out-of-view objects be visualized in Mixed Reality?

Our results show that directional cues presented in the user’s periphery are easy to perceive and help users locate objects quickly. We showed that three-dimensional visual cues can result in a lower direction estimation error than two-dimensional cues. Furthermore, if the Mixed Reality device has a small field of view, radial light displays placed around the screen can be used to cue the direction to out-of-view objects. However, a user must sometimes locate several out-of-view objects simultaneously, some of which may be occluded. Directional cues alone are insufficient in such cases, so distance information is required as well. We found that it is possible to convey this information, albeit at the cost of increased workload and additional visual clutter. Finally, visual cues that help a user locate out-of-view objects should not disappear once these objects become visible on the screen: continued assistance while the objects are in view lowers error rates and improves overall performance.
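To make the radial-display idea concrete, the following is a minimal, hypothetical sketch, not the implementation evaluated in this work: it computes the angle at which a peripheral light cue could be placed on a ring around the screen, given the head pose and an object’s world position. The function names and coordinate conventions are assumptions made for illustration.

```python
# Hypothetical sketch: place a peripheral light cue on a radial display
# around the screen, pointing toward an out-of-view object.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def radial_cue_angle(head_pos, head_forward, head_up, obj_pos):
    """Angle (radians) on a ring around the screen at which to light a cue
    for the object at obj_pos. 0 = right edge, pi/2 = top, pi = left,
    -pi/2 = bottom. Inputs are 3-tuples in world coordinates; head_forward
    and head_up are assumed to be orthonormal."""
    to_obj = tuple(o - h for o, h in zip(obj_pos, head_pos))
    right = cross(head_forward, head_up)  # head-local x axis
    # Project the object direction onto the screen plane (right/up axes);
    # the polar angle of that projection tells us where on the ring to
    # place the cue.
    return math.atan2(dot(to_obj, head_up), dot(to_obj, right))

# Example: an object behind and to the left of the viewer
# (forward is -z, up is +y) maps to the left edge of the display.
angle = radial_cue_angle((0, 0, 0), (0, 0, -1), (0, 1, 0), (-3, 0, 2))
print(round(math.degrees(angle)))  # 180 -> left edge
```

Distance could then be encoded in a second channel, such as cue brightness or pulse frequency, which is one conceivable way to convey distance without adding further on-screen clutter.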

Vol. 50, XII, 214 pp., Edewecht 2020, €49.80
ISBN-13 978-3-95599-065-7

Book cover