However, researchers at the University of Washington’s Makeability Lab are exploring AR’s potential for accessibility. This month, they’re unveiling several projects, including RASSAR, an app designed to scan homes for accessibility and safety issues. RASSAR uses iPhone lidar technology to create 3D room scans and flags potential hazards, such as furniture or fixtures mounted too high. By condensing complex accessibility guidelines into an app, RASSAR gives caregivers and homeowners a practical assessment tool.
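To make the idea concrete, here is a rough Python sketch of what a guideline check over lidar-derived measurements might look like. Everything in it is an illustrative assumption rather than RASSAR’s actual code: the object labels, the height thresholds, and the `DetectedObject` structure are hypothetical stand-ins.

```python
# Illustrative sketch (not RASSAR's actual implementation): flag scanned
# objects whose measured height falls outside a guideline range.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str        # e.g., "light_switch", identified from the 3D scan
    height_m: float   # measured height above the floor, in meters

# Hypothetical (min, max) guideline ranges in meters, loosely inspired by
# common accessibility recommendations; real guidelines are far more detailed.
GUIDELINE_HEIGHTS = {
    "light_switch": (0.38, 1.22),   # reachable from a seated position
    "countertop":   (0.71, 0.86),
    "grab_bar":     (0.84, 0.91),
}

def flag_hazards(objects: list[DetectedObject]) -> list[str]:
    """Return human-readable warnings for objects outside guideline ranges."""
    warnings = []
    for obj in objects:
        if obj.label not in GUIDELINE_HEIGHTS:
            continue  # no rule defined for this object type
        lo, hi = GUIDELINE_HEIGHTS[obj.label]
        if not (lo <= obj.height_m <= hi):
            warnings.append(
                f"{obj.label} at {obj.height_m:.2f} m is outside the "
                f"recommended {lo:.2f}-{hi:.2f} m range"
            )
    return warnings

if __name__ == "__main__":
    scan = [DetectedObject("light_switch", 1.40),
            DetectedObject("countertop", 0.80)]
    for w in flag_hazards(scan):
        print("Flagged:", w)
```

The appeal of this pattern is that the hard part, encoding the guidelines, happens once; the app then only needs reliable measurements from the scan to apply them.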
Another app in development, GazePointAR, uses AR headsets to improve natural language processing for voice assistants. By tracking users’ gaze and hand movements, the app helps the assistant resolve references like “that” or “it” from context, which could be especially useful for visually impaired users. Built on Microsoft’s HoloLens 2, the project aims to make natural language interactions more intuitive, though the privacy implications of gaze tracking remain an active concern.
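One way to picture the core trick is as a geometry problem: find the scene object nearest the user’s gaze ray, then rewrite the spoken query before it reaches the assistant. The Python sketch below shows that idea only; the nearest-to-ray heuristic, the function names, and the scene representation are assumptions, not GazePointAR’s actual pipeline.

```python
# Illustrative sketch (not GazePointAR's implementation): resolve a spoken
# demonstrative like "that" by finding the object closest to the gaze ray,
# then substituting its label into the query text.
import re
import numpy as np

def closest_object_to_gaze(origin, direction, objects):
    """objects: list of (label, position ndarray). Returns the label of the
    object whose center lies nearest the gaze ray (perpendicular distance)."""
    direction = direction / np.linalg.norm(direction)
    best_label, best_dist = None, float("inf")
    for label, pos in objects:
        v = pos - origin
        t = max(float(v @ direction), 0.0)   # project onto ray; ignore behind-user
        dist = np.linalg.norm(v - t * direction)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def resolve_query(query, gaze_origin, gaze_dir, scene_objects):
    """Replace 'that'/'it' in the query with the gazed-at object's label."""
    target = closest_object_to_gaze(gaze_origin, gaze_dir, scene_objects)
    for pronoun in ("that", "it"):
        query = re.sub(rf"\b{pronoun}\b", target, query)
    return query

if __name__ == "__main__":
    scene = [("water bottle", np.array([0.2, 0.0, 1.0])),
             ("keyboard",     np.array([1.5, -0.3, 0.8]))]
    print(resolve_query("how much does that cost",
                        np.zeros(3), np.array([0.2, 0.0, 1.0]), scene))
    # -> "how much does water bottle cost"
```

In a real headset system the scene objects and gaze ray would come from the device’s tracking APIs, and the rewritten query would be handed to the voice assistant in place of the ambiguous original.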
Finally, ARTennis is an early-stage system designed to assist low-vision players in sports. It uses AR to highlight moving tennis balls with red dots and green crosshairs, enhancing visibility and depth perception. Inspired by the experience of a low-vision squash player, the team is also exploring whether similar AR cues could work for other sports, such as basketball. Although current headset costs and frame rate limitations pose challenges, ARTennis is a promising step toward making sports more accessible through AR.
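The visual cue itself is simple to reason about: once a tracker reports the ball’s position in each frame, the system draws a red dot on the ball and a green crosshair around it. Here is a minimal Python sketch of that overlay step using OpenCV drawing calls; the ball tracker is stubbed out, and none of this is the ARTennis codebase.

```python
# Illustrative sketch (not the ARTennis pipeline): overlay a red dot and a
# green crosshair at a tracked ball position to boost its visual salience.
import cv2
import numpy as np

def draw_ball_cue(frame, center, crosshair_half_len=25, gap=10):
    """Draw a filled red dot at `center` plus a green crosshair around it.
    `frame` is a BGR image (ndarray); `center` is an (x, y) pixel tuple.
    The `gap` leaves the ball itself unobstructed."""
    x, y = center
    cv2.circle(frame, (x, y), 6, (0, 0, 255), thickness=-1)    # red dot (BGR)
    h = crosshair_half_len
    cv2.line(frame, (x - h, y), (x - gap, y), (0, 255, 0), 2)  # left arm
    cv2.line(frame, (x + gap, y), (x + h, y), (0, 255, 0), 2)  # right arm
    cv2.line(frame, (x, y - h), (x, y - gap), (0, 255, 0), 2)  # top arm
    cv2.line(frame, (x, y + gap), (x, y + h), (0, 255, 0), 2)  # bottom arm
    return frame

if __name__ == "__main__":
    # Stand-in for a real ball tracker: a synthetic frame and a fixed position.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    draw_ball_cue(frame, (320, 240))
    cv2.imwrite("cue_demo.png", frame)
```

The hard engineering problem is upstream of this snippet: detecting a fast-moving ball reliably at headset frame rates, which is exactly where the researchers note current hardware falls short.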
These projects from the Makeability Lab underscore how AR could improve accessibility in both daily life and recreation, and as the underlying hardware advances, their potential applications are likely to grow.