About us
We are a research lab within the University of Michigan's Computer Science and Engineering department. Our mission is to deliver interactive, rich, and meaningful sonic experiences for everyone. Our primary research areas span human-computer interaction, accessible computing, and sound design. We focus on the interplay between sounds (including speech and music) and sensory and cognitive abilities (such as deafness, blindness, ADHD, autism, and non-disability), and we work on projects that both deliver sound information accessibly and use sound to make the world more accessible (e.g., audio-based navigation systems for blind people).
We embrace the term ‘accessibility’ in its broadest sense, encompassing not only tailored experiences for people with disabilities, but also the seamless and effortless delivery of information to all users. By prioritizing accessibility, we are able to gain early insight into the future, recognizing that individuals with disabilities have often been early adopters of many everyday technologies, from telephones and earphones to email, texting, subtitles, and smart speakers.
Our team consists of people from diverse backgrounds, including designers, engineers, musicians, architects, psychologists, doctors, and sociologists. This diversity allows us to approach technical sound accessibility challenges from a multi-stakeholder perspective. We follow an iterative design, building, evaluation, and deployment approach, resulting not only in valuable research insights in the field of human-computer interaction but also in tangible products with immediate real-world impact. Our work has been recognized with best-paper awards in premier conferences such as CHI, ASSETS, and UIST, featured in prominent press outlets (e.g., CNN, Forbes, New Scientist), publicly released (e.g., one app has over 100,000 users), and has influenced products at leading technology companies such as Google, Apple, and Microsoft.
Currently, we are focusing on the following research areas, with generous support from the National Institutes of Health (NIH), Google, and Michigan Medicine:
Interactive AI for Sound Accessibility. How can Deaf people, who cannot hear sounds, teach and train their own AI sound recognition models? How can sound recognition models adapt to changing contexts and environments? What sound cues will help provide holistic sound awareness to deaf and hard of hearing people, and how can AI help?
Projects: HomeSound | SoundWatch | AdaptiveSound | ProtoSound | InteractiveSound
Personalizable Soundscapes and Hearables. How can multiple intrusive environmental sound cues be delivered seamlessly? How can we dynamically adapt music based on the user's environment and context of use? How can earbuds be dynamically personalized to each user's hearing profile and fit? What promising sound interfaces can help manage hypersensitivity?
Projects: MaskSound | 3DSoundBuds
AR/VR Sound Experiences and Toolkits. How can developers easily integrate sound accessibility into their emerging VR apps? What features should VR developer toolkits support? What rich sound experiences can be enabled using AR technology? How can we seamlessly blend sound across the real and virtual worlds?
Projects: SoundVR | SoundBlender
Medical Communication Accessibility. How can speech technology improve communication for Deaf and disabled people in healthcare settings? What interfaces and form factors are most suitable for deployment in these settings? How do various stakeholders, including patients, physicians, and staff, react to these technologies?
Projects: MedCaption | HoloSound | CartGPT | SoundActions
We're continuously recruiting. If you are interested in these areas, please apply to work with us.