Deaf and hard-of-hearing (DHH) individuals encounter difficulties in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers who may be out of view.
Through interviews and co-design sessions with eight DHH participants, we identified four challenges to address:
1)~associating utterances with speakers,
2)~ordering utterances from different speakers,
3)~displaying optimal content length, and
4)~visualizing utterances from out-of-view speakers.
We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants.
Our study results showed that participants significantly preferred speech bubble visualizations over traditional captions.
These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head-mounted display.
Our evaluations further demonstrated that DHH participants preferred the prototype over traditional captions for group conversations.