Understanding how different accents affect speechreading
Seeing a talker’s face can dramatically improve the perception of speech sounds in both quiet and noisy environments. The system that encodes the visible movement of a talker’s face is remarkably adept at extracting the invariant properties that provide visual cues to the speech we hear. This ability persists despite the fact that the visual appearance of speech movements varies considerably across talkers, owing, among other factors, to individual accent.
The main aim of the project is to enable hearing-impaired individuals to make the most of the speechreading (traditionally termed ‘lipreading’) skills they possess through training with a variety of spoken accents, with the goal of developing the ability to recognise speech through the visual channel as fully as possible and to interpret it correctly.
With the project now in its final year, the research team have found that people can distinguish between languages and accents based only on what they see. However, understanding speech is more difficult when the talker has a different accent from the viewer. A survey of deaf and hearing-impaired people confirmed that regional variation within British accents is an important factor. The team are currently exploring whether training can improve understanding of speech spoken in accents that are distinct enough to be recognised by eye.