In the US, most live or real-time captioning is done by stenocaptioners, who use a phonetic keyboard to create captions as a program is broadcast. (In Australia, live captioning is performed both by stenocaptioners and by captioners using speech recognition software.) The quality of live captions can be variable, and earlier this year NCAM conducted an online survey asking caption users to rate different types of caption errors and the degree to which they make news programs hard to follow.
In the new project, which is funded by the US Department of Education, a system of language-processing, data-analysis and benchmarking tools will be developed, using Nuance’s Dragon NaturallySpeaking speech recognition software as its basis. The project will also work with advisors from the National Institute of Standards and Technology, Gallaudet University and the National Technical Institute for the Deaf.
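The article does not detail how the project's benchmarking tools will score caption quality, but a common starting point for measuring caption accuracy is word error rate (WER), which compares caption output against a reference transcript. The following is a minimal illustrative sketch, not the project's actual method:

```python
# Illustrative only: a simple word error rate (WER) calculation of the kind
# often used to benchmark caption or speech-recognition accuracy. This is
# an assumption about the general approach, not the project's actual tool.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Word-level Levenshtein edit distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: a caption with one substituted and one dropped word.
reference = "the quality of live captions can be variable"
caption = "the quality of life captions can be"
print(f"WER: {word_error_rate(reference, caption):.2f}")  # prints WER: 0.25
```

A metric like this captures only raw word accuracy; the survey described above suggests that different error types affect comprehension differently, which a weighted scheme would need to account for.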
The project comes as the Federal Communications Commission (FCC) considers setting quality standards for live captions and has requested industry and consumer feedback on the issue.