The researchers would need regulatory approval to develop it into an app. They said the crucial difference in the sound of a cough from an asymptomatic Covid patient could not be detected by the human ear.
The artificial-intelligence (AI) algorithm was built at the Massachusetts Institute of Technology (MIT) lab. MIT scientist Brian Subirana, who co-authored the paper, published in the IEEE Journal of Engineering in Medicine and Biology, said: “The way you produce sound changes when you have Covid, even if you’re asymptomatic.”
“Practical use cases could be for daily screening of students, workers and public, as schools, jobs, and transport reopen, or for pool testing to quickly alert of outbreaks in groups,” the report says. Several organisations, including Cambridge University, Carnegie Mellon University and UK health start-up Novoic, have been working on similar projects.
In July, Cambridge’s Covid-19 Sounds project reported an 80% success rate in identifying positive coronavirus cases from a combination of breath and cough sounds. By May, it had gathered a dataset of 459 cough and breath samples submitted by 378 members of the public, and it says it now has around 30,000 recordings. The MIT lab, by comparison, has collected about 70,000 audio samples, each containing several coughs.