Alexa, do I have COVID-19?

The future of voice diagnostics

Hi all,

Hope you’re hanging on tight through all the twists and turns of our bananas news cycle. Maybe take a break and read an email newsletter about the indoors?

I’ve got a bunch of events coming up this month, all of which are online and open to the public. Details:

Voice Diagnostics

[Image: cartoon of a smart speaker listening to a woman's voice, with the sound waves turning into virus particles]

Way back in January (or approximately 5,000 news cycles ago), I started researching a story about the emerging field of vocal diagnostics. In brief, scientists are increasingly using AI and machine learning to search for specific vocal features that correspond to certain diseases, from Parkinson’s to PTSD. The hope is that doctors might one day be able to diagnose, or at least screen for, various diseases simply by analyzing a patient’s voice.
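To make the idea a little more concrete, here's a minimal sketch of the kind of pipeline these studies describe: pull simple acoustic features out of short voice recordings, then train a classifier to separate recordings from people with and without a given diagnosis. This isn't any particular team's method; the file names, labels, and choice of features (MFCCs, a standard compact representation of speech) are placeholders for illustration only.

```python
# Illustrative sketch only (not any research team's actual pipeline):
# extract acoustic features from labeled voice recordings, then train a
# classifier to tell the two groups apart. File paths and labels below
# are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: (path to WAV file, label) pairs,
# where 1 = diagnosed, 0 = healthy control.
recordings = [
    ("voices/patient_01.wav", 1),
    ("voices/patient_02.wav", 1),
    ("voices/control_01.wav", 0),
    ("voices/control_02.wav", 0),
    # ... a real study would need many more recordings
]

def extract_features(path):
    """Summarize a recording as the mean and spread of its MFCCs."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

X = np.array([extract_features(path) for path, _ in recordings])
y = np.array([label for _, label in recordings])

# Cross-validated accuracy of a simple linear classifier on those features.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=2)
print("Mean cross-validated accuracy:", scores.mean())
```

Real studies use far richer feature sets and models than this, and the hard parts, as the story below gets into, are gathering enough well-labeled recordings and validating the results clinically.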

I was just wrapping up my reporting when the pandemic exploded. And some of my sources started getting in touch with updates: They were pivoting, racing to put together studies that would analyze the voices of people who had been diagnosed with COVID-19. To my knowledge, there are at least four different scientific teams, with researchers all over the world, that are now looking for a vocal “fingerprint” of COVID-19. One is already piloting an app that it hopes can be used to triage patients—flagging those who could have the disease just by listening to a sample of their voice. As I write in my new story for Nature:

It’s a sign of how hungry the young field of vocal diagnostics is to make its mark. Over the past decade, scientists have used artificial intelligence (AI) and machine-learning systems to identify potential vocal biomarkers of a wide variety of conditions, including dementia, depression, autism spectrum disorder and even heart disease. The technologies they have developed are capable of picking out subtle differences in how people with certain conditions speak, and companies around the world are beginning to commercialize them.

For now, most teams are taking a slow, stepwise approach, designing tailored tools for use in doctors’ offices or clinical trials. But many dream of deploying this technology more widely, harnessing microphones that are ubiquitous in consumer products to identify diseases and disorders. These systems could one day allow epidemiologists to use smartphones to track the spread of disease, and turn smart speakers into in-home medical devices. “In the future, your robot, your Siri, your Alexa will simply say, ‘Oh you’ve got a cold,’” says Björn Schuller, a specialist in speech and emotion recognition with a joint position at the University of Augsburg in Germany and Imperial College London, who is leading one of the COVID-19 studies.

But automated vocal analysis is still a new field, and has a number of potential pitfalls, from erroneous diagnoses to the invasion of personal and medical privacy. Many studies remain small and preliminary, and moving from proof-of-concept to product won’t be easy. “We are at the early hour of this,” Schuller says.

It’s a really interesting field of research that also raises some tricky scientific and ethical questions. Check out the full story here.

And here’s a neat graphic the Nature team put together. More context is in the full story.

[Graphic: DEPRESSED TONES. A visual analysis of speech from a person with and without depression shows identifiable differences.]

Indoor Ephemera

Bonus Interspecies Animal Content

Stay safe out there!

Emily

The Great Indoors is now out! You can find it at Amazon, Barnes & Noble, Bookshop, IndieBound, or your local independent bookstore. (And if you’ve already read the book, please consider leaving an Amazon rating or review!)

You can read more of my work at my website and follow me on Twitter, Instagram, and Goodreads. (You can follow me on Facebook, too, I suppose, but I rarely post there.)