Coronavirus Updates: Now AI Can Recognize COVID-19 By The Sound Of Your Cough

Another Groundbreaking Scientific Innovation To Shape The Fight Against COVID-19

 

First identified in the city of Wuhan, China, COVID-19 is caused by SARS-CoV-2, a virus belonging to the coronavirus family.

In recent months, scientists have found that a significant proportion of people with COVID-19 are asymptomatic. Among those who do show symptoms, the most common include high fever, cold, breathing difficulty, dry cough, and headache. These symptoms point to an acute respiratory disease, often accompanied by radiologically detectable lung abnormalities. 

Health experts believe that artificial intelligence can help distinguish asymptomatic COVID-19 patients from healthy individuals, providing healthcare providers with a non-invasive screening tool.

Because asymptomatic individuals don’t exhibit clear signs of infection, they may never get tested. Yet they can still pass the virus on to many other people without even being aware that they are infected. 


Past research has trained artificial intelligence algorithms on cough recordings to detect conditions such as pneumonia and asthma. 

Similarly, investigators from MIT built AI models to check whether forced-cough recordings could identify signs of Alzheimer’s disease, a condition associated with neuromuscular degradation such as weakened vocal cords. The team discovered that their AI tool could detect Alzheimer’s samples even better than existing models.

When the COVID-19 pandemic hit the United States, the team set out to adapt their AI technique to detect individuals infected with the novel coronavirus. 

The sounds of coughing and talking are both shaped by the vocal cords and the surrounding organs. This implies that when you talk, part of your talking resembles coughing, and vice versa. It also means that things we easily infer from fluent speech, artificial intelligence can pick up simply from coughs - including a person’s gender, native language, or even state of mind. 

Researchers have found that asymptomatic COVID-19 patients may differ from healthy individuals in the way they cough. Human ears can’t decipher these differences, but AI can. 

In an experiment, volunteers submitted forced-cough recordings through web browsers on devices such as cell phones and laptops. The investigators trained the model on tens of thousands of cough samples, along with spoken words. When they fed the model new cough recordings, it accurately detected 98.5% of coughs from individuals confirmed to have COVID-19, including 100% of coughs from asymptomatic people - those who reported having no symptoms but had tested positive for the novel coronavirus.

The team is working on integrating the model into a straightforward app, which, if FDA-approved and adopted on a large scale, could serve as a free, convenient, non-invasive pre-screening tool to identify people who may be asymptomatic carriers of COVID-19. A user could log in daily, cough into their phone, and immediately learn whether they might be infected and should therefore confirm with a conventional test.

The effective implementation of this AI-based tool could help control the spread of COVID-19 if everyone used it before entering a factory, restaurant, or classroom. 



 

How Does The AI Work To Detect Diseases?

 

As discussed above, research groups had already been training algorithms on cell phone recordings of coughs to detect conditions including pneumonia, asthma, and Alzheimer’s disease.

They initially trained a general machine-learning algorithm, or neural network, called ResNet50, to distinguish sounds associated with varying degrees of vocal cord strength. Research has shown that the sound quality of “mmmm” can indicate how strong or weak a person’s vocal cords are. Subirana, a research scientist at MIT, trained the neural network on an audiobook dataset with over 1,000 hours of speech to pick out the word “them” from other words such as “the” and “then.”
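Networks like ResNet50 are image classifiers, so audio is typically first converted into a spectrogram, a 2-D picture of frequency content over time. The sketch below shows that feature-extraction step with a toy NumPy short-time Fourier transform; the frame sizes and the synthetic 440 Hz test signal are illustrative choices, not parameters from the MIT study.

```python
import numpy as np

def stft_spectrogram(signal, frame_len=512, hop=256):
    """Magnitude spectrogram via a short-time Fourier transform.

    A toy stand-in for the spectrogram features usually fed to
    image-style networks such as ResNet50. Frame length and hop
    size here are illustrative, not the study's actual settings.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # One-sided FFT magnitude: shape (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a synthetic 440 Hz tone at 16 kHz as a stand-in
# for a recorded cough.
sr = 16000
t = np.arange(sr) / sr
spec = stft_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)
```

Each row of the resulting array is one time frame, each column one frequency bin; stacking rows produces the spectrogram "image" a convolutional network consumes.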

The team then trained a second neural network to differentiate emotional states apparent in speech, because Alzheimer’s patients - and people with neurological decline generally - have been found to express certain sentiments, such as frustration or a flat affect, more often than calm or happiness. The investigators built a sentiment speech classifier by training it on a large dataset of actors vocalizing emotional states such as happy, sad, calm, and neutral.

The investigators then trained a third neural network on a database of coughs to distinguish variations in lung and respiratory performance.

Finally, the team assembled all three models and overlaid an algorithm to detect muscular degradation. The algorithm does so by essentially applying an audio mask, or layer of noise, and distinguishing strong coughs - those audible over the noise - from weaker ones.
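Assembling several specialized models usually means taking each model's output for the same recording and feeding the combined vector into a final classifier. The study's actual combination layer is not described in this article, so the sketch below is a hypothetical illustration: the embedding values, weights, and bias are all invented for demonstration.

```python
import numpy as np

# Hypothetical per-model outputs for one cough recording, standing
# in for the three networks the article describes (vocal cord
# strength, sentiment, lung/respiratory performance).
vocal_emb = np.array([0.8, 0.1])
sentiment_emb = np.array([0.2, 0.7])
lung_emb = np.array([0.5, 0.4])

def combine(embeddings, weights, bias):
    """Logistic layer over concatenated embeddings - a sketch of
    'assembling' several models into one risk score. Weights and
    bias here are made up, not learned parameters from the study."""
    x = np.concatenate(embeddings)
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

weights = np.array([1.0, -0.5, 0.3, 0.8, 1.2, -0.2])  # hypothetical
score = combine([vocal_emb, sentiment_emb, lung_emb], weights, bias=-0.5)
print(round(float(score), 3))
```

The output is a single probability-like score between 0 and 1; in practice the weights would be learned from labeled cough data rather than chosen by hand.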

With their assembled AI model, the team fed in audio recordings that included Alzheimer’s patients and found that it could identify the Alzheimer’s samples even better than existing models. The results showed that, taken together, sentiment, vocal cord strength, muscular degradation, and lung and respiratory performance were potent biomarkers for diagnosing the disease.

When the novel coronavirus pandemic began to spread, Subirana wondered whether their AI model for Alzheimer’s could also detect COVID-19, as there was growing evidence that infected patients experienced similar neurological symptoms, including temporary neuromuscular impairment.

 

 

A Remarkable Similarity

 

In April 2020, the research team set out to collect as many cough recordings as possible, including from people who were COVID-19 positive. They created a website where people could record a series of coughs through their cell phones, laptops, or any other web-enabled device. Participants were also asked to report the symptoms they were experiencing, whether or not they had COVID-19, and whether they had been screened with a formal test. They could additionally provide their gender, geographic location, and native language. 

To date, more than 70,000 recordings have been collected, each comprising a series of coughs, amounting to about 200,000 forced-cough samples. Of these, around 2,500 recordings were submitted by people confirmed to have COVID-19, including some who were asymptomatic. 

The team used the 2,500 COVID-related recordings, together with 2,500 more recordings picked at random from the collection to balance the dataset. They used about 4,000 of these samples to train the AI model; the remaining 1,000 recordings were then fed into the model to test whether it could accurately distinguish coughs from COVID-19 patients versus healthy people. The effort proved fruitful, and the researchers found a striking similarity between Alzheimer’s and COVID-19 detection.
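The balanced train/test split described above - 2,500 positive and 2,500 negative recordings, with roughly 4,000 used for training and 1,000 held out for evaluation - can be sketched as follows. The sample identifiers are placeholders; only the counts come from the article.

```python
import random

# Balanced dataset: 2,500 COVID-positive recordings (label 1) plus
# 2,500 randomly drawn non-COVID recordings (label 0). The names
# are placeholders; only the counts mirror the article.
random.seed(0)
positives = [("cough_%d" % i, 1) for i in range(2500)]
negatives = [("cough_%d" % i, 0) for i in range(2500, 5000)]

dataset = positives + negatives
random.shuffle(dataset)

# ~4,000 samples for training, the remaining ~1,000 held out
# to evaluate the trained model.
train, test = dataset[:4000], dataset[4000:]
print(len(train), len(test))
```

Balancing the classes before splitting matters here: with only 2,500 positives among roughly 200,000 recordings, an unbalanced sample would let a model score well by simply predicting "healthy" every time.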

Without much adjustment to the AI model originally built for Alzheimer’s, they found it was able to pick up patterns across the four biomarkers - sentiment, vocal cord strength, lung and respiratory performance, and muscular degradation - that are specific to COVID-19. The model detected 98.5% of coughs from people confirmed to have COVID-19, and of those, it correctly identified all of the asymptomatic coughs. This means that COVID-19 changes the way you produce sounds even if you are asymptomatic. 

 

 

What’s The Main Purpose Of The AI Model? Subirana Stresses Its Limits

 

While the tool is very effective at detecting asymptomatic COVID-19 patients, Subirana stresses that it is not intended to be a standalone diagnostic: its role is to distinguish asymptomatic coughs from healthy coughs, with proper diagnosis left to medical expertise and formal testing.

The team is collaborating with a company to build a free pre-screening app based on their AI model. They are also partnering with numerous hospitals worldwide to gather a larger, more diverse set of cough recordings, which will help train the model and strengthen its accuracy.

Pandemics could become a thing of the past if preliminary screening tools were always running in the background and continuously improved.

Ultimately, they envision that audio AI models such as the one they’ve created could be built into smart speakers and other listening devices, so that people can easily get a preliminary assessment of their disease risk on a daily basis.



Sources:

https://healthitanalytics.com/

https://www.weforum.org/

https://www.livescience.com/

https://spectrum.ieee.org/

 

Daniel Cooper

A graduate in Public Health, Daniel specializes not just in writing for the Healthy Lifestyle category but also in adhering to it in his personal life. He is a taciturn man who spills his eloquence through poems and stories. If you want to bring out his talkative side, just start discussing Robert Frost and watch him speak endlessly.
