AI for mental health screening can be biased based on gender, race

Some artificial intelligence tools for health care may be confused by the way people of different genders and races talk, according to a new study led by CU Boulder computer scientist Theodora Chaspari.

The lesson is based on a truth, perhaps unspoken, of human society: Not everyone talks the same. Women, for example, tend to speak at a higher pitch than men, while similar differences can appear between white and Black speakers.

Now, researchers have found that those natural differences can confuse algorithms that screen people for mental health problems like anxiety or depression. The results add to a growing body of research showing that AI, like humans, can make assumptions based on race or gender.

“If an AI isn’t trained well, or doesn’t include enough representative data, it can propagate these human or social biases,” said Chaspari, an assistant professor in the Department of Computer Science.

She and her colleagues published their findings July 24 in the journal Frontiers in Digital Health.


Chaspari noted that AI could be a promising technology in the world of healthcare. Fine-tuned algorithms can scan recordings of people’s speech, looking for subtle changes in the way they speak that could indicate mental health concerns.

But those tools must work consistently for patients from many demographic groups, the computer scientist said. To find out whether they do, the researchers fed audio samples of real people into a standard set of machine learning algorithms. The results raised a number of red flags: the AI tools, for example, seemed to under-identify women who were at greater risk of depression than men, an effect that, in the real world, could keep people from getting the care they need.
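One way to see whether a screening model works consistently across demographic groups is to break a standard metric, such as recall, out by group. The sketch below is purely illustrative, using scikit-learn and made-up arrays rather than the study’s actual data or pipeline:

```python
# Illustrative sketch only (not the study's pipeline): check whether a
# depression-screening classifier misses at-risk speakers in one demographic
# group more often than another by comparing per-group recall.
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical arrays: true labels (1 = at risk), model predictions, and the
# self-reported gender of each speaker.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
gender = np.array(["F", "M", "F", "M", "F", "F", "M", "M"])

for group in np.unique(gender):
    mask = gender == group
    # Recall = fraction of truly at-risk speakers the model actually flags.
    rec = recall_score(y_true[mask], y_pred[mask])
    print(f"group {group}: recall = {rec:.2f}")

# A markedly lower recall for one group (for example, women) would mean the
# model under-identifies at-risk speakers in that group, the kind of gap the
# study describes.
```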

“With artificial intelligence, we can identify these patterns that humans don’t always recognize,” said Chaspari, who did the work as a faculty member at Texas A&M University. But with that opportunity, she added, comes a lot of risk.

Speech and emotions

She added that the way people talk can be a powerful indicator of their feelings and well-being, something poets and playwrights have long known.

Research suggests that people diagnosed with depression often speak more softly and in more of a monotone than others. People with anxiety disorders, in contrast, tend to speak louder and with more jitter, a measure of small, rapid variations in the pitch of the voice.

“We know that speech is strongly shaped by a person’s anatomy,” Chaspari said. “For depression, there have been studies showing changes in the way the vocal folds vibrate, or even in the way the voice is modulated by the vocal tract.”

Over the years, scientists have developed AI tools to look for those kinds of changes.
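To make those acoustic cues concrete, here is a minimal sketch of how two of them, pitch variability (a monotone delivery shows little of it) and jitter, could be estimated from a recording. It assumes the open-source librosa library and a hypothetical file named speech.wav; it illustrates the general idea, not the tooling used in the study.

```python
# Minimal illustration (assumed setup, not the study's tooling): estimate pitch
# variability and jitter from a speech recording using librosa's pYIN tracker.
import numpy as np
import librosa

# Hypothetical input file.
y, sr = librosa.load("speech.wav", sr=16000)

# Frame-by-frame fundamental frequency (f0) estimate; unvoiced frames are NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0 = f0[~np.isnan(f0)]  # keep voiced frames only

# Pitch variability: a flat, monotone delivery yields a small standard deviation.
pitch_sd = np.std(f0)

# Jitter: average cycle-to-cycle change in the pitch period, relative to the
# mean period (a rough frame-level approximation).
periods = 1.0 / f0
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

print(f"pitch SD: {pitch_sd:.1f} Hz, jitter: {jitter:.3f}")
```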

Chaspari and her colleagues decided to put those algorithms under the microscope. To do that, the team used recordings of people speaking in two different situations: in one, participants gave a 10- to 15-minute talk to a group of strangers; in the other, men and women talked at greater length in a setting designed to mimic a doctor’s visit. In both cases, the speakers separately filled out questionnaires about their mental health. The study also involved Michael Yang and Abd-Allah El-Attar, undergraduate students at Texas A&M.

Correcting bias

The results seemed to be all over the place.

For example, in the public speaking recordings, Latino participants reported feeling much more nervous on average than white or Black speakers, but the AI failed to detect that heightened anxiety. In the second experiment, the algorithms flagged roughly equal numbers of men and women as at risk of depression. In reality, the female speakers experienced depressive symptoms at significantly higher rates.
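That depression finding amounts to a mismatch between how often the model flags each group and how often each group actually reports symptoms. A toy comparison with made-up numbers, just to illustrate the pattern:

```python
# Toy illustration (made-up numbers): compare how often the model flags each
# gender as at risk with how often that group actually reports symptoms.
import numpy as np

y_true = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])   # self-reported symptoms
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0])   # model flags
gender = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

for group in ("F", "M"):
    mask = gender == group
    flag_rate = y_pred[mask].mean()    # how often the model raises the flag
    actual_rate = y_true[mask].mean()  # how often symptoms are actually reported
    print(f"{group}: flagged {flag_rate:.0%}, reported symptoms {actual_rate:.0%}")

# Equal flag rates for both groups despite unequal reported rates would mirror
# the mismatch described above.
```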

Chaspari noted that the team’s results are just a first step. Researchers will need to analyze speech recordings from many more groups of people before they can understand why the AI got it wrong in certain cases, and how to correct those biases.

But, she said, the study is a sign that AI developers should proceed with caution before bringing these tools into the medical world:

“If we think that an algorithm actually underestimates depression for a certain group, this is something we need to communicate to clinicians.”
