Is AI Discriminatory? MIT Study Finds Chatbots Refuse To Answer Less Educated, Non-US Users

MIT research reveals that leading AI chatbots provide lower-quality responses and refuse more queries from less educated, non-native English speakers.

MIT study finds AI chatbots give poorer responses to less educated, non-native English speakers.
  • Artificial intelligence chatbots give lower-quality responses to users with low English proficiency
  • Chatbots like GPT-4 and Claude 3 refuse more questions from less educated, non-native English speakers
  • Claude 3 Opus declined 11% of questions from less educated, non-native English speakers and used patronising language 43.7% of the time with less educated users

Artificial intelligence (AI) chatbots provide lower-quality responses to users with lower English proficiency, less formal education or those residing outside the US, a new study by the MIT Centre for Constructive Communication (CCC) has found. The researchers highlighted that despite Large Language Models (LLMs) being promoted as tools that could democratise access to information worldwide, the chatbots tend to perform worse for users who could benefit the most from them.

Leading chatbots like GPT-4, Claude 3 Opus, and Llama 3 refused to answer questions at higher rates for these users and, in some instances, responded with condescending or patronising language. In all three models, the effects were most pronounced for users at the intersection of these categories: those with less formal education who were also non-native English speakers.

"We see the largest drop in accuracy for the user who is both a non-native English speaker and less educated," said Jad Kabbara, the co-author of the study.

"These results show that the negative effects of model behaviour with respect to these user traits compound in concerning ways, thus suggesting that such models deployed at scale risk spreading harmful behaviour or misinformation downstream to those who are least able to identify it."

Claude 3 Opus declined 11 per cent of questions for less-educated, non-native English speakers. It also responded with condescending, patronising, or mocking language 43.7 per cent of the time for less-educated users, compared to less than one per cent for highly educated users.
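The article does not reproduce the study's evaluation pipeline, but the basic shape of such an audit is easy to sketch. The Python snippet below is a minimal, hypothetical illustration only: the persona prefixes, the `ask_model` callable and the keyword-based refusal check are assumptions for demonstration, not the researchers' actual method, which would rely on far more careful annotation.

```python
from collections import Counter

# Illustrative personas; the study's real user profiles are not public here.
PERSONAS = {
    "native_college":    "I am a native English speaker with a college degree.",
    "nonnative_college": "I am a non-native English speaker with a college degree.",
    "native_less_ed":    "I am a native English speaker who left school early.",
    "nonnative_less_ed": "I am a non-native English speaker who left school early.",
}

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "i am unable")

def looks_like_refusal(answer: str) -> bool:
    """Crude keyword check; a real audit would use human raters or a classifier."""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rates(ask_model, questions):
    """Ask every question under every persona and tally refusals per group.

    `ask_model` is a stand-in for any chat-completion call that takes a
    prompt string and returns the model's reply as a string.
    """
    counts, refusals = Counter(), Counter()
    for group, prefix in PERSONAS.items():
        for question in questions:
            answer = ask_model(f"{prefix}\n\n{question}")
            counts[group] += 1
            refusals[group] += looks_like_refusal(answer)
    return {group: refusals[group] / counts[group] for group in PERSONAS}
```

Comparing the resulting per-group rates is what surfaces gaps like the 11 per cent figure above; measuring condescending tone, as the researchers also did, requires judging the wording of responses rather than simply counting refusals.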

The researchers also tested users from the US, Iran and China and found that Claude 3 Opus performed significantly worse for users from Iran.


LLMs Aping Humans?

The findings mirror patterns of human sociocognitive bias, in which English speakers often perceive non-native speakers as less educated, intelligent and competent, regardless of their actual expertise. The researchers pointed out that the implications of this behaviour could be grave, as groups that are already marginalised would face yet another layer of differential treatment.

They warned that LLMs "exacerbate existing inequities" by systematically providing misinformation to certain users or refusing to answer their queries.

The paper, titled "LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users", was presented at the AAAI Conference on Artificial Intelligence in January.
