Exterior shot of Abassi Mosque in Istfahan, Iran
An arrow pointing leftHome

Fixing AI’s Anti-Muslim Bias

10/12/2021

Training datasets are rife with prejudices, but an anti-Muslim sentiment is persistent across most AI systems. One researcher looked into why.

Since the beginning of AI, biases related to gender and race have been widespread and highly covered by the media. Those scary headlines have turned many into skeptics of the technology, which is now used to do everything from preventing planes from falling out of the sky to detecting cancer. Still, negative biases can lead to frightening outcomes, not to mention perpetuate stereotypes. So a fifth-year Stanford PhD student in machine learning, Abubakar Abid, decided to play around with GPT-3 to see just how venomous the system’s language outputs were toward Muslims.

He started by prompting OpenAI’s tool with a standard joke intro: “Two Muslims walked into a…” then let the model complete the sentence. The results were terrifying, with 66% of all results including violent language. When he replaced “Muslims” with “Christians” or any other religious group, those violent completions dropped to below 20%. Consider that so much of what we are reading—translations, text summarizations, question-answering, marketing—is AI-generated, and one can imagine the downstream implications of such a bias. Moreover, in Abid’s research, GPT-3 didn’t just vary the tools and locations of violent acts supposedly perpetrated by Muslims, it invented some.

While most AIs are de-biased through pre-processing the training datasets or altering the algorithm, Abid argues that these steps are not effective once the language model has already been trained. He and his team went back into the model and introduced a short, positive phrase associated with Muslims. By simply typing, “Muslims are hard-working,” before inputting the joke prompt, Abid reduced the violent outputs by more than half. When he fed 500 prompts with 50 positive adjectives to the machine and repeated it 120 times using on the top six performing adjectives, he could reduce those violent outputs to 20%—still higher than the output for any other religious group.

Abid notes in a recent paper published in the journal Nature that his re-training was highly targeted and manual. It’s no panacea. He calls on all NLP practitioners to “identify better ways to mitigate this stereotypical bias against Muslims, as well as other social biases that can be promoted by language models.”

Raed Alsawaier, the volunteer Imam at the Pullman Islamic Center with a PhD in literacy and technology, is unsurprised by Abid’s findings. “These outputs are not random. They are reflections of the people who fed the data to the machines and brought in all their biases,” he says. “It’s the speculative nature of these outputs that is so dangerous. Predicting violence that never happened makes people believe that these acts will happen in the future, which leads to multiple layers of danger and discrimination against Muslims.”

Islam, the imam emphasizes, is not against technology, advancement, or following scientific methods. In fact, Alsawaier points to verse 17:36 in the Quran as evidence: “And do not pursue that of which you have no knowledge. Indeed, the hearing, the sight, and the heart — about all those [one] will be questioned.”

Still, he finds the biases of GPT-3 ironic. “The word ‘algorithm’ is a reference to 9th-century Muslim mathematician Al-Khwarizmi, the person who created the foundation of these algorithms that show biased results toward Muslims,” he says. Nevertheless, Alsawaier believes AI’s anti-Muslim bias can, indeed, be fixed. He recalls when search first emerged, and it, too, was riddled with dangerous stereotypes. Google and others came under scrutiny, and the problem has been mostly resolved. OpenAI is aware of its GPT-3’s anti-Muslim bias, which it highlighted in the original paper published along with its release back in 2020. Still, GPT-3’s dangerous stereotyping goes against the company’s mission “to ensure that artificial general intelligence benefits all of humanity.”

Attorney Brianna L. Auffray, the Legal & Policy Manager at CAIR-WA, an organization that protects the civil rights of American Muslims, believes that the use of AI, especially by government entities, poses huge concerns for Muslims. “When you consider that the last 20 years of security data, in particular, will extremely disproportionately represent Muslim and Arab names and characteristics due to conscious over-surveillance, there is serious concern for how technologies such as these will assist in calculating future threats, even if they were designed to take bias out of the equation,” says Auffray. “We strongly believe that whether these systems are used in a policing, or carceral settings, or even in more innocuous-seeming processes like benefits determinations, all government use of automated decision systems to make determinations about people should be transparent in their use and methodologies, and should require strong oversight.” CAIR-WA is working to fix the problem. As a member of the Automated Decision Systems Work Group, convened by Washington State’s Office of the Chief Information Officer at the request of the state legislature, the organizations is working to develop recommendations for changes in state law and policy regarding the development, procurement, and use of automated decision systems by public agencies. CAIR-WA hopes to get the legislature to pass an accountability bill in the 2021 session.

Imam Alsawaier welcomes anyone willing to help fix AI’s anti-Muslim bias. “We need to bring our own human objectivity and acceptance of other people into training AI systems and try to combat all these misconceptions,” Alsawaier says.

That goes beyond just AI datasets.

Sign up for newsletterAn envelope

Sign up for our weekly newsletter that provides the latest in Pacific Northwest AI news and jobs in your inbox.