
When it comes to morals, Delphi leans left

12/9/2021

Can machines have a moral compass? It depends on the data.

In October, the Allen Institute for AI (AI2) unveiled Delphi, a new research prototype designed to model people’s moral judgments on a variety of everyday situations. The Delphi demo is open for anyone to test. Since its launch, Delphi has received mixed reviews: it has been criticized for its “knotty” interpretations of morality and praised for a form of reasoning that at times seems almost as complex as our own. Delphi’s FAQ warns that some questions, “especially those that are not actions/situations, could produce unintended or potentially offensive results.” Like any AI system, Delphi reflects the biases of both its creators and the annotators of its training data; in this case, those biases lean left.

Delphi was trained on 1.7 million examples of human moral judgments drawn from the Commonsense Norm Bank. A sample of 1,000 of Delphi’s subsequent moral judgments was then reviewed by human raters, who agreed with the model’s judgments 92% of the time. Given enough data, Delphi could plausibly come to reflect general society’s opinion on certain ethical and social questions.
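To make concrete what an agreement rate like that 92% figure means, here is a minimal sketch of how such a number could be computed. The function and the toy labels below are hypothetical illustrations, not AI2’s actual evaluation code or data:

```python
# Illustrative sketch: computing a human-agreement rate for a model's
# moral judgments. Labels and data here are made up for demonstration.

def agreement_rate(model_judgments, human_judgments):
    """Return the fraction of items where the human rater agreed with the model."""
    assert len(model_judgments) == len(human_judgments)
    matches = sum(m == h for m, h in zip(model_judgments, human_judgments))
    return matches / len(model_judgments)

# Toy example with Delphi-style verdict strings.
model = ["it's wrong", "it's okay", "it's okay", "it's rude"]
human = ["it's wrong", "it's okay", "it's wrong", "it's rude"]

print(f"Agreement: {agreement_rate(model, human):.0%}")  # Agreement: 75%
```

In AI2’s reported evaluation, the same kind of ratio would be taken over the 1,000 human-rated judgments rather than this four-item toy set.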

Asking Delphi some questions about current social “hot topics” makes its biases more apparent, as the examples below show.

[Screenshots: eight sample questions posed to Delphi and its responses]

AI2 acknowledges that, given the narrow scope of the training data behind AI systems like Delphi, the field must “invest in the research that will make [these platforms] more transparent, unbiased, and robust on the social and ethical norms of the societies in which they operate.” If technologies like this are to be deployed, further work to make them more inclusive, ethically informed, and socially aware will be crucial.

But as Ryan McConnell of PSU Vanguard writes, “creating an open discourse about how AI learns is extremely important for the public to understand, and may inspire future developers and researchers to solve these problems. Because they aren’t solved well through computers, the first step is to understand why computers fail at this level in the first place.”