
Not All AI Ethics Glossaries Are Made Equal

By Zephin Livingston
11/9/2021

We compared glossaries that purport to help define everyday terms used in critical discussions around ethics.

To discuss ethics, we need a common language: terms with agreed-upon definitions that advance our understanding and discourse. Glossaries are typically developed to standardize definitions and remove confusion, and in AI, the effort to build a common language around ethics is still nascent. We searched around, and while there are loads of glossaries for AI practitioners, there are scant few for those focused on the ethics of the technology. In fact, we found only three: one by the EU-appointed High-Level Expert Group on Artificial Intelligence (AI HLEG), one by the Institute of Electrical and Electronics Engineers (IEEE), and a quirky one by tech website TechnologyReview.

Maya Cakmak, associate professor and director of the University of Washington’s Human-Centered Robotics Lab, says that two key attributes of AI systems are important to consider when discussing their ethical applications: They are complex and they are powerful. As a result, says the professor, “they are capable of harm in two ways: unintended actions due to human error in building complex systems or incapability of considering emergent behaviors of complex systems, and weaponization, [meaning] giving people who have this powerful tool an advantage over people who don’t have it and [using] that power for evil.”

To provide a framework for discussing ethical approaches to AI applications, we took a hard look at the three AI ethics glossaries out there in the wild. What follows are our entirely subjective and unscientific reviews.

AI HLEG’s Glossary

In the executive summary, AI HLEG explains that the glossary is built on the ideal of "Trustworthy AI." It then lays out three conditions an AI product must meet to be "trustworthy." The glossary focuses on two of them: adherence to ethical principles and values, and robustness sufficient to avoid unintentional harm.

To see how the glossary addresses these conditions, consider the entry for "trust." AI HLEG's definition is long and complicated, but it boils down to this: "While 'Trust' is usually not a property ascribed to machines, this document aims to stress the importance of being able to trust not only in the fact that AI systems are legally compliant, ethically adherent, and robust, but also that such trust can be ascribed to all people and processes involved in the AI system's life cycle."

This definition would likely gel with Dr. Cakmak's considerations for ethics in AI, chiefly by recognizing AI's complexity and its potential power. We liked this glossary, but with only 15 definitions, it is rather lacking.

IEEE’s Glossary

This is probably the most practical glossary for actual AI development use. Unlike the other two, it also provides multiple definitions for each term, one in plain English and others drawn from different disciplines and fields, which makes it a useful translator for people working across those disciplines as well.

Of the two glossaries covered so far, IEEE's is the less accessible, because it's designed for use by people in specific disciplines. That said, its list of terms is also much bigger than AI HLEG's, containing a whopping 166 entries. On sheer quantity alone, a reader can get far more use out of IEEE's glossary.

IEEE's glossary doesn't mesh as neatly with Dr. Cakmak's perspective on AI ethics, and that can largely be attributed to the way it was compiled: most definitions are direct quotes, with no additional interpretation from the glossary's authors. Even so, many of the definitions in the document are still good, just not as cohesive as those in the smaller AI HLEG document.

In keeping with its pattern, IEEE's glossary has multiple definitions for "trust." One of them quotes Hardin's 2006 book Trust. In the cited section, Hardin says: "Trust is generally a three-part relation: A trusts B to do X. First, I trust someone if I have reason to believe it will be in that person's interest to be trustworthy in the relevant way at the relevant time. My trust turned, however, not directly on the Trusted's interests per se, but on whether my own interest [sic] are encapsulated in the interests of the trusted, that is, on whether the Trusted counts my interests as partly his or her own interests just because they are my interests." It's a lot to follow.

TechnologyReview’s Glossary

We love satire, so we're going to give this glossary a good review. It's essentially a parody of a public relations glossary, which doesn't match Dr. Cakmak's point of view. At the same time, it does, in its own way, address the potential for humans to create harmful AI systems, chiefly the harms caused by corporate AI and the PR language used to soften public anger over them. TechnologyReview doesn't have an entry for "trust," but it does have one for "trustworthy," which is defined as, "An assessment of an AI system that can be manufactured with enough coordinated publicity." We get the joke.