
Why universities need AI ethics programs now more than ever

  • Hope Reese
5/4/2022

As explained by Dr. Michael J. Quinn, the head of Seattle University’s new Initiative in Ethics and Transformative Technologies.

Michael J. Quinn, Ph.D. (Seattle University)

In 1994, Michael J. Quinn was a computer science professor at Oregon State University, teaching a new computer ethics class for undergrads. At the time, this was a brand-new requirement for the major. He’d taken some philosophy classes at Gonzaga University, and was interested in questions around computer ethics, which, back then, included things like intellectual property and privacy issues.

Today, nearly thirty years later, Quinn is Dean of the College of Science and Engineering at Seattle University, where he helped spearhead the new Initiative in Ethics and Transformative Technologies. Over the last few decades, AI has become the pressing new topic in ethics and technology. With innovations in everything from machine learning to driverless cars to widespread automation, new sets of questions have emerged. The initiative, in partnership with Microsoft, includes a fellows program that brings together cross-disciplinary faculty — from nursing to law to communications — to read about AI ethics and apply those ideas in their courses.

I spoke to Quinn about what he sees as the most pressing issues in AI ethics today, how far ahead we should look — and what a new species of procreating robots might mean for the future.

The conversation has been edited and condensed for clarity.

What do you see as the most pressing issues in AI ethics today?

Self-driving cars, and how the federal government has been a little too lenient in regulating what’s going on. In the last year, there has been some progress: the National Highway Traffic Safety Administration and other federal agencies are stepping in and looking harder at Tesla in particular. Tesla has oversold the capabilities of its system and hasn’t put in the safety features that Cadillac and some of the other manufacturers have used.

And how are our legal systems considering AI ethics?

Governments can be slow to enact new laws or to put in place new regulations that could rein things in. That’s one of the fears: if it’s too much of an “anything goes” environment, or if companies have too much latitude, could some company deploy a product or service that is harmful? So, what guardrails should be put up to protect the public? Microsoft has taken a lead there, going down to Olympia and talking about what regulations should be put in place to make sure that companies are all behaving themselves.

Look at Arizona, for example. When Uber’s self-driving test program was kicked out of California, the governor of Arizona welcomed the company, because there’s a sense that these companies can bring high-paying jobs, and states want to welcome good new jobs. Then, of course, there was the accident a few years ago where a pedestrian was killed, and the state ended up clamping down on Uber, saying it should shut down the test facilities. State governments don’t always have control over what happens within their borders, and that has caused harm.

You’re teaching AI ethics to students. But what happens outside of the university? Are businesses taking these issues seriously?

It’s a constant tension. You have companies that legitimately have a profit motive — they’re trying to get new products and services out there and sell them. So you have a tremendous amount of energy behind the creation, development, and deployment of new technologies.

At the same time, there are consequences to implementing these technologies. Perhaps we shouldn’t implement them this way. For instance, to what extent should facial recognition software be used by police departments? Do we have to worry about software systems coming to incorrect conclusions — misidentifying people, for example — and how that could be harmful, particularly if it’s members of minority populations who are being misidentified at a higher rate? To what extent are systems perpetuating injustices?

Recently, researchers created the first-ever self-replicating living robots, a new form of biological reproduction. It has a bit of a “robots are taking over the world” vibe, right? What do you make of this development?

We’re talking about a clump of frog cells. It takes them a week to reproduce, and they have to be in a carefully controlled environment. It has to be exactly 20 degrees Celsius. They have to keep the environment clean from contamination. It’s a very fragile system — it’s not like we have to worry about these things going out there and reproducing in the wild. For me, what’s interesting is that it shows how computers and learning algorithms allow enormous design spaces to be searched quickly. They used an evolutionary algorithm to come up with the body shape, the Pac-Man shape, that reproduces the best. And then, by hand, they could assemble these things that were capable of reproducing, incredibly slowly, under controlled circumstances.

But it’s astonishing — usually, reproduction means that there’s some kind of growth happening inside a living organism or on the body of a living organism. And this is not how these Xenobots are reproducing. They’re assembling copies from other cells in the environment and pushing them together to create new ones, which is awesome. Here’s a new way for something alive to reproduce itself.

I saw it more in terms of the power of AI. It made me think of IBM Watson, which analyzed more than 9,000 recipes to build an understanding of which ingredients go well together, and then, by analyzing the chemical compounds of ingredients, could suggest new combinations to professional chefs. For example, it suggested five new poutine recipes based on the ethnic makeup of Toronto’s population. It invented a Chinese-Greek chop suey poutine. And there’s Google’s AlphaGo, which defeated the human Go champion in 2017 by studying human matches. It could look at combinations of things far faster than people can, because our brains don’t work that fast.

I see the Xenobots in the same category — here’s something that a computer is able to invent simply because it could look at billions of combinations.
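
Quinn’s point about search is concrete enough to sketch. Below is a minimal, hypothetical evolutionary search in Python: it evolves 5×5 body-plan grids against a toy fitness function. The real Xenobot work scored candidate designs in a physics simulator, so the grid encoding, fitness function, and parameters here are stand-ins, meant only to show how mutation and selection let a program sift through an enormous space of possible shapes.

```python
# Minimal sketch of an evolutionary search, loosely in the spirit of the
# body-shape search Quinn describes. The 5x5 binary "shape" grid and the
# toy fitness function are illustrative stand-ins, not the actual
# Xenobot pipeline (which evaluates designs in a physics simulator).
import random

GRID = 5          # candidate body plans are 5x5 occupancy grids
POP_SIZE = 50     # designs evaluated per generation
GENERATIONS = 40
MUTATION_RATE = 0.05

def random_shape():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def fitness(shape):
    # Toy objective: reward filled cells, a heavy top, and an open "mouth"
    # at the bottom center (a crude nod to the Pac-Man-like designs the
    # real search converged on).
    filled = sum(cell for row in shape for cell in row)
    top_heavy = sum(shape[0]) + sum(shape[1])
    mouth_open = 1 - shape[GRID - 1][GRID // 2]
    return filled + 3 * top_heavy + 5 * mouth_open

def mutate(shape):
    # Flip each cell with a small probability.
    return [[1 - cell if random.random() < MUTATION_RATE else cell
             for cell in row] for row in shape]

def evolve():
    population = [random_shape() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]   # keep the fittest half
        children = [mutate(random.choice(survivors)) for _ in range(POP_SIZE // 2)]
        population = survivors + children          # next generation
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
    for row in best:
        print("".join("#" if cell else "." for cell in row))
```

Even this toy version evaluates thousands of candidate shapes in a fraction of a second, which is the capability Quinn is pointing at: the algorithm does not understand bodies or reproduction, it simply searches far more combinations than a person ever could.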

The trope is that these machines could one day “think for themselves” — how much of AI ethics is concerned with this idea?

We’re a long way from any kind of artificial general intelligence. All of these systems doing amazing things — whether driving cars or playing Go — are incredibly specialized. And they require a lot of human development and intervention. In the short term, the humans developing these systems need to be aware that they are encoding values into the systems that they’re building. And what are the values going to be?

For example, David Danks, who once taught at CMU, asked an audience: “If you’re going to build a self-driving car, should it follow the traffic laws, yes or no?” Almost everyone held their hands up. Then he asked: “Do you think the car should do everything possible to prevent an accident?” And just about everyone held their hand up. And he said: “Well, those two goals are contradictory. What if you’re going down an arterial where the posted speed limit is 30 and everyone’s driving 40? You have the least chance of an accident if you go the same speed as everybody else. But that means you’re speeding. So what are you going to do?” Somebody has to instruct the car what to do in that situation — and it’s a value decision. And that’s just getting started. It’s important that people understand the responsibility they have to ensure public safety.
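
Danks’s dilemma can be made concrete in a few lines of code. The sketch below is purely illustrative, and the names, units, and policy rule are hypothetical, but it shows where the value decision Quinn describes ends up: as an explicit branch that some engineer has to write.

```python
# Toy illustration of the value decision in the Danks example: someone has to
# decide, in code, whether the car prioritizes legality or matching the flow
# of traffic. All names and the policy rule here are hypothetical.
from dataclasses import dataclass

@dataclass
class SpeedPolicy:
    strict_legal_compliance: bool  # True: never exceed the posted limit
    max_overshoot_mph: float       # how far past the limit we'll go to match traffic

def target_speed(posted_limit: float, traffic_flow: float, policy: SpeedPolicy) -> float:
    """Pick a target speed given the posted limit and the prevailing traffic speed."""
    if policy.strict_legal_compliance:
        # Value choice A: legality first, even if that makes us the slowest
        # vehicle on the arterial.
        return min(posted_limit, traffic_flow)
    # Value choice B: minimize the speed differential with surrounding traffic,
    # but cap how far past the limit we are willing to go.
    return min(traffic_flow, posted_limit + policy.max_overshoot_mph)

# Danks's arterial: posted limit 30, everyone driving 40.
print(target_speed(30, 40, SpeedPolicy(strict_legal_compliance=True, max_overshoot_mph=0)))    # 30
print(target_speed(30, 40, SpeedPolicy(strict_legal_compliance=False, max_overshoot_mph=10)))  # 40
```

Neither branch is “correct” in a technical sense; choosing between them is exactly the kind of value judgment Quinn says developers need to recognize they are making.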

And there are no guarantees, right? Because even if the right thing is encoded, the context of the situation could shift things — it’s hard to account for every situation out in the world.

Right. And this is the problem when systems don’t have general intelligence, or common sense. When a situation out of the ordinary happens, someone with common sense knows how to respond, but the system, if it hasn’t been trained on that kind of example, could end up making a mistake that doesn’t make sense from our point of view, because the situation is outside of its training.

It’s these rare, but important, exceptions that can cause systems built on machine learning to fail catastrophically.