DH Reads

DH Read: “Are you scared yet? Meet Norman, the psychopathic AI”

This article from the BBC by Jane Wakefield reports on a “psychopathic” algorithm created at MIT “as part of an experiment to see what training AI on data from ‘the dark corners of the net’ would do to its world view.” The algorithm’s task is to interpret abstract inkblot shapes. Because it was trained on images of people dying, Norman (named after Norman Bates) sees death in every picture, while a “normal” AI trained alongside it on pleasant images of people and animals offers much more positive interpretations of the same shapes. This highlights a crucial point about AI:

The fact that Norman’s responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT’s Media Lab which developed Norman.

“Data matters more than the algorithm.

“It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”
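
Rahwan’s point is easy to demonstrate in miniature. Here is a minimal sketch (my own illustration, not the MIT team’s system; all of the captions below are invented) of two copies of the same trivial “captioner,” each trained on a different corpus. Fed the same ambiguous description, each can only answer out of the world its data gave it:

```python
# A toy illustration of "data matters more than the algorithm": the same
# nearest-neighbour captioner, built from two different training corpora,
# describes the same ambiguous input in completely different terms.
# All captions are invented for this sketch.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def make_captioner(corpus):
    """Return a function that 'interprets' input text by retrieving the
    closest caption from its own training corpus."""
    vectorizer = TfidfVectorizer().fit(corpus)
    matrix = vectorizer.transform(corpus)

    def interpret(text):
        scores = cosine_similarity(vectorizer.transform([text]), matrix)
        return corpus[scores.argmax()]

    return interpret

# Identical algorithm; two different world views baked in by the data.
norman = make_captioner([
    "a man is shot and falls to the ground",
    "a body lies in an empty street",
])
normal = make_captioner([
    "a small bird perches on a tree branch",
    "a couple stands together holding flowers",
])

ambiguous = "a dark shape on the ground near a tree"
print("Norman sees:", norman(ambiguous))  # can only answer from the grim corpus
print("Normal sees:", normal(ambiguous))  # can only answer from the pleasant one
```

Nothing in the code distinguishes Norman from his cheerful twin; the divergence comes entirely from what each was shown.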

We’ve heard a lot lately about the dangers of racist and sexist algorithms programmed mainly by white men in Silicon Valley, but Norman shows that, regardless of the algorithm itself, “AI trained on bad data can itself turn bad.” That’s a problem, because the available data is of course shaped by our culture’s existing biases and inequities, and AI trained on that data will reproduce them. The article cites the examples of predictive policing algorithms and an AI-powered risk-assessment tool used in US courts, both of which were biased against black people because of the flawed historical data they worked with. So what can we do?

Prof Rahwan said his experiment with Norman proved that “engineers have to find a way of balancing data in some way,” but he acknowledged that the ever-expanding and important world of machine learning cannot be left to programmers alone.

“There is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour,” he said.

This new era of “AI psychology” would take the form of regular audits of the systems being developed, rather like those that exist in the banking world already, he said.
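
What might such an audit actually check? One common starting point is simply to compare a system’s decision rates across groups. The sketch below is my own toy illustration, not a formal audit standard; the sample decisions and group labels are invented:

```python
# A toy audit check: compare how often a system decides in favour of each
# group. A ratio near 1.0 means rough parity; a low ratio flags a disparity
# worth investigating. Sample data is invented for illustration.

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs.
    Returns (ratio of lowest to highest group approval rate, per-group rates)."""
    counts = {}
    for group, approved in decisions:
        total, hits = counts.get(group, (0, 0))
        counts[group] = (total + 1, hits + (1 if approved else 0))
    rates = {g: hits / total for g, (total, hits) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Invented sample: an auditor reviews six of the system's recent decisions.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio, rates = disparate_impact(sample)
print(rates)                                   # A approves ~0.67, B ~0.33
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 -- worth a closer look
```

A real audit would go far beyond a single ratio, of course, but even a check this simple might have surfaced the biased policing and court tools mentioned above.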

The article also quotes Dave Coplin, Microsoft’s former chief envisioning officer, who hopes Norman can spark a conversation between businesses using AI and the public:

It must start, he said, with “a basic understanding of how these things work”.

“We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right,” he said.

“When I see an answer from an algorithm, I need to know who made that algorithm,” he added.

I see a role for digital humanists in all of this. I think most DHers understand the flaws and limitations of the historical data we work with, and most of us do a great job of foregrounding those issues instead of hiding or minimizing them. We think critically about the data we use, and we seek to use digital tools in ways that challenge rather than reproduce archival silences and historical power structures. These are skills worth sharing not only with students but also with the tech world and the public.

Read the original article here.