The women who understand the future of AI better than these men

When I give talks about the future of Artificial Intelligence, I start my discussion by showing the following photo.

It is from a panel discussion about AI at the Future of Life Institute in January 2017. Theoretical physicist Max Tegmark is the host, joined by nine of the most influential men in the field, including entrepreneur and Tesla CEO Elon Musk; the Google guru Ray Kurzweil; DeepMind’s founder Demis Hassabis; and Nick Bostrom, the philosopher who has mapped our way to what he calls ‘superintelligence’.

The panel members varied in their views as to whether human-level machine intelligence would come gradually or all of a sudden, and whether it would be good or bad for humanity. But they all agreed that a general form of AI was more or less inevitable. They also thought it was sufficiently close that we needed to start thinking now about how we would deal with it.

I am sceptical. As I argue in my presentation, and in my book Outnumbered, we are currently somewhere short of modelling the intelligence of a bacterium. So I think their worries are overblown.

But that is not the point I want to make just now. Because what I realised yesterday, or more correctly, what was pointed out to me by Ruth King at my talk in Edinburgh, was the contrast between the picture in this slide and the one I show directly after it.

I close my talk by talking about people doing good work on understanding algorithms. Algorithmic activists I call them. These are people who are investigating bias in Artificial Intelligence and machine learning and/or giving a balanced view of the area. And here is the slide I used to illustrate this last night.

What Ruth pointed out is that all the people on my slide are women! In contrast to the slide before, which is all men.

I was certainly aware of the preponderance of men hanging out with Tegmark and often make fun of this fact. But because I change around the people I put on the ‘activism’ slide, depending on the other parts of my talk, I hadn’t noticed that this particular slide contained only women.

Women make up a solid majority of algorithmic activists.

I haven’t even included all the women working in this area. In addition to Cathy O’Neil (the data scientist who blew the lid on algorithmic bias), Carole Cadwalladr (investigator of Cambridge Analytica and Google racism), Hannah Fry (mathematician taking a balanced view of algorithms), Timandra Harkness (who was writing and reporting for Radio 4 about this well before the rest of us) and Jordan Erica Webber (who when not reviewing computer games is podcasting about AI), I could have included many more (Joy Buolamwini, Caroline Criado Perez, Joanna Bryson, Julia Angwin…)

There are, of course, men who are ‘algorithmic activists’. Some of the best work on bias in machine learning was done by Tolga Bolukbasi. And Amit Datta was one of the first people to identify sexism in Google search. It is this approach I take in my own work, where I have looked at, for example, the Cambridge Analytica scandal and ideas about filter bubbles and echo chambers. The key to working in this area lies in concentrating on the details, and not making overreaching general claims.

There was another thing that Dawn Wasley, knowledge transfer officer at ICMS, noticed and that I had missed. On the slide before the Tegmark panel, I show the AI research groups within tech giants.

I usually joke that these companies think the future of AI is apparently blue. Now call me naive (I have been living in Sweden too long), but what I hadn’t thought about is the association many people have between blue and boys. In fact, if I am honest with myself, maybe the reason I didn’t reflect on this was because I am male, and blue probably makes me feel welcome to the world of AI. But it doesn’t make everyone feel welcome…

This is the central problem for much of AI research. We are creating algorithms that don’t work very well and overhyping them. Then we are encouraging a narrow demographic to be interested in the future of machine learning using a combination of false intellectual arguments and misleading imagery.

As I write in Outnumbered: “Max Tegmark and his friends are entitled to sit around speculating about the future of AI, but their discussion should be seen for what it is. It is a bunch of wealthy men with more or less the same socio-economic background, similar education and work experience who want to argue with each other about science fiction.”

While these men sit around talking, it appears that they expect the women to clear up after them.

Professor of Applied Mathematics. Books: The Ten Equations (2020); Outnumbered (2018); Soccermatics (2016) and Collective Animal Behavior (2010).