What is the most complex animal whose general intelligence we can model?
The argument against superintelligence or general AI arriving in the near future is really taking off, with recent books by Melanie Mitchell and Gary Marcus putting much-needed common sense back into the debate. Gary’s Twitter feed, in particular, is a great starting point for learning how claims about AI have got out of hand.
The general-AI question has interested me for some time, and in my book, Outnumbered, I approached it using my own background in mathematical biology. I took a starting point I think we can all accept: at present, AI can’t do all human-level tasks. I then asked whether it could compete with other animals. I wanted to find the most advanced organism whose intelligence we currently understand.
Some people, including many scientists who should know better, talk about animals in terms of simple stimulus-response reactions. The classic example is Pavlov’s dog salivating at the sound of a bell. Anyone who owns a dog will tell you that this Pavlovian view is a vast over-simplification, and they are right. A typical dog owner’s view of their pets as family members and friends is not just an emotional, anthropocentric one. It is in line with how most modern behavioural biologists see domesticated animals — as sharing many of our complex behaviours. Juliane Kaminski, head of the Dog Cognition Centre at the University of Portsmouth, has found that dogs can learn in a similar way to small children, take into account their owner’s perspective of the world when deciding which objects to fetch, and understand our intentions from our body movements[i].
These abilities, understanding the context of different situations and learning how to learn, remain open problems in AI research. Until we have made much more progress towards modelling a human than we have to date, we won’t be able to simulate dogs, cats and other domestic animals.
I have maybe set my sights too high with dogs, so let’s skip down a few levels to insects, and bees in particular. Lars Chittka at Queen Mary University of London has recently reviewed our growing understanding of bee cognition, and it reveals an amazing intellect[ii]. After a few flights looping around their nests, newly emerged bees have a good idea of what their world looks like. They then quickly set to work collecting food. Worker bees learn the smell and colour of the best flowers and solve the ‘travelling salesman problem’ of visiting the available food sources in the shortest possible time. They can remember where they have experienced threats and sometimes ‘see ghosts’, reacting to a perceived danger that isn’t there. Bees that find lots of food become optimistic and start to underestimate the danger of predator attacks. The underlying neural network, in the form of the bee’s brain, has a very different structure from that of artificial convolutional or recurrent neural networks. Bees seem to be able to recognise the difference between objects using just four input neurons and appear to lack any internal representation of images. Other, simpler stimulus-response tasks, which could be modelled as a small number of logic gates, instead engage entire regions of the brain.
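To give a flavour of the routing problem the bees are tackling, here is a toy sketch, entirely my own illustration rather than a model of bee cognition: a greedy nearest-neighbour rule that always flies to the closest flower not yet visited, one simple way of finding a reasonably short route.

```python
# Toy sketch (my own illustration, not a model of how bees actually do it):
# a greedy nearest-neighbour heuristic for the route-shortening problem
# that foraging bees approximately solve.
import math

def nearest_neighbour_route(nest, flowers):
    """Start at the nest and always fly to the closest unvisited flower."""
    unvisited = list(flowers)
    route, position = [], nest
    while unvisited:
        closest = min(unvisited, key=lambda flower: math.dist(position, flower))
        route.append(closest)
        unvisited.remove(closest)
        position = closest
    return route

flowers = [(2.0, 1.0), (0.5, 3.0), (4.0, 4.0), (1.0, 0.5)]
print(nearest_neighbour_route((0.0, 0.0), flowers))
```

Real bees appear to do better than this simple rule, refining their routes over repeated foraging trips, but the sketch shows why even a short route through a handful of flowers is a genuine computational problem.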
The most remarkable thing about bees is that they can learn to play football! Well, not quite football, but a game very like it. Lars’ research group has trained bees to push a ball through a goal. The bees could learn to do this task in a variety of different ways, including watching a plastic model bee push the ball and watching other real bees complete the task. They didn’t need to extensively practise the game themselves in order to learn the task. Ball-rolling is not something bees usually encounter in their lives, so the study shows that bees can learn novel behaviours quickly, without the need for repeated trial-and-error attempts. This is exactly the problem that artificial neural networks have failed to overcome so far. Bees can generalise their skills in other areas to tackle a new problem like football.
It is important to remember here that the question of general artificial intelligence isn’t about whether or not computers are better at particular tasks than humans. We have already seen that a computer can play games like Chess, Go and Poker better than humans. So I don’t think the machines would have much problem beating a bee at these games. The question is about whether we can produce bottom-up learning on a computer of the type widely observed in animals. For now, bees are able to generalise their understanding of the world in a way computers are not.
The nematode worm C. elegans is one of the simplest living animals. A fully developed adult consists of 959 cells, of which around 300 are neurons. This compares with the 37,200,000,000,000 cells in your body[iii] and the 86,000,000,000 neurons in your brain[iv]. C. elegans is widely studied because, despite its relative simplicity, it shares many of our properties, including behaviour, social interactions and learning.
Monika Scholz at the University of Chicago has recently created a model of how the worm uses probabilistic inference to decide when to move[v]. The worm ‘polls’ its local environment to measure how much food is available and then ‘predicts’ whether it is better to stay put or start exploring for new resources. Studies like these reveal details of worm decision-making, but they don’t yet model the organism as a whole. A separate project, known as OpenWorm, attempts to capture aspects of the mechanics of how worms move, but more work is needed to put these models together and reproduce C. elegans’ full repertoire of behaviour. For now, we don’t really know how the 959 cells act together and thus can’t properly model the behaviour of one of the simplest animals on Earth.
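To illustrate the flavour of this kind of ‘poll and predict’ decision, here is a toy sketch; the threshold and the sampling scheme are my own assumptions for illustration, not the published model.

```python
# Toy sketch (assumed parameters, not the published model): the worm 'polls'
# its local patch with a few noisy food measurements and leaves when the
# estimate falls below what it expects to gain by exploring elsewhere.
import random

def should_explore(samples, expected_elsewhere=0.5):
    """Leave the patch if the average of recent samples looks worse
    than the expected payoff of exploring."""
    local_estimate = sum(samples) / len(samples)
    return local_estimate < expected_elsewhere

patch_quality = 0.4
samples = [random.gauss(patch_quality, 0.1) for _ in range(10)]
print("explore" if should_explore(samples) else "stay")
```

The worm’s real decision involves trading off the energy gained from feeding against the information gained by sampling, which is what makes the model in the paper much richer than this caricature.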
So let’s forget, for the time being at least, about creating the intelligence of a dog, a bee, a worm or even a football player. How about an amoeba? Can we reproduce the intelligence of microorganisms?
The slime mould Physarum polycephalum is an amoeboid organism that builds tiny networks of tubes to transport nutrients between different parts of its body. Audrey Dussutour, at the University of Toulouse in France, has shown that slime moulds habituate to caffeine, a substance they usually try to avoid, and then revert to their normal behaviour when given the option of avoiding the substance[vi]. Other studies have shown that slime moulds can anticipate periodic events, choose a balanced diet, navigate around traps and build networks that efficiently connect up different food sources. The slime can be thought of as a form of distributed computer, taking in signals from different parts of its body and making decisions based on its previous experience. It does all of this without a brain or a nervous system.
It may be possible to produce a comprehensive mathematical model of slime moulds in the near future, but we are certainly not there yet. The ‘memory’ and learning of slime could potentially be modelled by a type of electrical component known as a ‘memristor’, a resistor whose resistance depends on the history of the current that has flowed through it, providing a form of flexible memory[vii]. But we still don’t know how to set up a network of memristors that combine with each other to solve problems in the same way that a slime mould does.
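To give an idea of the kind of reinforcement dynamics involved, here is a minimal sketch in which a single tube’s conductance grows when flow passes through it and decays when the tube falls out of use; the equation and parameters are illustrative of this general class of model, not taken directly from the paper.

```python
# Minimal sketch of flow-reinforcement dynamics of the kind used in slime
# mould models (illustrative parameters): a tube's conductance D is
# strengthened by the flux |Q| passing through it and decays otherwise,
# giving a memristor-like 'memory' of past use.
def update_conductance(D, flux, decay=0.1, dt=0.1):
    """One Euler step of dD/dt = |flux| - decay * D."""
    return D + dt * (abs(flux) - decay * D)

D = 1.0
for step in range(100):
    flux = 1.0 if step < 50 else 0.0   # the tube carries flow, then is abandoned
    D = update_conductance(D, flux)
print(round(D, 3))  # conductance builds up with use and fades once the flow stops
```

In the network version, many such tubes reinforce or decay in parallel, and the pattern of conductances that survives acts as the slime mould’s ‘memory’ of which routes have been useful.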
The next step down in biological complexity from slime moulds is bacteria. The bacterium E. coli is a ‘bug’ that lives in our gut. Although most strains are benign or even beneficial, a few of them give us food poisoning. E. coli and other bacteria navigate through our bodies, take in sugars and ‘decide’ how to grow and when to split[viii]. They are highly adaptable. When you drink a glass of milk, the genes for lactose uptake are activated within E. coli, but if you then eat a chocolate bar the genes that process glucose, which E. coli ‘prefers’, suppress the lactose genes. Bacteria move around through a run-and-tumble motion, making runs in one direction before tumbling to ‘choose’ a new direction. They tune their tumbling rates to the quality of the environment they find themselves in. Each of the bacterium’s different ‘objectives’ — obtaining resources, moving around and reproducing — is balanced through different combinations of genes switching on and off.
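As a rough illustration of run-and-tumble, here is a one-dimensional toy simulation; the numbers are made up for illustration and this is not a calibrated model of E. coli, but it captures the basic trick: tumble less often when things are getting better.

```python
# One-dimensional toy run-and-tumble (illustrative parameters, not a
# calibrated E. coli model): the cell runs in its current direction and
# tumbles more often when the sugar concentration it senses is falling.
import random

def sugar(x):
    return -abs(x - 10.0)   # sugar concentration peaks at x = 10

x, direction, previous = 0.0, 1, sugar(0.0)
for _ in range(500):
    x += 0.1 * direction                       # run
    current = sugar(x)
    tumble_rate = 0.1 if current > previous else 0.5
    if random.random() < tumble_rate:          # tumble: pick a new direction
        direction = random.choice([-1, 1])
    previous = current
print(round(x, 2))  # the biased random walk usually ends up near the peak at x = 10
```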
In 2018, Microsoft announced that they had designed an AI which learnt to play the Atari game Ms. Pac-Man. E. coli’s balancing of different objectives in its quest to obtain resources is very similar to that of Ms. Pac-Man. The tasks these two agents, one artificial and one real, aim to complete are very similar. In order to adapt, they both have to respond to input signals from a variety of different sources: E. coli regulates its intake of resources, responds to dangers and navigates obstacles; the neural network playing Ms. Pac-Man responds to ghosts, food pellets and the structure of the maze. The bodies that bacteria live in are not identical, just as Ms. Pac-Man mazes differ from each other, but the algorithms each of them employs are flexible enough to handle a wide range of environmental challenges.
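To show the flavour of what balancing several objectives means here, the following is a deliberately over-simplified sketch; it is my own illustration of weighing up competing objectives, not Microsoft’s actual Ms. Pac-Man architecture, which decomposes the problem far more carefully.

```python
# Over-simplified sketch (my illustration, not Microsoft's architecture):
# score each possible action against several objectives and act on the
# combined, weighted value.
def choose_action(actions, objectives):
    """objectives: list of (weight, value_function) pairs."""
    def combined_value(action):
        return sum(weight * value(action) for weight, value in objectives)
    return max(actions, key=combined_value)

# Hypothetical values: trade off pellets reachable against ghost danger.
actions = ["left", "right"]
objectives = [
    (1.0, lambda a: {"left": 2.0, "right": 5.0}[a]),   # pellets on each side
    (3.0, lambda a: {"left": 0.0, "right": -2.0}[a]),  # danger from a nearby ghost
]
print(choose_action(actions, objectives))  # 'left': avoiding the ghost outweighs the pellets
```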
I had found the closest biological equivalent of the highest level of current AI. It is a tummy bug.
One argument against my bacteria-brain analogy is that the reason we can’t simulate worms and slime moulds is that we don’t know what these organisms are aiming to achieve. Some of the neural network researchers I talked to argued that we don’t know what they call the ‘objective function’ of worms. To train a neural network we need to be able to tell it what pattern it is meant to produce and, in theory, if we know that pattern, i.e. the objective function, we should be able to reproduce it. There is some validity to this argument — biologists don’t have a full understanding of C. elegans or slime moulds.
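To see concretely what ‘knowing the objective function’ buys you, here is a toy one-parameter example, far simpler than any real neural network: because we can write down the error between the model’s output and the target pattern, we can reduce that error step by step.

```python
# Toy illustration of training against a known objective function: fit a
# single weight w so that w * x matches the target pattern t = 2x, by
# repeatedly stepping down the gradient of the squared error.
def train(inputs, targets, steps=100, learning_rate=0.01):
    w = 0.0
    for _ in range(steps):
        gradient = sum(2 * (w * x - t) * x for x, t in zip(inputs, targets))
        w -= learning_rate * gradient
    return w

inputs, targets = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # the 'pattern' is t = 2x
print(round(train(inputs, targets), 2))              # learns w close to 2.0
```

The force of the objection is that, for a worm or a slime mould, nobody can write down the equivalent of the targets in this toy example.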
Ultimately, however, the ‘tell us the objective function’ argument sidesteps the real issue. Biologists’ experimental work on intelligence reveals more about how the brain works — understanding the connections between neurons and the roles of different parts of the brain — than it reveals about the overall pattern of why our brains have particular objectives. If neuroscientists are going to work together with artificial intelligence experts to create intelligent machines, then this joint work can’t rely on biologists finding the objective function of animals and telling it to the machine-learning experts. Progress in AI must involve biologists and computer scientists working together to understand the details of the brain.
Tests of AI should, in my view, build on the one first proposed by Alan Turing: his famous ‘imitation game’[ix]. A computer passes the Turing Test, or imitation game, if it can fool a human, during a question-and-answer session, into believing that it is in fact a human. This is a tough test and we are a long way from passing it, but we can use the main Turing test as a starting point for a series of simpler tests.
In a less well-cited section of his article from 1950, Turing proposes simulating a child as a step toward simulating an adult. We could consider ourselves to have ‘passed’ a mini imitation game when we are convinced that the computer is a child. My argument is that we should use the rich diversity of organisms on our planet as a series of test cases[x]. Can we reproduce the intelligence exhibited by slime moulds, worms and bees in a computer model? If we can capture their behaviour when moving around their environments and interacting with each other, then we can claim to have produced a model of their general intelligence. Until we produce these models, we should be careful about the claims we make. Based on current evidence, we are modelling intelligence on a level similar to that of a single bacterium.
Well … not quite. Harm van Seijen, the Microsoft researcher who created the Ms. Pac-Man algorithm, was very careful to explain that his model could not be considered as having been built from scratch. He had helped it by telling it to pay attention to ghosts and pellets. In contrast, the bacterium’s knowledge of the dangers and rewards of its environment has been built bottom-up, through evolution.
Harm told me: ‘A lot of people talking about AI are too optimistic, they underestimate how hard it is to build systems.’ Based on his experience of developing Ms Pac-Man and other machine-learning systems, he felt we were really far away from a general form of AI.
Even if we can create full bacterial intelligence, Harm was sceptical how much further we can go. He said, ‘Humans are really good at reusing what we learn in doing one task for a different related task; our current state-of-the-art algorithms are terrible in this.’
Harm saw a risk in giving neural networks fancy names and making big claims.
The founder of the company Harm now works for seems to agree with him. In September 2017, Bill Gates told the Wall Street Journal the subject of AI is not something we need to panic about. He said he disagreed with Elon Musk about the urgency of the potential problems.
So if we are currently mimicking a level of ‘intelligence’ around that of a tummy bug, why has Elon Musk declared AI such a big concern? Why is Stephen Hawking getting so worried about the predictive power of his speech software? What causes Max Tegmark and his buddies to sit in a row and declare, one after another, their belief that superintelligence is on its way? These are smart people, so what is clouding their judgement?
I think there is a combination of factors. One is commercial. It doesn’t hurt DeepMind to have a bit of buzz around artificial intelligence. Demis Hassabis has toned down the emphasis on ‘solving intelligence’ his company had when Google first acquired DeepMind and in recent interviews focuses more on solving mathematical optimisation problems. The work on Go demonstrates that DeepMind has a leading edge on problems like drug discovery and energy optimisation in power networks that require heavy computation to find the best solution out of many available alternatives. Without a bit of hype early on, DeepMind might not have acquired the resources to solve some of these important problems.
You can read more about Artificial Intelligence and all the algorithms in your life in Outnumbered.
[i] Kaminski, Juliane, and Nitzschner, Marie. 2013. ‘Do dogs get the point? A review of dog–human communication ability.’ Learning and Motivation 44, no. 4: 294–302.
[ii] The text that follows is based on the review Chittka, Lars. 2017. ‘Bee cognition.’ Current Biology 27, no. 19: R1049–53.
[iii] Bianconi, Eva, Piovesan, Allison, Facchin, Federica, Beraudi, Alina, Casadei, Raffaella, Frabetti, Flavia, Vitale, Lorenza, et al. 2013. ‘An estimation of the number of cells in the human body.’ Annals of Human Biology 40, no. 6: 463–71.
[iv] Herculano-Houzel, Suzana. 2009. ‘The human brain in numbers: a linearly scaled-up primate brain.’ Frontiers in Human Neuroscience 3.
[v] Scholz, Monika, Dinner, Aaron R., Levine, Erel, and Biron, David. 2017. ‘Stochastic feeding dynamics arise from the need for information and energy.’ Proceedings of the National Academy of Sciences 114, no. 35: 9261–6.
[vi] Boisseau, Romain P., Vogel, David, and Dussutour, Audrey. 2016. ‘Habituation in non-neural organisms: evidence from slime moulds.’ Proceedings of the Royal Society B 283, no. 1829: 20160446.
[vii] I look at this in more detail in this paper: Ma, Qi, Johansson, Anders, Tero, Atsushi, Nakagaki, Toshiyuki, and Sumpter, David J. T. 2013. ‘Current-reinforced random walks for constructing transport networks.’ Journal of the Royal Society Interface 10, no. 80: 20120864.
[viii] Baker, Melinda D., and Stock, Jeffry B. 2007. ‘Signal transduction: networks and integrated circuits in bacterial cognition.’ Current Biology 17, no. 23: R1021–4.
[ix] Turing, Alan M. 1950. ‘Computing machinery and intelligence.’ Mind 59, no. 236: 433–60.
[x] I looked at one such example in the following article. Herbert-Read, James E., Romenskyy, Maxym, and Sumpter, David J. T. 2015. ‘A Turing test for collective motion.’ Biology Letters 11, no. 12: 20150674.