Google set out to design more energy efficient AI but may have found something even better


Neuroscience and artificial intelligence intersect at many points. Discoveries in neuroscience and psychology often inspire AI researchers, and the reverse is true as well. It’s a mutually reinforcing cycle: AI researchers adapt biological structures to machine architectures, while neuroscientists and psychologists apply AI techniques, such as large-scale data processing and the mathematics of AI systems, to their own work.

One big difference between computers and the human brain is energy efficiency. Human brains are remarkably efficient, running on about 10 watts of power, less than a household lightbulb. Compare this to the K supercomputer in Japan, which uses 10 MW (10 million watts)! AI’s energy use is specifically in the spotlight: recent research from UMass Amherst found that training some NLP models produced the carbon-emissions equivalent of “a trans-American flight,” based on the electricity used for compute.

Google has long seen the link between AI and energy efficiency, so it makes sense that Google AI researchers would look for ways to make AI models more energy efficient. In September, a Google team based in Zurich published research into a new type of neural network architecture that aims to mimic the energy efficiency of a biological neuron. The timing of individual neuronal spikes is essential for biological brains to respond quickly to sensory stimuli: information is encoded in a signal specifically through its timing. In brains, timing provides fine-grained instructions, for example, the precise angle by which an animal is trying to move a limb.

Artificial neural networks lack this intrinsic temporal coding ability, which is one of the reasons they are such energy hogs. A lot of energy goes into additional training on larger datasets than would be required if the signals themselves carried timing information. The question is: could an artificial neuron that is sensitive to the timing of its inputs behave more like a biological neuron? To test this idea, the Google researchers created a spiking neural network built from a biologically plausible alpha synaptic function, which encodes information in the relative timing of individual spikes.
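To make the idea concrete, here is a minimal sketch of how an alpha-shaped synaptic response can turn input spike *times* into an output spike time. This is an illustration of the general technique, not the paper’s implementation: the kernel form, threshold, and all parameter values below are assumptions chosen for clarity.

```python
import numpy as np

def alpha_kernel(t, tau=1.0):
    """Alpha-shaped post-synaptic potential: rises, peaks at t == tau,
    then decays. Zero before the input spike arrives.
    (Illustrative parameterization; the paper's exact form may differ.)"""
    s = np.maximum(t, 0.0) / tau
    return s * np.exp(1.0 - s)

def membrane_potential(t, spike_times, weights, tau=1.0):
    """Membrane potential as a weighted sum of alpha kernels,
    one per input spike."""
    return sum(w * alpha_kernel(t - ts, tau)
               for ts, w in zip(spike_times, weights))

def first_spike_time(spike_times, weights, threshold=1.2,
                     tau=1.0, t_max=10.0, dt=0.001):
    """The neuron 'fires' at the first moment the potential crosses
    the threshold; earlier inputs produce an earlier output spike,
    so information lives in *when* the neuron fires, not just whether."""
    ts = np.arange(0.0, t_max, dt)
    v = membrane_potential(ts, spike_times, weights, tau)
    crossed = np.nonzero(v >= threshold)[0]
    return ts[crossed[0]] if crossed.size else None
```

For example, two input spikes arriving near t = 0 push the neuron over threshold sooner than the same spikes arriving near t = 2, so the output spike time itself carries information about the input timing.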

Here’s the surprising part. The spiking network operated in two different regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow but highly accurate regime, and a fast but slightly less accurate one. This sounds eerily like Kahneman’s System 1 and System 2, which describes how humans use emotion and intuition to make fast (but not always accurate) decisions with System 1, and slower (but more deliberative and logical) decisions with System 2.

Making this parallel even more interesting, the spiking neural network spontaneously shifted between the two modes. Early in training, the network exhibited the slow, highly accurate regime, in which almost all neurons fired before the network made a decision. Later in training, it spontaneously shifted into the fast but slightly less accurate regime. The researchers described this as “intriguing,” going on to say that “spiking networks can, in a sense, be ‘deliberative’, or make a snap decision on the spot.” This parallels how System 2 in humans must be consciously “called into action,” because we naturally rely on our instincts (System 1) first.

Image credit: Google AI Blog

It matters anytime an AI researcher gets a surprise. This is very early research: it has only been applied to a limited dataset, and there are trade-offs, including longer training time and a small decrease in relative performance. Still, it offers a fascinating glimpse into the mutually reinforcing cycle of AI and neuroscience. Spiking networks can contribute to understanding the low-level information processing that occurs in the brain and help in the search for neural correlates of cognition. They could also be deployed on neuromorphic hardware to perform rapid, energy-efficient computations.

It may even become possible to integrate artificial spiking networks with biological neural networks, creating interfaces between AI and our brains. That’s a long way out, but it hints at a future where biological and artificial intelligence are seamlessly integrated.

Image credit: Photo by Juan Carlos Becerra on Unsplash

At Sonder Scheme, we help humans win in the age of AI, with practical services, training and tools. We combine deep expertise in AI with decades of experience working with leading organizations including Adobe, Apple, Cisco, Fonterra, Google, Morgan Stanley, National Headstart Association, the New York Times, Quartz, Transpower NZ, US Department of Energy and the World Green Building Council.
