Data Journalism

Driven to the edge of chaos and distraction

How emerging technologies could help curb AI’s hunger for power and make algorithms more ethical

Photo by Tara Winstead from Pexels

Artificial intelligence has power and secrecy problems: an insatiable appetite for data and energy, and deep-learning models that produce inscrutable, unaccountable outputs. A glimmer of hope that AI’s environmental and ethical impact could be counteracted comes from a confluence of emerging technologies.

Investment in AI research has risen steadily since 2010, with a noticeable upward trend in India, China, and Korea before the pandemic struck.

Get the data here

AI’s data diet

That research boom is also driving artificial intelligence’s increasingly ravenous appetite for data and power.

According to the International Energy Agency, demand for data is “rising exponentially”, driving a rapid expansion of global data centres. Combined with power-hungry AI systems, which are estimated to have a substantial environmental impact of their own, the effects are significant.

Two of the three countries that have invested most heavily in AI research since 2010 also have the lowest proportion of power stations using non-renewable fuel.

Get the data here

Over 8,000 data centres crunch the world’s data, according to a briefing for the United States International Trade Commission using proprietary figures from Cloudscene, an Australian data intelligence firm.

A study of global data centre energy consumption shows that comprehensive, accessible figures remain elusive: a lack of location data and a reliance on “extrapolation”-based models leave current estimates uncertain.

This data gap led me to create estimates based on my own analysis of publicly available data. Though this imperfect snapshot gave conspicuously low figures for some industry giants (e.g. Amazon and Google), it provided insight into front-runners like Digital Realty and Equinix.

3 companies have over 100 data centres | Get the data here

Digital Realty leads the US field in building new data centres that meet the Environmental Protection Agency’s Energy Star rating.

Based on Cloudscene’s figures and official EPA data, however, only around 7% of US data centres hold this rating.

Digital Realty also had the highest number of hyperscale data centres: facilities with at least 5,000 servers spread over 10,000 square feet.

Get the data here

AI’s explosive growth means demand for servers with “high-end” Graphics Processing Units (GPUs) has “skyrocketed”, according to Telehouse.

Though GPUs are not the industry’s dominant processor of choice, a significant uptick in those dedicated to AI has created uncertainty about their projected power footprint.

Emerging technology

One innovative AI technology that could mitigate these environmental concerns is reservoir computing at the “edge of chaos”.

Joel Hochstetter, a postgraduate student at the University of Sydney, Australia, recently introduced his team’s research into novel artificial neural networks that seek to mimic the human brain, in an article commissioned by his university. I spoke with him to find out more about the work and its potential applications.

“So essentially I study networks of tiny silver wires…when we stimulate these networks with electricity, we see…interesting electrical switching phenomena.

“One of the aims of the work is to harness these interesting dynamical behaviours…for…information processing tasks in a framework known as reservoir computing.”

Reservoir computing is startlingly diverse and, thanks to its “low training cost” and “fast learning” compared to conventional neural networks, could revolutionise artificial intelligence.

“So if you think about a conventional artificial neural network you have…layers and they’re fully connected. Then you have some inputs at the start and outputs at the end, and you have weights between the layers.

“The way that you train these neural networks is that you go through and kind of fiddle with all the weights and go back and forth through the neural network, to try and optimise some kind of cost function, or basically get the highest accuracy you can.”
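
To make that concrete, here is a minimal sketch in Python of the training loop Joel describes (my illustration, with toy data, not code from his research): every weight in the network is nudged, over and over, to drive down a cost function.

```python
import numpy as np

# A toy fully connected network: 3 inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy targets

W1 = rng.normal(scale=0.5, size=(3, 8))        # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))        # hidden -> output weights

for step in range(500):
    h = np.tanh(X @ W1)       # forward pass through the layers
    pred = h @ W2
    err = pred - y            # the cost being optimised: mean squared error
    # Backward pass: "fiddle with all the weights" via gradient descent.
    dW2 = h.T @ err / len(X)
    dW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1
```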

A reservoir computer removes this intensive training process: the reservoir evolves autonomously “by its own intrinsic dynamics”, and only a trained readout, akin to the “last layer in a neural network”, remains.
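
For contrast, below is a minimal echo state network, one common flavour of reservoir computing (again an illustrative sketch, not the Sydney team’s nanowire hardware). The recurrent “reservoir” is fixed at random and never trained; only the linear readout is fitted, in a single step.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "reservoir": a fixed random recurrent network that is never trained
# and simply evolves under its own intrinsic dynamics as input flows in.
N = 100
W_res = rng.normal(size=(N, N))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep dynamics stable
W_in = rng.normal(scale=0.5, size=(N, 1))

u = np.sin(np.linspace(0, 20 * np.pi, 1000))[:, None]  # toy input signal
target = np.roll(u, -1, axis=0)                        # task: predict next value

states = np.zeros((len(u), N))
x = np.zeros(N)
for t in range(len(u)):
    x = np.tanh(W_res @ x + W_in @ u[t])  # reservoir state update
    states[t] = x

# Only the linear readout -- the "last layer" -- is trained, in one shot,
# by ridge regression: no back-and-forth weight fiddling required.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
prediction = states @ W_out
```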

Experimental forms of reservoir computing show promise for substantial power efficiency savings with tasks like image classification. One study demonstrated a hundred-fold improvement over conventional neural networks with an equivalent level of accuracy.

“I can’t give you any numbers…but if you think about how your regular kind of neural network has many dense layers, then it’s very computationally intensive.”

“If you do this on a regular computer, then you’re limited by…this…von Neumann bottleneck, where your CPU is passing information between RAM and processing. Because you’re having to access the memory and then operate on [it], that slows you down.”

Transistors with memories

One promising technology in development that might overcome contemporary computing limitations, whilst reducing AI’s carbon footprint, is the “memristor”.

“So essentially a memristor is…like an electrical device that has a memory of past stimuli. It’s different to a transistor because, for a transistor, if you turn all the voltages off everything is forgotten.

“In terms of energy efficiency these memristor devices overcome this von Neumann bottleneck that I mentioned, because processing and memory occur at the same location. So you no longer have to pass information back and forth to do processing.”
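
To illustrate that memory effect, the toy simulation below loosely follows the classic HP Labs “linear drift” memristor model (my assumption for illustration; not the silver-nanowire devices Joel studies). The device’s resistance tracks the charge that has flowed through it, and the state freezes, rather than resetting, when the voltage is removed.

```python
import numpy as np

# Toy memristor following a simplified linear-drift model (illustrative values).
R_on, R_off = 100.0, 16_000.0  # resistance bounds in ohms
w = 0.1                        # internal state in [0, 1]
rate = 5e3                     # illustrative drift-rate constant

dt = 1e-3
for t in np.arange(0.0, 2.0, dt):
    v = 1.0 if t < 1.0 else 0.0        # drive for one second, then switch off
    R = w * R_on + (1 - w) * R_off     # resistance blends the two bounds
    i = v / R                          # Ohm's law
    w = np.clip(w + rate * i * dt, 0.0, 1.0)  # state tracks charge flow

# Once v = 0, w (and hence R) stops changing: unlike a transistor,
# the device remembers its past stimuli after the power is cut.
print(f"final state w = {w:.3f}, resistance = {w * R_on + (1 - w) * R_off:.0f} ohms")
```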

Industry has yet to exploit the nascent technology beyond “speeding up existing algorithms”.

“Commercially, I think companies like IBM would be doing research into this but no one’s using this technology yet, because it’s limited by the material side in terms of – it’s not quite reliable enough yet – but it’s promising.”

An almost chaotic solution

Joel’s team discovered that their memristive system performs more effectively when pushed to the brink of disorder, a delicate equilibrium between stability and instability known in the scientific literature as the “edge of chaos”.

“Essentially it’s been hypothesised that, for a wide range of dynamical systems – like brains or gene regulatory networks for example – being close to what’s called the edge of chaos or criticality, where you’re near some kind of phase transition between two different, completely distinct regimes, might give you optimal performance in different ways.

“If you think about a regime where the system is very ordered, [it is]…predictable and many parts of the system are either doing the same thing, or they’re not doing anything at all. By contrast, in a regime where a dynamical system might be chaotic, different parts of the system are uncorrelated and…all over the place in how they’re working. Somewhere in the middle between this kind of unpredictable chaotic regime, and this kind of ordered, slowly changing system, you might be able to have the greatest…coherence…complexity and randomness.”
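
A classic way to see this spectrum is the logistic map, a one-line dynamical system (my illustrative example, not the team’s memristive network). Its Lyapunov exponent is negative in the ordered regime, positive in the chaotic one, and passes through zero near the “edge” at r ≈ 3.57.

```python
import numpy as np

# The logistic map x -> r*x*(1-x) moves from order to chaos as r grows.
# The Lyapunov exponent tells the regimes apart: negative = ordered and
# predictable, positive = chaotic, near zero = the "edge of chaos".
def lyapunov(r, n_steps=10_000, x0=0.4):
    x = x0
    for _ in range(100):                       # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_steps):
        x = r * x * (1 - x)
        total += np.log(abs(r * (1 - 2 * x)))  # log |derivative| of the map
    return total / n_steps

for r in (2.8, 3.5, 3.5699, 4.0):
    lam = lyapunov(r)
    regime = "ordered" if lam < -0.05 else "chaotic" if lam > 0.05 else "near the edge"
    print(f"r = {r}: lambda = {lam:+.3f}  ({regime})")
```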

Memristive technologies driven to the edge of chaos could be applied to myriad complex computational tasks, offering a way to manage the relentless rise of big data and increasingly sophisticated, power-hungry AI.

Joel shared what he hoped the technology might be used for in the future:

“Looking forward to the next 15-20 years…predicting chaotic or unpredictable time series. For example, if you were trying to predict the weather or the stock market. Another thing that we might be able to do with these networks is to handle other kinds of streaming data, like video data.”

I asked Joel if he thought it could dramatically improve the way data centres currently function.

“Definitely. I think so. Once we overcome the initial challenges then I think, looking long-term, this would be the hope: that we could process large data sets, like you described.”

Unlocking AI’s black box

As AI becomes ubiquitous, experts have questioned the trustworthiness of automated systems underpinned by opaque algorithms: steeped in hidden biases, yet so complex that they are inscrutable even to the researchers deploying them.

This has sparked controversy, most notably Google’s recent decision to censor critical research by Dr Timnit Gebru, its former co-lead for ethical AI research, who co-authored a paper examining the “environmental and ethical implications of large-scale AI language models”. The decision raised eyebrows across the industry.

I spoke to David Morales, a PhD student at the University of Granada working on “AI explainability for machine learning algorithms”, whose team discovered a new method of distracting image classification algorithms to make them more transparent using “visual explanation techniques”.

“The problem with machine learning algorithms is that they are known as black boxes because the algorithm gets an input, and you get an output, but you…don’t know why the algorithm made that decision.”

“Visual explanation techniques try to explain machine learning algorithms.”

“So…usually the algorithm learns by itself how to classify an image. We modified this training process in order to force the algorithm to discover new features and…regions of interest.”
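
One of the simplest visual explanation techniques, and a useful sketch of the general idea (though not necessarily the Granada team’s exact method), is a gradient-based saliency map: back-propagate the winning class score to the input pixels to reveal which regions drove the decision.

```python
import torch
import torch.nn as nn

# Gradient-based saliency sketch. The untrained model and random image are
# stand-ins; in practice a trained classifier and a real photo are used.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in input image
scores = model(image)                  # the black box: input in, scores out
top_class = scores[0].argmax().item()
scores[0, top_class].backward()        # push the winning score back to pixels

# Large gradients mark the pixels that most influenced the decision.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)                  # torch.Size([1, 64, 64])
```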

Though the method currently requires human intervention, running a “classical” algorithm before using the new technique to query the results, it could be simplified to improve efficiency.

“Data scientists have to learn to improve visual explanation techniques…to develop new deep learning…and…machine learning models to make them more trustworthy and interpretable.”

New algorithms could be more accountable and energy-efficient, whilst enabling humanity to learn from AI.

“At the end it’s just one algorithm…not two, so I think the energy [footprint] will be much lower.”

“We can learn to perform many tasks in a better way if we can see how artificial intelligence resolved these problems.”

Get involved

The “pernicious effect” of AI bias is being tackled by organisations like the USA’s National Institute of Standards and Technology (NIST), which is about to publish a special paper on the topic, aiming to identify and manage AI biases whilst improving trust in algorithms. NIST wants public input on its proposals, which can be submitted by “completing the template form (in Excel format) and sending it to ai-bias@list.nist.gov.”[1]


Endnotes

[1] Link will open on the NIST website. Author has no affiliation to NIST.