Training computation of notable AI systems has doubled every 6 months

Artificial intelligence has advanced rapidly over the past 15 years, fueled by the success of deep learning.
A key reason for this success has been the ability of deep learning systems to keep improving as the inputs used to train them, especially computation, are scaled up.
Before deep learning took off around 2010, the amount of computation used to train notable AI systems doubled about every 21 months. But as you can see in the chart, the pace has accelerated significantly with the rise of deep learning: training computation now doubles roughly every six months.
As one example of this pace, consider AlexNet, the system that marked a breakthrough in computer vision in 2012. Just 11 years later, Google’s Gemini 1.0 Ultra used 100 million times more training computation.
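For readers who want to check the arithmetic, here is a minimal sketch of what a 100-million-fold increase over 11 years implies for the doubling time. The FLOP figures below are assumed estimates, roughly in line with those published by Epoch AI, the provider of the data behind the chart; the article itself does not state exact numbers.

```python
import math

# Assumed training-compute estimates in FLOP (floating-point operations),
# roughly in line with Epoch AI's published figures; not stated in the article.
alexnet_flop = 4.7e17        # AlexNet (2012)
gemini_ultra_flop = 5.0e25   # Gemini 1.0 Ultra (2023)

growth = gemini_ultra_flop / alexnet_flop
print(f"Growth factor: {growth:.1e}")    # ~1.1e+08, i.e. about 100 million times

doublings = math.log2(growth)            # ~26.7 doublings
months_elapsed = 11 * 12                 # ~11 years between the two systems
print(f"Implied doubling time: {months_elapsed / doublings:.1f} months")  # ~5 months
```

This two-point calculation gives a doubling time of about five months, broadly consistent with the roughly six-month doubling estimated across all the systems in the chart.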
To put this in perspective, training Gemini 1.0 Ultra required roughly as much computation as 50,000 high-end graphics cards could perform running nonstop for an entire year.
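That comparison can be sanity-checked in the same back-of-the-envelope way, by asking what sustained throughput each of the 50,000 cards would need to deliver for the numbers to line up. The training-compute figure is again an assumed estimate rather than one from the article.

```python
gemini_ultra_flop = 5.0e25          # assumed training-compute estimate, as above
num_gpus = 50_000
seconds_per_year = 365 * 24 * 3600  # ~3.15e7 seconds

# Sustained throughput each card would need to deliver nonstop for a year
flop_per_card_per_second = gemini_ultra_flop / (num_gpus * seconds_per_year)
print(f"{flop_per_card_per_second:.1e} FLOP/s per card")  # ~3.2e+13, i.e. ~32 teraFLOP/s
```

A few tens of teraFLOP per second of sustained throughput is a plausible figure for a high-end graphics card, which is what makes the one-year comparison work under these assumptions.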
(This Daily Data Insight was written by Charlie Giattino and Veronika Samborska.)