TLDR: Humanity — which includes all nations, organisations, and individuals — should limit the growth rate of machine learning training runs from 2020 until 2050 to below 0.2 OOMs/year.

Paris Climate Accords

In the early 21st century, the climate movement converged around a "2°C target", stated in Article 2(1)(a) of the Paris Climate Accords:

Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;
"Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;"(source)

The 2°C target facilitated coordination between nations, organisations, and individuals.

  • It provided a clear, measurable goal.
  • It provided a sense of urgency and severity.
  • It promoted a sense of shared responsibility.
  • It established common knowledge of stakeholders' goals.
  • It helped to align efforts across different stakeholders.
  • It signalled a practical, technical mindset for solving the problem.
  • It created a shared understanding of what success would look like.

The 2°C target was the first step towards coordination, not the last step.

The AI governance community should converge around a similar target.

0.2 OOMs/year target

I propose a fixed target of 0.2 OOMs/year. "OOM" stands for "order of magnitude" and corresponds to a ten-fold increase, so 0.2 OOMs/year corresponds to roughly 58% year-on-year growth. The 0.2 OOMs/year figure was recently suggested by Jaime Sevilla, which prompted me to write this article.
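As a sanity check, here is the arithmetic behind that conversion (a minimal Python snippet; nothing in it is specific to the proposal beyond the 0.2 OOMs/year figure):

```python
import math

ooms_per_year = 0.2

# 10^0.2 ≈ 1.585, i.e. roughly 58% year-on-year growth
growth_factor = 10 ** ooms_per_year

# log10(2) / 0.2 ≈ 1.5 years, i.e. roughly 18 months per doubling
doubling_time_years = math.log10(2) / ooms_per_year

print(f"growth factor: {growth_factor:.3f}")
print(f"doubling time: {doubling_time_years:.2f} years")
```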

  • I do not propose any specific policy for achieving the 0.2 OOMs/year target, because the purpose of the target is to unify stakeholders even if they support different policies.
  • I do not propose any specific justification for the 0.2 OOMs/year target, because the purpose of the target is to unify stakeholders even if they have different justifications.

Here is the statement:

"Humanity — which includes all nations, organisations, and individuals — should limit the growth rate of machine learning training runs from 2020 until 2050 to below 0.2 OOMs/year."

The statement is intentionally ambiguous about how to measure "the growth rate of machine learning training runs". I suspect that a good proxy metric would be the effective training footprint (defined below), but I don't think the proxy metric should be included in the statement of the target itself.

Effective training footprint

What is the effective training footprint?

The effective training footprint, measured in FLOPs, is one proxy metric for the growth rate of machine learning training runs. The footprint of a model is defined, with caveats, as the total number of FLOPs used to train the model since initialisation. A toy accounting sketch follows the list of caveats below.

Caveats:

  • A randomly initialised model has a footprint of 0 FLOPs.
  • If the model is trained from a randomly initialised model using SGD or a variant, then its footprint is the total number of FLOPs used in the training process.
  • If a pre-trained base model is used for the initialisation of another training process (such as unsupervised learning, supervised learning, fine-tuning, or reinforcement learning), then the footprint of the resulting model will include the footprint of the pre-trained model.
  • If multiple models are composed to form a single cohesive model, then the footprint of the resulting model is the sum of the footprints of each component model.
  • If there is a major algorithmic innovation which divides the FLOPs required to train a model to a particular score on downstream tasks by a factor of k, then the footprint of models trained with that innovation is multiplied by the same factor k.
  • This list of caveats to the definition of Effective Training Footprint is non-exhaustive. Future consultations may yield additional caveats, or replace Effective Training Footprint with an entirely different proxy metric.
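To illustrate how these caveats combine, here is a toy accounting sketch in Python. The Model structure, its field names, and the effective_footprint function are hypothetical illustrations of the rules above (under one possible reading of the algorithmic-innovation caveat), not part of the proposed definition:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    direct_training_flops: float       # FLOPs spent training this model itself
    base_models: list = field(default_factory=list)   # pre-trained models it was initialised from
    components: list = field(default_factory=list)    # models composed into this one
    algorithmic_factor: float = 1.0    # k, if an innovation cut the required training FLOPs by k

def effective_footprint(model: Model) -> float:
    """Total FLOPs since initialisation, including inherited and composed footprints."""
    inherited = sum(effective_footprint(m) for m in model.base_models)
    composed = sum(effective_footprint(m) for m in model.components)
    # One possible reading of the caveat: the multiplier applies to the whole
    # footprint of models trained with the innovation.
    return (model.direct_training_flops + inherited + composed) * model.algorithmic_factor

# Example: a fine-tuned model inherits its base model's footprint.
base = Model("base", direct_training_flops=1.0e24)
finetuned = Model("finetuned", direct_training_flops=5.0e22, base_models=[base])
print(f"{effective_footprint(finetuned):.2e} FLOPs")   # 1.05e+24 FLOPs
```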

Fixing the y-axis

  • According to the 0.2 OOMs/year target, there cannot exist an ML model during the year t with a footprint exceeding F(t), where log10 F(t) = 0.2t + c. That means that F(t) = 10^(0.2t + c) FLOPs for some fixed constant c.
  • If we consult EpochAI's plot of training compute during the large-scale era of ML, we see that footprints have been growing at approximately 0.5 OOMs/year.
  • We can use this trend to fix the value of c. In 2022, the frontier footprint was approximately 1.0e+24 FLOPs. Therefore c = 24 − 0.2 × 2022 = −380.4.
  • In other words, log10 F(t) = 0.2t − 380.4 = 24 + 0.2 × (t − 2022).
  • I have used 2022 as an anchor to fix the y-axis (i.e. the constant c). If I had used an earlier date then the 0.2 OOMs/yr target would've been stricter, and if I had used a later date then the 0.2 OOMs/yr target would've been laxer. If the y-axis for the constraint is fixed to the day of the negotiation (the default Schelling date), then stakeholders who want a laxer constraint are incentivised to delay negotiation. To avoid that hazard, I have picked January 1st 2022 to fix the y-axis: I declare 1/1/2022 to be the Schelling date for the 0.2 OOMs/year target. A short calculation sketch follows this list.
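Here is a minimal sketch of that calculation in Python, assuming the 1.0e+24 FLOPs anchor on January 1st 2022 described above:

```python
OOMS_PER_YEAR = 0.2
ANCHOR_YEAR = 2022          # the Schelling date: January 1st 2022
ANCHOR_LOG10_FLOPS = 24.0   # ~1.0e+24 FLOPs, the approximate frontier footprint in 2022

# log10 F(t) = 0.2 * t + c, so the anchor pins down c
c = ANCHOR_LOG10_FLOPS - OOMS_PER_YEAR * ANCHOR_YEAR   # ≈ -380.4

def max_log10_footprint(year: float) -> float:
    """Maximum permissible log10 training footprint in a given year."""
    return OOMS_PER_YEAR * year + c   # equivalently 24.0 + 0.2 * (year - 2022)

print(f"{c:.1f}")                          # -380.4
print(f"{max_log10_footprint(2022):.1f}")  # 24.0 -> 1.0e+24 FLOPs
print(f"{max_log10_footprint(2030):.1f}")  # 25.6 -> ~3.98e+25 FLOPs
```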

Year-by-year limits

In year t, all models must have a log10-footprint below 24 + 0.2 × (t − 2022).

Year    Maximum training footprint (log10 FLOPs)    Maximum training footprint (FLOPs)
2020    23.6                                        3.98E+23
2021    23.8                                        6.31E+23
2022    24.0                                        1.00E+24
2023    24.2                                        1.58E+24
2024    24.4                                        2.51E+24
2025    24.6                                        3.98E+24
2026    24.8                                        6.31E+24
2027    25.0                                        1.00E+25
2028    25.2                                        1.58E+25
2029    25.4                                        2.51E+25
2030    25.6                                        3.98E+25
2031    25.8                                        6.31E+25
2032    26.0                                        1.00E+26
2033    26.2                                        1.58E+26
2034    26.4                                        2.51E+26
2035    26.6                                        3.98E+26
2036    26.8                                        6.31E+26
2037    27.0                                        1.00E+27
2038    27.2                                        1.58E+27
2039    27.4                                        2.51E+27
2040    27.6                                        3.98E+27
2041    27.8                                        6.31E+27
2042    28.0                                        1.00E+28
2043    28.2                                        1.58E+28
2044    28.4                                        2.51E+28
2045    28.6                                        3.98E+28
2046    28.8                                        6.31E+28
2047    29.0                                        1.00E+29
2048    29.2                                        1.58E+29
2049    29.4                                        2.51E+29
2050    29.6                                        3.98E+29
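The table above can be reproduced with the following short script, assuming the limit formula log10 F(t) = 24 + 0.2 × (t − 2022) derived in the previous section:

```python
# Print the year-by-year caps implied by the 0.2 OOMs/year target, anchored at 2022.
for year in range(2020, 2051):
    log10_cap = 24.0 + 0.2 * (year - 2022)
    print(f"{year}  {log10_cap:.1f}  {10 ** log10_cap:.2E}")
```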

Implications of the 0.2 OOMs/year target

  • Because 10^0.2 ≈ 1.58, the maximum footprint grows by roughly 58% every year.
  • 0.2 OOMs/year is equivalent to a doubling time of 18 months.
  • Every decade, the maximum permissible footprint increases by a factor of 100.
  • 0.2 OOMs/year was the pre-AlexNet growth rate in ML systems.
  • The current growth rate is 0.5 OOMs/year, which is 2.5 times faster than the target rate.
  • At the current 0.5 OOMs/year growth rate, after 10 years we would have ML training runs which are 100,000x larger than existing training runs. Under the 0.2 OOMs/year growth rate, this growth would be spread over 25 years instead.
  • Comparing 0.2 OOMs/year target to hardware growth-rates:
    • Moore's Law states that the number of transistors per integrated circuit doubles roughly every 2 years.
    • Koomey's Law states that the FLOPs-per-Joule doubled roughly every 1.57 years until 2000, whereupon it began doubling roughly every 2.6 years.
    • Huang's Law states that the growth-rate of GPU performance exceeds that of CPU performance. This is a somewhat dubious claim, but nonetheless I think the doubling time of GPUs is longer than 18 months.
    • In general, the 0.2 OOMs/year target is faster than the current hardware growth-rate.
  • On March 15 2023, OpenAI released GPT-4, which was trained with an estimated 2.8e+25 FLOPs. If OpenAI had followed the 0.2 OOMs/year target, then GPT-4 would've been released on March 29 2029. This is because log10(2.8e+25) ≈ 25.45, and 24 + 0.2 × (t − 2022) ≥ 25.45 requires t ≥ 2029.24, i.e. late March 2029 (see the sketch after this list).
  • The 0.2 OOMs/year target would therefore be an effective moratorium on models exceeding GPT-4's footprint until 2029. Nonetheless, the moratorium would still allow an AI Summer Harvest, in which the impact of ChatGPT-3.5/4 steadily diffuses across the economy until a new general equilibrium is reached where...
  1. People have more money to spend.
  2. The products and services are more abundant, cheaper, and of a higher quality.
  3. People have more leisure to enjoy themselves.
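For reference, here is a short sketch of the GPT-4 calculation from the list above. The 2.8e+25 FLOPs figure is the estimate cited in the text, and the cap formula assumes the January 1st 2022 anchor from earlier:

```python
import math
from datetime import date, timedelta

GPT4_FLOPS = 2.8e25                     # estimated GPT-4 training compute cited above
log10_gpt4 = math.log10(GPT4_FLOPS)     # ≈ 25.45

# Earliest year t satisfying the cap: 24.0 + 0.2 * (t - 2022) >= log10_gpt4
t = 2022 + (log10_gpt4 - 24.0) / 0.2    # ≈ 2029.24

whole_year = int(t)
compliant_date = date(whole_year, 1, 1) + timedelta(days=(t - whole_year) * 365.25)
print(round(t, 2), compliant_date)      # 2029.24, late March 2029
```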