I have a guesstimate for the number of parameters, but not for overall compute or dollar cost:
Each agent was trained on 8 TPUv3s, which cost about $5,000/month according to a quick Google search and which seem to produce 90 TOPS, or about 10^14 operations per second. They say each agent does about 50,000 training steps per second, which works out to about 2 billion operations per step. Each little game they play lasts 900 steps if I recall correctly, which they say is about 2 minutes of subjective time (I imagine they extrapolated from running the game at a speed where the physics simulation looks normal-speed to us). That's about 7.5 steps per subjective second, so each agent requires about 15 billion operations per subjective second.
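Spelling that arithmetic out (a rough sketch; the 90 TOPS, 50,000 steps/sec, and 900-step / 2-minute figures are the assumptions from above):

```python
# Back-of-the-envelope compute per agent, using the figures quoted above.
tpu_ops_per_sec = 90e12        # ~9e13 ops/sec for 8 TPUv3s, rounded to ~1e14 in the text
steps_per_sec = 50_000         # reported training steps per second per agent

ops_per_step = tpu_ops_per_sec / steps_per_sec             # ~2e9 operations per step

steps_per_episode = 900
subjective_secs_per_episode = 2 * 60                        # ~2 minutes of subjective time
steps_per_subjective_sec = steps_per_episode / subjective_secs_per_episode   # 7.5

ops_per_subjective_sec = ops_per_step * steps_per_subjective_sec             # ~1.5e10

print(f"{ops_per_step:.1e} ops/step, {ops_per_subjective_sec:.1e} ops/subjective second")
```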
So... 2 billion operations per step (at roughly 2 FLOPs per parameter per forward pass, that's on the order of a billion parameters) suggests that these things are about the size of GPT-2, i.e. about the size of a rat brain? If we care about subjective time, then it seems the human brain maybe uses 10^15 FLOP per subjective second, which is about 5 OOMs more than these agents.
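For the brain comparison, assuming the ~10^15 FLOP per subjective second figure for the human brain:

```python
import math

brain_flop_per_subjective_sec = 1e15     # assumed figure for the human brain
agent_ops_per_subjective_sec = 1.5e10    # from the estimate above

gap = math.log10(brain_flop_per_subjective_sec / agent_ops_per_subjective_sec)
print(f"~{gap:.1f} orders of magnitude")  # ~4.8, i.e. roughly 5 OOMs
```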
Do you mind sharing your guesstimate of the number of parameters?
Also, do you by any chance have guesstimates of the number of parameters / compute for other systems?
Michael Dennis tells me that population-based training typically sees strong diminishing returns to population size, such that he doubts there were more than one or two dozen agents in each population/generation. I believe this is consistent with AlphaStar, where IIRC the population size was in a similar range.
Anyhow, suppose 30 agents per generation. Then that's a cost of $5,000/mo × 1.3 months × 30 agents = $195,000 to train the fifth generation of agents. The previous four generations were probably quicker and cheaper. In total the price is prob...
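The cost arithmetic, with the 30-agent population size as an assumption:

```python
tpu_cost_per_agent_month = 5_000   # $/month for 8 TPUv3s, per the rough quote above
training_months = 1.3              # roughly the ~40-day final generation
population_size = 30               # assumed, per the diminishing-returns point above

final_generation_cost = tpu_cost_per_agent_month * training_months * population_size
print(f"${final_generation_cost:,.0f}")   # $195,000
```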
Also for comparison, I think this means these models were about twice as big as AlphaStar. That's interesting.
I'm talking about these agents (LW thread here)
I'd love an answer either in operations (MIPS, FLOPS, whatever) or in dollars.
Follow-up question: How many parameters did their agents have?
I just read the paper (incl. appendix) but didn't see them list the answer anywhere. I suspect I could figure it out from information in the paper, e.g. by adding up the sizes of their LSTMs and their various other components (and, for the compute question, multiplying by how long they said they trained for), but I lack the ML knowledge to do this correctly.
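For what it's worth, counting LSTM parameters doesn't require any training-time information; it falls out of the layer sizes. A sketch of the kind of calculation involved, with made-up sizes since I don't have the paper's actual architecture dimensions in front of me:

```python
def lstm_params(input_size: int, hidden_size: int) -> int:
    # A standard LSTM has 4 gates, each with an input-to-hidden matrix,
    # a hidden-to-hidden matrix, and a bias vector (some frameworks use two biases).
    return 4 * (input_size * hidden_size + hidden_size * hidden_size + hidden_size)

# Hypothetical sizes, purely for illustration -- not the paper's actual architecture.
print(lstm_params(input_size=256, hidden_size=512))   # ~1.6 million parameters
```

You'd then add up the parameters of the other components (encoders, policy/value heads, etc.) in the same way to get a total.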
Some tidbits from the paper:
This suggests 120 billion steps of training for the final agents. But elsewhere in the post they state each agent in the final generation experienced 200 billion training steps, so... huh?
Anyhow. Another tidbit:
So, if 100 million steps takes 30 minutes in their setup, and they did 200 billion steps for the final generation, that means the final generation took 30 min × 2,000 = 60,000 minutes ≈ 42 days. Makes sense. So the whole project probably took something like 100 - 200 days, depending on whether generations 1 - 4 were quicker.
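That calculation in code form (assuming the 30-minutes-per-100-million-steps figure scales linearly):

```python
minutes_per_100m_steps = 30
final_generation_steps = 200e9

training_minutes = minutes_per_100m_steps * (final_generation_steps / 100e6)  # 60,000 min
training_days = training_minutes / (60 * 24)
print(f"~{training_days:.0f} days")   # ~42 days
```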
How much does that cost though??? In dollars or FLOPs? I have no idea.
EDIT: It says each agent was trained on 8 TPUv3s. But how many agents were there? I can't find anything about the population size. Maybe I'm not looking hard enough.