Yet at no point during this development did any single project leap forward by a huge margin. Instead, each paper built on the last by making minor improvements and scaling up the compute involved. Because these minor improvements nonetheless accumulated rapidly, GAN capabilities advanced quickly relative to human lifetimes.
Does anyone have time-series data on the strength of Go-playing AI? Does that follow a similarly gradual trend?
AlphaGo seems much closer to "one project leaps forward by a huge margin." But maybe I'm mistaken about how big an improvement AlphaGo was over previous Go AIs.
Man, I agree with almost all the content of this post, but dispute the framing. This seems like maybe an opportunity to write up some related thoughts about transparency in the x-risk ecosystem.
A few months ago, I had the opportunity to talk with a number of EA-aligned or x-risk-concerned folks working in policy or policy-adjacent roles as part of a grant evaluation process. My views here are informed by those conversations, but I am overall quite far from the action of AI policy work. I try to carefully flag my epistemic state regarding the claims below...
As a relevant piece of evidence here, Jason...