One piece of advice I gave to EAs of various stripes in early 2021 was: do everything you can to make the government sane around biorisk, in the wake of the COVID pandemic, because this is a practice-run for AI.

I said things like: if you can't get the world to coordinate on banning gain-of-function research, in the wake of a trillions-of-dollars tens-of-millions-of-lives pandemic "warning shot", then you're not going to get coordination in the much harder case of AI research.

Biolabs are often publicly funded (rather than industry-funded). The economic forces arrayed behind this recklessly foolish and impotent research consist of “half-a-dozen researchers thinking it’s cool and might be helpful”. (While the work that would actually be helpful—such as removing needless bureaucracy around vaccines and investing in vaccine infrastructure—languishes.) Compared to the problem of AI—where the economic forces arrayed in favor of “ignore safety and rush ahead” are enormous and the argument for expecting catastrophe much murkier and more abstract—the problem of getting a sane civilizational response to pandemics (in the wake of a literal pandemic!) is ridiculously easier.

And—despite valiant effort!—we've been able to do approximately nothing.

We're not anywhere near global bans on gain-of-function research (or equivalent but better feats of coordination that the people who actually know what they're talking about when it comes to biorisk would tell you are better targets than gain-of-function research).

The government continues to fund research that is actively making things worse, while failing to put any serious funding towards the stuff that might actually help.

I think this sort of evidence has updated a variety of people towards my position. I think that a variety of others have not updated. As I understand the counter-arguments (from a few different conversations), there are two main reasons that people see this evidence and continue to hold out hope for sane government response:

 

1. Perhaps the sorts of government interventions needed to make AI go well are not all that large, and not that precise.

I confess I don't really understand this view. Perhaps the idea is that AI is likely to go well by default, and all the government needs to do is, like, not use anti-trust law to break up some corporation that's doing a really good job at AI alignment just before they succeed? Or perhaps the idea is that AI is likely to go well so long as it's not produced first by an authoritarian regime, and working against authoritarian regimes is something governments are in fact good at?

I'm not sure. I doubt I can pass the ideological Turing test of someone who believes this.

 

2. Perhaps the ability to cause governance to be sane on some issue is tied very directly to the seniority of the government officials advising sanity.

EAs only started trying to affect pandemic policy a few years ago, and aren't very old or recognized among the cacophony of advisors. But if another pandemic hit in 20 years, the sane EA-ish advisors would be much more senior, and a lot more would get done. Similarly, if AI hits in 20 years, sane EA-ish advisors will be much more senior by then. The observation that the government has not responded sanely to pandemic near-misses is potentially screened off by the inexperience of EAs advising governance.

I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypothesis.

On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore. The claim "we never needed government support anyway" is defensible; but if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)

 

See also: the law of continued failure, and Rob Bensinger's thoughts on the topic.

Comments

The only viable counterargument I've heard to this is that the government can be competent at X while being incompetent at Y, even if X is objectively harder than Y. The government is weird like that. It's big and diverse and crazy. Thus, the conclusion goes, we should still have some hope (10%?) that we can get the government to behave sanely on the topic of AGI risk, especially with warning shots, despite the evidence of it behaving incompetently on the topic of bio risk despite warning shots.

Or, to put it more succinctly: The COVID situation is just one example; it's not overwhelmingly strong evidence.

(This counterargument is a lot more convincing to the extent that people can point to examples of governments behaving sanely on topics that seem harder than COVID. Maybe Y2K? Maybe banning bioweapons? Idk, I'd be interested to see research on this: what are the top three examples we can find, as measured by a combination of similarity-to-AGI-risk and competence-of-government-response?)

I can't seem to figure out the right keywords to Google, but off the top of my head, some other candidates: banning CFCs (maybe easier? don't know enough), the taboo against chemical weapons (easier), and nuclear non-proliferation (probably easier?).

I think Anders Sandberg did research on this at one point, and I recall him summarizing his findings as "things are easy to ban as long as nobody really wants to have them". IIRC, things that went into that category were chemical weapons (they're actually not very effective in modern warfare), CFCs (they were relatively straightforward to replace with equally effective alternatives), and human cloning.

This is my impression as well, but it's very possible that we're looking at the wrong reference class (i.e. it's plausible that many "sane" things large governments have done are not salient). Maybe some of the big social welfare/early environmental protection programs?

On welfare: Bismarck is famous as a social welfare reformer, but those efforts were made largely to undermine socialism and appease the working class, a result any newly formed, volatile state would welcome. I expect social welfare serves a similar purpose in most countries.

On environmentalism today, we see significant European advances in green energy right now, but this is accompanied by large price hikes in natural energy resources, providing quite an incentive. Early large-scale state-driven environmentalism (e.g. Danish wind energy R&D and deployment) was driven by the 70s oil crises in the same fashion. And then there are of course the democratic incentives, i.e. if enough of the population is touting environmentalism, then we'll do it (though 3.5% population-wide active participation seems to work as well).

And that's just describing shifts on the state's side. Even revolutions have been driven by non-ideological incentives. E.g. the American Revolution started with a staged "throwing tea in the ocean" act by tea smugglers, after London reduced the tax on tea for the East India Company and thereby cut into the smugglers' profits (see myths and the article about smugglers' incentives). Perpetuating the revolution also became personally very profitable for Washington.


I'd guess the very slow rate of nuclear proliferation has been much harder to achieve than banning gain-of-function research would be, since, in the absence of intervention, incentives to get nukes would have been much bigger than incentives to do gain-of-function research.

Also, on top of the taboo against chemical weapons, there was the verified destruction of most chemical weapons globally.

I agree that nuclear non-proliferation is probably harder than a ban on gain-of-function. But in this case, the US and USSR both had a strong incentive to discourage nuclear proliferation, and had enough leverage to coerce smaller states not to work on nuclear weapon development (e.g. one or the other was the security provider for the current government of said states).

Ditto with chemical weapons, which seem to have lost battlefield relevance in conflicts between major powers (i.e. they did not actually break the trench-warfare stalemate in WWI even when deployed on a massive scale, and are mainly useful as weapons of terror against weaker opponents). At this point, the moral arguments plus the downside risk of chemical attacks against their own citizens shifted the calculus for the major powers. The major powers were then able to enforce the ban somewhat successfully on smaller countries.

I do think that banning GoF research (especially on pathogens that have already caused, or are likely to cause, a human pandemic) should be roughly as hard as the chemical weapons case: there's not much benefit to doing it, and the downside risk is massive. My guess is that a generally sane response to COVID is harder, since it required getting many things right, though I think the median country's response was much worse than the difficulty of the problem would lead you to believe.

Unfortunately, I think that AGI-relevant research has way more utility than many of the military technologies we've failed to ban. Plus, it's hugely financially profitable, rather than expensive to maintain. So the problem for AGI is harder than the problems we've actually seen solved via international coordination?

I broadly agree with this general take, though I'd like to add some additional reasons for hope:

1. EAs are spending way more effort and money on AI policy. I don't have exact numbers on this, but I do have a lot of evidence in this direction: at every single EAG, there are far more people interested in AI x-risk policy than biorisk policy, and even those focusing on biorisk are not really focusing on preventing gain-of-function (as opposed to say, engineered pandemics or general robustness). I think this is the biggest reason to expect that AI might be different.

I also think there's some degree of specialization here, and having the EA policy people all swap to biorisk would be quite costly in the future. So I do sympathize with the majority of AI x-risk focused EAs doing AI x-risk stuff, as opposed to biorisk stuff. (Though I also do think that getting a "trial run" in would be a great learning experience.)

2. Some of the big interventions that people want are things governments might do anyways. To put it another way, governments have a lot of inertia. Often when I talk to AI policy people, the main reason for hope is that they want the government to do something that already has a standard template, or is something that governments already know how to do. For example, the authoritarian regimes example you gave, especially if the approach is to dump an absolute crapton of money on compute to race harder or to use sanctions to slow down other countries. Another example people talk about is having governments break up or nationalize large tech companies, so as to slow down AI research. Or maybe the action needed is to enforce some "alignment norms" that are easy to codify into law, and that the policy teams of industry groups are relatively bought into. 

The US government already dumps a lot of money onto compute and AI research, is leveling sanctions against China, and has many Senators on board with breaking up large tech companies. The EU already exports its internet regulations to the rest of the world, and it's very likely that it would export its AI regulations as well. So it might be easier to push these interventions through than to convince the government not to give $600k to a researcher to do gain-of-function research, which is what it has been doing for a long time.

(This seems like how I'd phrase your first point. Admittedly, there's a good chance I'm also failing the ideological Turing test on this one.)
 
3. AI is taken more seriously than COVID. I think it's reasonable to believe that the US government takes AI issues more seriously than COVID: for example, AI is seen as more of a national security issue (esp. wrt China), and it's less politicized. And AI (x-risk) is an existential threat to nations, which generally tends to be taken way more seriously than COVID is. So one reason for hope is that policymakers don't really care about preventing a pandemic, but they might actually care about AI, enough that they will listen to the relevant experts and actually try. To put it another way, while there is a general factor of sanity that governments can have, there's also tremendous variance in how competent any particular government is at various tasks. (EDIT: Daniel makes a similar point above.)

4. EAs will get better at influencing the government over time. This is similar to your second point. EAs haven't spent a lot of time trying to influence politics. This isn't just about putting people into positions of power; it's also about learning how to interface with the government in ways that are productive, or how to spend money to achieve political results, or how to convince senior policymakers. It's likely we'll get better at influence over time as we learn what and what not to do, and will leverage our efforts more effectively.

For example, the California Yimbys were much worse at interfacing effectively with the state government and the media when they first started ~10 years ago. But recently they've had many big wins in terms of legalizing housing!

(That being said, it seems plausible to me that EAs should try to get gain-of-function research banned as a trial run, both because we'd probably learn a lot doing it, and because it's good to have clear wins.)

Are any of these cruxes for anyone?

My impression is that 2 and 4 are relatively cruxy for some people? Especially 2. 

I.e. I've heard from some academics that the "natural" thing to do is to join with the AI ethics crowd/Social Justice crowd and try to get draconian anti-tech/anti-AI regulations passed. My guess is their inside-view beliefs are some combination of:

A. Current tech companies are uniquely good at AI research relative to their replacements. I.e., even if the US government destroys $10b of current industry R&D spending and then spends $15b on AI research, this is way less effective at pushing AGI capabilities.

B. Investment in AI research happens in large part due to the expectation of outsized profits. Destroying the expectation of outsized profits, via draconian anti-innovation/anti-market regulation or just by tacking on massive regulatory burdens (which the US/UK/EU governments are very capable of doing), would be enough to curb research interest in this area significantly.

C. There's no real pressure from Chinese AI efforts. I.e., delaying current AGI progress in the US/UK by 3 years just actually delays AGI by 3 years. More generally, there aren't other relevant players besides the big, well-known US/UK labs.

(I don't find 2 super plausible myself, so I don't have a great inside view of this. I am trying to understand this view better by talking to said academics. In particular, even if C is true (i.e. China is not an AI threat), the US federal government certainly doesn't believe this, and is very hawkish vs. China and very invested in throwing money at, or at least not hindering, tech research it believes is necessary for competition.)


As for 4, this is a view I hear a lot from EA policy people? E.g., we used to make stupid mistakes and we're definitely not making them now; we used to all be junior, but now we have X and Y high-ranking positions; and we did a bunch of experimentation and figured out what messaging works relatively better. I think 4 would be a crux for me, personally: if our current efforts to influence government are as good as we can get, I think this route of influence is basically unviable. But I do believe that 4 is probably true to a large extent.

This post seems to make an implicit assumption that the purpose of a warning shot is to get governments to do something. I usually think of a warning shot as making it clear that the risk is real, leading to additional work on alignment and making it easier for alignment advocates to have AGI companies implement specific alignment techniques. I agree that a warning shot is not likely to substitute for a technical approach to alignment.

(EDIT: Whoops, I see Evan made basically this comment already)