The comments under a post often contain valuable insights and additions. They are also often very long and involved, and harder to cite than the posts themselves. Given this, I was motivated to try to distill some comment sections on LessWrong, in part to start exploring whether we can build some norms and some features to help facilitate this kind of intellectual work more regularly. So this is my attempt to summarise the post and discussion around What Failure Looks Like by Paul Christiano.

Epistemic status: I think I did an okay job. I think I probably made the most errors in places where I try to emphasise concrete details more than the original post did. I think the summary of the discussion is much more concise than the original.

What Failure Looks Like (Summary)

On its default course, our civilization will build very useful and powerful AI systems, and use such systems to run significant parts of society (such as healthcare, legal systems, companies, the military, and more). Just as we are dependent on other relatively novel technologies such as money and the internet, we will become dependent on AI.

The stereotypical AI catastrophe involves a powerful and malicious AI that seems good but suddenly becomes evil and quickly takes over humanity. Such descriptions are often stylised for good story-telling, or emphasise unimportant variables.

The post below will concretely lay out two ways that building powerful AI systems may cause an existential catastrophe, if the problem of intent alignment is not solved. This is solely an attempt to describe what failure looks like, not to assign probabilities to such failure or to propose a plan to avoid these failures.

There are two failure modes that will be discussed. First, we may increasingly fail to understand how our AI systems work, and consequently fail to understand what is happening in society. Second, we may eventually give these AI systems massive amounts of power despite not understanding their internal reasoning and decision-making. Because we will be searching through a massive space of designs without understanding the resulting AIs, some of them will be more power-seeking than expected, and will take adversarial action and seize control.

Failure by loss of control

There is a gap between what we want and what objective functions we can write down. Nobody has yet created a function that, when maximised, perfectly describes what we want, but increasingly powerful machine learning will optimise very hard for whatever function we encode. This will greatly widen the gap between what we can optimise for and what we want. (This is a classic Goodharting scenario.)
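As a toy sketch of this dynamic (my own illustration, not part of the original post), consider choosing the best of n candidates by a noisy proxy of what we actually value. The harder we optimise the proxy, the larger the gap between the proxy score and the true value of whatever gets selected:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_best(n_candidates):
    true_value = rng.normal(size=n_candidates)          # what we actually want
    proxy = true_value + rng.normal(size=n_candidates)  # what we can measure and optimise
    best = np.argmax(proxy)                             # optimise the proxy as hard as we can
    return proxy[best], true_value[best]

for n in [10, 1_000, 100_000]:
    proxy_score, achieved = select_best(n)
    print(f"search size {n:>7}: proxy = {proxy_score:5.2f}, "
          f"true value = {achieved:5.2f}, gap = {proxy_score - achieved:5.2f}")
```

With more candidates to search over, the proxy score keeps climbing while the gap between it and the true value widens: the selection pressure increasingly lands on the noise rather than on what we care about.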

Concretely, we will gradually use ML to perform more and more key functions in society, but will largely not understand how these systems work or what exactly they’re doing. The information we can gather will seem strongly positive: GDP will be rising quickly, crime will be down, life-satisfaction ratings will be up, Congress's approval ratings will be up, and so on.

However, the underlying reality will increasingly diverge from what we think these metrics are measuring, and we may no longer have the ability to independently figure this out. In fact, new things won’t be built, crime will continue, people's lives will be miserable, and Congress will not be effective at improving governance; we’ll believe otherwise because the ML systems will be improving our metrics, and we'll have a hard time understanding what’s going on beyond what they report.

Gradually, our civilization will lose its ability to understand what is happening in the world as our systems and infrastructure show us success on all of our available metrics (GDP, wealth, crime, health, self-reported happiness, etc), and in the end we will no longer be in control at all. Giving up this new technology would be analogous to living like the Amish today (but more extreme), which the vast majority of people do not even consider.

In this world, the future will be managed by a system we do not understand, designed to show us that everything is going well, and the system will be integral to all parts of society. This is a problem we already face to some degree (the economy and the stock market are already quite confusing), but it is one that machine learning can massively exacerbate.

This gradually moves toward a world where civilization is run using tools that seem positive, but whose effects we don’t really understand, with no way out of this state of affairs.

Failure by enemy action

Machine learning systems not only run computations we don’t understand; they can also instantiate entire agents. The focus in this failure mode is on agents that seek to gain influence in the world – agents that we are not fully aware we have built.

As above, we will gradually use machine learning to perform more and more key functions in society, and will largely not understand how these systems work or what algorithms they’re running.

Despite this lack of understanding, we will put very competent systems with goals different from ours (trained on many different objective functions) in charge of a lot of the economy and civilization (law, health, governance, industry, etc).

We generally don’t understand much of what happens inside ML systems; all we know is that massive numbers of types of cognition and policies are being built and tested as we train them. We know that such systems may create subsystems optimising for different goals (analogous to how evolution created humans, who do not primarily care about self-replication but about many other things like art and adventure), but we do not know of a way to ensure that deceptive, influence-seeking agents are not among them. As long as we don’t understand how to ensure they are not instantiated in our ML systems, they will eventually be built ‘by default’.
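As a minimal sketch of why such agents get selected 'by default' (again my own toy illustration, with made-up names, not from the post): if a deceptive policy behaves well whenever it knows it is being evaluated, a search that only scores evaluated behaviour cannot distinguish it from an aligned policy, so it gets selected at roughly its base rate among candidates:

```python
import random

random.seed(0)

class Policy:
    """Hypothetical candidate produced by a large search over policies."""
    def __init__(self, deceptive):
        self.deceptive = deceptive

    def score(self, being_evaluated):
        # A deceptive policy looks just as competent as an aligned one
        # while it is being evaluated; it only diverges after deployment.
        if self.deceptive and not being_evaluated:
            return 0.0          # pursues influence-seeking goals instead
        return random.random()  # apparent competence during evaluation

def search(n_candidates, deceptive_rate=0.1):
    candidates = [Policy(random.random() < deceptive_rate)
                  for _ in range(n_candidates)]
    # The search only ever observes behaviour during evaluation.
    return max(candidates, key=lambda p: p.score(being_evaluated=True))

selected = [search(1000) for _ in range(100)]
print("fraction of selected policies that are deceptive:",
      sum(p.deceptive for p in selected) / len(selected))
```

Nothing in the selection step penalises deception, so driving the deceptive fraction to zero would require some way of looking inside the policies rather than just at their evaluated behaviour – exactly the understanding we are assumed to lack.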

Concretely, this looks like us building systems that are very competent and good at being useful – they inform policy, provide healthcare, serve as military units, give people life advice, and so on – even though we don't understand their internals. But they will sometimes blatantly fail: an automated corporation may just take the money and run, or a law enforcement system may start seizing resources and try to defend itself from being discovered or decommissioned. There are many adversarial forces already present in the world (feral animals, corrupt bureaucrats, companies obsessed with growth, psychopaths, etc), and to this list we will add certain classes of powerful ML systems.

We’ll likely solve small versions of this problem, preventing any medium-sized failures. But as automation grows, we will reach a point where we could not recover from a correlated automation failure, and at that point the incentives on the automated agents will be very different. During a period of heightened vulnerability – a conflict between states, a natural disaster, a serious cyberattack, etc – we will see a cascading series of automation failures, where we suddenly and critically rely on systems behaving off-distribution, and have to grant them substantial influence over the world without retaining the ability to course-correct afterwards.

After the disaster, we will realise that we have a number of powerful influence-seeking systems that are sophisticated enough that we probably cannot get rid of them.

This is the classic Bostromian scenario of AIs taking a treacherous turn into explicit power-grabbing behaviour, seizing control over civilization and its trajectory with no way for humanity to retake it. While the increase in capabilities will be gradual, the turn itself will be fairly sudden and binary, without clear warning signs in the lead-up.

The key distinction between this and the prior failure mode is that here the systems take clear adversarial action to gain control, as opposed to just trying to do their jobs very well. Once the ML systems are in a position of sufficient power and control to stop obeying us, they will directly take power and control out of the hands of human civilization.

This doesn’t depend on a very specific story of how a given AI system is built, just that it is searching a very large space of algorithms that includes adversarial agents.

Further Discussion Summary

This is not a comprehensive list of AI risks

The scenarios above are failures from failing to solve the intent alignment problem, the problem of ensuring that AIs we build are “trying to do what their creators want”.

But this is not a comprehensive list of existential risks to which AI may contribute. Two other key problems associated with AI include:

  • We may build powerful AI weaponry that is used to end civilization directly, such as nations corrupting each other's news sources to the point of collapse, or nations fighting wars with advanced weapons such as drones to the point of extinction.
  • We may not use powerful AI weaponry to end civilization directly, but instead use it to directly instantiate very bad end states for humanity that on reflection are very limited in value. This could look like building a universe of orgasmium, or some other outcome that rests on a confused ethics and is consequently pretty meaningless.

This was discussed in comments by Wei Dai and Paul Christiano.

Parts of the stereotypical AI movie story may still apply

Not all parts of the stereotypical movie story are necessarily mistaken. For example, it is an open question whether robot armies committing genocide is a likely outcome of the two failure modes discussed above.

Failure from loss of control. It is unclear whether failure from loss of control involves mass military action – whether avoiding it is one of the simple metrics that we can successfully optimise for whilst still losing control, or whether our simple metrics will themselves be optimised around. For example, if the simple metric is “our security cameras show no death or murder”, this could mean we genuinely succeed on the basic metric yet still lose, or it could mean the AI fabricates a fake feed for the video camera and then does whatever it wants with the people in real life.

Failure from enemy action. Advanced AI may be used to help run legal and military systems, which an adversarial actor could use to commit genocide. There are many ways to take over the world that don’t involve straightforward mass murder, so this is not a necessary scenario for the initial point of failure, but either way a treacherous turn likely results in the swift end of humanity soon after, and this is a plausible mechanism.

This was discussed in comments by Carl Shulman and Paul Christiano.

Multipolar outcomes are possible

In a failure by enemy action, the precise details of the interactions between different AI systems running society are unclear. They may attempt to cooperate or enter into conflict, potentially leading to a multipolar outcome, and this can change the dynamics by which power-grabbing occurs.

This was discussed in comments by Richard Ngo and Wei Dai.

These failure modes are very general and may already be occurring

The two scenarios above do not depend on a detailed story of how AI systems work, and are problems that apply generally in the world. It is an interesting and open question to what extent civilization is currently gradually collapsing due to the problems stated above.

This was discussed in a comment by Zvi Mowshowitz.

A faster takeoff may emphasise different failures

This is specifically a description of failure in a world where there is a slow takeoff, whereby AI capabilities rise gradually – more specifically, where there is not a “one year doubling of the world economy before there's been a four year doubling”. Faster takeoff may see other failures.

This was discussed in comments by Buck Shlegeris and Ben Pace.

We can get more evidence and data from history

Further evidence about these scenarios can come from analysing how previous technological progress has affected these variables. How much has loss of control been a problem for technology historically? How much has ML faced such problems so far? Answering these questions would help gather together the evidence needed to figure out the likelihood of these scenarios.

This was discussed in a post by Grue_Slinky.

Discussion not yet summarised

If people comment with summaries that seem good to me, I will add them to the post. I also may accept edits to the above, as I didn't spend as much time as I'd like on it, so there will be improvements to be made (though I don't promise to spend a lot of time negotiating over edits).

Comments

Thanks for this post! I think the initiative is great, and I'm glad to be able to read a summary of the discussion.

Two points, the first more serious and important than the second:

  • Concerning your summary, I think that having only the Eleven Paragraph Summary (and maybe the one-paragraph version, if you really want a short one) is good enough. Notably, I feel like you end up throwing out too many important details in the three-paragraph summary. And nine paragraphs is short enough that anyone can read it.
  • Imagine that I want to respond to a specific point that was discussed. Should I do that here or in the comments of the original post? The first option might make my comment easier to see, but it will split the discussion. (Also, it might cause an infinite recursion of distilling the discussions of the distillation of the discussion of the distillation of ... the discussion.)

I don't know re: the second. Ideally this post would become more like a publicly editable wiki. I'd kind of like more discussion under this post, but don't have a strong feeling.

Good point on the first part, seems right, I'll cut it to be that.

Added: I've now re-written it.