Sam Clarke

Re the argument for "Why internalization might be difficult", I asked Evan Hubinger for his take on your rendition of the argument, and he thinks it's not right.

Rather, the argument that Risks from Learned Optimization makes for why internalization would be difficult is that:

  • ~all models with good performance on a diverse training set probably have to have a complex world model already, which likely includes a model of the base objective,
  • so having the base objective re-encoded in a separate part of the model that represents its objective is just a waste of space/complexity.

Especially since this post is now (rightly!) cited in several introductory AI risk syllabi, it might be worth correcting this, if you agree it's an error.

Re: corrigibility being "anti-natural" in a certain sense - I think I have a better understanding of this now:

  • Eventually, we need to train an AI system capable enough to enable a pivotal act (in particular, actions that prevent the world from being destroyed by any other future AGI)
  • AI systems that are capable enough to enable a pivotal act must be (what Eliezer calls) a “consequentialist”: a system that “searches paths through time and selects high-scoring ones for output”
  • Training an aligned/corrigible/obedient consequentialist is something that Eliezer can’t currently see a way of doing, because it seems like a very unnatural sort of system. This makes him pessimistic about our current trajectory. The argument here seems kinda like a more subtle version of the instrumental convergence thesis. We want to train a system that:
    • (1) searches for (and tries to bring about) paths through time that are robust enough to hit a narrow target (enabling a pivotal act and a great future in general)
    • but also (2) is happy for certain human-initiated attempts to change that target (modify its goals, shut it down, etc.)
  • This seems unnatural and Eliezer can’t see how to do it currently.
  • An exacerbating factor is that even if top labs pursue alignment/corrigibility/obedience, they will either be mistaken in having achieved it (because it’s hard), or honestly panic about not having achieved it and halt, by which point a runner-up who doesn’t understand the importance of alignment/corrigibility/obedience deploys their system, which destroys the world.
  • (This is partly based on this summary)

Strong upvote, I would also love to see more discussion on the difficulty of inner alignment.

which if true should preclude strong confidence in disaster scenarios

Though only for disaster scenarios that rely on inner misalignment, right?

... seem like world models that make sense to me, given the surrounding justifications

FWIW, I don't really understand those world models/intuitions yet:

  • Re: "earlier patches not generalising as well as the deep algorithms" - I don't understand/am sceptical about the abstraction of "earlier patches" vs. "deep algorithms learned as intelligence is scaled up". What seem to be dubbed "patches that won't generalise well" seem to me to be more like "plausibly successful shaping of the model's goals". I don't see why, at some point when the model gets sufficiently smart, gradient descent will get it to throw out the goals it used to have. What am I missing?
  • Re: corrigibility being "anti-natural" in a certain sense - I think I just don't understand this at all. Has it been discussed clearly anywhere else?

(jtbc, I think inner misalignment might be a big problem, I just haven't seen any good argument for it plausibly being the main problem)

If we don’t have the techniques to reliably align AI, will someone deploy AI anyway? I think it’s more likely the answer is yes.

What level of deployment of unaligned benchmark systems do you expect would make doom plausible? "Someone" suggests maybe you think one deployment event of a sufficiently powerful system could be enough (which would be surprising in slow takeoff worlds). If you do think this, is it something to do with your expectations about discontinuous progress around AGI?

A more valid criticism would be that the authors spend most of their time on showing that all of these failure mechanisms are theoretically possible, without spending much time discussing how likely each of them is in practice

Sure, I agree this is a stronger point.

The collection of posts under the threat models tag may be what you are looking for: many of these posts highlight the particular risk scenarios the authors feel are most compelling or likely.

Not really, unfortunately. In those posts, the authors are focusing on painting a plausible picture of what the world looks like if we screw up alignment, rather than analysing the arguments that we should expect alignment failures in the first place - which is what I'm interested in (with the exception of Steven's scenario; he already answered here).

The main problem with distilling this work into, say, a top 3 of most powerful 1-page arguments is that we are not dealing purely with technology-driven failure modes.

I fully agree that thinking through e.g. incentives that different actors will have in the lead up to TAI, the interaction between AI technology and society, etc. is super important. But we can think through those things as well - e.g. we can look at historical examples of humanity being faced with scenarios where the global economy is (mis)aligned with human needs, and reason about the extent to which AI will be different. I'd count all of that as part of the argument to expect alignment failure. Yes, as soon as you bring societal interactions into the mix, things become a whole lot more complicated. But that isn't a reason not to try.

As it stands, I don't think there are super clear arguments for alignment failure that take into account interactions between AI tech and society that are ready to be distilled down, though I tried doing some of it here.

Equally, much of the discussion (and predictions of many leading thinkers in this space) is premised on technical alignment failure being the central concern (i.e. if we had better technical alignment solutions, we would manage to avoid existential catastrophe). I don't want to argue about whether that's correct here, but just want to point out that at least some people think that at least some of the plausible failure modes are mostly technology-driven.

So will you be distilling for an audience of pessimists or optimists?

Neither - just trying to think clearly through the arguments on both sides.

In the particular case you describe, I find the "pessimist" side more compelling, because I don't see much evidence that humanity has really learned any lessons from oil and climate change. In particular, we still don't know how to solve collective action problems.

This has kept me from spending much energy myself on rationally quantifying the odds of different failure mode scenarios. I'd rather spend my energy in finding ways to improve the odds.

Yeah, I'm sympathetic to this line of thought, and I think I personally tend to err on the side of spending too much energy on quantifying odds and not enough on acting.

However, to the extent that you're impartial between different ways of trying to improve the odds (e.g. working on technical AI alignment vs other technical AI safety vs AI policy vs meta interventions vs other cause areas entirely), then it still pays to work out (e.g.) how plausible AI alignment failure is, in order to inform your decision about what to do if you want to have the best chance of helping.

I'm broadly sympathetic to your point that there have been an unfortunate number of disagreements about inner alignment terminology, and it has been and remains a source of confusion.

to the extent that Evan has felt a need to write an entire clarification post.

Yeah, and recently there has been even more disagreement/clarification attempts.

I should have specified this on the top level question, but (as mentioned in my own answer) I'm talking about abergal's suggestion of what inner alignment failure should refer to (basically: a model pursuing a different objective to the one it was trained on, when deployed out-of-distribution, while retaining most or all of the capabilities it had on the training distribution). I agree this isn't crisp and is far from a mathematical formalism, but note that there are several examples of this kind of failure in current ML systems that help to clarify what the concept is, and people seem to agree on these examples.

If you can think of toy examples that make real trouble for this definition of inner alignment failure, then I'd be curious to hear what they are.

Thanks for your reply!

depends on what you mean with strongest arguments.

By strongest I definitely mean the second thing (probably I should have clarified here, thanks for picking up on this).

Also, the strongest argument when you address an audience of type A, say policy makers, may not be the strongest argument for an audience of type B, say ML researchers.

Agree, though I expect it's more like, the emphasis needs to be different, whilst the underlying argument is similar (conditional on talking about your second definition of "strongest").

many distilled collections of arguments already exist, even book-length ones like Superintelligence, Human Compatible, and The Alignment Problem.

Probably I should have clarified some more here. By "distilled", I mean:

  • a really short summary (e.g. <1 page for each argument, with links to literature which discuss the argument's premises)
  • that makes it clear what the epistemic status of the argument is.

Those books aren't short, nor do they focus on working out exactly how strong the case for alignment failure is; rather, they focus on drawing attention to the problem and claiming that more work needs to be done on the current margin (which I absolutely agree with).

I also don't think they focus on surveying the range of arguments for alignment failure, but rather on presenting the author's particular view.

If there are distilled collections of arguments with these properties, please let me know!

(As some more context for my original question: I'm most interested in arguments for inner alignment failure. I'm pretty confused by the fact that some researchers seem to think inner alignment is the main problem and/or probably extremely difficult, and yet I haven't really heard a rigorous case made for its plausibility.)

Answer by Sam Clarke

Arguments for outer alignment failure, i.e. that we will plausibly train advanced AI systems using a training objective that doesn't incentivise or produce the behaviour we actually want from the AI system. (Thanks to Richard for spelling out these arguments clearly in AGI safety from first principles.)

  • It's difficult to explicitly write out objective functions which express all our desires about AGI behaviour.
    • There’s no simple metric which we’d like our agents to maximise - rather, desirable AGI behaviour is best formulated in terms of concepts like obedience, consent, helpfulness, morality, and cooperation, which we can’t define precisely in realistic environments.
    • Although we might be able to specify proxies for those goals, Goodhart’s law suggests that some undesirable behaviour will score very well according to these proxies, and therefore be reinforced in AIs trained on them. (A toy sketch of this dynamic follows this list.)
  • Comparatively primitive AI systems have already demonstrated many examples of outer alignment failures, even on much simpler objectives than what we would like AGIs to be able to do.
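
As a toy illustration of the Goodhart's law point above (my own sketch with made-up objective functions, not something from AGI safety from first principles): an optimiser that hill-climbs on a proxy metric can keep improving its proxy score while the true objective collapses.

```python
# Toy Goodhart's law sketch (hypothetical objectives, purely for intuition):
# the "true" objective rewards balanced progress on two tasks, but the proxy
# only measures task A. Hill-climbing on the proxy keeps the proxy score
# rising while the true objective ends up far worse than where it started.

def true_objective(a, b):
    return a + b - 0.5 * (a - b) ** 2   # rewards total progress, penalises imbalance

def proxy_metric(a, b):
    return a                            # only measures progress on task A

a, b = 1.0, 1.0
for step in range(100):
    a += 0.1                            # pour all effort into the only thing the proxy rewards

print("proxy score:", proxy_metric(a, b))    # 11.0 - looks like steady improvement
print("true score:", true_objective(a, b))   # -38.0 - vs. 2.0 at the start
```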

Arguments for inner alignment failure, i.e. that advanced AI systems will plausibly pursue an objective other than the training objective while retaining most or all of the capabilities they had on the training distribution.[1]

  • There exist certain subgoals, such as "acquiring influence", that are useful for achieving a broad range of final goals. Therefore, these may reliably lead to higher reward during training. Agents might come to value these subgoals for their own sake, and highly capable agents that e.g. want influence are likely to take adversarial action against humans.
  • The models we train might learn heuristics, rather than the complex training objective, that are good enough to score very well on the training distribution but break down under distributional shift. (See the toy sketch after this list.)
    • This could happen if the model class isn't expressive enough to learn the training objective; or because heuristics are more easily discovered (than the training objective) during the learning process.
  • Argument by analogy to human evolution: humans are misaligned with the goal of increasing genetic fitness.
    • The naive version of this argument seems quite weak to me, and could do with more investigation about just how analogous modern ML training and human evolution are.
  • The training objective is a narrow target among a large space of possible objectives that do well on the training distribution.
    • The naive version of this argument also seems quite weak to me. Lots of human achievements have involved hitting very improbable, narrow targets. I think there's a steelman version, but I'm not going to try to give it here.
  • The arguments in Sections 3.2, 3.3 and 4.4 of Risks from Learned Optimization are also relevant, which give arguments for mesa-optimisation failure.
    • (Remember, mesa-optimisation failure is a specific kind of inner alignment failure. It's an inner alignment failure where the learned model is an optimiser, in the sense that it is internally searching through a search space for elements that score highly according to some objective function that is explicitly represented within the system.)
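
To make the "heuristics that break down under distributional shift" point concrete, here is a minimal sketch (my own toy example, not taken from any of the sources above): a linear model trained where a spurious feature perfectly tracks the label leans entirely on that shortcut, and its accuracy collapses when the correlation reverses at "deployment", even though a genuinely predictive feature was available all along.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Binary labels, a noisy causal feature x1, and a spurious feature x2
    that agrees with the label with probability `spurious_corr`."""
    y = rng.integers(0, 2, n)
    x1 = y + 0.5 * rng.normal(size=n)            # causal but noisy
    agrees = rng.random(n) < spurious_corr
    x2 = np.where(agrees, y, 1 - y) * 2.0        # spurious shortcut feature
    return np.column_stack([x1, x2]), y

X_train, y_train = make_data(1000, spurious_corr=1.0)    # shortcut always works in training
X_deploy, y_deploy = make_data(1000, spurious_corr=0.0)  # correlation reversed at deployment

# Fit a least-squares linear "classifier"; it learns to rely on the shortcut x2.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def accuracy(X, y):
    return np.mean((X @ w > 0.5) == y)

print("train accuracy:", accuracy(X_train, y_train))     # ~1.0: the heuristic scores perfectly here
print("deploy accuracy:", accuracy(X_deploy, y_deploy))  # ~0.0: the heuristic points the wrong way
```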

  1. This follows abergal's suggestion of what inner alignment should refer to. ↩︎

I found this post helpful and interesting, and refer to it often! FWIW I think that powerful persuasion tools could have bad effects on the memetic ecosystem even if they don't shift the balance of power to a world with fewer, more powerful ideologies. In particular, the number of ideologies could remain roughly constant, but each could get more 'sticky'. This would make reasonable debate and truth-seeking harder, as well as reducing trusted and credible multipartisan sources. This seems like an existential risk factor, e.g. because it will make coordination harder. (Analogy: vaccine and mask hesitancy during Covid was partly due to insufficient trust in public health advice.) Or, more speculatively, I could imagine an extreme version of sticky, splintered epistemic bubbles leading to moral stagnation/value lock-in.

Minor question on framing: I'm wondering why you chose to call this post "AI takeover without AGI or agency?" given that the effects of powerful persuasion tools you talk about aren't what (I normally think of as) "AI takeover"? (Rather, if I've understood correctly, they are "persuasion tools as existential risk factor", or "persuasion tools as mechanism for power concentration among humans".)

Somewhat related: I think there could be a case made for takeover by goal-directed but narrow AI, though I haven't really seen it made. But I can't see a case for takeover by non-goal-directed AI, since why would AI systems without goals want to take over? I'd be interested if you have any thoughts on those two things.

Thanks for your comment!

Are we sure that given the choice between "lower crime, lower costs and algorithmic bias" and "higher crime, higher costs and only human bias", and we have dictatorial power and can consider long-term effects, we would choose the latter on reflection?

Good point, thanks - I hadn't considered that sometimes it actually would make sense, on reflection, to choose an algorithm pursuing an easy-to-measure goal over humans pursuing incorrect goals. One thing I'd add: if one did delve into the research to work this out for a particular case, an important (but hard to quantify) consideration would be the extent to which choosing the algorithm in that case makes its use more likely to become entrenched, or sets a precedent for the use of such algorithms. This feels important since these effects could plausibly make WFLL1-like things more likely in the longer run (when the harm of using misaligned systems is higher, due to the higher capabilities of those systems).

Note ML systems are way more interpretable than humans, so if they are replacing humans then this shouldn't make that much of a difference.

Good catch. I had the "AI systems replace entire institutions" scenario in mind, but agree that WFLL1 actually feels closer to "AI systems replace humans". I'm pretty confused about what this would look like, though, and in particular whether institutions would retain their interpretability if this happened. It seems plausible that the best way to "carve up" an institution into individual agents/services differs for humans and AI systems. E.g. education/learning is a big part of human institution design - you start at the bottom and work your way up as you learn skills and become trusted to act more autonomously - but this probably wouldn't be the case for institutions composed of AI systems, since the "CEO" could just copy their model parameters to the "intern" :). And if institutions composed of AI systems are quite different to institutions composed of humans, then they might not be very interpretable. Sure, you could assert that AI systems replace humans one-for-one, but if this is not the best design, then there may be competitive pressure to move away from it towards something less interpretable.
