I mostly agree with this post's characterization of my position.
Places where I disagree with your characterization of my view:
Place where I disagree with your view:
Take-aways that I agree with:
We will need to have a better understanding of deliberation in order to be confident in any alignment scheme. (I prefer a more surgical approach than most MIRI folk, trying to figure out exactly what we need to know rather than trying to have an expansive understanding of what good reasoning looks like.)
I can see two kinds of understanding of deliberation that could help us achieve confidence in alignment schemes: (1) a white-box understanding, where we work out at the philosophical/mathematical/algorithmic levels what ideal deliberation actually is and argue that an AI implements it, and (2) a black-box understanding, where we define ideal deliberation by reference to what some group of humans would do in an idealized environment, and show that an implementation induces the same (or close enough) distribution of outcomes.
(Note that I'm using the same white-box/black-box terminology as here, but the meaning is a bit different since I'm now applying the terms to understandings of deliberation as opposed to implementations of deliberation.)
The problem with 2, which I see you as implicitly advocating (since I don't see how else you hope to eventually be confident in your alignment scheme), is that I have a low prior for any specific implementation of deliberation (such as meta-execution) producing distributions of outcomes acceptably close to the ideal (unless it's just a very close direct approximation of the ideal process like using a group of highly accurate uploads), and I don't currently see what kind of arguments or evidence we can hope to produce in a relevant timeframe that would make me update enough to become confident. (Aside from something like achieving a white-box understanding of deliberation and then concluding that both the black-box definition of "ideal reasoning" and the actual implementation would be able to approximate the white-box definition of "ideal reasoning", but presumably that's not what you have in mind.)
Perhaps you think empirical work would help, but even if you're able to gather a lot of data on what black-box ideal reasoning eventually produces (which you can't for certain types of reasoning, e.g., philosophical reasoning) and are able to compare that with the AI alignment scheme, how would you rule out the possibility of edge cases where the two don't match?
Another possibility is that you think black-box ideal reasoning would first decide to follow a set of rules and procedures before doing any further deliberation, and that would make it easier for an AI alignment scheme to approximate the ideal. But A) the group of humans would likely spend a lot of time exploring different alternatives for how to do further deliberation, and B) whatever rules/procedures they end up adopting would likely include ways to back out of those rules and/or adopt new rules. For 2, you would need to predict with justifiably high confidence what rules/procedures they eventually converge upon (if they in fact converge instead of, e.g., diverging depending on what question they face or which humans we start with), and again I don't see how you hope to do that within the time we likely have available.
An aligned AI doesn't need to share human preferences-on-reflection. It just needs to (a) be competent, and (b) help humans remain in control of the AI while carrying out whatever reflective process they prefer (including exploration of different approaches, reconciliation of different perspectives, etc.).
So all I'm hoping is to show something about how deliberation can (a) be smarter, while (b) avoiding introducing "bad" (incorrigible?) optimization. I don't think this requires either your #1 or #2.
So all I’m hoping is to show something about how deliberation can (a) be smarter, while (b) avoiding introducing “bad” (incorrigible?) optimization. I don’t think this requires either your #1 or #2.
If you try to formalize (a) and (b), what does that look like, and how would you reach the conclusion that an AI actually has (a) and (b)? My #1 and #2 are things I can come up with when I try to think how we might come to have justified confidence that an AI has good reasoning, and I'm not seeing what other solutions pop up if we say an aligned AI doesn’t need to share human preferences-on-reflection but only needs to be competent and help humans remain in control.
Competence can be an empirical claim, so (a) seems much more straightforward.
Is there some sense in which "argue that a process is normatively correct" is more of a solution than "argue that a process doesn't optimize for something 'bad'"? I agree that both of the properties are hard to formalize or achieve, but the second one currently looks easier to me (and may even be a subproblem of the first one; e.g., my current best guess is that good reasoners need a cognitive immune system).
Competence can be an empirical claim, so (a) seems much more straightforward.
Once again I'm having trouble seeing something that you think is straightforward. If an AI can't determine my values-upon-reflection, that seems like a kind of incompetence. If it can't do that, it seems likely there are other things it can't do. Perhaps you can define "competence" in a way that excludes that class of things and argue that's good enough, but I'm not sure how you'd do that.
Is there some sense in which “argue that a process is normatively correct” is more of a solution than “argue that a process doesn’t optimize for something ‘bad’”?
I think we might eventually be able to argue that a process is normatively correct by understanding it at each of philosophical/mathematical/algorithmic levels, but that kind of white-box understanding does not seem possible if your process incorporates an opaque object (e.g., a machine-learned imitation of human behavior), so I think the best you can hope to achieve in that case is a black-box understanding where you show that your process induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment.
If your “argue that a process doesn’t optimize for something ‘bad’” is meant to be analogous to my white-box understanding, it seems similarly inapplicable due to the presence of the opaque object in your process. If it's meant to be analogous to my black-box understanding, I don't see what the analogy is. In other words, what are you hoping to show instead of "induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment"?
but that kind of white-box understanding does not seem possible if your process incorporates an opaque object (e.g., a machine-learned imitation of human behavior), so I think the best you can hope to achieve in that case is a black-box understanding where you show that your process induces the same (or close enough) distribution of outcomes as a group of humans in an ideal environment
Suppose I use normatively correct reasoning, but I also use a toaster designed by a normal engineer. The engineer is less capable than I am in every respect, and I watched them design and build the toaster to verify that they didn't do anything tricky. Then I verified that the toaster does seem to toast toast. But I have no philosophical or mathematical understanding of the toaster-design-process. Your claim seems to be that there are no rational grounds for accepting use of the toaster, other than to argue that accepting it doesn't change the distribution of outcomes (which probably isn't true, since it e.g. slightly changes the relative influence of different internal drives by changing what food I eat). Is that right?
What if they designed a SAT solver for me? Or wrote a relativity textbook? Do I need to be sure that nothing like that happens in my deliberative process, in order to have confidence in it?
If those cases don't seem analogous, can you be more clear about what you mean by "opaque," or what quantitative factors make an opaque object problematic? So far your argument doesn't seem to rely on any properties of the opaque object at all.
(This case isn't especially analogous to the deliberative process I'm interested in. I'm bringing it up because I don't think I yet understand your intuitive dichotomy.)
When you wrote "suppose I use normatively correct reasoning" did you mean suppose you, Paul, use normatively correct reasoning, or suppose you are an AI who uses normatively correct reasoning? I'll assume the latter for now.
Generally, the AI would use its current reasoning process to decide whether or not to incorporate new objects into itself. I'm not sure what that reasoning process will do exactly, but presumably it would involve something like looking for and considering proofs/arguments/evidence to the effect that incorporating the new object in some specified way will allow it to retain its normatively correct status, or the distribution of outcomes will be sufficiently unchanged.
If an AI uses normatively correct reasoning, the toaster shouldn't change the distribution of outcomes, since letting food influence its reasoning process is obviously not a normative thing to do, and it should be easy to show that there is no influence. For the SAT solver, the AI should be able to argue that it's safe to use it for certain purposes, because it can verify the answer that the solver gives, and for the relativity textbook, it may be able to directly verify that the textbook doesn't contain anything that can manipulate or bias its outputs.
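(As a concrete illustration of the SAT-solver case: checking a candidate assignment against the formula is cheap and doesn't require trusting the solver at all. A minimal sketch in Python, with a made-up formula and a hypothetical untrusted solver's output:)

```python
# Minimal sketch: independently checking an answer from an untrusted SAT solver.
# The formula is in CNF, DIMACS-style: each clause is a list of non-zero ints,
# where k means variable k and -k means its negation.

def check_assignment(cnf, assignment):
    """Return True iff `assignment` (dict: var -> bool) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 OR NOT x2) AND (x2 OR x3)
cnf = [[1, -2], [2, 3]]

# Pretend this candidate came from an arbitrarily untrustworthy solver.
candidate = {1: True, 2: True, 3: False}

# We only act on the answer if our own check passes; the solver itself is never trusted.
print(check_assignment(cnf, candidate))  # True
```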
I guess by "opaque" I meant a complex object that wasn't designed to be easily reasoned about, so it's very hard to determine whether it has a given property that would be relevant to showing that it can be used safely as part of an AI's reasoning process. For example, a typical microprocessor is an opaque object because it may come with hard-to-detect flaws and backdoors, whereas a microprocessor designed to be provably correct would be a transparent object.
(Does that help?)
Suppose that I, Paul, use a toaster or SAT solver or math textbook.
I'm happy to drop the normatively correct reasoning assumption if the counterfactual begs the question. The important points are:
I'm not convinced that any of your arguments would be sufficient to trust a toaster / textbook / SAT solver:
and it should be easy to show that there is no influence
Having new memories will by default change the output of deliberation, won't it?
For the SAT solver, the AI should be able to argue that it’s safe to use it for certain purposes, because it can verify the answer that the solver gives
Satisfying instances produced by an arbitrarily powerful adversary don't seem safe for anything.
and for the relativity textbook, it may be able to directly verify that the textbook doesn’t contain anything that can manipulate or bias its outputs
I don't see how this would fit into your framework without expanding it far enough that it could contain the kind of argument I'm gesturing at (by taking bad = "manipulating or biasing its outputs").
If we're talking about you, Paul, then what's different is that since you don't have a good understanding of what normatively correct reasoning is, you can only use black-box type reasoning to conclude that certain things are safe to do. We'd happily use the opaque toaster or textbook because we have fairly strong empirical evidence that doing so doesn't change the distribution of outcomes much. Using a toaster might change a particular outcome vs not using it, but there seems to be enough stochasticity in a human deliberation process that it wouldn't make a significant difference to the overall distribution of outcomes. With a textbook, you reason that with enough time you'd reproduce its contents yourself, and whatever actual difference there is between reading the textbook and figuring out relativity by yourself is again lost in the overall noise of the deliberative process. (We have fairly strong empirical evidence that reading such a textbook written by another human is unlikely to derail our deliberative process in a way that's not eventually recoverable.)
One reply to this might be that we can hope to gather an amount of empirical evidence about meta-execution that would be comparable to the evidence we have about toasters and textbooks. I guess my concern there is that we'll need much stronger assurances if we're going to face other superintelligent AIs in our environment. For example, that textbook might contain subtle mistakes that cause you to reason incorrectly about certain questions (analogous to edge-case questions where your meta-execution would give significantly different answers than your reflective equilibrium), but there is no one in your current environment who can exploit such errors.
ETA: Another reason to be worried is that, compared to humans using things produced by other humans, it seems reasonable to suspect (have a high prior) that meta-execution's long-run safety can't be extrapolated well from what it does in the short term, since meta-execution is explicitly built out of a component that emphasizes imitation of short-term human behavior while throwing away internal changes that might be very relevant to long-run outcomes. (Again, this may be missing your point about not needing to reproduce values-upon-reflection, but I just don't understand how your alternative approach to understanding deliberation would work if you tried to formalize it.)
Satisfying instances produced by an arbitrarily powerful adversary don’t seem safe for anything.
Not sure if this is still relevant to the current interpretation of your question, but couldn't you use it to safely break encryption schemes, at least?
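(To sketch the kind of independent check I have in mind: a toy repeating-key XOR "cipher" where an untrusted solver hands us a candidate key, and we accept it only after verifying it ourselves against a known plaintext/ciphertext pair. The cipher and names here are made up purely for illustration.)

```python
# Minimal sketch: accepting a key found by an untrusted search process only after
# verifying it ourselves. Toy repeating-key XOR "cipher"; real schemes differ, but
# the point is that the check is cheap and doesn't rely on trusting the solver.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

known_plaintext = b"attack at dawn"
ciphertext = xor_cipher(known_plaintext, b"k3")  # encrypted under a key we "don't know"

# Suppose an adversarial solver hands us this candidate key.
candidate_key = b"k3"

# Accept only if re-encrypting the known plaintext reproduces the ciphertext;
# a wrong or malicious candidate simply fails this check.
if xor_cipher(known_plaintext, candidate_key) == ciphertext:
    print("candidate key verified")
else:
    print("candidate key rejected")
```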
I want to add that I think meta-execution, in particular, will have problems with deliberation for the same reason that it will have problems with learning: when you hear an argument or explanation (like when you learn), your mind is changed in ways that are hard or impossible to articulate. If every 10 minutes (or 1 day, or what have you) you throw away the part of what your brain does that it can't write down, it seems highly plausible that in many cases you won't be able to reproduce what the brain does over a longer period of time, especially if you're trying to match its natural trajectory, as opposed to trying to hit some objectively measurable benchmark.
Given that ALBA was not meant to be a realistic aligned AI design in and of itself, but just a way to get insights into how to build a realistic aligned AI (which I hadn't entirely understood until now), I wonder if it makes sense to try to nail down all the details and arguments for it before checking to see if you generated any such insights. If we assume that aligned learning roughly looks like ALBA, what does that tell you about what a more realistic aligned AI looks like? It seems worth asking this, in case you, for example, spend a lot of time figuring out exactly how capability amplification could work, and then it ends up that capability amplification isn't even used in the final aligned AI design, or in case designing aligned AI out of individual AI components doesn't actually give you much insight into how to design more realistic aligned AI.
Thank you for writing this. I'm trying to better understand Paul's ideas, and it really helps to see an explanation from a different perspective. Also, I was thinking of publicly complaining that I know at least four people who have objections to Paul's approach that they haven't published anywhere. Now that's down to three. :)
I wonder if you can help answer some questions for me. (I'm directing these at Paul too, but since he's very busy I can't always expect an answer.)
Why does Paul think that learning needs to be "aligned" as opposed to just well-understood and well-behaved, so that it can be safely used as part of a larger aligned AI design that includes search, logic, etc.? He seems to be trying to design an entire aligned AI out of "learning", which makes it seem like his approach is an alternative to MIRI's (Daniel Dewey said this recently on the EA Forum, for example), while at the same time saying "But we can and should try to do the same for other AI components; I understand MIRI’s agent foundations agenda as (mostly) addressing the alignment of these other elements." If he actually thinks that his approach and MIRI's are complements, why didn't he correct Daniel? I'm pretty confused here.
ETA: I found a partial answer to the above here. To express my understanding of it, Paul is trying to build an aligned AI out of only learning because that seems easier than building a realistic aligned AI and may give him insights into how to do the latter. If he interprets MIRI as doing the analogous thing starting with other AI components (as he seems to according to the quote in the above paragraph), then he surely ought to view the two approaches as complementary, which makes it a bigger puzzle why he didn't contradict Daniel when Daniel said "if an approach along these lines is successful, it doesn't seem to me that much room would be left for HRAD to help on the margin". (Maybe he didn't read that part, or his interpretation of what MIRI is doing has changed?)
If Paul does not think ALBA is a realistic design of an entire aligned AI (since it doesn't include search/logic/etc.) what might a realistic design look like, roughly?
Why does Paul think learning "poses much harder safety problems than other AI techniques under discussion"?
Paul is beginning to do empirical work on capability amplification (as he told me recently via email). Do you think that's a good alternative to trying to make further theoretical progress?
Why does Paul think that learning needs to be “aligned” as opposed to just well-understood and well-behaved, so that it can be safely used as part of a larger aligned AI design that includes search, logic, etc.?
I mostly think it should be benign / corrigible / something like that. I think you'd need something like that whether you want to apply learning directly or to apply it as part of a larger system.
If Paul does not think ALBA is a realistic design of an entire aligned AI (since it doesn’t include search/logic/etc.) what might a realistic design look like, roughly?
You can definitely make an entire AI out of learning alone (evolution / model-free RL), and I think that's currently the single most likely possibility even though it's not particularly likely.
The alternative design would integrate whatever other useful techniques are turned up by the community, which will depend on what those techniques are. One possibility is search/planning. This can be integrated in a straightforward way into ALBA; I think the main obstacle is security amplification, which needs to work for ALBA anyway and is closely related to empirical work on capability amplification. On the logic side it's harder to say what a useful technique would look like other than "run your agent for a while," which you can also do with ALBA (though it requires something like these ideas).
which makes it seem like his approach is an alternative to MIRI’s
My hope is to have safe and safely composable versions of each important AI ingredient. I would caricature the implicit MIRI view as "learning will lead to doom, so we need to develop an alternative approach that isn't doomed," which is a substitute in the sense that it's also trying to route around the apparent doomedness of learning but in a quite different way.
Thanks, so to paraphrase your current position, you think once we have aligned learning it doesn't seem as hard to integrate other AI components into the design, so aligning learning seems to be the hardest part. MIRI's work might help with aligning other AI components and integrating them into something like ALBA, but you don't see that as very hard anyway, so it perhaps has more value as a substitute than a complement. Is that about right?
One possibility is search/planning. This can be integrated in a straightforward way into ALBA
I don't understand ALBA well enough to easily see extensions to the idea that are obvious to you, and I'm guessing others may be in a similar situation. (I'm guessing Jessica didn't see it for example, or she wouldn't have said "ALBA competes with adversaries who use only learning" without noting that there's a straightforward extension that does more.) Can you write a post about this? (Or someone else please jump in if you do see what the "straightforward way" is.)
This post summarizes my thoughts on Paul Christiano's agenda in general and ALBA in particular.
(note: at the time of writing, I am not employed at MIRI)
(in general, opinions expressed here are strong and weakly held)
AI alignment research as strategy
Roughly, AI alignment is the problem of using a system of humans and computers to do a good thing. AI alignment research will tend to look at the higher levels of what the system of humans and computers is doing. Thus, it is strategy for doing good things with humans and computers.
"Doing good things with humans and computers" is a broad class. Computing machines have been around a long time, and systems of humans involving rules computed by humans have been around much longer. Looking at AI alignment as strategy will bring in intuitions from domains like history, law, economics, and political philosophy. I think these intuitions are useful for bringing AI strategy into near mode.
Paul Christiano's agenda as strategy
Paul Christiano's agenda "goes for the throat" in ways that the other agendas, such as the agent foundations agenda, do not. Thus, it yields an actual strategy, rather than a list of research questions about strategy. I will now analyze Paul Christiano's research agenda as strategy. Here I caricature Paul Christiano's strategic assumptions:
These assumptions are a caricature of Paul's assumptions in that, though they aren't completely reflective of Paul's actual views, they strongly state the background philosophy implied by Paul's research agenda.
From these assumptions, it is possible to derive that a decent strategy for good actors is to anticipate bad actors' expansion strategies and copy them while retaining their own (good) payload. This corresponds to Paul's general approach: look at a proposed system that would do a bad thing, then create an equally powerful system that would instead do a good thing.
Here are some thoughts on these assumptions:
ALBA competes with adversaries who use only learning
Paul Christiano has acknowledged that ALBA only works for aligning systems that work through learning, and does not work for aligning systems that use hard-to-learn forms of cognition (e.g. search, logic). Roughly, ALBA will copy the part of a system's strategy that is explainable by the system's learning, i.e. that is explained by the system modelling the world as a set of nested feedback loops. Object recognition is fine, because there's feedback for that; philosophy is not fine, because there isn't feedback for that. (Humans can provide feedback about philosophy, but human philosophy is not fully explained by this feedback, e.g. the human giving the feedback needs their own strategy.)
The world isn't a set of nested feedback loops. Agents don't just learn, they reason. The philosophy of empiricism is more appropriate for an investor, who tries to pick up on signs of a successful project based on limited information and reasoning, than for an entrepreneur, who engages in object-level reasoning in order to actually run a successful project.
To be more concrete, there are multiple ways humans could believe that some AI system would be quite powerful without this belief being justified solely on the basis that the AI system does learning:
None of these are the kind of "ML technology" that ALBA uses, so they could not be analyzed effectively enough that ALBA could produce an equivalent.
The last time I talked with Paul, he was aware of these problems. My recollection of his thoughts on these problems is:
ALBA is not amenable to formal analysis
Some parts of ALBA can't be formally analyzed very well with our current level of philosophical sophistication. For example:
Notably, the kind of agents for which analyzing capability amplification makes sense are not the kind of agents that ALBA aligns. ALBA aligns learning systems, and humans' reasoning over time is deliberation rather than learning. I think this is the root of a lot of the difficulty of formally analyzing ALBA.
Plausibility arguments are hard to act on
ALBA is based on plausibility arguments: it's plausible that capability amplification works, it's plausible that informed oversight works, and it's plausible that red teams work. These questions are actually pretty hard to research.
Recently I have updated towards a generalization of this: plausibility arguments are very often hard to act on, since the belief that "X is plausible" does not usually come attached to a strong model implying X is probably true, and these strong models are necessary for generating proofs. (Sometimes, plausibility arguments are sufficient to act on, especially in "easy" domains such as mathematics and puzzle games where there exist search procedures for proving or disproving X. But AI alignment doesn't seem like one of these domains to me.)
Agreement with many of Paul's intuitions
This post so far gives the impression that I strongly disagree with Paul. I do disagree with Paul, but I also strongly agree with many of his intuitions:
Conclusion
I have presented my thoughts on ALBA. ALBA has significantly improved my conceptual understanding of AI alignment, but is seriously incomplete and difficult to make further theoretical progress on. I don't know how to make much additional theoretical progress on the alignment problem at this point, but perhaps taking some steps back from specific approaches and doing original seeing on alignment would yield new directions.