The main problem I see with hodge-podge-style strategies is that most alignment ideas fail in roughly-the-same cases, for roughly-the-same reasons. It's the same hard cases/hard subproblems which kill most plans. In particular, section B.2 (and to a lesser extent B.1 - B.3) of List of Lethalities covers "core problems" which strategies usually fail to handle.
I think there's an existing phrase called "defense in depth", which somehow feels... more like the right spirit? [This is related to the 'swiss cheese' model you bring up in the motivation section.] It's not that we're going to throw together a bunch of miscellaneous stuff and it'll work; it's that we're not going to trust any particular defense that we have enough that we don't also want other defenses.
The problem with applying the swiss cheese model here illustrates why this is unpromising as stated. In the swiss cheese model you start with some working system, and then the world throws unexpected accidents at you, and you need to protect the working system from being interrupted by an accident. This is not our position with respect to aligned AI - a misaligned AI is not well-modeled as an aligned AI plus some misaligning factors. That is living in the should-universe-plus-diff. If you prevent all "accidents," the AI will not revert to its normal non-accident home state of human-friendliness.
Yes, combining multiple safety features is done all the time, e.g. if you're designing a fusion reactor. But you don't design a working fusion reactor by taking twenty non-working designs and summing all their features. Such an approach to fusion-reactor design wouldn't work because:
ABSTRACT: What could AI alignment look like if we had 6000+ full-time researchers and software developers? I've spent a few days thinking about that question, and this report is my first-draft attempt to answer it. This report sketches a highly-scalable research agenda: studying the hodge-podge assemblage of various "alignment primitives". Please let me know in the comments which ideas are novel and useful (if any), and which aren't.
Introduction
What is hodge-podge alignment?
I think that in the Technical AI Safety community, the following beliefs are common:
Maybe those beliefs are correct — but in this report, I'll explore the alternative hypothesis. What if the only strategy that achieves P(doom)<90% is throwing every half-baked protocol we have at an AGI? Let's call this strategy "hodge-podge alignment".
Note that hodge-podge alignment differs from research diversification. The TAIS community is unanimously in favour of research diversification — i.e. they think we should explore many different research agendas. But their justification tends to be "we can't tell beforehand which research agenda will tell us the One Thing", rather than "we'll need to hodge-podge together the methods produced by each research agenda".
Give me an example.
There are dozens of "alignment primitives", but consider these four:
We can "hodge-podge" together these primitives to give an assemblage:
Will this work? I'm not sure.
I suspect that the assemblage would result in a safer AI than any of the four primitives used alone. But the goal of AI alignment is not to build a safe AI, but rather to build a safe and competent AI. (We already know how to build safe incompetent AI!) So the important question is: does the assemblage score higher than the primitives on the safety-competency trade-off? That's an open question.
Key Stages
My tentative proposal consists of the following key stages:
1. Exhaustively enumerate alignment primitives.
There are a few broad overviews of different alignment agendas.
But there isn't (to my knowledge) an exhaustive enumeration of the different methods produced by those agendas. In Stage 1 of hodge-podge alignment, we compile and publish this list.
Let's call the list of alignment primitives AP.
In my imagination, our list of alignment primitives should look something like the list of cryptographic primitives. Cryptographic primitives are well-established, low-level algorithms which are assembled together to build more complicated cryptographic algorithms. If we assume that the cryptographic primitives are implemented correctly (i.e. they behave in the way described by their documentation), then we can be confident that an assemblage built from them also behaves correctly, provided the assembly itself is sound. Moreover, if we discover that a particular cryptographic primitive is "broken" (i.e. it doesn't behave in the way described by its documentation), then we can replace the broken primitive with another primitive rapidly and seamlessly.
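As a toy illustration of this kind of hot-swapping (in the cryptographic setting, not the alignment setting): Python's standard library already treats hash functions as interchangeable primitives, so a higher-level construction like HMAC can swap out a broken primitive without touching the surrounding code.

```python
import hashlib
import hmac

def authenticate(key: bytes, message: bytes, digest=hashlib.sha256) -> bytes:
    """Higher-level construction (HMAC) built on top of a hash primitive.

    The hash primitive is passed in as an argument, so if it is ever "broken"
    (fails to behave as documented), we can swap it out without rewriting
    the construction that uses it.
    """
    return hmac.new(key, message, digestmod=digest).digest()

tag_sha2 = authenticate(b"secret-key", b"hello", digest=hashlib.sha256)
tag_sha3 = authenticate(b"secret-key", b"hello", digest=hashlib.sha3_256)  # hot-swapped primitive
```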
2. Taxonomise primitives by their type-signature, documentation, and implementation.
For each alignment primitive,
(These three concepts will also extend to alignment assemblages.)
2a. Type-signature
In computer science, every function, subroutine or method has a "type-signature" defining its inputs and outputs. The type-signature doesn't say what the method does, but it says what type of object the method acts on and what type of object the method produces.
We can analogously talk about the "type-signature" of a particular alignment primitive. This type-signature specifies where in the AI-development pipeline the primitive could be implemented.
Here are some examples:
Alignment primitives come with many different type-signatures:
In Stage 2a of hodge-podge alignment, we determine the type-signature Tp of each alignment primitive p∈AP. This is necessary because the type-signatures constrain how the primitives can be assembled.
(But even ignoring hodge-podge alignment, it's useful to ask yourself: "What kind of object does this method act on? What kind of object does this method produce? What other methods could this method be composed with?" when thinking about particular alignment methods.)
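To make this concrete, here is a minimal sketch of what recording type-signatures could look like in Python. The types and primitives named below (RewardFn, Policy, reward_shaping, oversight_wrapper) are hypothetical placeholders chosen for illustration, not real alignment methods.

```python
from typing import Callable, NamedTuple

# Hypothetical object types occurring in the AI-development pipeline.
RewardFn = Callable[[dict], float]   # maps an environment state to a reward
Policy = Callable[[dict], str]       # maps an observation to an action

class TypeSignature(NamedTuple):
    input_type: str
    output_type: str

def reward_shaping(reward: RewardFn) -> RewardFn:
    """Type-signature RewardFn -> RewardFn: acts before training."""
    return lambda state: min(reward(state), 1.0)  # e.g. clip extreme rewards

def oversight_wrapper(policy: Policy) -> Policy:
    """Type-signature Policy -> Policy: acts on the trained system."""
    def wrapped(observation: dict) -> str:
        action = policy(observation)
        return action if action != "shutdown_override" else "no_op"
    return wrapped

SIGNATURES = {
    "reward_shaping":    TypeSignature("RewardFn", "RewardFn"),
    "oversight_wrapper": TypeSignature("Policy", "Policy"),
}
```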
2b. Implementation
In Stage 2b of hodge-podge alignment, we implement each primitive p as a string of Python code Mp. If the method isn't well-defined enough to be implemented as Python code, then we try to re-formalise it until we can implement it as Python code.
The code is written in a composable and modular way. Many alignment methods have already been implemented in code, but in a way that is entangled with the rest of the codebase. We want the primitives to be implemented in a modular way so that we can hotswap them in our assemblages.
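Here is a rough sketch of the kind of modular interface I have in mind. The AlignmentPrimitive base class and the two toy primitives below are hypothetical placeholders; the point is only that primitives sharing an interface and a type-signature can be hot-swapped inside an assemblage.

```python
from abc import ABC, abstractmethod
from typing import Any

class AlignmentPrimitive(ABC):
    """Common interface so primitives can be hot-swapped inside an assemblage."""
    input_type: str
    output_type: str

    @abstractmethod
    def apply(self, obj: Any) -> Any:
        """Transform one pipeline object into another."""

class RewardClipping(AlignmentPrimitive):
    input_type = output_type = "RewardFn"
    def apply(self, reward_fn):
        # Toy primitive: clip rewards at 1.0.
        return lambda state: min(reward_fn(state), 1.0)

class SideEffectPenalty(AlignmentPrimitive):
    input_type = output_type = "RewardFn"
    def apply(self, reward_fn):
        # Toy primitive: penalise a (hypothetical) side-effect count in the state.
        return lambda state: reward_fn(state) - 0.1 * state.get("side_effects", 0)

# Because both classes share a type-signature and an interface,
# either can be dropped into the same slot of an assemblage.
REGISTRY = {cls.__name__: cls for cls in (RewardClipping, SideEffectPenalty)}
```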
We might release these implementations as part of an open-source library, unless there's a risk that doing so would exacerbate AI capabilities development. However, that risk is mitigated by the modular nature of the implementation: although it would be risky to open-source the code of a sandboxed AI, it's not as risky to open-source the code of the sandbox itself.
2c. Documentation
The documentation Γp of a primitive is a set of sentences describing how it should behave. The documentation can be written in a formal language, or in plain English.
The following are (equivalent?) definitions of the documentation:
The documentation Γp determines how the primitive should behave, and the implementation Mp determines how the primitive does behave.
In Stage 2c of hodge-podge alignment, we write documentation for each primitive.
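As a minimal sketch of what machine-checkable documentation could look like, we might attach a named set of checkable properties to each implementation. The primitive and the properties below are illustrative placeholders.

```python
# A minimal sketch: documentation Γ_p as a named set of checkable properties
# attached to the implementation M_p. The properties are placeholders.

def clipped_reward(reward_fn):
    """Implementation M_p: clip rewards at 1.0."""
    return lambda state: min(reward_fn(state), 1.0)

DOCUMENTATION = {
    # Γ_p, sentence 1: output rewards never exceed 1.0
    "bounded_above": lambda rf, states: all(clipped_reward(rf)(s) <= 1.0 for s in states),
    # Γ_p, sentence 2: clipping never increases the reward
    "never_increases": lambda rf, states: all(clipped_reward(rf)(s) <= rf(s) for s in states),
}

def check_documentation(reward_fn, sample_states) -> dict:
    """Check whether M_p behaves the way Γ_p says it should, on sampled inputs."""
    return {name: prop(reward_fn, sample_states) for name, prop in DOCUMENTATION.items()}
```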
3. Enumerate potential alignment assemblages.
Once we know the type-signatures Tp of the alignment primitives, we can then (mechanically) enumerate the different alignment assemblages. Let's call this list AA.
To construct an assemblage, we start with some sequence of primitives p1,…,pn (not necessarily distinct), and then we "wire together" the outputs of some primitives to the inputs of others (so long as those wires are type-compatible).
More formally —
Note that AA is different from P(AP):
We call n the degree of the assemblage — I suspect that (on average) safety will increase as n increases, and competency will decrease.
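As a rough sketch of the mechanical enumeration (restricted to linear chains rather than general wirings, and with made-up type-signatures), we could filter sequences of primitives for pairwise type-compatibility:

```python
from itertools import product

# Hypothetical type-signatures (input_type, output_type) for a few primitives.
PRIMITIVES = {
    "reward_shaping":    ("RewardFn", "RewardFn"),
    "data_filtering":    ("Dataset", "Dataset"),
    "oversight_wrapper": ("Policy", "Policy"),
    "train":             ("RewardFn", "Policy"),   # stands in for the training step itself
}

def enumerate_assemblages(max_degree: int):
    """Yield sequences of primitives whose types are pairwise compatible,
    i.e. each primitive's output type matches the next primitive's input type."""
    for n in range(1, max_degree + 1):
        for chain in product(PRIMITIVES, repeat=n):   # primitives need not be distinct
            types = [PRIMITIVES[name] for name in chain]
            if all(types[i][1] == types[i + 1][0] for i in range(n - 1)):
                yield chain

degree_2 = [c for c in enumerate_assemblages(2) if len(c) == 2]
# e.g. ('reward_shaping', 'train') is type-compatible; ('train', 'reward_shaping') is not.
```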
4. Study assemblages theoretically and experimentally
In Stage 4 of hodge-podge alignment, the goal is to determine which alignment assemblages are reliable. An assemblage A is reliable if we are confident that the implementation MA satisfies the documentation ΓA.
This is of course the hard step. Fortunately, this process can be heavily parallelised.
Theory:
In cryptography, you start with various "cryptographic assumptions" that particular cryptographic primitives are reliable, and then you derive theorems/informal arguments that the assemblages are reliable.
Likewise, in alignment, we start with various "alignment assumptions" that particular alignment primitives are reliable, and then derive theorems/informal arguments that the assemblages are reliable. In other words, we want arguments of the form Γp1, …, Γpn ⊢ ΓA.
Note that we can construct these arguments before we've actually found implementations for the alignment primitives!
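As a toy sketch of what such derivations could look like once mechanised, we might model each primitive's documentation as properties it requires of its input and properties it guarantees of its output, and then check whether the assemblage's documented property follows. All the property names and the propagation rule below are placeholders, not a real proof system.

```python
# Toy model of "alignment assumptions ⊢ assemblage documentation".
# Each primitive's documentation lists properties it requires of its input
# and properties it guarantees of its output. All strings are placeholders.

DOCS = {
    "reward_shaping":    {"requires": set(),              "provides": {"reward_bounded"}},
    "train":             {"requires": {"reward_bounded"}, "provides": {"policy_trained"}},
    "oversight_wrapper": {"requires": {"policy_trained"}, "provides": {"shutdown_respected"}},
}

def derivable(chain, target_property) -> bool:
    """Check whether Γ_A (here: one target property) follows from Γ_p1, ..., Γ_pn,
    under the toy rule that a primitive's guarantees hold whenever its
    requirements have already been established."""
    established = set()
    for name in chain:
        doc = DOCS[name]
        if not doc["requires"] <= established:
            return False  # some primitive's assumption is unsupported
        established |= doc["provides"]
    return target_property in established

assert derivable(["reward_shaping", "train", "oversight_wrapper"], "shutdown_respected")
assert not derivable(["train", "oversight_wrapper"], "shutdown_respected")
```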
Experiments:
To determine whether an assemblage has been implemented correctly, we run experiments (e.g. toy-models, block-worlds). We test the implementation MA in a wide range of likely situations, and check whether ΓA is satisfied.
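A minimal sketch of such an experiment, with the policy, the toy situations, and the documented invariant all being illustrative placeholders:

```python
import random

def test_assemblage(assemblage_policy, documented_invariant, n_trials: int = 1000) -> float:
    """Run the assemblage's implementation M_A in many randomly sampled toy situations
    and report the fraction in which the documented behaviour Γ_A held."""
    passes = 0
    for seed in range(n_trials):
        rng = random.Random(seed)
        # A stand-in toy situation; in practice this would be a toy-model or block-world.
        observation = {"danger_level": rng.random(), "resources": rng.randint(0, 10)}
        action = assemblage_policy(observation)
        if documented_invariant(observation, action):
            passes += 1
    return passes / n_trials

# Example usage with placeholder policy and invariant:
policy = lambda obs: "no_op" if obs["danger_level"] > 0.9 else "act"
invariant = lambda obs, action: not (obs["danger_level"] > 0.9 and action != "no_op")
print(test_assemblage(policy, invariant))  # fraction of trials satisfying Γ_A
```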
Reliability:
We classify an assemblage A as reliable whenever the following conditions hold:
5. Deploy any assemblage if it's adequate.
An alignment assemblage A is adequate if its documentation ΓA includes "everything goes well" and A is reliable. If we discover an adequate alignment assemblage then we deploy it, hoping to achieve a pivotal act with a low probability of doom.
In practice, searching for an adequate assemblage might look like this:
Or it might look like this:
This would be analogous to a nuclear power station adding and removing control rods to find a happy medium, except that we are seeking the optimal point on the safety-competency trade-off.
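A greedy sketch of that search, assuming we have (placeholder) estimators of an assemblage's safety and competency and some adequacy thresholds:

```python
def search_adequate_assemblage(candidate_primitives, estimate_safety, estimate_competency,
                               safety_floor=0.99, competency_floor=0.5, max_degree=20):
    """Greedy sketch of the 'control rod' search: starting from the bare system, add one
    primitive at a time (repeats allowed), keeping whichever addition scores best on the
    safety-competency trade-off, and stop when both floors are met or the degree cap is hit."""
    def score(a):
        # Distance to adequacy: non-negative exactly when both floors are satisfied.
        return min(estimate_safety(a) - safety_floor, estimate_competency(a) - competency_floor)

    assemblage = []
    for _ in range(max_degree):
        if score(assemblage) >= 0:
            return assemblage  # adequate under our (placeholder) estimates
        candidates = [assemblage + [p] for p in candidate_primitives]
        best = max(candidates, key=score)
        if score(best) <= score(assemblage):
            break  # no single addition helps; give up (or backtrack in a fuller version)
        assemblage = best
    return assemblage if score(assemblage) >= 0 else None
```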
Motivation
Here are some reasons I think hodge-podge alignment is a reasonably promising research agenda.
1. Ex-ante Uncertainty
We might end up in a situation where we're confident that among our top 20 primitive methods there exists at least one method which reliably works, but we aren't sure beforehand which method is the reliable one. In this situation, it might be best to "assemble" them together.
Here's why I think this situation is likely:
2. Swiss Cheese
We might end up in a situation where we're confident that each of our top 20 methods is unreliable, but if we "roll the dice" 20 times then we'll be confident that at least one of the methods will stop the failure. Therefore we apply all of them at once.
Here's why I think this situation is likely:
3. Synergy
We might end up in a situation where, among our top 20 methods, none of them works in isolation. But when we assemble the various methods, the assemblage suddenly works because there's a weird synergy between the different methods.
I'm not confident this is true, but nonetheless:
4. Expanded Options
There is a canonical embedding of primitives into assemblages. So this proposal expands the set of potential methods without eliminating any. In fact, there's a combinatorial explosion in the number of potential options. If we expand our potential options, then our best option will (in general) improve.
5. Different talent-profile
Hodge-podge alignment doesn't invent the primitive ideas — instead, it converts these ideas into composable code and studies their assemblages. This process requires talent, but it requires a different talent profile than other alignment strategies — it needs more software engineers and fewer mathematicians.
6. Short-term gains
Hodge-podge alignment could make significant gains in the short-term because there's a lot of "theoretical overhang" in AI alignment — i.e. there are many ideas that haven't yet been implemented in Python.
7. Universality
Hodge-podge alignment is like the universal object in the category of alignment strategies. An "assemblage" captures what we mean (in the broadest sense) by a potential solution to the alignment problem.
8. Parallelisable and scalable
Hodge-podge alignment is highly parallelisable and scalable. Different primitive methods can be documented and implemented independently. Different assemblages can be studied independently.
Immediate next steps: If enough people say this is worthwhile (relative to how many say the opposite) then I'll explore this further. For the next step, I might post a distillation of David Jaz Myers' Categorical Systems Theory, which treats dynamical systems and their typed wirings as polymorphic lenses.