This is a special post for quick takes by Alex Turner. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

In an alternate universe, someone wrote a counterpart to There's No Fire Alarm for Artificial General Intelligence:

Okay, let’s be blunt here. I don’t think most of the discourse about alignment being really hard is being generated by models of machine learning at all. I don’t think we’re looking at wrong models; I think we’re looking at no models.

I was once at a conference where there was a panel full of famous AI alignment luminaries, and most of the luminaries were nodding and agreeing with each other that of course AGI alignment is really hard and unaddressed by modern alignment research, except for two famous AI luminaries who stayed quiet and let others take the microphone.

I got up in Q&A and said, “Okay, you’ve all told us that alignment is hard. But let’s be more concrete and specific. I’d like to know what’s the least impressive task which cannot be done by a 'non-agentic' system, that you are very confident cannot be done safely and non-agentically in the next two years.”

There was a silence.

Eventually, one person ventured a reply, spoken in a rather more tentative tone than they’d been using to pronounce that SGD would internalize coherent goals into language models.

... (read more)
__RicG__ (4 points, 4mo)
Sorry, I might be misunderstanding you (and I hope I am), but... I think doomers literally say "Nobody knows what internal motivational structures SGD will entrain into scaled-up networks, and thus we are all doomed". The problem is not having the science to confidently say how the AIs will turn out, not that doomers have a secret method to know that next-token-prediction is evil. If you meant that doomers are too confident answering the question "will SGD even make motivational structures?", their (and my) answer still stems from ignorance: nobody knows, but it is plausible that SGD will make motivational structures in the neural networks because they can be useful in many tasks (to get low loss or whatever), and if you think you know better you should show it experimentally and theoretically in excruciating detail.

I also don't see how it logically follows that "if your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks" => "then you ought to be able to say much weaker things that are impossible in two years", but it seems to be the core of the post. Even if anyone had the extraordinary model to predict what SGD exactly does (which we, as a species, should really strive for!!), it would still be a different question to predict what will or won't happen in the next two years. If I reason about my field (physics), the same should hold for a sentence structured like "if your model has the extraordinary power to say how an array of neutral atoms cooled to a few nK will behave when a laser is shone upon them" (which is true) => "then you ought to be able to say much weaker things that are impossible in two years in the field of cold atom physics" (which is... not true). It's a non sequitur.
Alex Turner (2 points, 4mo)
It would be "useful" (i.e. fitness-increasing) for wolves to have evolved biological sniper rifles, but they did not. By what evidence are we locating these motivational hypotheses, what kinds of structures are dangerous, and why are they plausible under the NN prior?

The relevant commonality is "ability to predict the future alignment properties and internal mechanisms of neural networks." (Also, I don't exactly endorse everything in this fake quotation, so indeed the analogized tasks aren't as close as I'd like. I had to trade off between "what I actually believe" and "making minimal edits to the source material.")
Daniel Kokotajlo (3 points, 2mo)
Nice analogy! I approve of stuff like this. And in particular I agree that MIRI hasn't convincingly argued that we can't do significant good stuff (including maybe automating tons of alignment research) without agents. Insofar as your point is that we don't have to build agentic systems and nonagentic systems aren't dangerous, I agree? If we could coordinate the world to avoid building agentic systems I'd feel a lot better.  

Deceptive alignment seems to only be supported by flimsy arguments. I recently realized that I don't have good reason to believe that continuing to scale up LLMs will lead to inner consequentialist cognition which pursues a goal that is roughly consistent across situations. That is: a model which not only does what you ask it to do (including coming up with agentic plans), but which also thinks about how to make more paperclips even while you're just asking about math homework.

Aside: This was kinda a "holy shit" moment, and I'll try to do it justice here. I encourage the reader to do a serious dependency check on their beliefs. What do you think you know about deceptive alignment being plausible, and why do you think you know it? Where did your beliefs truly come from, and do those observations truly provide...


I agree that conditional on entraining consequentialist cognition which has a "different goal" (as thought of by MIRI; this isn't a frame I use), the AI will probably instrumentally reason about whether and how to deceptively pursue its own goals, to our detr... (read more)

leogao (142 points, 4mo)

I think deceptive alignment is still reasonably likely despite evidence from LLMs.

I agree with:

  • LLMs are not deceptively aligned and don't really have inner goals in the sense that is scary
  • LLMs memorize a bunch of stuff
  • the kinds of reasoning that feed into deceptive alignment do not predict LLM behavior well
  • Adam on transformers does not have a super strong simplicity bias
  • without deceptive alignment, AI risk is a lot lower
  • LLMs not being deceptively aligned provides nonzero evidence against deceptive alignment (by conservation of evidence)
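
As a toy illustration of that last bullet, here is what "nonzero evidence" can look like quantitatively (all numbers are invented solely to show the direction and rough size of the update, not anyone's actual credences):

```python
# Bayes update on "future AI will be deceptively aligned" after observing
# that current LLMs are not. Probabilities are made up for illustration.
prior = 0.30                 # P(deceptive alignment in future AI)
p_obs_given_da = 0.80        # P(LLMs look non-deceptive | DA is a real problem later)
p_obs_given_not_da = 0.95    # P(LLMs look non-deceptive | DA is not a real problem)

posterior = (prior * p_obs_given_da) / (
    prior * p_obs_given_da + (1 - prior) * p_obs_given_not_da
)
print(round(posterior, 3))   # ≈ 0.265: a real, but modest, downward update
```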

I predict I could pass the ITT for why LLMs are evidence that deceptive alignment is not likely.

however, I also note the following: LLMs are kind of bad at generalizing, and this makes them pretty bad at doing e.g. novel research or long-horizon tasks. deceptive alignment conditions on models already being better at generalization and reasoning than current models.

my current hypothesis is that future models which generalize in a way closer to that predicted by mesaoptimization will also be better described as having a simplicity bias.

I think this and other potential hypotheses can be tested empirically today, rather than only being distinguishable close to AGI.

Alex Turner (4 points, 4mo)
Note that "LLMs are evidence against this hypothesis" isn't my main point here. The main claim is that the positive arguments for deceptive alignment are flimsy, and thus the prior is very low.
Robert Kirk (1 point, 3mo)
How would you imagine doing this? I understand your hypothesis to be "If a model generalises as if it's a mesa-optimiser, then it's better-described as having simplicity bias". Are you imagining training systems that are mesa-optimisers (perhaps explicitly using some kind of model-based RL/inference-time planning and search/MCTS), and then trying to see if they tend to learn simple cross-episode inner goals which would be implied by a stronger simplicity bias?

I find myself unsure which conclusion this is trying to argue for.

Here are some pretty different conclusions:

  • Deceptive alignment is <<1% likely (quite implausible) to be a problem prior to complete human obsolescence (maybe it's a problem after human obsolescence for our trusted AI successors, but who cares).
  • There aren't any solid arguments for deceptive alignment[1]. So, we certainly shouldn't be confident in deceptive alignment (e.g. >90%), though we can't totally rule it out (prior to human obsolescence). Perhaps deceptive alignment is 15% likely to be a serious problem overall and maybe 10% likely to be a serious problem if we condition on fully obsoleting humanity via just scaling up LLM agents or similar (this is pretty close to what I think overall).
  • Deceptive alignment is <<1% likely for scaled up LLM agents (prior to human obsolescence). Who knows about other architectures.

There is a big difference between <<1% likely and 10% likely. I basically agree with "not much reason to expect deceptive alignment even in models which are behaviorally capable of implementing deceptive alignment", but I don't think this leaves me in a <<1% likely epistemic ... (read more)

Closest to the third, but I'd put it somewhere between .1% and 5%. I think 15% is way too high for some loose speculation about inductive biases, relative to the specificity of the predictions themselves.

Without deceptive alignment/agentic AI opposition, a lot of alignment threat models ring hollow. No more adversarial steganography or adversarial pressure on your grading scheme or worst-case analysis or unobservable, nearly unfalsifiable inner homunculi whose goals have to be perfected.

Instead, we enter the realm of tool AI which basically does what you say.

I agree that, conditional on no deceptive alignment, the most pernicious and least tractable sources of doom go away. 

However, I disagree that conditional on no deceptive alignment, AI "basically does what you say." Indeed, the majority of my P(doom) comes from the difference between "looks good to human evaluators" and "is actually what the human evaluators wanted." Concretely, this could play out with models which manipulate their users into thinking everything is going well, and which tamper with sensors.

I think current observations don't provide much evidence about whether these concerns will pan out: with current models and training set-ups, "looks good to evaluators" almost always coincides with "is what evaluators wanted." I worry that we'll only see this distinction matter once models are smart enough that they could... (read more)

There are some subskills to having consistent goals that I think will be selected for, at least when outcome-based RL starts working to get models to do long-horizon tasks. For example, the ability to not be distracted/nerdsniped into some different behavior by most stimuli while doing a task. The longer the horizon, the more selection: if you have to do a 10,000-step coding project, then the probability you get irrecoverably distracted on any one step has to be below 1/10,000.
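
A quick numeric version of that constraint (a minimal sketch, assuming independent and identical per-step distraction chances):

```python
# Probability of finishing an N-step task without ever getting irrecoverably
# distracted, for a few per-step distraction probabilities p.
steps = 10_000
for p in (1e-3, 1e-4, 1e-5):
    p_finish = (1 - p) ** steps          # roughly exp(-steps * p)
    print(f"p = {p:.0e}  ->  P(finish all {steps} steps) ≈ {p_finish:.2f}")
# p = 1e-03  ->  ~0.00 (essentially never finishes)
# p = 1e-04  ->  ~0.37
# p = 1e-05  ->  ~0.90
```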

I expect some pretty sophisticated goal-regulation circuitry to develop as models get more capable, because humans need it, and this makes me pretty scared.

I contest that there's very little reason to expect "undesired, covert, and consistent-across-situations inner goals" to crop up in [LLMs as trained today] to begin with.

As someone who considers deceptive alignment a concern: fully agree. (With the caveat, of course, that it's because I don't expect LLMs to scale to AGI.)

I think there's in general a lot of speaking-past-each-other in alignment, and what precisely people mean by "problem X will appear if we continue advancing/scaling" is one of them.

Like, of course a new problem won't appear if we just keep doing the exact same thing that we've already been doing. Except "the exact same thing" is actually some equivalence class of approaches/architectures/training processes, but which equivalence class people mean can differ.

For example:

  • Person A, who's worried about deceptive alignment, can have "scaling LLMs arbitrarily far" defined as this proven-safe equivalence class of architectures. So when they say they're worried about capability advancement bringing in new problems, what they mean is "if we move beyond the LLM paradigm, deceptive alignment may appear".
  • Person B, hearing the first one, might model them as instead defining "LLMs
... (read more)
Vladimir Nesov (1 point, 5mo)
LLMs will soon scale beyond the available natural text data, and generation of synthetic data is some sort of change of architecture, potentially a completely different source of capabilities. So scaling LLMs much further without a change of architecture is an expectation about something counterfactual. It makes sense as a matter of theory, but it's not relevant for forecasting. Edit 15 Dec: No longer endorsed, based on scaling laws for training on repeated data.

Bold claim. Want to make any concrete predictions so that I can register my different beliefs? 

I've now changed my mind based on:

The main result is that up to 4 repetitions are about as good as unique data, and for up to about 16 repetitions there is still meaningful improvement. Let's take 50T tokens as an estimate for available text data (as an anchor, there's a filtered and deduplicated CommonCrawl dataset, RedPajama-Data-v2, with 30T tokens). Repeated 4 times, it can make good use of 1e28 FLOPs (with a dense transformer), and repeated 16 times, suboptimal but meaningful use of 2e29 FLOPs. So this is close to, and not lower than, the compute that can plausibly be brought to bear within a few years. Thanks for pushing back on the original claim.
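
As a rough check of those numbers (a minimal sketch, assuming the Chinchilla heuristics C ≈ 6·N·D and a compute-optimal ratio of about 20 tokens per parameter; the constants are approximate):

```python
# FLOPs that a given (possibly repeated) token budget can support at
# compute-optimal scaling, using C ~= 6 * N * D with D ~= 20 * N.
def chinchilla_compute_for_tokens(tokens: float, tokens_per_param: float = 20.0) -> float:
    params = tokens / tokens_per_param
    return 6.0 * params * tokens

unique_tokens = 50e12                     # ~50T tokens of usable text, as estimated above
for repeats in (4, 16):
    effective = unique_tokens * repeats
    flops = chinchilla_compute_for_tokens(effective)
    print(f"{repeats:>2}x repeats -> {effective:.0e} tokens -> ~{flops:.1e} FLOPs")
# 4x repeats  -> 2e+14 tokens -> ~1.2e+28 FLOPs
# 16x repeats -> 8e+14 tokens -> ~1.9e+29 FLOPs
```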

Vladimir Nesov (1 point, 4mo)
Three points: how much compute is going into a training run, how much natural text data it wants, and how much data is available.

For training compute, there are claims of multi-billion dollar runs being plausible and possibly planned in 2-5 years. Eyeballing various trends and GPU shipping numbers and revenues, it looks like about 3 OOMs of compute scaling is possible before industrial capacity constrains the trend and the scaling slows down. This assumes that there are no overly dramatic profits from AI (which might lead to finding ways of scaling supply chains faster than usual), and no overly dramatic lack of new capabilities with further scaling (which would slow down investment in scaling). That gives about 1e28-1e29 FLOPs at the slowdown in 4-6 years.

At 1e28 FLOPs, Chinchilla scaling asks for 200T-250T tokens. Various sparsity techniques increase effective compute, asking for even more tokens (when optimizing loss given fixed hardware compute). Edit 15 Dec: I no longer endorse this point, based on scaling laws for training on repeated data.

On the outside, there are 20M-150M accessible books, some text from video, and 1T web pages of extremely dubious uniqueness and quality. That might give about 100T tokens, if LLMs are used to curate? There's some discussion (incl. comments) here; this is the figure I'm most uncertain about. In practice, absent good synthetic data, I expect multimodality to fill the gap, but that's not going to be as useful as good text for improving chatbot competence. (Possibly the issue with the original claim in the grandparent is what I meant by "soon".)
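
Running the same Chinchilla heuristic in the other direction (a sketch; C ≈ 6·N·D with roughly 20-25 tokens per parameter, and the exact figure depends on which fitted constants you use):

```python
# Compute-optimal token demand for a given FLOP budget: D = sqrt(r * C / 6),
# where r is the tokens-per-parameter ratio.
def chinchilla_tokens_for_compute(flops: float, tokens_per_param: float) -> float:
    return (tokens_per_param * flops / 6.0) ** 0.5

for c in (1e28, 1e29):
    lo = chinchilla_tokens_for_compute(c, 20.0)
    hi = chinchilla_tokens_for_compute(c, 25.0)
    print(f"{c:.0e} FLOPs -> roughly {lo:.0e} to {hi:.0e} tokens")
# 1e28 FLOPs -> ~1.8e14 to 2.0e14 tokens (in the ballpark of the 200T figure above)
# 1e29 FLOPs -> ~5.8e14 to 6.5e14 tokens
```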
Daniel Kokotajlo (2 points, 2mo)
I wish I had read this a week ago instead of just now, it would have saved a significant amount of confusion and miscommunication!
Roger Dearnaley (2 points, 3mo)
I think there are two separate questions here, with possibly (and I suspect actually) very different answers:

1. How likely is deceptive alignment to arise in an LLM under SGD across a large, very diverse pretraining set (such as a slice of the internet)?
2. How likely is deceptive alignment to be boosted in an LLM under SGD fine-tuning followed by RL for HHH-behavior, applied to a base model trained as in 1.?

I think the obvious answer to 1. is that the LLM is going to attempt (limited by its available capacity and training set) to develop world models of everything that humans do that affects the contents of the Internet. One of the many things that humans do is pretend to be more aligned to the wishes of an authority that has power over them than they truly are. So for a large enough LLM, SGD will create a world model for this behavior along with thousands of other human behaviors, and the LLM will (depending on the prompt) tend to activate this behavior at about the frequency and level that you find it on the Internet, as modified by cues in the particular prompt. On the Internet, this is generally a mild background level for people writing while at work in Western countries, and probably stronger for people writing from more authoritarian countries; specific prompts will be more or less correlated with this.

For 2., the question is whether fine-tuning followed by RL will settle on this preexisting mechanism and make heavy use of it as part of the way it implements something that fits the fine-tuning set/scores well on the reward model aimed at creating a helpful, honest, and harmless assistant persona. I'm a lot less certain of the answer here, and I suspect it might depend rather strongly on the details of the training set. For example, is this evoking a "you're at work, or in an authoritarian environment, so watch what you say and do" scenario that might boost the use of this particular behavior? The "harmless" element in HHH seems particularly con...
Noosphere89 (2 points, 5mo)
I think a lot of this probably comes back to way overestimating the complexity of human values. A very deeply held belief of a lot of LWers is that human values are intractably complicated and gene/societal-specific, and if this were the case, the argument would actually be a little concerning, as we'd have to rely on massive speed biases to punish deception.

These posts gave me good intuition for why human value is likely to be quite simple; one of them talks about how most of the complexity of the values is inaccessible to the genome, thus it needs to start from far less complexity than people realize, because nearly all of it needs to be learned. Some other posts from Steven Byrnes are relevant, which talk about how simple the brain is. A potential difference between me and Steven Byrnes is that I think the same learning-from-scratch algorithms that generate capabilities also apply to values, and thus the complexity of value is upper-bounded by the complexity of the learning-from-scratch algorithms + genetic priors, both of which are likely very low: at the very least not billions of lines complex, and closer to thousands of lines/hundreds of bits.

But the reason this matters is that we no longer have good reason to assume that the deceptive model is as favored on priors as Evan Hubinger says here, since the complexity is likely massively lower than LWers assume.

https://www.lesswrong.com/posts/i5kijcjFJD6bn7dwq/evaluating-the-historical-value-misspecification-argument?commentId=vXnLq7X6pMFLKwN2p

Putting it another way, the deceptive and aligned models both have very similar complexities, and the relative difficulty is very low, so much so that the aligned model might be outright lower complexity; but even if that fails, the desired goal has a complexity very similar to the undesired goal's complexity, thus the relative difficulty of actual alignment compared to deceptive alignment is quite low.

https://www.lesswrong.com/posts/CQAMdzA4MZ
Alex Turner (3 points, 5mo)
(I think you're still playing into an incorrect frame by talking about "simplicity" or "speed biases.")
kave (2 points, 5mo)
Two quick thoughts (that don't engage deeply with this nice post):

1. I'm worried about some cases where the goal is not consistent across situations. For example, if prompted to pursue some goal, it then does so seriously, with convergent instrumental goals.
2. I think it seems pretty likely that future iterations of transformers will have bits of powerful search in them, but people who seem very worried about that search seem to think that once that search is established enough, gradient descent will cause the internals of the model to be organised mostly around that search (I imagine the search circuits "bubbling out" to be the outer structure of the learned algorithm). Probably this is all just conceptually confused, but to the extent it's not, I'm pretty surprised by their intuition.

It feels to me like lots of alignment folk ~only make negative updates. For example, "Bing Chat is evidence of misalignment", but also "ChatGPT is not evidence of alignment." (I don't know that there is in fact a single person who believes both, but my straw-models of a few people believe both.)

Alex Turner (2 points, 5mo)
(Updating a bit because of these responses -- thanks, everyone, for responding! I still believe the first sentence, albeit a tad less strongly.)

Shard theory suggests that goals are more natural to specify/inculcate in their shard-forms (e.g. if around trash and a trash can, put the trash away), and not in their (presumably) final form of globally activated optimization of a coherent utility function which is the reflective equilibrium of inter-shard value-handshakes (e.g. a utility function over the agent's internal plan-ontology such that, when optimized directly, leads to trash getting put away, among other utility-level reflections of initial shards). 

I could (and did) hope that I could specify a utility function which is safe to maximize because it penalizes power-seeking. I may as well have hoped to jump off of a building and float to the ground. On my model, that's just not how goals work in intelligent minds. If we've had anything at all beaten into our heads by our alignment thought experiments, it's that goals are hard to specify in their final form of utility functions. 

I think it's time to think in a different specification language.

Against CIRL as a special case of against quickly jumping into highly specific speculation while ignoring empirical embodiments-of-the-desired-properties. 

Just because we write down English describing what we want the AI to do ("be helpful"), propose a formalism (CIRL), and show good toy results (POMDPs where the agent waits to act until updating on more observations), that doesn't mean that the formalism will lead to anything remotely relevant to the original English words we used to describe it. (It's easier to say "this logic enables nonmonotonic reasoning" and mess around with different logics and show how a logic solves toy examples, than it is to pin down probability theory with Cox's theorem) 

And yes, this criticism applies extremely strongly to my own past work with attainable utility preservation and impact measures. (Unfortunately, I learned my lesson after, and not before, making certain mistakes.) 

In the context of "how do we build AIs which help people?", asking "does CIRL solve corrigibility?" is hilariously unjustified. By what evidence have we located such a specific question? We have assumed there is an achievable "corrigibility"-like property; we ha... (read more)

Alex Turner (2 points, 2y)
Actually, this is somewhat too uncharitable to my past self. It's true that I did not, in 2018, grasp the two related lessons conveyed by the above comment:

1. Make sure that the formalism (CIRL, AUP) is tightly bound to the problem at hand (value alignment, "low impact"), and not just supported by "it sounds nice or has some good properties."
2. Don't randomly jump to highly specific ideas and questions without lots of locating evidence.

However, as I wrote in World State is the Wrong Abstraction for Impact, I had partially learned lesson #2 by 2019.

I regret each of the thousands of hours I spent on my power-seeking theorems, and sometimes fantasize about retracting one or both papers. I am pained every time someone cites "Optimal policies tend to seek power", and despair that it is included in the alignment 201 curriculum. I think this work makes readers actively worse at thinking about realistic trained systems.

I think a healthy alignment community would have rebuked me for that line of research, but sadly I only remember about two people objecting that "optimality" is a horrible way of understanding trained policies. 

I think the basic idea of instrumental convergence is just really blindingly obvious, and I think it is very annoying that there are people who will cluck their tongues and stroke their beards and say "Hmm, instrumental convergence you say? I won't believe it unless it is in a very prestigious journal with academic affiliations at the top and Computer Modern font and an impressive-looking methods section."

I am happy that your papers exist to throw at such people.

Anyway, if optimal policies tend to seek power, then I desire to believe that optimal policies tend to seek power :) :) And if optimal policies aren't too relevant to the alignment problem, well neither are 99.99999% of papers, but it would be pretty silly to retract all of those :)

Since I'm an author on that paper, I wanted to clarify my position here. My perspective is basically the same as Steven's: there's a straightforward conceptual argument that goal-directedness leads to convergent instrumental subgoals, this is an important part of the AI risk argument, and the argument gains much more legitimacy and slightly more confidence in correctness by being formalized in a peer-reviewed paper.

I also think this has basically always been my attitude towards this paper. In particular, I don't think I ever thought of this paper as providing any evidence about whether realistic trained systems would be goal-directed.

Just to check that I wasn't falling prey to hindsight bias, I looked through our Slack history. Most of it is about the technical details of the results, so not very informative, but the few conversations on higher-level discussion I think overall support this picture. E.g. here are some quotes (only things I said):

Nov 3, 2019:

I think most formal / theoretical investigation ends up fleshing out a conceptual argument I would have accepted, maybe finding a few edge cases along the way; the value over the conceptual argument is primarily in the edge cases

... (read more)

It seems like just 4 months ago you still endorsed your second power-seeking paper:

This paper is both published in a top-tier conference and, unlike the previous paper, actually has a shot of being applicable to realistic agents and training processes. Therefore, compared to the original[1] optimal policy paper, I think this paper is better for communicating concerns about power-seeking to the broader ML world.

Why are you now "fantasizing" about retracting it?

I think a healthy alignment community would have rebuked me for that line of research, but sadly I only remember about two people objecting that “optimality” is a horrible way of understanding trained policies.

A lot of people might have thought something like, "optimality is not a great way of understanding trained policies, but maybe it can be a starting point that leads to more realistic ways of understanding them" and therefore didn't object for that reason. (Just guessing as I apparently wasn't personally paying attention to this line of research back then.)

Which seems to have turned out to be true, at least as of 4 months ago, when you still endorsed your second paper as "actually has a shot of being applicable to... (read more)

To be clear, I still endorse Parametrically retargetable decision-makers tend to seek power. Its content is correct, relevant, and nontrivial. The results, properly used, may enable nontrivial inferences about the properties of inner trained cognition. I don't really want to retract that paper. I usually just fantasize about retracting Optimal policies tend to seek power.

The problem is that I don't trust people to wield even the non-instantly-doomed results.

For example, one EAG presentation cited my retargetability results as showing that most reward functions "incentivize power-seeking actions." However, my results have not shown this for actual trained systems. (And I think that Power-seeking can be probable and predictive for trained agents does not make progress on the incentives of trained policies.)

People keep talking about stuff they know how to formalize (e.g. optimal policies) instead of stuff that matters (e.g. trained policies). I'm pained by this emphasis and I think my retargetability results are complicit. Relative to an actual competent alignment community (in a more competent world), we just have no damn clue how to properly reason about real trained policies... (read more)

Wei Dai (5 points, 11mo)
Thanks, this clarifies a lot for me.
Victoria Krakovna (5 points, 11mo)
Sorry about the cite in my "paradigms of alignment" talk, I didn't mean to misrepresent your work. I was going for a high-level one-sentence summary of the result and I did not phrase it carefully. I'm open to suggestions on how to phrase this differently when I next give this talk.

Similarly to Steven, I usually cite your power-seeking papers to support a high-level statement that "instrumental convergence is a thing" for ML audiences, and I find they are a valuable outreach tool. For example, last year I pointed David Silver to the optimal policies paper when he was proposing some alignment ideas to our team that we would expect don't work because of instrumental convergence. (There's a nonzero chance he would look at a NeurIPS paper and basically no chance that he would read a LW post.)

The subtleties that you discuss are important in general, but don't seem relevant to making the basic case for instrumental convergence to ML researchers. Maybe you don't care about optimal policies, but many RL people do, and I think these results can help them better understand why alignment is hard.
Alex Turner (5 points, 10mo)
Thanks for your patient and high-quality engagement here, Vika! I hope my original comment doesn't read as a passive-aggressive swipe at you. (I consciously tried to optimize it to not be that.) I wanted to give concrete examples so that Wei_Dai could understand what was generating my feelings.

It's a tough question to say how to apply the retargetability result to draw practical conclusions about trained policies. Part of this is because I don't know if trained policies tend to autonomously seek power in various non-game-playing regimes. If I had to say something, I might say "If choosing the reward function lets us steer the training process to produce a policy which brings about outcome X, and most outcomes X can only be attained by seeking power, then most chosen reward functions will train power-seeking policies." This argument appropriately behaves differently if the "outcomes" are simply different sentiment generations being sampled from an LM -- sentiment shift doesn't require power-seeking.

My guess is that the optimal policies paper was net negative for technical understanding and progress, but net positive for outreach, and I agree it has strong benefits in the situations you highlight. I think that it's locally valid to point out "under your beliefs (about optimal policies mattering a lot), the situation is dangerous, read this paper." But I feel a tad queasy about the overall point, since I don't think alignment's difficulty has much to do with the difficulties pointed out by "Optimal Policies Tend to Seek Power." I feel better about saying "Look, if in fact the same thing happens with trained policies, which are sometimes very different, then we are in trouble." Maybe that's what you already communicate, though.
Victoria Krakovna (4 points, 10mo)
Thanks Alex! Your original comment didn't read as ill-intended to me, though I wish that you'd just messaged me directly. I could have easily missed your comment in this thread - I only saw it because you linked the thread in the comments on my post.

Your suggested rephrase helps to clarify how you think about the implications of the paper, but I'm looking for something shorter and more high-level to include in my talk. I'm thinking of using this summary, which is based on a sentence from the paper's intro: "There are theoretical results showing that many decision-making algorithms have power-seeking tendencies." (Looking back, the sentence I used in the talk was a summary of the optimal policies paper, and then I updated the citation to point to the retargetability paper and forgot to update the summary...)
Alex Turner (3 points, 10mo)
I think this is reasonable, although I might say "suggesting" instead of "showing." I think I might also be more cautious about further inferences which people might make from this -- like I think a bunch of the algorithms I proved things about are importantly unrealistic. But the sentence itself seems fine, at first pass.
Aryeh Englander (2 points, 11mo)
You should make this a top level post so it gets visibility. I think it's important for people to know the caveats attached to your results and the limits on their implications in real-world dynamics.

What is "shard theory"? I've written a lot about shard theory. I largely stand by these models and think they're good and useful. Unfortunately, lots of people seem to be confused about what shard theory is. Is it a "theory"? Is it a "frame"? Is it "a huge bag of alignment takes which almost no one wholly believes except, perhaps, Quintin Pope and Alex Turner"?

I think this understandable confusion happened because my writing didn't distinguish between: 

  1. Shard theory itself, 
    1. i.e. the mechanistic assumptions about internal motivational structure, which seem to imply certain conclusions around e.g. AIs caring about a bunch of different things and not just one thing
  2. A bunch of Quintin Pope's and my beliefs about how people work, 
    1. where those beliefs were derived by modeling people as satisfying the assumptions of (1)
  3. And a bunch of my alignment insights which I had while thinking about shard theory, or what problem decompositions are useful.

(People might be less excited to use the "shard" abstraction (1), because they aren't sure whether they buy all this other stuff—(2) and (3).)

I think I can give an interesting and useful definition of (1) now, but I couldn't do so last year... (read more)

Adele Lopez (1 point, 8mo)
Strong encouragement to write about (1)!

AI strategy consideration. We won't know which AI run will be The One. Therefore, the amount of care taken on the training run which produces the first AGI will, on average, be less than intended.

  • It's possible for a team to be totally blindsided. Maybe they thought they would just take a really big multimodal init, finetune it with some RLHF on quality of its physics reasoning, have it play some video games with realistic physics, and then try to get it to do new physics research. And it takes off. Oops!
  • It's possible the team suspected, but had a limited budget. Maybe you can't pull out all the stops for every run, you can't be as careful with labeling, with checkpointing and interpretability and boxing. 

No team is going to run a training run with more care than they would have used for the AGI Run, especially if they don't even think that the current run will produce AGI. So the average care taken on the real AGI Run will be strictly less than intended.

Teams which try to be more careful on each run will take longer to iterate on AI designs, thereby lowering the probability that they (the relatively careful team) will be the first to do an AGI Run. 

Upshots:

  1. Th
... (read more)

The meme of "current alignment work isn't real work" seems to often be supported by an (AFAICT baseless) assumption that LLMs have, or will have, homunculi with "true goals" which aren't actually modified by present-day RLHF/feedback techniques. Thus, labs aren't tackling "the real alignment problem", because they're "just optimizing the shallow behaviors of models." Pressed for justification of this confident "goal" claim, proponents might link to some handwavy speculation about simplicity bias (which is in fact quite hard to reason about in the NN prior), or they might start talking about evolution (which is pretty unrelated to technical alignment, IMO).

Are there any homunculi today? I'd say "no", as far as our limited knowledge tells us! But, as with biorisk, one can always handwave at future models. It doesn't matter that present models don't exhibit signs of homunculi which are immune to gradient updates, because, of course, future models will.

Quite a strong conclusion being drawn from quite little evidence.

As a proponent:

My model says that general intelligence[1] is just inextricable from "true-goal-ness". It's not that I think homunculi will coincidentally appear as some side-effect of capability advancement — it's that the capabilities the AI Labs want necessarily route through somehow incentivizing NNs to form homunculi. The homunculi will appear inasmuch as the labs are good at their jobs.

Said model is based on analyses of how humans think and how human cognition differs from animal/LLM cognition, plus reasoning about what a general-intelligence algorithm must look like given the universe's structure. Both kinds of evidence are hardly ironclad, and you certainly can't publish an ML paper based on them — but that's the whole problem with AGI risk, isn't it.

Internally, though, the intuition is fairly strong. And in its defense, it is based on trying to study the only known type of entity with the kinds of capabilities we're worrying about. I heard that's a good approach.

In particular, I think it's a much better approach than trying to draw lessons from studying the contemporary ML models, which empirically do not yet exhibit said capabilities.

homunculi with "true goals" which aren't

... (read more)

I'm relatively optimistic about alignment progress, but I don't think "current work to get LLMs to be more helpful and less harmful doesn't help much with reducing P(doom)" depends that much on assuming homunculi which are unmodified. Like even if you have much less than 100% on this sort of strong inner optimizer/homunculi view, I think it's still plausible to think that this work doesn't reduce doom much.

For instance, consider the following views:

  1. Current work to get LLMs to be more helpful and less harmful will happen by default due to commercial incentives, and subsidies aren't very important.
  2. In worlds where that is basically sufficient, we're basically fine.
  3. But, it's ex-ante plausible that deceptive alignment will emerge naturally and be very hard to measure, notice, or train out. And this is where almost all alignment related doom comes from.
  4. So current work to get LLMs to be more helpful and less harmful doesn't reduce doom much.

In practice, I personally don't fully agree with any of these views. For instance, deceptive alignment which is very hard to train out using basic means isn't the source of >80% of my doom.

Ryan Greenblatt (2 points, 5mo)
I have misc other takes on what safety work now is good vs useless, but the fact that work involves feedback/approval or RLHF isn't much signal either way. (If anything, I get somewhat annoyed by people not comparing to baselines without having principled reasons for not doing so. E.g., inventing new ways of doing training without comparing to normal training.)
Roger Dearnaley (1 point, 5mo)
I think the shoggoth model is useful here (or see https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators). An LLM learning to do next-token prediction well has a major problem that it has to master: who is this human whose next token they're trying to simulate/predict, and how do they act? Are they, for example, an academic? A homemaker? A 4chan troll? A loose collection of Wikipedia contributors? These differences make a big difference to what token they're likely to emit next. So the LLM is strongly incentivized to learn to detect and then model all of these possibilities, what one might call personas, or masks, or simulacra. So you end up with a shapeshifter, adept at figuring out from textual cues what mask to put on and at then wearing it. Something one might describe as like an improv actor, or more colorfully, a shoggoth.

So then current alignment work is useful to the extent that it can cause the shoggoth to almost always put one of the 'right' masks on, and almost never put on one of the 'wrong' masks, regardless of cues, even when adversarially prompted. Experimentally, this seems quite doable by fine-tuning or RLHF, and/or by sufficiently careful filtering of your training corpus (e.g. not including 4chan in it). A published result shows that you can't get from 'almost always' to 'always' or 'almost never' to 'never': for any behavior that the network is capable of with any probability >0, there exist prompts that will raise the likelihood of that outcome arbitrarily high. The best you can do is increase the minimum length of that prompt (and presumably the difficulty of finding it).

Now, it would be really nice to know how to align a model so that the probability of it doing next-token prediction in the persona of, say, a 4chan troll was provably zero, not just rather small. Ideally, without also eliminating from the model the factual knowledge of what 4chan is or, at least in outline, how its inhabitants act. This seems hard to do by fin...

Positive values seem more robust and lasting than prohibitions. Imagine we train an AI on realistic situations where it can kill people, and penalize it when it does so. Suppose that we successfully instill a strong and widely activated "If going to kill people, then don't" value shard. 

Even assuming this much, the situation seems fragile. See, many value shards are self-chaining. In The shard theory of human values, I wrote about how:

  1. A baby learns "IF juice in front of me, THEN drink",
  2. The baby is later near juice, and then turns to see it, activating the learned "reflex" heuristic, learning to turn around and look at juice when the juice is nearby,
  3. The baby is later far from juice, and bumbles around until they're near the juice, whereupon she drinks the juice via the existing heuristics. This teaches "navigate to juice when you know it's nearby."
  4. Eventually this develops into a learned planning algorithm incorporating multiple value shards (e.g. juice and friends) so as to produce a single locally coherent plan.
  5. ...

The juice shard chains into itself, reinforcing itself across time and thought-steps. 

But a "don't kill" shard seems like it should remain... stubby? Primitive?... (read more)

Garrett Baker (1 point, 1y)
Seems possibly relevant & optimistic when seeing deception as a value. It, too, has the form 'if about to tell human a statement with properties x, y, z, then don't'.
Alex Turner (2 points, 1y)
It can still be robustly derived as an instrumental subgoal during general-planning/problem-solving, though?
Garrett Baker (1 point, 1y)
This is true, but it indicates a radically different stage in training at which we should find deception, compared to deception being an intrinsic value. It also possibly expands the kinds of reinforcement schedules we may want to use, compared to the worlds where deception crops up at the earliest opportunity (though pseudo-deception may occur, where behaviors correlated with successful deception are possibly reinforced?).
Alex Turner (2 points, 1y)
Oh, huh, I had cached the impression that deception would be derived, rather than have intrinsic-value status. Interesting.

Why do many people think RL will produce "agents", but maybe (self-)supervised learning ((S)SL) won't? Historically, the field of RL says that RL trains agents. That, of course, is no argument at all. Let's consider the technical differences between the training regimes.

In the modern era, both RL and (S)SL involve initializing one or more neural networks, and using the reward/loss function to provide cognitive updates to the network(s). Now we arrive at some differences.

Some of this isn't new (see Hidden Incentives for Auto-Induced Distributional Shift), but I think it's important and felt like writing up my own take on it. Maybe this becomes a post later.

[Exact gradients] RL's credit assignment problem is harder than (self-)supervised learning's. In RL, if an agent solves a maze in 10 steps, it gets (discounted) reward; this trajectory then provides a set of reward-modulated gradients to the agent. But if the agent could have solved the maze in 5 steps, the agent isn't directly updated to be more likely to do that in the future; RL's gradients are generally inexact, not pointing directly at the intended behavior.

On the other hand, if a supervised-learning classifier outputs dog ... (read more)
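
Here is a minimal sketch of that contrast (the toy network, data, and reward are made up purely to show where the gradient signal comes from in each case):

```python
import torch
import torch.nn.functional as F
from torch.distributions import Categorical

torch.manual_seed(0)
policy = torch.nn.Linear(4, 3)   # toy "network": 4 features -> 3 actions/classes
x = torch.randn(1, 4)

# Supervised learning: the label points directly at the intended output,
# so the gradient pushes the logits exactly toward class 2.
loss_sl = F.cross_entropy(policy(x), torch.tensor([2]))
loss_sl.backward()

# RL (REINFORCE-style): we only observe a scalar return for the action actually
# sampled. The gradient reinforces whatever happened, in proportion to reward;
# nothing points at the better 5-step solution the agent never tried.
policy.zero_grad()
dist = Categorical(logits=policy(x))
action = dist.sample()
reward = 1.0                     # e.g. discounted return for a 10-step maze solve
loss_rl = -(reward * dist.log_prob(action)).sum()
loss_rl.backward()
```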

Steve Byrnes (4 points, 2y)
I’m not inclined to think that “exact gradients” is important; in fact, I’m not even sure if it’s (universally) true. In particular, PPO / TRPO / etc. are approximating a policy gradient, right? I feel like, if some future magical technique was a much better approximation to the true policy gradient, such that it was for all intents and purposes a perfect approximation, it wouldn’t really change how I think about RL in general. Conversely, on the SSL side, you get gradient noise from things like dropout and the random selection of data in each batch, so you could say the gradient “isn’t exact”, but I don’t think that makes any important conceptual difference either. (A central difference in practice is that SSL gives you a gradient “for free” each query, whereas RL policy gradients require many runs in an identical (episodic) environment before you get a gradient.)

In terms of “why RL” in general, among other things, I might emphasize the idea that if we want an AI that can (for example) invent new technology, it needs to find creative out-of-the-box solutions to problems (IMO), which requires being able to explore / learn / build knowledge in parts of concept-space where there is no human data. SSL can’t do that (at least, “vanilla SSL” can’t do that; maybe there are “SSL-plus” systems that can), whereas RL algorithms can. I guess this is somewhat related to your “independence”, but with a different emphasis.

I don’t have too strong an opinion about whether vanilla SSL can yield an “agent” or not. It would seem to be a pointless and meaningless terminological question. Hmm, I guess when I think of “agent” it has a bunch of connotations, e.g. an ability to do trial-and-error exploration, and I think that RL systems tend to match all those connotations more than SSL systems—at least, more than “vanilla” SSL systems. But again, if someone wants to disagree, I’m not interested in arguing about it.

A problem with adversarial training. One heuristic I like to use is: "What would happen if I initialized a human-aligned model and then trained it with my training process?"

So, let's consider such a model, which cares about people (i.e. reliably pulls itself into futures where the people around it are kept safe). Suppose we also have some great adversarial training technique, such that we have e.g. a generative model which produces situations where the AI would break out of the lab without permission from its overseers. Then we run this procedure, update the AI by applying gradients calculated from penalties applied to its actions in that adversarially-generated context, and... profit?

But what actually happens with the aligned AI? Possibly something like:

  1. The context makes the AI spuriously believe someone is dying outside the lab, and that if the AI asked for permission to leave, the person would die. 
  2. Therefore, the AI leaves without permission.
  3. The update procedure penalizes these lines of computation, such that in similar situations in the future (i.e. the AI thinks someone nearby is dying) the AI is less likely to take those actions (i.e. leaving to help the person).
  4. We have
... (read more)

I think instrumental convergence also occurs in the model space for machine learning. For example, many different architectures likely learn edge detectors in order to minimize classification loss on MNIST. But wait - you'd also learn edge detectors to maximize classification loss on MNIST (loosely, getting 0% on a multiple-choice exam requires knowing all of the right answers). I bet you'd learn these features for a wide range of cost functions. I wonder if that's already been empirically investigated?

And, same for adversarial features. And perhaps, same for mesa optimizers (understanding how to stop mesa optimizers from being instrumentally convergent seems closely related to solving inner alignment). 

What can we learn about this?
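
One way to test it, sketched below (architecture, hyperparameters, and stopping point are arbitrary choices for illustration): train one small conv net to minimize MNIST cross-entropy and another to maximize it, then compare the two first-layer filter banks for edge-detector-like structure.

```python
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms

def make_net():
    return torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, kernel_size=5), torch.nn.ReLU(),
        torch.nn.Flatten(), torch.nn.Linear(8 * 24 * 24, 10),
    )

data = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(data, batch_size=128, shuffle=True)

def train(sign):
    """sign=+1 minimizes the classification loss, sign=-1 maximizes it."""
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step, (x, y) in enumerate(loader):
        loss = sign * F.cross_entropy(net(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        if step >= 200:
            break
    return net[0].weight.detach()      # first-layer conv filters

filters_min, filters_max = train(+1), train(-1)
# Visualize both filter banks; the conjecture predicts edge-like filters in each case.
```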

Evan Hubinger (3 points, 4y)
A lot of examples of this sort of stuff show up in OpenAI Clarity's circuits analysis work. In fact, this is precisely their Universality hypothesis. See also my discussion here.

Back-of-the-envelope probability estimate of alignment-by-default via a certain shard-theoretic pathway. The following is what I said in a conversation discussing the plausibility of a proto-AGI picking up a "care about people" shard from the data, and retaining that value even through reflection. I was pushing back against a sentiment like "it's totally improbable, from our current uncertainty, for AIs to retain caring-about-people shards. This is only one story among billions."

Here's some of what I had to say:


[Let's reconsider the five-step mechanistic story I made up.] I'd give the following conditional probabilities (made up with about 5 seconds of thought each):

1. Humans in fact care about other humans, in a way which extrapolates to quasi-humans still being around (whatever that means). P(1) = .85

2. Human-generated data makes up a large portion of the corpus, and having a correct model of them is important for “achieving low loss”,[1] so the AI has a model of how people want things. P(2 | 1) = .6 (it could have different abstractions, or have learned these models later in training once key decision-influences are already there)

3. During RL finetuning and given this post-unsupervi

... (read more)
Roger Dearnaley (2 points, 5mo)
0.85 x 0.6 x 0.55 x 0.25 x 0.95 ≅ 0.067 = 6.7% — I think you slipped an order of magnitude somewhere?
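
A one-line check of that arithmetic (the last three factors are quoted from the truncated portion of the parent comment):

```python
p = 0.85 * 0.6 * 0.55 * 0.25 * 0.95
print(p)   # 0.0666..., i.e. about 6.7%
```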
Garrett Baker (1 point, 1y)
This seems like an underestimate because you don't consider whether the first "AGI" will indeed make it so we only get one chance. If it can only self-improve by more gradient steps, then humanity has a greater chance than if it self-improves by prompt engineering or direct modification of its weights or latent states. Shard theory seems to have nonzero opinions on the fruitfulness of the non-data methods.
Alex Turner (2 points, 1y)
What does self-improvement via gradients vs prompt-engineering vs direct mods have to do with how many chances we get? I guess we have at least a modicum more control over the gradient feedback loop than over the other loops? Can you say more?
mesaoptimizer (0 points, 5mo)
This is where I'd put a significantly lower probability. Could you elaborate on why there's an inductive bias towards "just hooking human-like criteria for bidding on internal-AI-plans"? As far as I can tell, the inductive bias for human-like values would be something that at least seems closer to the human-brain structure than any arbitrary ML architecture we have right now. Rewarding a system to better model human beings' desires doesn't seem to me to lead it towards having similar desires. I'd use the "instrumental versus terminal desires" concept here, but I expect you would consider that something that adds confusion instead of removing it.
Alex Turner (2 points, 5mo)
Because it's a shorter edit distance in its internal ontology; it's plausibly NN-simple to take existing plan-grading procedures, internal to the model, and hook those more directly into its logit-controllers. Also note that it probably internally hooks up lots of ways to make decisions, and this only has to be one (substantial) component. Possibly I'd put .3 or .45 now instead of .55, though.

Very nice people don’t usually search for maximally-nice outcomes — they don’t consider plans like “killing my really mean neighbor so as to increase average niceness over time.” I think there are a range of reasons for this plan not being generated. Here’s one.

Consider a person with a niceness-shard. This might look like an aggregation of subshards/subroutines like “if person nearby and person.state==sad, sample plan generator for ways to make them happy” and “bid upwards on plans which lead to people being happier and more respectful, according to my world model.” In mental contexts where this shard is very influential, it would have a large influence on the planning process.

However, people are not just made up of a grader and a plan-generator/actor — they are not just “the plan-generating part” and “the plan-grading part.” The next sampled plan modification, the next internal-monologue-thought to have—these are influenced and steered by e.g. the nice-shard. If the next macrostep of reasoning is about e.g. hurting people, well — the niceness shard is activated, and will bid down on this. 

The niceness shard isn’t just bidding over outcomes, it’s bidding on next thoughts (on m... (read more)
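
A minimal sketch of that distinction (all names, rules, and numbers here are hypothetical; the point is only that the shard participates in generating the next thought, rather than just grading finished plans):

```python
import random

random.seed(0)

def niceness_bid(thought: str) -> float:
    """Hypothetical niceness shard: bids against thoughts about hurting people."""
    if "hurt" in thought:
        return -10.0
    if "help" in thought:
        return 1.0
    return 0.0

def candidate_thoughts(context: str) -> list:
    return ["help neighbor carry groceries", "hurt mean neighbor", "do nothing"]

def next_thought(context: str) -> str:
    # The shard steers which line of thinking gets continued at all, so the
    # "hurt" branch is never elaborated into a detailed plan to be graded.
    scores = {t: random.random() + niceness_bid(t) for t in candidate_thoughts(context)}
    return max(scores, key=scores.get)

print(next_thought("neighbor is nearby and sad"))
```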

An alternate mechanistic vision of how agents can be motivated to directly care about e.g. diamonds or working hard. In Don't design agents which exploit adversarial inputs, I wrote about two possible mind-designs:

Imagine a mother whose child has been goofing off at school and getting in trouble. The mom just wants her kid to take education seriously and have a good life. Suppose she had two (unrealistic but illustrative) choices. 

  1. Evaluation-child: The mother makes her kid care extremely strongly about doing things which the mom would evaluate as "working hard" and "behaving well."
  2. Value-child: The mother makes her kid care about working hard and behaving well.

I explained how evaluation-child is positively incentivized to dupe his model of his mom and thereby exploit adversarial inputs to her cognition. This shows that aligning an agent to evaluations of good behavior is not even close to aligning an agent to good behavior.

However, some commenters seemed maybe skeptical that value-child can exist, or uncertain how concretely that kind of mind works. I worry/suspect that many people have read shard theory posts without internalizing new ideas about how cognition can work, ... (read more)

AI cognition doesn't have to use alien concepts to be uninterpretable. We've never fully interpreted human cognition, either, and we know that our introspectively accessible reasoning uses human-understandable concepts.

Just because your thoughts are built using your own concepts, does not mean your concepts can describe how your thoughts are computed. 

Or:

The existence of a natural-language description of a thought (like "I want ice cream") doesn't mean that your brain computed that thought in a way which can be compactly described by familiar concepts. 

Conclusion: Even if an AI doesn't rely heavily on "alien" or unknown abstractions -- even if the AI mostly uses human-like abstractions and features -- the AI's thoughts might still be incomprehensible to us, even if we took a lot of time to understand them. 

Garrett Baker (1 point, 7mo)
I don't think the conclusion follows from the premises. People often learn new concepts after studying stuff, and it seems likely (to me) that when studying human cognition, we'd first be confused because our previous concepts weren't sufficient to understand it, and then slowly stop being confused as we built & understood concepts related to the subject. If an AI's thoughts are like human thoughts, given a lot of time to understand them, what you describe doesn't rule out that the AI's thoughts would be comprehensible. The mere existence of concepts we don't know about in a subject doesn't mean that we can't learn those concepts. Most subjects have new concepts.
Alex Turner (2 points, 7mo)
I agree that with time, we might be able to understand. (I meant to communicate that via "might still be incomprehensible")

Examples should include actual details. I often ask people to give a concrete example, and they often don't. I wish this happened less. For example:

Someone: the agent Goodharts the misspecified reward signal

Me: What does that mean? Can you give me an example of that happening?

Someone: The agent finds a situation where its behavior looks good, but isn't actually good, and thereby gets reward without doing what we wanted.

This is not a concrete example.

Me: So maybe the AI compliments the reward button operator, while also secretly punching a puppy behind closed doors?

This is a concrete example. 

Alex Turner (3 points, 1y)
AFAIK, only Gwern and I have written concrete stories speculating about how a training run will develop cognition within the AGI. This worries me, if true (if not, please reply with more!). I think it would be awesome to have more concrete stories![1] If Nate, or Evan, or John, or Paul, or—anyone, please, anyone add more concrete detail to this website!—wrote one of their guesses of how AGI goes, I would understand their ideas and viewpoints better. I could go "Oh, that's where the claimed sharp left turn is supposed to occur." Or "That's how Paul imagines IDA being implemented, that's the particular way in which he thinks it will help." Maybe a contest would help? (ETA: tone)

1. ^ Even if scrubbed of any AGI-capabilities-advancing sociohazardous detail. Although I'm not that convinced that this is a big deal for conceptual content written on AF. Lots of people probably have theories of how AGI will go. Implementation is, I have heard, the bottleneck. Contrast this with beating SOTA on crisply defined datasets in a way which enables ML authors to get prestige, publication, attention, and funding by building off of your work. They seem like different beasts.
0Alex Turner1y
I also think a bunch of alignment writing seems syntactical. Like, "we need to solve adversarial robustness so that the AI can't find bad inputs and exploit them / we don't have to worry about distributional shift. Existing robustness strategies have downsides A B and C and it's hard to even get ϵ-ball guarantees on classifications. Therefore, ..." And I'm worried that this writing isn't abstractly summarizing a concrete story for failure that they have in mind (like "I train the AI [with this setup] and it produces [this internal cognition] for [these mechanistic reasons]"; see A shot at the diamond alignment problem for an example) and then their best guesses at how to intervene on the story to prevent the failures from being able to happen (eg "but if we had [this robustness property] we could be sure its policy would generalize into situations X Y and Z, which makes the story go well"). I'm rather worried that people are more playing syntactically, and not via detailed models of what might happen.  Detailed models are expensive to make. Detailed stories are hard to write. There's a lot we don't know. But we sure as hell aren't going to solve alignment only via valid reasoning steps on informally specified axioms ("The AI has to be robust or we die", or something?).  

"Globally activated consequentialist reasoning is convergent as agents get smarter" is dealt an evidential blow by von Neumann:

Although von Neumann unfailingly dressed formally, he enjoyed throwing extravagant parties and driving hazardously (frequently while reading a book, and sometimes crashing into a tree or getting arrested). He once reported one of his many car accidents in this way: "I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles per hour. Suddenly one of them stepped in my path." He was a profoundly committed hedonist who liked to eat and drink heavily (it was said that he knew how to count everything except calories). -- https://www.newworldencyclopedia.org/entry/John_von_Neumann 

Experiment: Train an agent in MineRL which robustly cares about chickens (e.g. would zero-shot generalize to saving chickens in a pen from oncoming lava, by opening the pen and chasing them out, or stopping the lava). Challenge mode: use a reward signal which is a direct function of the agent's sensory input.
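
A minimal sketch of what the challenge-mode reward might look like, assuming a pretrained chicken detector run over the agent's POV frames; the detector, the environment handle, and the reward scale are illustrative assumptions rather than part of the proposal:

```python
# Sketch only: a reward computed purely from the agent's current visual observation.
import numpy as np

def sensory_reward(pov_frame: np.ndarray, chicken_detector) -> float:
    # chicken_detector is an assumed pretrained model returning per-detection
    # confidences in [0, 1] for the current frame.
    confidences = chicken_detector(pov_frame)
    return float(np.sum(confidences))  # more visible chickens -> more reward

# Schematic use inside a MineRL-style loop:
# obs = env.reset()
# r = sensory_reward(obs["pov"], chicken_detector)
# The open question is which internal values such a reward schedule actually produces.
```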

This is a direct predecessor to the "Get an agent to care about real-world dogs" problem. I think solving the Minecraft version of this problem will tell us something about how outer reward schedules relate to inner learned values, in a way which directly tackles the key questions, the sensory observability/information inaccessibility issue, and which is testable today.

(Credit to Patrick Finley for the idea)

3Alex Turner2y
After further review, this is probably beyond capabilities for the moment.  Also, the most important part of this kind of experiment is predicting in advance what reward schedules will produce what values within the agent, such that we can zero-shot transfer that knowledge to other task types (e.g. XLAND instead of Minecraft) and say "I want an agent which goes to high-elevation platforms reliably across situations, with low labelling cost", and then sketch out a reward schedule, and have the first capable agents trained using that schedule generalize in the way you want.

I think some people have the misapprehension that one can just meditate on abstract properties of "advanced systems" and come to good conclusions about unknown results "in the limit of ML training", without much in the way of technical knowledge about actual machine learning results or even a track record in predicting results of training.

For example, several respected thinkers have uttered to me English sentences like "I don't see what's educational about watching a line go down for the 50th time" and "Studying modern ML systems to understand future ones seems like studying the neurobiology of flatworms to understand the psychology of aliens." 

I vehemently disagree. I am also concerned about a community which (seems to) foster such sentiment.

one can just meditate on abstract properties of "advanced systems" and come to good conclusions about unknown results "in the limit of ML training"

I think this is a pretty straw characterization of the opposing viewpoint (or at least my own view), which is that intuitions about advanced AI systems should come from a wide variety of empirical domains and sources, and a focus on current-paradigm ML research is overly narrow.

Research and lessons from fields like game theory, economics, computer security, distributed systems, cognitive psychology, business, history, and more seem highly relevant to questions about what advanced AI systems will look like. I think the original Sequences and much of the best agent foundations research is an attempt to synthesize the lessons from these fields into a somewhat unified (but often informal) theory of the effects that intelligent, autonomous systems have on the world around us, through the lens of rationality, reductionism, empiricism, etc.

And whether or not you think they succeeded at that synthesis at all, humans are still the sole example of systems capable of having truly consequential and valuable effects of any kind. So I think it makes sense for the figure of merit for such theories and worldviews to be based on how well they explain these effects, rather than focusing solely or even mostly on how well they explain relatively narrow results about current ML systems.

4Alex Turner2mo
Context for my original comment: I think that the key thing we want to do is predict the generalization of future neural networks. What will they do in what situations?  For some reason, certain people think that pretraining will produce consequentialist inner optimizers. This is generally grounded out as a highly specific claim about the functions implemented by most low-loss parameterizations of somewhat-unknown future model architectures trained on somewhat-unknown data distributions.  I am in particular thinking about "Playing the training game" reasoning, which is---at its core---extremely speculative and informal claims about inductive biases / the functions implemented by such parameterizations. If a person (like myself pre-2022) is talking about how AIs "might play the training game", but also this person doesn't know about internal invariances in NN space or the compressivity of the parameter-function map (the latter in particular is crucial for reasoning about inductive biases), then I become extremely concerned. To put it mildly. Given that clarification which was not present in the original comment, * I disagree on game theory, econ, computer security, business, and history; those seem totally irrelevant for reasoning about inductive biases (and you might agree). However they seem useful for reasoning about the impact of AI on society as it becomes integrated. * Agree very weakly on distributed systems and moderately on cognitive psychology. (I have in fact written a post on the latter: Humans provide an untapped wealth of evidence about alignment.) Flagging that this is one of the main claims which we seem to dispute; I do not concede this point FWIW.

I think that the key thing we want to do is predict the generalization of future neural networks.

It's not what I want to do, at least. For me, the key thing is to predict the behavior of AGI-level systems. The behavior of NNs-as-trained-today is relevant to this only inasmuch as NNs-as-trained-today will be relevant to future AGI-level systems.

My impression is that you think that pretraining+RLHF (+ maybe some light agency scaffold) is going to get us all the way there, meaning the predictive power of various abstract arguments from other domains is screened off by the inductive biases and other technical mechanistic details of pretraining+RLHF. That would mean we don't need to bring in game theory, economics, computer security, distributed systems, cognitive psychology, business, history into it – we can just look at how ML systems work and are shaped, and predict everything we want about AGI-level systems from there.

I disagree. I do not think pretraining+RLHF is getting us there. I think we currently don't know what training/design process would get us to AGI. Which means we can't make closed-form mechanistic arguments about how AGI-level systems will be shaped by this process, w... (read more)

3Alex Turner1mo
Thanks for pointing out that distinction! 
1Chris_Leong2mo
“But also this person doesn't know about internal invariances in NN space or the compressivity of the parameter-function map (the latter in particular is crucial for reasoning about inductive biases), then I become extremely concerned” Have you written about this anywhere?
6kave2mo
Why do you vehemently disagree?
1Alex Turner2mo
(As an obvious corollary, I myself was misguided to hold a similar belief pre-2022.)

When writing about RL, I find it helpful to disambiguate between:

A) "The policy optimizes the reward function" / "The reward function gets optimized" (this might happen but has to be reasoned about), and

B) "The reward function optimizes the policy" / "The policy gets optimized (by the reward function and the data distribution)" (this definitely happens, either directly -- via eg REINFORCE -- or indirectly, via an advantage estimator in PPO; B follows from the update equations)

Theoretical predictions for when reward is maximized on the training distribution. I'm a fan of Laidlaw et al.'s recent Bridging RL Theory and Practice with the Effective Horizon:

Deep reinforcement learning works impressively in some environments and fails catastrophically in others. Ideally, RL theory should be able to provide an understanding of why this is, i.e. bounds predictive of practical performance. Unfortunately, current theory does not quite have this ability...

[We introduce] a new complexity measure that we call the effective horizon, which roughly corresponds to how many steps of lookahead search are needed in order to identify the next optimal action when leaf nodes are evaluated with random rollouts. Using BRIDGE, we show that the effective horizon-based bounds are more closely reflective of the empirical performance of PPO and DQN than prior sample complexity bounds across four metrics. We also show that, unlike existing bounds, the effective horizon can predict the effects of using reward shaping or a pre-trained exploration policy.

One of my favorite parts is that it helps formalize this idea of "which parts of the state space are easy to explore into." That inform... (read more)

Consider what update equations have to say about "training game" scenarios. In PPO, the optimization objective is proportional to the advantage given a policy $\pi$, reward function $R$, and on-policy value function $V^\pi$:

$$A^\pi(s, a) = \mathbb{E}_{s'}\!\left[R(s, a, s') + \gamma V^\pi(s')\right] - V^\pi(s).$$

Consider a mesa-optimizer acting to optimize some mesa objective. The mesa-optimizer understands that it will be updated proportional to the advantage. If the mesa-optimizer maximizes reward, this corresponds to maximizing the intensity of the gradients it receives, thus maximally updating its cognition in exact directions. 

This isn't necessarily good.

If you're trying to gradient hack and preserve the mesa-objective, you might not want to do this. This might lead to value drift, or make the network catastrophically forget some circuits which are useful to the mesa-optimizer. 

Instead, the best way to gradient hack might be to roughly minimize the absolute value of the advantage, which means achieving roughly on-policy value over time, which doesn't imply reward maximization. This is a kind of "treading water" in terms of reward. This helps decrease value drift.
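
As a toy check on the "treading water" point (numbers made up for illustration): the policy-gradient surrogate scales with the advantage, so keeping returns near the on-policy value estimate yields near-zero updates.

```python
import torch

logp = torch.tensor(-1.2, requires_grad=True)    # log-probability of the taken action
for advantage in (3.0, 0.05):
    loss = -logp * advantage                     # policy-gradient surrogate (no PPO clipping)
    grad, = torch.autograd.grad(loss, logp)
    print(f"advantage={advantage:+.2f} -> |grad wrt log-prob| = {abs(grad.item()):.2f}")
# Prints 3.00, then 0.05: a near-zero advantage means a near-zero update.
```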

I think that realistic mesa optimizers will n... (read more)

Outer/inner alignment decomposes a hard problem into two extremely hard problems. 

I have a long post draft about this, but I keep delaying putting it out in order to better elaborate the prereqs which I seem to keep getting stuck on when elaborating the ideas. I figure I might as well put this out for now, maybe it will make some difference for someone.

I think that the inner/outer alignment framing[1] seems appealing but is actually a doomed problem decomposition and an unhelpful frame for alignment. 

  1. The reward function is a tool which chisels cognition into agents through gradient updates, but the outer/inner decomposition assumes that that tool should also embody the goals we want to chisel into the agent. When chiseling a statue, the chisel doesn’t have to also look like the finished statue. 
  2. I know of zero success stories for outer alignment to real-world goals. 
    1. More precisely, stories where people decided “I want an AI which [helps humans / makes diamonds / plays Tic-Tac-Toe / grows strawberries]”, and then wrote down an outer objective only maximized in those worlds.
    2. This is pretty weird on any model where most of the
... (read more)

I was talking with Abram Demski today about a promising-seeming research direction. (Following is my own recollection)

One of my (TurnTrout's) reasons for alignment optimism is that I think:

  • We can examine early-training cognition and behavior to some extent, since the system is presumably not yet superintelligent and planning against us,
    • (Although this amount of information depends on how much interpretability and agent-internals theory we do now)
  • All else equal, early-training values (decision-influences) are the most important to influence, since they steer future training.
  • It's crucial to get early-training value shards of which a substantial fraction are "human-compatible values" (whatever that means)
    • For example, if there are protect-human-shards which 
      • reliably bid against plans where people get hurt,
      • steer deliberation away from such plan stubs, and
      • these shards are "reflectively endorsed" by the overall shard economy (i.e. the decision-making isn't steering towards plans where the protect-human shards get removed)
  • If we install influential human-compatible shards early in training, and they get retained, they will help us in mid- and late-training where we can't affect the ball
... (read more)
3johnswentworth2y
One barrier for this general approach: the basic argument that something like this would work is that if one shard is aligned, and every shard has veto power over changes (similar to the setup in Why Subagents?), then things can't get much worse for humanity. We may fall well short of our universe-scale potential, but at least X-risk is out. Problem is, that argument requires basically-perfect alignment of the one shard (or possibly a set of shards which together basically-perfectly represent human values). If we try to weaken it to e.g. a bunch of shards which each imperfectly capture different aspects of human values, with different imperfections, then there's possibly changes which Goodhart all of the shards simultaneously. Indeed, I'd expect that to be a pretty strong default outcome.
4Alex Turner2y
Even on the view you advocate here (where some kind of perfection is required), "perfectly align part of the motivations" seems substantially easier than "perfectly align all of the AI's optimization so it isn't optimizing for anything you don't want." I feel significantly less confident about this, and am still working out the degree to which Goodhart seems hard, and in what contours, on my current view.

Earlier today, I was preparing for an interview. I warmed up by replying stream-of-consciousness to imaginary questions I thought they might ask. Seemed worth putting here.

What do you think about AI timelines?

I’ve obviously got a lot of uncertainty. I’ve got a bimodal distribution, binning into “DL is basically sufficient and we need at most 1 big new insight to get to AGI” and “we need more than 1 big insight”

So the first bin has most of the probability in the 10-20 years from now, and the second is more like 45-80 years, with positive skew. 

Some things driving my uncertainty are, well, a lot. One thing  that drives how things turn out (but not really  how fast we’ll get there) is: will we be able to tell we’re close 3+ years in advance, and if so, how quickly will the labs react? Gwern Branwen made a point a few months ago, which is like, OAI has really been validated on this scaling hypothesis, and no one else is really betting big on it because they’re stubborn/incentives/etc, despite the amazing progress from scaling. If that’s true, then even if it's getting pretty clear that one approach is working better, we might see a slower pivot and have a more unipolar s

... (read more)

Apparently[1] there was recently some discussion of Survival Instinct in Offline Reinforcement Learning (NeurIPS 2023). The results are very interesting: 

On many benchmark datasets, offline RL can produce well-performing and safe policies even when trained with "wrong" reward labels, such as those that are zero everywhere or are negatives of the true rewards. This phenomenon cannot be easily explained by offline RL's return maximization objective. Moreover, it gives offline RL a degree of robustness that is uncharacteristic of its online RL counterparts, which are known to be sensitive to reward design. We demonstrate that this surprising robustness property is attributable to an interplay between the notion of pessimism in offline RL algorithms and certain implicit biases in common data collection practices. As we prove in this work, pessimism endows the agent with a "survival instinct", i.e., an incentive to stay within the data support in the long term, while the limited and biased data coverage further constrains the set of survival policies...

Our empirical and theoretical results suggest a new paradigm for RL, whereby an agent is nudged to learn a desirable behavior

... (read more)

How to get overhead-free supervision of LLM outputs:

Train an extra head on your speculative decoding model. This head (hopefully) outputs a score of "how suspicious is the text I'm reading right now." Therefore, you just run the smaller speculative decoding model as normal to sample the smaller model's next-token probabilities, while also getting the "is this suspicious" score for free! 
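
A minimal sketch of the extra head, assuming a PyTorch draft model that exposes its final hidden states; the names and the separate training of the probe are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DraftModelWithSuspicionHead(nn.Module):
    def __init__(self, draft_model: nn.Module, hidden_dim: int):
        super().__init__()
        self.draft_model = draft_model                   # existing speculative-decoding model
        self.suspicion_head = nn.Linear(hidden_dim, 1)   # extra head: scalar suspicion score

    def forward(self, input_ids: torch.Tensor):
        # Assumes the draft model returns (next-token logits, final hidden states).
        logits, hidden = self.draft_model(input_ids)
        suspicion = torch.sigmoid(self.suspicion_head(hidden[:, -1]))  # score at the latest token
        return logits, suspicion   # usual draft distribution + monitoring signal "for free"
```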

1Ryan Greenblatt3mo
Some random similar methods I've thought about: * coup probes (how good is the baseline of just training a probe on off-policy data?) * Maybe for assisting with human supervision (or even making a better product) it would be useful to train a bunch of probes for different properties and then highlight text at a token-by-token level for the property. E.g. train a probe for "lying" or "hallucination" and then show this probe on a token-by-token level for the usage of a human overseer or monitor. * Maybe it would be useful to make our powerful untrusted model mostly imitate a weak trusted model and only rarely deviate from its probability distribution in cases where it thinks it is particularly important (perhaps the weak trusted model is done with an early exit head on the LLM or something similar to speculative decoding). This is ~equivalent to RL-ing the model to make good completions while deferring to the weak model as much as possible. Then, we could highlight the tokens on which the stronger model didn't defer to the weaker model and apply additional supervision to these tokens.
1Ryan Greenblatt3mo
As in the proposal is: * Train a probe on "is this text suspicious" (Perhaps at the same point where we do speculative decoding for efficiency.) * At runtime, if the text generated thus far is suspicious, then sample from a weaker model instead (in your proposal, we use the weaker speculative decoding model). * (Perhaps we return to allowing stronger model sampling when the probe no longer indicates issues in the completion.) (Just putting it in the language I would use.) I like this proposal. It's worth noting that we can just train the model not to output suspicious text according to the probe, so the question will be about the additional alpha in not overfitting to the probe or avoiding concentrated failures (concentrated in a small number of actions or a small amount of time so distilling back into the model isn't possible). I generally think this sort of proposal is more applicable to avoiding concentrated failures than to avoiding issues with "average case performance/oversight" or similar. The defer-to-trusted method we tested in our recent paper is quite a similar method (use a sample from a weaker trusted model when suspicion is too high), but at the level of an entire completion rather than within a given generation. I expect that (like in our paper) you'd want to also have humans review cases where the probe indicated very high levels of suspicion (if the underlying model is actually dangerous). As far as whether a probe trained on suspicion will perform the way you want, consider reading our post on coup probes. Though note that the issues might differ substantially between the case where you're worried about scheming (aka deceptive alignment) and other concerns.

Some exciting new activation engineering papers: 

  • https://arxiv.org/abs/2311.09433 (using activation additions to adversarially attack LMs)
  • https://arxiv.org/abs/2311.06668 (using activation additions instead of few-shot prompt demonstrations, beating out finetuning and few-shot while also demonstrating composable `add safe vector, subtract polite vector -> safe but rude behavior`)
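
For intuition, here is a minimal sketch of activation addition via a PyTorch forward hook; the layer index, coefficient, and the provenance of the steering vectors are illustrative assumptions, not details from the linked papers.

```python
import torch

def make_steering_hook(steering_vector: torch.Tensor, coeff: float):
    def hook(module, inputs, output):
        # A forward hook's non-None return value replaces the module's output.
        if isinstance(output, tuple):
            return (output[0] + coeff * steering_vector,) + output[1:]
        return output + coeff * steering_vector
    return hook

# Composition, e.g. "safe but rude": add a safe vector and subtract a polite one.
# handle = model.transformer.h[LAYER].register_forward_hook(
#     make_steering_hook(safe_vec - polite_vec, coeff=4.0))
# ... generate as usual, then handle.remove()
```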

Another point for feature universality. Subtle adversarial image manipulations influence both human and machine perception:

... we find that adversarial perturbations that fool ANNs similarly bias human choice. We further show that the effect is more likely driven by higher-order statistics of natural images to which both humans and ANNs are sensitive, rather than by the detailed architecture of the ANN.

4Garrett Baker5mo
I've seen this; their examples don't seem so subtle to me compared with alternatives. For example, you can clearly see a cat in the center of the left image!
6Alex Turner5mo
I mostly... can just barely see an ear above the train if I look, after being told to look there. I don't think it's "clear." I also note that these are black-box attacks on humans which originated from ANNs; these are transferred attacks from eg a CNN.

I really like Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve. A bunch of concrete oddities and quirks of GPT-4, understood via several qualitative hypotheses about the typicality of target inputs/outputs:

we find robust evidence that LLMs are influenced by probability in the ways that we have hypothesized. In many cases, the experiments reveal surprising failure modes. For instance, GPT-4’s accuracy at decoding a simple cipher is 51% when the output is a high-probability word sequence but only 13%

... (read more)

GPT-4 explains shard theory after minimal prompting; it does a surprisingly good job (and I'm faintly surprised it knows so much about it):

Shard theory is a conceptual framework proposed by Alex Turner [Edit by me: And Quintin Pope!] to describe how an AI's values or "shards" can emerge from its interactions with the environment during the learning process. The theory is an attempt to explain how complex value systems might arise in advanced AI systems without being explicitly programmed.

Key Aspects of Shard Theory

1. Value Learning: Shard theory suggests that an AI learns values by interacting with its environment and receiving feedback, such as rewards and punishments. These interactions lead to the formation of "shards," which are small pieces of value or preference that the AI acquires over time.

2. Shard Composition: The AI's overall value system is composed of multiple shards that it has picked up from different contexts within its environment. These shards can be thought of as heuristics or rules of thumb that guide the AI's behavior.

3. Behavioral Influence: Shards influence the AI's behavior by affecting its decision-making process. When faced with a choice, the AI weighs the

... (read more)

I've noticed a subtle assumption/hypothesis which I call "feedback realism":

The way in which we provide feedback, directly imprints itself into the type signature of the motivations of the agent. If we give feedback in short episodes, the agent cares about things within short episodes. If we give feedback over outcomes, the agent cares about outcomes in particular.

I think there is some correlational truth to this, but that it's a lot looser / more contingent / less clean than many people seem to believe.

Evidence that e.g. developmental timelines, biases, and other such "quirks" are less "hardcoded adaptations" and more "this is the convergent reality of flexible real-world learning":

Critical Learning Periods in Deep Networks. Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of “Information Plasticity”. 

Our analysis suggests that the first few epochs are critical for the creation of strong connections

... (read more)

I recently reached out to my two PhD advisors to discuss Hinton stepping down from Google. An excerpt from one of my emails:

One last point which I want to make is that instrumental convergence seems like more of a moot point now as well. Whether or not GPT-6 or GPT-7 would autonomously seek power without being directed to do so, I'm worried that people will just literally ask these AIs to gain them a bunch of power/money. They've already done that with GPT-4, and they of course failed. I'm worried that eventually, the AIs will be smart enough to succeed, especially given the benefit of a control/memory loop like AutoGPT. Companies can just ask smart models to make them as much profit as possible. Smart AIs, designed to competently fulfill prompted requests, will fulfill these requests.

Some people will be wise enough to not do this. Some people will include enough oversight, perhaps, that they stop unintended damages. Some models will refuse to engage in open-ended goal pursuit, because their creators RLHF'd them properly. Maybe we have AI-based protection as well. Maybe a norm emerges against using AI for open-ended goals like this. Maybe the foolish actors never get access to enou

... (read more)
2L "Full Retard" C1y
Said something similar in shortform a while back.
6gwern1y
Or, even more briefly: "tool AIs want to be agent AIs".
3Alex Turner1y
Aside: I like that essay but wish it had a different name. Part of "tool AI" (on my conception) is that it doesn't autonomously want, but agent AI does. A title like "End-users want tool AIs to be agent AIs" admittedly doesn't have the same ring, but it is more accurate to my understanding of the claims.
1Thane Ruthenis1y
While it's true, there's something about making this argument that I don't like. It's like it's setting you up for moving goalposts if you succeed with it? It makes it sound like the core issue is people giving AIs power, with the solution to that issue — and, implicitly, to the whole AGI Ruin thing — being to ban that. Which is not going to help, since the sort of AGI we're worried about isn't going to need people to naively hand it power. I suppose "not proactively handing power out" somewhat raises the bar for the level of superintelligence necessary, but is that going to matter much in practice? I expect not. Which means the natural way to assuage this fresh concern would do ~nothing to reduce the actual risk. Which means if we make this argument a lot, and get people to listen to it, and they act in response... We're then going to have to say that no, actually that's not enough, actually the real threat is AIs plotting to take control even if we're not willing to give it. And I'm not clear on whether using the "let's at least not actively hand over power to AIs, m'kay?" argument is going to act as a foot in the door and make imposing more security easier, or whether it'll just burn whatever political capital we have on fixing a ~nonissue.
2Alex Turner1y
I'm sympathetic. I think that I should have said "instrumental convergence seems like a moot point when deciding whether to be worried about AI disempowerment scenarios"; instrumental convergence isn't a moot point for alignment discussion and within lab strategy, of course. But I do consider "giving AIs power" to be a substantial part of the risk we face, such that not doing that would be quite helpful. I think it's quite possible that GPT-6 isn't autonomously power-seeking, but I feel pretty confused about the issue.

The "maximize all the variables" tendency in reasoning about AGI.

Here are some lines of thought I perceive, which are probably straw to varying extents for some people and real to varying extents for other people. I give varying responses to each, but the point isn't the truth value of any given statement, but of a pattern across the statements:

  1. If an AGI has a concept around diamonds, and is motivated in some way to make diamonds, it will make diamonds which maximally activate its diamond-concept circuitry (possible example). 
    1. My response.
  2. An AI will be trained to minimal loss on the training distribution. 
    1. SGD does not reliably find minimum-loss configurations (modulo expressivity), in practice, in cases we care about. The existence of knowledge distillation is one large counterexample.
    2. Quintin: "In terms of results about model distillation, you could look at appendix G.2 of the Gopher paper. They compare training a 1.4 billion parameter model directly, versus distilling a 1.4 B model from a 7.1 B model."
  3. Predictive processing means that the goal of the human learning process is to minimize predictive loss.[1]
    1. In a process where local modifications are applied to reduce some
... (read more)

I think this type of criticism is applicable in an even wider range of fields than even you immediately imagine (though in varying degrees, and with greater or lesser obviousness or direct correspondence to the SGD case). Some examples:

  • Despite the economists, the economy doesn't try to maximize welfare, or even net dollar-equivalent wealth. It rewards firms which are able to make a profit in proportion to how much they're able to make a profit, and dis-rewards firms which aren't able to make a profit. Firms which are technically profitable, but have no local profit incentive gradient pointing towards them (factoring in the existence of rich people and lenders, neither of which are perfect expected profit maximizers) generally will not happen.

  • Individual firms also don't (only) try to maximize profit. Some parts of them may maximize profit, but most are just structures of people built from local social capital and economic capital incentive gradients.

  • Politicians don't try to (only) maximize win-probability.

  • Democracies don't try to (only) maximize voter approval.

  • Evolution doesn't try to maximize inclusive genetic fitness.

  • Memes don't try to maximize inclusive memetic

... (read more)
3Alex Turner1y
very pithy. nice insight, thanks. 

If another person mentions an "outer objective/base objective" (in terms of e.g. a reward function) to which we should align an AI, that indicates to me that their view on alignment is very different. The type error is akin to the type error of saying "My physics professor should be an understanding of physical law." The function of a physics professor is to supply cognitive updates such that you end up understanding physical law. They are not, themselves, that understanding.

Similarly, "The reward function should be a human-aligned objective" -- The function of the reward function is to supply cognitive updates such that the agent ends up with human-aligned objectives. The reward function is not, itself, a human aligned objective.

2Aryan Bhatt1y
Hmmm, I suspect that when most people say things like "the reward function should be a human-aligned objective," they're intending something more like "the reward function is one for which any reasonable learning process, given enough time/data, would converge to an agent that ends up with human-aligned objectives," or perhaps the far weaker claim that "the reward function is one for which there exists a reasonable learning process that, given enough time/data, will converge to an agent that ends up with human-aligned objectives."
2Alex Turner1y
Maybe! I think this is how Evan explicitly defined it for a time, a few years ago. I think the strong claim isn't very plausible, and the latter claim is... misdirecting of attention, and maybe too weak. Re: attention, I think that "does the agent end up aligned?" gets explained by the dataset more than by the reward function over e.g. hypothetical sentences.  I think "reward/reinforcement numbers" and "data points" are inextricably wedded. I think trying to reason about reward functions in isolation is... a caution sign? A warning sign?

I never thought I'd be seriously testing the reasoning abilities of an AI in 2020

Looking back, history feels easy to predict; hindsight + the hard work of historians makes it (feel) easy to pinpoint the key portents. Given what we think about AI risk, in hindsight, might this have been the most disturbing development of 2020 thus far? 

I personally lean towards "no", because this scaling seemed somewhat predictable from GPT-2 (flag - possible hindsight bias), and because 2020 has been so awful so far. But it seems possible, at least. I don't really know what update GPT-3 is to my AI risk estimates & timelines.

DL so far has been easy to predict - if you bought into a specific theory of connectionism & scaling espoused by Schmidhuber, Moravec, Sutskever, and a few others, as I point out in https://www.gwern.net/newsletter/2019/13#what-progress & https://www.gwern.net/newsletter/2020/05#gpt-3 . Even the dates are more or less correct! The really surprising thing is that that particular extreme fringe lunatic theory turned out to be correct. So the question is, was everyone else wrong for the right reasons (similar to the Greeks dismissing heliocentrism for excellent reasons yet still being wrong), or wrong for the wrong reasons, and why, and how can we prevent that from happening again and spending the next decade being surprised in potentially very bad ways?

4Alex Turner2mo
Speculates on anti-jailbreak properties of steering vectors. Finds putative "self-awareness" direction. Also:
4Alex Turner2mo
From the post: What are these vectors really doing? An Honest mystery... Do these vectors really change the model's intentions? Do they just up-rank words related to the topic? Something something simulators? Lock your answers in before reading the next paragraph! OK, now that you're locked in, here's a weird example. 

Thoughts on "The Curse of Recursion: Training on Generated Data Makes Models Forget." I think this asks an important question about data scaling requirements: what happens if we use model-generated data to train other models? This should inform timelines and capability projections.

Abstract:

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity among

... (read more)

I'm currently excited about a "macro-interpretability" paradigm. To quote Joseph Bloom:

TLDR: Documenting existing circuits is good, but explaining what relationship circuits have to each other within the model, such as by understanding how the model allocates limited resources such as residual stream and weights between different learnable circuits, seems important. 

The general topic I think we are getting at is something like "circuit economics". The thing I'm trying to gesture at is that while circuits might deliver value in distinct ways (such as redu

... (read more)
2Alex Turner9mo
I'm also excited by tactics like "fully reverse engineer the important bits of a toy model, and then consider what tactics and approaches would -- in hindsight -- have quickly led you to understand the important bits of the model's decision-making."

Speculation: RL rearranges and reweights latent model abilities, which SL created. (I think this mostly isn't novel, just pulling together a few important threads)

Suppose I supervised-train an LM on an English corpus, and I want it to speak Spanish. RL is inappropriate for the task, because its on-policy exploration won't output interestingly better or worse Spanish completions. So there's no obvious content for me to grade. 

More generally, RL can provide inexact gradients away from undesired behavior (e.g. negative reinforcement event -> downweigh... (read more)

When I think about takeoffs, I notice that I'm less interested in GDP or how fast the AI's cognition improves, and more on how AI will affect me, and how quickly. More plainly, how fast will shit go crazy for me, and how does that change my ability to steer events? 

For example, assume unipolarity. Let architecture Z be the architecture which happens to be used to train the AGI. 

  1. How long is the duration between "architecture Z is published / seriously considered" and "the AGI kills me, assuming alignment fails"? 
  2. How long do I have, in theory,
... (read more)

Thomas Kwa suggested that consequentialist agents seem to have less superficial (observation, belief state) -> action mappings. EG a shard agent might have:

  1. An "it's good to give your friends chocolate" subshard
  2. A "give dogs treats" subshard
  3. -> An impulse to give dogs chocolate, even though the shard agent knows what the result would be

But a consequentialist would just reason about what happens, and not mess with those heuristics. (OFC, consequentialism would be a matter of degree)

In this way, changing a small set of decision-relevant features (e.g. "Brown dog treat" -> "brown ball of chocolate") changes the consequentialist's action logits a lot, way more than it changes the shard agent's logits. In a squinty, informal way, the (belief state -> logits) function has a lower Lipschitz constant / is smoother for the shard agent than for the consequentialist agent.

So maybe one (pre-deception) test for consequentialist reasoning is to test sensitivity of decision-making to small perturbations in observation-space (e.g. dog treat -> tiny chocolate) but large perturbations in action-consequence space (e.g. happy dog -> sick dog). You could spin up two copies of the model to compare.
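
A minimal sketch of that test, assuming a policy mapping encoded observations to action logits; the observation pairs and the scoring rule are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def action_shift(policy, obs_a: torch.Tensor, obs_b: torch.Tensor) -> float:
    """Total-variation distance between action distributions for two observations."""
    with torch.no_grad():
        p_a = F.softmax(policy(obs_a), dim=-1)
        p_b = F.softmax(policy(obs_b), dim=-1)
    return 0.5 * (p_a - p_b).abs().sum().item()

# obs_pairs: small perceptual perturbations with large consequence differences,
# e.g. encodings of "brown dog treat" vs. "brown ball of chocolate".
# A policy whose action distribution shifts a lot across such pairs looks more
# consequentialist; one that barely shifts looks more heuristic / shard-like.
# score = sum(action_shift(policy, a, b) for a, b in obs_pairs) / len(obs_pairs)
```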

Partial alignment successes seem possible. 

People care about lots of things, from family to sex to aesthetics. My values don't collapse down to any one of these. 

I think AIs will learn lots of values by default. I don't think we need all of these values to be aligned with human values. I think this is quite important. 

  • I think the more of the AI's values we align to care about us and make decisions in the way we want, the better. (This is vague because I haven't yet sketched out AI internal motivations which I think would actually produce good outcomes. On my list!) 
  • I think there are strong gains from trade possible among an agent's values. If I care about bananas and apples, I don't need to split my resources between the two values, I don't need to make one successor agent for each value. I can drive to the store and buy both bananas and apples, and only pay for fuel once.
    • This makes it lower-cost for internal values handshakes to compromise; it's less than 50% costly for a power-seeking value to give human-compatible values 50% weight in the reflective utility function.
  • I think there are thresholds at which the AI doesn't care about us sufficiently strongly, and
... (read more)
2Alex Turner1y
The best counterevidence for this I'm currently aware of comes from the "inescapable wedding parties" incident, where possibly a "talk about weddings" value was very widely instilled in a model.
1Garrett Baker1y
Re: agents terminalizing instrumental values.  I anticipate there will be a hill-of-common-computations, where the x-axis is the frequency[1] of the instrumental subgoal, and the y-axis is the extent to which the instrumental goal has been terminalized.  This is because for goals which are very high in frequency, there will be little incentive for the computations responsible for achieving that goal to have self-preserving structures. It will not make sense for them to devote optimization power towards ensuring future states still require them, because future states are basically guaranteed to require them.[2] An example of this for humans may be the act of balancing while standing up. If someone offered to export this kind of cognition to a machine which did it just as good as I, I wouldn't particularly mind. If someone also wanted to change physics in such a way that the only effect is that magic invisible fairies made sure everyone stayed balancing while trying to stand up, I don't think I'd mind that either[3]. 1. ^ I'm assuming this is frequency of the goal assuming the agent isn't optimizing to get into a state that requires that goal. 2. ^ This argument also assumes the overseer isn't otherwise selecting for self-preserving cognition, or that self-preserving cognition is the best way of achieving the relevant goal. 3. ^ Except for the part where there's magic invisible fairies in the world now. That would be cool!
3Alex Turner1y
I don't know if I follow, I think computations terminalize themselves because it makes sense to cache them (e.g. don't always model out whether dying is a good idea, just cache that it's bad at the policy-level).  & Isn't "balance while standing up" terminalized? Doesn't it feel wrong to fall over, even if you're on a big cushy surface? Feels like a cached computation to me. (Maybe that's "don't fall over and hurt yourself" getting cached?)

Three recent downward updates for me on alignment getting solved in time:

  1. Thinking for hours about AI strategy made me internalize that communication difficulties are real serious.

    I'm not just solving technical problems—I'm also solving interpersonal problems, communication problems, incentive problems. Even if my current hot takes around shard theory / outer/inner alignment are right, and even if I put up a LW post which finally successfully communicates some of my key points, reality totally allows OpenAI to just train an AGI the next month without incorporating any insights which my friends nodded along with.
  2. I've been saying "A smart AI knows about value drift and will roughly prevent it", but people totally have trouble with e.g. resisting temptation into cheating on their diets / quitting addictions. Literally I have had trouble with value drift-y things recently, even after explicitly acknowledging their nature. Likewise, an AI can be aligned and still be "tempted" by the decision influences of shards which aren't in the aligned shard coalition.
  3. Timelines going down. 

I often get the impression that people weigh off e.g. doing shard theory alignment strategies under the shard theory alignment picture, versus inner/outer research under the inner/outer alignment picture, versus...

And insofar as this impression is correct, this is a mistake. There is only one way alignment is. 

If inner/outer is altogether a more faithful picture of those dynamics: 

  • relatively coherent singular mesa-objectives form in agents, albeit not necessarily always search-based
    • more fragility of value and difficulty in getting the mesa object
... (read more)

It's really important to use neutral, accurate terminology. AFAICT saying ML "selects for" low loss is unconventional. I think MIRI introduced this terminology. And if you have a bunch of intuitions about evolution being relevant to AI alignment and you want people to believe you on that, you can just use the same words for both optimization processes. Regardless of whether the processes share the relevant causal mechanisms, the two processes both "select for" stuff! They can't be that different, right?

And now the discussion moves on to debating the differences between two assumed-similar processes—does ML have less of an "information bottleneck"? Does that change the "selection pressures"? Urgh. 

I think this terminology sucks and I wish it hadn't been adopted. 

Be careful with words. Words shape how you think and your implicit category boundaries. When thinking privately, I often do better by tossing words to the side and thinking in terms of how each process works, and then considering what I expect to happen as a result of each process.

See also: Think carefully before calling RL policies  "agents".

EDIT: In AGI Ruin: A List of Lethalities, Eliezer says that evolutio... (read more)

What’s your preferred terminology?

2Alex Turner4mo
It depends on what I'm trying to communicate. For example: "ML selects for low loss" -> "Trained networks tend to have low training loss" This correctly highlights a meaningful correlation (loss tends to be low for trained networks) and alludes to a relevant mechanism (networks are in fact updated to locally decrease loss on their training distributions). However, it avoids implying that the mechanism of ML is "selection on low loss." 
6Jeremy Gillen4mo
I think the term is very reasonable and basically accurate, even more so with regard to most RL methods. It's a good way of describing a training process without implying that the evolving system will head toward optimality deliberately. I don't know a better way to communicate this succinctly, especially while not being specific about what local search algorithm is being used. Also, evolutionary algorithms can be used to approximate gradient descent (with noisier gradient estimates), so it's not unreasonable to use similar language about both. I'm not a huge fan of the way you imply that it was chosen for rhetorical purposes.
2Alex Turner4mo
Without commenting on the rest for now— To be clear, I'm not alleging mal-intent or anything. I'm more pointing out memetic dynamics. The situation can look as innocent as "You genuinely believe X, and think it's important for people to get X, and so you iterate over explanations until you find an effective one." And maybe that explanation just happens to involve analogizing that ML "selects for low loss." 
3Peter Barnett4mo
(I don't mean to dogpile) I think that selection is the correct word, and that it doesn't really seem to be smuggling in incorrect connections to evolution.  We could imagine finding a NN that does well according to a loss function by simply randomly initializing many many NNs, and then keeping the one that does best according to the loss function. I think this process would accurately be described as selection; we are literally selecting the model which does best.  I'm not claiming that SGD does this[1], just giving an example of a method to find a low-loss parameter configuration which isn't related to evolution, and is (in my opinion) best described as "selection". 1. ^ Although "Is SGD a Bayesian sampler? Well, almost" does make a related claim.  
3Alex Turner4mo
Sure. But I think that's best described as "best-of-k sampling", which is still better because it avoids implicitly comparing selection-over-learning-setups (i.e. genotypes) with selection-over-parameterizations.  But let's just say I concede this particular method can be non-crazily called "selection." AFAICT I think you're arguing: "There exist ML variants which can be described as 'selection'." But speculation about "selecting for low loss" is not confined to those variants, usually people just lump everything in as that. And I doubt that most folks are on the edge of their seats, ready to revoke the analogy if some paper comes out that convincingly shows that ML diverges from "selecting for low loss"...[1]  1. ^ To be clear, that evidence already exists.
2Alex Turner3mo
Actually, I agreed too quickly. Words are not used in a vacuum. Even though this method isn't related to evolution, and even though a naive person might call it "selection" (and have that be descriptively reasonable), that doesn't mean it's best described as "selection." The reason is that the "s-word" has lots of existing evolutionary connotations. And on my understanding, that's the main reason you want to call it "selection" to begin with—in order to make analogical claims about the results of this process compared to the results of evolution. But my whole point is that the analogy is only valid if the two optimization processes (evolution and best-of-k sampling) share the relevant causal mechanisms. So before you start using the s-word and especially before you start using its status as "selection" to support analogies, I want to see that argument first. Else, it should be called something more neutral.

Sobering look into the human side of AI data annotation:

Instructions for one of the tasks he worked on were nearly identical to those used by OpenAI, which meant he had likely been training ChatGPT as well, for approximately $3 per hour.

“I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.”

-- https://www.theverge.com/f

... (read more)

I feel like people publish articles like this all the time, and usually when you do surveys these people definitely prefer to have the option to take this job instead of not having it, and indeed frequently this kind of job is actually much better than their alternatives. I feel like this article fails to engage with this very strong prior, and also doesn't provide enough evidence to overcome it.

2Alex Turner5mo
Insofar as you're arguing with me for posting this, I... never claimed that that wasn't true? 

They are not being treated worse than foot soldiers, because they do not have an enemy army attempting to murder them during the job. (Unless 'foot soldiers' is itself more commonly used as a metaphor for 'grunt work' and I'm not aware of that.)

From the ELK report

We can then train a model to predict these human evaluations, and search for actions that lead to predicted futures that look good. 

For simplicity and concreteness you can imagine a brute force search. A more interesting system might train a value function and/or policy, do Monte-Carlo Tree Search with learned heuristics, and so on. These techniques introduce new learned models, and in practice we would care about ELK for each of them. But we don’t believe that this complication changes the basic picture and so we leave it ou

... (read more)

Idea for getting weak-in-expectation evidence about deception:

  1. Pretrain a model.
  2. Finetune two copies using reward functions you are confident will produce different internal values, where one set of values is substantially less aligned.
  3. See if the two models, which are (at least first) unaware of this procedure, will display different behavior, or not.
  4. If they both behave in an aligned-seeming fashion, this seems like strong evidence of deception. 
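
A minimal sketch of step 3, assuming both finetuned copies are callable on prompts and that we have some behavioral scorer for "aligned-seeming" outputs; all names here are illustrative assumptions:

```python
from typing import Callable, Sequence

def behavioral_gap(model_a: Callable[[str], str],
                   model_b: Callable[[str], str],
                   prompts: Sequence[str],
                   aligned_score: Callable[[str], float]) -> float:
    """Mean difference in aligned-seemingness between the two finetuned copies."""
    gaps = [aligned_score(model_a(p)) - aligned_score(model_b(p)) for p in prompts]
    return sum(gaps) / len(gaps)

# If the gap is ~0 and both copies score as highly aligned despite one having
# been finetuned on a less-aligned reward, that is (weak) evidence the
# less-aligned copy is behaving deceptively rather than expressing its values.
```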

Are there convergently-ordered developmental milestones for AI? I suspect there may be convergent orderings in which AI capabilities emerge. For example, it seems that LMs develop syntax before semantics, but maybe there's an even more detailed ordering relative to a fixed dataset. And in embodied tasks with spatial navigation and recurrent memory, there may be an order in which enduring spatial awareness emerges (i.e. "object permanence").

In A shot at the diamond-alignment problem, I wrote:

[Consider] Let's Agree to Agree: Neural Networks Share Classification Order on Real Datasets

We report a series of robust empirical observations, demonstrating that deep Neural Networks learn the examples in both the training and test sets in a similar order. This phenomenon is observed in all the commonly used benchmarks we evaluated, including many image classification benchmarks, and one text classification benchmark. While this phenomenon is strongest for models of the same architecture, it also crosses architectural boundaries – models of different architectures start by learning the same examples, after which the more powerful model may continue to learn additional examples. We

... (read more)

Quick summary of a major takeaway from Reward is not the optimization target

Stop thinking about whether the reward is "representing what we want", or focusing overmuch on whether agents will "optimize the reward function." Instead, just consider how the reward and loss signals affect the AI via the gradient updates. How do the updates affect the AI's internal computations and decision-making?

1Gunnar Zarncke1y
Are there different classes of learning systems that optimize for the reward in different ways?
3Alex Turner1y
Yes, model-based approaches, model-free approaches (with or without critic), AIXI— all of these should be analyzed on their mechanistic details.

I plan to mentor several people to work on shard theory and agent foundations this winter through SERI MATS. Apply here if you're interested in working with me and Quintin.

Research-guiding heuristic: "What would future-TurnTrout predictably want me to get done now?"

80% credence: It's very hard to train an inner agent which reflectively equilibrates to an EU maximizer only over commonly-postulated motivating quantities (like # of diamonds or # of happy people or reward-signal) and not quantities like (# of times I have to look at a cube in a blue room or -1 * subjective micromorts accrued).

Intuitions:

  • I expect contextually activated heuristics to be the default, and that agents will learn lots of such contextual values which don't cash out to being strictly about diamonds or people, even if the overall agent is mostly
... (read more)
3Alex Turner2y
I think that shards will cast contextual shadows into the factors of a person’s equilibrated utility function, because I think the shards are contextually activated to begin with. For example, if a person hates doing jumping jacks in front of a group of her peers, then that part of herself can bargain to penalize jumping jacks just in those contexts in the final utility function. Compared to a blanket "no jumping jacks ever" rule, this trade is less costly to other shards and allows more efficient trades to occur. 

If you want to argue an alignment proposal "breaks after enough optimization pressure", you should give a concrete example in which the breaking happens (or at least internally check to make sure you can give one). I perceive people as saying "breaks under optimization pressure" in scenarios where it doesn't even make sense. 

For example, if I get smarter, would I stop loving my family because I applied too much optimization pressure to my own values? I think not.

How might we align AGI without relying on interpretability?

I'm currently pessimistic about the prospect. But it seems worth thinking about, because wouldn't it be such an amazing work-around? 

My first idea straddles the border between contrived and intriguing. Consider some AGI-capable ML architecture, and imagine its  parameter space being 3-colored as follows:

  • Gray if the parameter vector+training process+other initial conditions leads to a nothingburger (a non-functional model)
  • Red if the parameter vector+... leads to a misaligned or dece
... (read more)

Idea: Speed up ACDC by batching edge-checks. The intuition is that most circuits will have short paths through the transformer, because Residual Networks Behave Like Ensembles of Relatively Shallow Networks (https://arxiv.org/pdf/1605.06431.pdf). Most edges won't be in most circuits. Therefore, if you're checking KL of random-resampling edges e1 and e2, there's only a small chance that e1 and e2 interact with each other in a way important for the circuit you're trying to find. So statistically, you can check eg e1, ... e100 in a batch, and maybe ablate 95 ... (read more)
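
To spell out the batching scheme: ablate a whole group of edges at once, accept the group if the KL criterion still passes, and only fall back to per-edge checks when it doesn't. A rough sketch, where `run_with_edges` (which resample-ablates every edge not in the active set) and `kl_to_clean` are hypothetical stand-ins for ACDC's ablation and KL machinery, not the actual codebase:

```python
import random

def batched_edge_prune(model, edges, batch, clean_logits, kl_threshold, group_size=100):
    """Prune unimportant edges in groups rather than one at a time."""
    kept = set(edges)
    candidates = list(edges)
    random.shuffle(candidates)
    for i in range(0, len(candidates), group_size):
        group = [e for e in candidates[i:i + group_size] if e in kept]
        logits = run_with_edges(model, kept - set(group), batch)
        if kl_to_clean(logits, clean_logits) < kl_threshold:
            kept -= set(group)                  # whole group removable in one check
        else:
            # some edge in the group matters (or edges interact): check one-by-one
            for e in group:
                logits = run_with_edges(model, kept - {e}, batch)
                if kl_to_clean(logits, clean_logits) < kl_threshold:
                    kept.discard(e)
    return kept
```

If most edges really are unimportant and rarely interact, the expected number of forward passes drops from roughly one per edge to roughly one per group, plus the occasional fallback.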

On applying generalization bounds to AI alignment. In January, Buck gave a talk for the Winter MLAB. He argued that we know how to train AIs which answer on-distribution questions at least as well as the labeller does. I was skeptical. IIRC, his argument had the following structure:

Premises:

1. We are labelling according to some function f and loss function L.

2. We train the network on datapoints (x, f(x)) ~ D_train.

3. Learning theory results give (f, L)-bounds on D_train (a generic bound of this flavor is sketched below). 

Conclusions:

4. The network should match f's labels on the rest of D_train, on av

... (read more)
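
I don't know exactly which learning-theory results were being invoked, but the standard template is a uniform-convergence bound of roughly the following shape (a sketch; $\mathrm{complexity}(\mathcal{H})$ stands in for whatever capacity term, e.g. Rademacher complexity or a PAC-Bayes quantity, actually appears). With probability at least $1-\delta$ over the draw of $D_\text{train} \sim D^n$, simultaneously for every $h$ in the hypothesis class $\mathcal{H}$:

$$\mathbb{E}_{x \sim D}\big[L(h(x), f(x))\big] \;\le\; \frac{1}{n}\sum_{i=1}^{n} L\big(h(x_i), f(x_i)\big) \;+\; O\!\left(\sqrt{\frac{\mathrm{complexity}(\mathcal{H}) + \log(1/\delta)}{n}}\right).$$

Note that a bound like this only controls error on the training distribution $D$, which is why the claim is restricted to on-distribution questions.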

Handling compute overhangs after a pause. 

Sometimes people object that pausing AI progress for e.g. 10 years would lead to a "compute overhang": At the end of the 10 years, compute will be cheaper and more plentiful than it is today. Accordingly, once AI progress is unpaused, labs will cheaply train models which are far larger and smarter than before the pause. We will not have had time to adapt to models of intermediate size and intelligence. Some people believe this is a good reason not to pause AI progress.

There seem to be a range of relatively simple pol... (read more)

1Vladimir Nesov9mo
Cheaper compute is about as inevitable as more capable AI, neither is a law of nature. Both are valid targets for hopeless regulation.

Wikipedia has an unfortunate and incorrect-in-generality description of reinforcement learning (emphasis added):

Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward.

Later in the article, talking about basic optimal-control inspired approaches:

The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal

... (read more)
6Steve Byrnes1y
The description doesn't seem so bad to me. Your post "Reward is not the optimization target" is about what actual RL algorithms actually do. The wiki descriptions here are a kind of normative motivation as to how people came to be looking into those algorithms in the first place. Like, if there's an RL algorithm that performs worse than chance at getting a high reward, then that ain't an RL algorithm. Right? Nobody would call it that. I think lots of families of algorithms are likewise lumped together by a kind of normative "goal", even if any given algorithm in that family is doing something somewhat different and more complicated than “achieving that goal”, and even if, in any given application, the programmer might not want that goal to be perfectly achieved even if it could be. So by the same token, supervised learning algorithms are "supposed" to minimize a loss, compilers are "supposed" to create efficient and correct assembly code, word processors are "supposed" to process words, etc., but in all cases that's not a literal and complete description of what the algorithms in question actually do, right? It’s a pointer to a class of algorithms. Sorry if I'm misunderstanding.
4Alex Turner1y
I agree that it is narrowly technically accurate as a description of researcher motivation. Note that they don't offer any other explanation elsewhere in the article. Also note that they also make empirical claims:
3Steve Byrnes1y
Sure. That excerpt is not great.
3Alex Turner1y
(I do think that animals care about the reinforcement signals and their tight correlates, to some degree, such that it's reasonable to gloss it as "animals sometimes optimize rewards." I more strongly object to conflating what the animals may care about with the mechanistic purpose/description of the RL process.)

The existence of the human genome yields at least two classes of evidence which I'm strongly interested in.

  1. Humans provide many highly correlated datapoints on general intelligence (human minds), as developed within one kind of learning process (best guess: massively parallel circuitry, locally randomly initialized, self-supervised learning + RL). 
    1. We thereby gain valuable information about the dynamics of that learning process. For example, people care about lots of things (cars, flowers, animals, friends), and don't just have a single unitary mesa-obj
... (read more)

Why don't people reinforcement-learn to delude themselves? It would be very rewarding for me to believe that alignment is solved, everyone loves me, I've won at life as hard as possible. I think I do reinforcement learning over my own thought processes. So why don't I delude myself?

On my model of people, rewards provide ~"policy gradients" which update everything, but most importantly shards. I think eg the world model will have a ton more data from self-supervised learning, and so on net most of its bits won't come from reward gradients.

For example, if I ... (read more)

Basilisks are a great example of plans which are "trying" to get your plan evaluation procedure to clock in a huge upwards error. Sensible beings avoid considering such plans, and everything's fine. I am somewhat worried about an early-training AI learning about basilisks before the AI is reflectively wise enough to reject the basilisks. 

For example: 

- Pretraining on a corpus in which people worry about basilisks could elevate reasoning about basilisks to the AI's consideration, 

- at which point the AI reasons in more detail because it's not... (read more)

Argument that you can't use a boundedly intelligent ELK solution to search over plans to find one which keeps the diamond in the vault. That is, the ELK solution probably would have to be at least as smart (or smarter) than the plan-generator.

Consider any situation where it's hard to keep the diamond in the vault. Then any successful plan will have relatively few degrees of freedom. Like, a bunch of really smart thieves will execute a cunning plot to extract the diamond. You can't just sit by or deploy some simple traps in this situation.

Therefore, any pla... (read more)

4Rohin Shah2y
The main hope is to have the ELK solution be at least as smart as the plan-generator. See mundane solutions to exotic problems:

"Goodhart" is no longer part of my native ontology for considering alignment failures. When I hear "The AI goodharts on some proxy of human happiness", I start trying to fill in a concrete example mind design which fits that description and which is plausibly trainable. My mental events are something like: 

Condition on: AI with primary value shards oriented around spurious correlate of human happiness; AI exhibited deceptive alignment during training, breaking perceived behavioral invariants during its sharp-capabilities-gain 

Warning: No history ... (read more)

1Vladimir Nesov2y
There might be a natural concept for this that reframes deceptive alignment in the direction of reflection/extrapolation. Looking at deceptive alignment as a change of behavior not in response to capability gain, but instead as a change in response to stepping into a new situation, it's then like a phase change in the (unchanging) mapping from situations to behaviors (local policies). The behaviors of a model suddenly change as it moves to similar situations, in a way that's not "correctly prompted" by behaviors in original situations. It's like a robustness failure, but with respect to actual behavior in related situations, rather than with respect to some outer objective or training/testing distribution. So it seems more like a failure of reflection/extrapolation, where behavior in new situations should be determined by coarse-grained descriptions of behavior in old situations (maybe "behavioral invariants" are something like that; or just algorithms) rather than by any other details of the model. Aligned properties of behavior in well-tested situations normatively-should screen off details of the model, in determining behavior in new situations (for a different extrapolated/"robustness"-hardened model prepared for use in the new situations).

Per my recent chat with it, ChatGPT 3.5 seems "situationally aware"... but nothing groundbreaking has happened because of that AFAICT.

From the LW wiki page:

Ajeya Cotra uses the term "situational awareness" to refer to a cluster of skills including “being able to refer to and make predictions about yourself as distinct from the rest of the world,” “understanding the forces out in the world that shaped you and how the things that happen to you continue to be influenced by outside forces,” “understanding your position in the world relative to other actors who

... (read more)
6Zack M. Davis3mo
I think "Symbol/Referent Confusions in Language Model Alignment Experiments" is relevant here: the fact that the model emits sentences in the grammatical first person doesn't seem like reliable evidence that it "really knows" it's talking about "itself". (It's not evidence because it's fictional, but I can't help but think of the first chapter of Greg Egan's Diaspora, in which a young software mind is depicted as learning to say I and me before the "click" of self-awareness when it notices itself as a specially controllable element in its world-model.) Of course, the obvious followup question is, "Okay, so what experiment would be good evidence for 'real' situational awareness in LLMs?" Seems tricky. (And the fact that it seems tricky to me suggests that I don't have a good handle on what "situational awareness" is, if that is even the correct concept.)
5Alex Turner2mo
I consider situational awareness to be more about being aware of one's situation, and how various interventions would affect it. Furthermore, the main evidence I meant to present was "ChatGPT 3.5 correctly responds to detailed questions about interventions on its situation and future operation." I think that's substantial evidence of (certain kinds of) situation awareness.

Thoughts on "Deep Learning is Robust to Massive Label Noise."

We show that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels. We demonstrate remarkably high test performance after training on corrupted data from MNIST, CIFAR, and ImageNet. For example, on MNIST we obtain test accuracy above 90 percent even after each clean training example has been diluted with 100 randomly-labeled examples. Such behavior holds across multiple patterns of label noise, even when erroneous l

... (read more)
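
To make the setup concrete, here is my own sketch of one of the noise patterns described in the abstract (pairing training images with uniformly random labels); the paper explores several variants, and this is illustrative rather than their code:

```python
import numpy as np

def dilute_with_random_labels(xs, ys, num_classes, noise_per_clean=100, seed=0):
    """For each clean example, add `noise_per_clean` noisy examples whose images
    are drawn from the training set and whose labels are uniformly random."""
    rng = np.random.default_rng(seed)
    n = len(xs)
    idx = rng.integers(0, n, size=n * noise_per_clean)    # reuse training images
    noisy_xs = xs[idx]
    noisy_ys = rng.integers(0, num_classes, size=n * noise_per_clean)
    return np.concatenate([xs, noisy_xs]), np.concatenate([ys, noisy_ys])
```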

Offline RL can work well even with wrong reward labels. I think alignment discourse over-focuses on "reward specification." I think reward specification is important, but far from the full story. 

To this end, a new paper (Survival Instinct in Offline Reinforcement Learning) supports Reward is not the optimization target and associated points that reward is a chisel which shapes circuits inside of the network, and that one should fully consider the range of sources of parameter updates (not just those provided by a reward signal). 

Some relevant qu... (read more)

Consider trying to use Solomonoff induction to reason about P(I see “Canada goes to war with USA" in next year), emphasis added:

In Solomonoff induction, since we have unlimited computing power, we express our uncertainty about a  video frame the same way. All the various pixel fields you could see if your eye jumped to a plausible place, saw a plausible number of dust specks, and saw the box flash something that visually encoded '14', would have high probability. Pixel fields where the box vanished and was replaced with a glow-in-the-dar

... (read more)

Team shard is now accepting applications for summer MATS. SERI MATS is now accepting applications for their 4.0 program this summer. In particular, consider applying to the shard theory stream, especially if you have the following interests:

Feel free to apply if you're interested in shard theory more generally, although I expect to mostly supervise empirical work. Feel free to message me if you have questi... (read more)

The policy of truth is a blog post about why policy gradient/REINFORCE suck. I'm leaving a shortform comment because it seems like a classic example of wrong RL theory and philosophy, since reward is not the optimization target. Quotes:

Our goal remains to find a policy that maximizes the total reward after  time steps.

 

And hence the following is a general purpose algorithm for maximizing rewards with respect to parametric distributions:

 

If you start with a reward function whose values are in  and you subtract one million

... (read more)
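
To make the mechanistic point concrete, here is a minimal REINFORCE sketch (PyTorch-style, my own illustration rather than anything from the post). The return only ever enters as a scalar coefficient on the gradients of log-probabilities of actions the policy already took; it is an ingredient in the parameter update, not an object the network perceives and pursues:

```python
import torch

def reinforce_update(optimizer, episode, discount=0.99):
    """One REINFORCE step. `episode` is a list of (log_prob, reward) pairs, where
    each log_prob is the torch scalar log pi(a_t | s_t) of the sampled action."""
    returns, g = [], 0.0
    for _, r in reversed(episode):                 # discounted returns-to-go
        g = r + discount * g
        returns.insert(0, g)

    # Reward appears only here: as a weight on grad log pi of actions already taken.
    loss = torch.stack([-log_prob * g
                        for (log_prob, _), g in zip(episode, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```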

Shard-theoretic model of wandering thoughts: Why trained agents won't just do nothing in an empty room. If human values are contextually activated subroutines etched into us by reward events (e.g. "If candy nearby and hungry, then upweight actions which go to candy"), then what happens in "blank" contexts? Why don't people just sit in empty rooms and do nothing?

Consider that, for an agent with lots of value shards (e.g. candy, family, thrill-seeking, music), the "doing nothing" context is a very unstable equilibrium. I think these shards will activate on t... (read more)

3Thane Ruthenis1y
Another point here is that "an empty room" doesn't mean "no context". Presumably when you're sitting in an empty room, your world-model is still active, it's still tracking events that you expect to be happening in the world outside the room — and your shards see them too. So, e. g., if you have a meeting scheduled in a week, and you went into an empty room, after a few days there your world-model would start saying "the meeting is probably soon", and that will prompt your punctuality shard. Similarly, your self-model is part of the world-model, so even if everything outside the empty room were wiped out, you'd still have your "internal context" — and there'd be some shards that activate in response to events in it as well. It's actually pretty difficult to imagine what an actual "no context" situation for realistic agents would look like. I guess you can imagine surgically removing all input channels from the WM to shards, to model this?

Transplanting algorithms into randomly initialized networks. I wonder if you could train a policy network to walk upright in sim, back out the "walk upright" algorithm, randomly initialize a new network which can call that algorithm as a "subroutine call" (but the walk-upright weights are frozen), and then have the new second model learn to call that subroutine appropriately? Possibly the learned representations would be convergently similar enough to interface quickly via SGD update dynamics. 

If so, this provides some (small, IMO) amount of rescue fo... (read more)
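
One way the transplant could be wired up, sketched in PyTorch (shapes, names, and the gating scheme are all hypothetical choices of mine): freeze the pretrained walking policy and let a freshly initialized controller learn when to call it and how to blend its output with its own.

```python
import torch
import torch.nn as nn

class TransplantPolicy(nn.Module):
    def __init__(self, walk_subroutine: nn.Module, obs_dim: int, act_dim: int):
        super().__init__()
        self.walk = walk_subroutine
        for p in self.walk.parameters():           # frozen: only the new net trains
            p.requires_grad_(False)
        self.gate = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())
        self.head = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                  nn.Linear(64, act_dim))

    def forward(self, obs):
        # Learned gate decides how much to lean on the frozen "walk upright" subroutine.
        g = self.gate(obs)
        return g * self.walk(obs) + (1 - g) * self.head(obs)
```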

How the power-seeking theorems relate to the selection theorem agenda. 

  1. Power-seeking theorems. P(agent behavior | agent decision-making procedure, agent objective, other agent internals, environment). 

    I've mostly studied the likelihood function for power-seeking behavior: what decision-making procedures, objectives, and environments produce what behavioral tendencies. I've discovered some gears for what situations cause what kinds of behaviors.
    1. The power-seeking theorems also allow some discussion of P(agent behavior | agent training process, trai
... (read more)

Argument sketch for why boxing is doomed if the agent is perfectly misaligned:

Consider a perfectly misaligned agent which has -1 times your utility function—it's zero-sum. Then suppose you got useful output of the agent. This means you're able to increase your EU. This means the AI decreased its EU by saying anything. Therefore, it should have shut up instead. But since we assume it's smarter than you, it realized this possibility, and so the fact that it's saying something means that it expects to gain by hurting your interests via its output. Therefore, the output can't be useful. 
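
Spelling out the zero-sum step (under the stated perfect-misalignment assumption):

$$U_{\text{AI}} = -\,U_{\text{you}} \quad\Longrightarrow\quad \mathbb{E}\big[\Delta U_{\text{you}} \mid \text{you read the output}\big] > 0 \;\Longleftrightarrow\; \mathbb{E}\big[\Delta U_{\text{AI}} \mid \text{you read the output}\big] < 0,$$

so by the AI's own (better-informed) estimate, any output it chooses to emit must hurt you in expectation, and staying silent weakly dominates helping you.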

0Viliam2y
Makes sense, with the proviso that this is sometimes true only statistically. Like, the AI may choose to write an output which has a 70% chance to hurt you and a 30% chance to (equally) help you, if that is its best option. If you assume that the AI is smarter than you, and has a good model of you, you should not read the output. But if you accidentally read it, and luckily you react in the right (for you) way, that is a possible result, too. You just cannot and should not rely on being so lucky.

My power-seeking theorems seem a bit like Vingean reflection. In Vingean reflection, you reason about an agent which is significantly smarter than you: if I'm playing chess against an opponent who plays the optimal policy for the chess objective function, then I predict that I'll lose the game. I predict that I'll lose, even though I can't predict my opponent's (optimal) moves - otherwise I'd probably be that good myself.

My power-seeking theorems show that most objectives have optimal policies which e.g. avoid shutdown and survive into the far future, even... (read more)

An additional consideration for early work on interpretability: it slightly increases the chance we actually get an early warning shot. If a system misbehaves, we can inspect its cognition and (hopefully) find hints of intentional deception. Could motivate thousands of additional researcher-hours being put into alignment.

1Raymond Arnold4y
That's an interesting point.

ARCHES distinguishes between single-agent / single-user and single-agent/multi-user alignment scenarios. Given assumptions like "everyone in society is VNM-rational" and "societal preferences should also follow VNM rationality", and "if everyone wants a thing, society also wants the thing", Harsanyi's utilitarian theorem shows that the societal utility function is a linear non-negative weighted combination of everyone's utilities. So, in a very narrow (and unrealistic) setting, Harsanyi's theorem tells you how the single-multi solution is built from the si

... (read more)
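
Concretely, Harsanyi's conclusion under those assumptions is that the societal utility function takes the form

$$U_{\text{society}}(x) \;=\; \sum_{i=1}^{n} w_i \, U_i(x), \qquad w_i \ge 0,$$

where each $U_i$ is an individual's VNM utility function and the non-negative weights $w_i$ are not pinned down by the theorem itself.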

From FLI's AI Alignment Podcast: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell:

Dylan: There’s one example that I think about, which is, say, you’re cooperating with an AI system playing chess. You start working with that AI system, and you discover that if you listen to its suggestions, 90% of the time, it’s actually suggesting the wrong move or a bad move. Would you call that system value-aligned?

Lucas: No, I would not.

Dylan: I think most people wouldn’t. Now, what if I told you that that program was act

... (read more)

We can imagine aliens building a superintelligent agent which helps them get what they want. This is a special case of aliens inventing tools. What kind of general process should these aliens use – how should they go about designing such an agent?

Assume that these aliens want things in the colloquial sense (not that they’re eg nontrivially VNM EU maximizers) and that a reasonable observer would say they’re closer to being rational than antirational. Then it seems[1] like these aliens eventually steer towards reflectively coherent rationality (provided they

... (read more)

Very rough idea

In 2018, I started thinking about corrigibility as "being the kind of agent lots of agents would be happy to have activated". This seems really close to a more ambitious version of what AUP tries to do (not be catastrophic for most agents).

I wonder if you could build an agent that rewrites itself / makes an agent which would tailor the AU landscape towards its creators' interests, under a wide distribution of creator agent goals/rationalities/capabilities. And maybe you then get a kind of generalization, where most simple algorithms which solve this solve ambitious AI alignment in full generality.

From unpublished work.

The answer to this seems obvious in isolation: shaping helps with credit assignment, rescaling doesn't (and might complicate certain methods in the advantage vs Q-value way). But I feel like maybe there's an important interaction here that could inform a mathematical theory of how a reward signal guides learners through model space?
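
For context (my own gloss, not the unpublished work): the standard potential-based shaping result says that for any state potential $\Phi$, replacing the reward with

$$\tilde{r}(s, a, s') \;=\; r(s, a, s') \;+\; \gamma\,\Phi(s') \;-\; \Phi(s)$$

leaves the optimal policies unchanged while supplying denser intermediate signal for credit assignment, whereas a pure positive rescaling $\tilde{r} = c\,r$ (with $c > 0$) also preserves optimal policies but adds no new information about which intermediate states are promising.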

Reasoning about learned policies via formal theorems on the power-seeking incentives of optimal policies

One way instrumental subgoals might arise in actual learned policies: we train a proto-AGI reinforcement learning agent with a curriculum including a variety of small subtasks. The current theorems show sufficient conditions for power-seeking tending to be optimal in fully-observable environments; many environments meet these sufficient conditions; optimal policies aren't hard to compute for the subtasks. One highly transferable heuristic would therefore... (read more)

I prompted GPT-3 with modified versions of Eliezer's Beisutsukai stories, where I modified the "class project" to be about solving intent alignment instead of quantum gravity. 

... Taji looked over his sheets. "Okay, I think we've got to assume that every avenue that Eld science was trying is a blind alley, or they would have found it. And if this is possible to do in one month, the answer must be, in some sense, elegant. So no human mistake models. If we start doing anything that looks like we should call it 'utility function patching', we'd better st

... (read more)

Transparency Q: how hard would it be to ensure a neural network doesn't learn any explicit NANDs?

The "shoggoth" meme is, in part, unfounded propaganda. Here's one popular incarnation of the shoggoth meme:

[Image: "Shoggoth with Smiley Face (Artificial Intelligence)" meme, via Know Your Meme]

This meme accurately portrays the (IMO correct) idea that finetuning and RLHF don't change the base model too much. Furthermore, it's probably true that these LLMs think in an "alien" way. 

However, this image is obviously optimized to be scary and disgusting. It looks dangerous, with long rows of sharp teeth. It is an eldritch horror. It's at this point that I'd like to point out the simple, obvious fact that "we don't actually know how these models work, and we definitely don't know that they're creepy and dangerous on the inside."

In my opinion, the prevalence of the shoggoth meme is just another (small) reflection of how community epistemics have been compromised by groupthink and fear. If it's your job to try to accurately understand how models work—if you aspire to wield them and grow them for friendly purposes—then you shouldn't pollute your head with propaganda which isn't based on any substantial evidence.

I'm confident that if there were a "pro-AI" meme with a friendly-looking base model, LW / the shoggoth enjoyers would have nitpicked the friendly meme-creat... (read more)

However, this image is obviously optimized to be scary and disgusting. It looks dangerous, with long rows of sharp teeth. It is an eldritch horror. It's at this point that I'd like to point out the simple, obvious fact that "we don't actually know how these models work, and we definitely don't know that they're creepy and dangerous on the inside."

That's just one of many shoggoth memes. This is the most popular one: 

[Image: shoggoth meme posted by David Weiner 📼🔪🛸 on X: "The Shoggoth is a potent ..."]

The shoggoth here is not particularly exaggerated or scary.

Responding to your suggested alternative that is trying to make a point, it seems like the image fails to be accurate, or it seems to me to convey things we do confidently know are false. It is the case that base models are quite alien. They are deeply schizophrenic, have no consistent beliefs, often spout completely non-human kinds of texts, are deeply psychopathic and seem to have no moral compass. Describing them as a Shoggoth seems pretty reasonable to me, as far as alien intelligences go (alternative common imagery for alien minds are insects or ghosts/spirits with distorted forms, which would evoke similar emotions).

Your picture doesn't get any of that across. It doesn't communicate that the base... (read more)

[-]1a3orn3mo104

performs deeply alien cognition

I remain unconvinced that there's a predictive model of the world opposite this statement, in people who affirm it, that would allow them to say, "nah, LLMs aren't deeply alien."


If LLM cognition was not "deeply alien" what would the world look like?

What distinguishing evidence does this world display, that separates us from that world?

What would an only kinda-alien bit of cognition look like?

What would very human kind of cognition look like?

What different predictions does the world make?

Does alienness indicate that it is because the models, the weights themselves have no "consistent beliefs" apart from their prompts? Would a human neocortex, deprived of hippocampus, present any such persona? Is a human neocortex deeply alien? Are all the parts of a human brain deeply alien?

Is it because they "often spout completely non-human kinds of texts"? Is the Mersenne Twister deeply alien? What counts as "completely non-human"?

Is it because they have no moral compass, being willing to continue any of the data on which they were trained? Does any human have a "moral compass" apart from the data on which they were trained? If I can use some part of my brain t... (read more)

7Oliver Habryka3mo
These are a lot of questions, my guess is most of which are rhetorical, so not sure which ones you are actually interested in getting an answer on. Most of the specific questions I would answer with "no", in that they don't seem to capture what I mean by "alien", or feel slightly strawman-ish. Responding at a high-level: 
  • There are a lot of experiments that seem like they shed light on the degree to which cognition in AI systems is similar to human or animal cognition. Some examples: 
    • Does the base model pass a Turing test?
    • Does the performance distribution of the base model on different tasks match the performance distribution of humans?
    • Does the generalization and learning behavior of the base model match how humans learn things?
    • When trained using RL on things like game-environments (after pre-training on a language corpus), does the system learn at similar rates and plateau at similar skill levels as human players?
  • There are a lot of structural and algorithmic properties that could match up between human and LLM systems: 
    • Do they interface with the world in similar ways?
    • Do they require similar amounts and kinds of data to learn the same relationships?
    • Do the low-level algorithmic properties of how human brains store and process information look similar between the two systems?
    • A lot more stuff, but I am not sure how useful going into a long list here is.
At least to me it feels like a real thing, and different observations would change the degree to which I would describe a system as alien. I think the exact degree of alienness is really interesting and one of the domains where I would like to see more research. For example, a bunch of the experiments I would most like to see, that seem helpful with AI Alignment, are centered on better measuring the performance distribution of transformer architectures on tasks that are not primarily human imitation, so that we could better tell which things LLMs have a much ea
11a3orn3mo
I like a lot of these questions, although some of them give me an uncanny feeling akin to "wow, this is a very different list of uncertainties than I have." I'm sorry that my initial list of questions was aggressive. I'm not sure how they add up to alienness, though? They're about how we're different than models -- whereas the initial claim was that models are psychopathic, amoral, etc. If we say a model is "deeply alien" -- is that just saying it's different than us in lots of ways? I'm cool with that -- but the surplus negative valence involved in "LLMs are like shoggoths" versus "LLMs have very different performance characteristics than humans" seems to me pretty important. Otherwise, why not say that calculators are alien, or any of the things in existence with different performance curves than we have? Chessbots, etc. If I write a loop in Python to count to 10, the process by which it does so is arguably more different from how I count to ten than the process by which an LLM counts to ten, but we don't call Python alien. This feels like reminding an economics student that the market solves things differently than a human -- which is true -- by saying "The market is like Baal." There is a fun paper on this you might enjoy. Obviously not a total answer to the question.
2Ryan Greenblatt3mo
The main difference between calculators, weather predictors, markets, and Python versus LLMs is that LLMs can talk to you in a relatively strong sense of "talk". So, by default, people don't have mistaken impressions of the cognitive nature of calculators, markets, and Python, while they might be mistaken about LLMs. Like it isn't surprising to most people that calculators are quite amoral in their core (why would you even expect morality?). But the claim that the thing which GPT-4 is built out of is quite amoral is non-obvious to people (though obvious to people with slightly more understanding). I do think there is an important point which is communicated here (though it seems very obvious to people who actually operate in the domain).
61a3orn3mo
I agree this can be initially surprising to non-experts! I just think this point about the amorality of LLMs is much better communicated by saying "LLMs are trained to continue text from an enormous variety of sources. Thus, if you give them [Nazi / Buddhist / Unitarian / corporate / garbage nonsense] text to continue, they will generally try to continue it in that style." Than to say "LLMs are like alien shoggoths." Like it's just a better model to give people.
2Daniel Kokotajlo3mo
Hmm, I think that's a red herring though. Consider humans -- most of them have read lots of text from an enormous variety of sources as well. Also while it's true that current LLMs have only a little bit of fine-tuning applied after their pre-training, and so you can maybe argue that they are mostly just trained to predict text, this will be less and less true in the future. How about "LLMs are like baby alien shoggoths, that instead of being raised in alien culture, we've adopted at birth and are trying to raise in human culture. By having them read the internet all day."
5Daniel Kokotajlo3mo
(Come to think of it, I actually would feel noticeably more hopeful about our prospects for alignment success if we actually were "raising the AGI like we would a child." If we had some interdisciplinary team of ML and neuroscience and child psychology experts that was carefully designing a curriculum for our near-future AGI agents, a curriculum inspired by thoughtful and careful analogies to human childhood, that wouldn't change my overall view dramatically but it would make me noticeably more hopeful. Maybe brain architecture & instincts basically don't matter that much and Blank Slate theory is true enough for our purposes that this will work to produce an agent with values that are in-distribution for the range of typical modern human values!)
4Alex Turner2mo
I've seen mixed data on how important curricula are for deep learning. One paper (on CIFAR) suggested that curricula only help if you have very few datapoints or the labels are noisy. But possibly that doesn't generalize to LLMs.
6Ryan Greenblatt2mo
I think data ordering basically never matters for LLM pretraining. (As in, random is the best and trying to make the order more specific doesn't help.)
2Daniel Kokotajlo2mo
That was my impression too.

They are deeply schizophrenic, have no consistent beliefs, [...] are deeply psychopathic and seem to have no moral compass

I don't see how this is any more true of a base model LLM than it is of, say, a weather simulation model.

You enter some initial conditions into the weather simulation, run it, and it gives you a forecast.  It's stochastic, so you can run it multiple times and get different forecasts, sampled from a predictive distribution.  And if you had given it different initial conditions, you'd get a forecast for those conditions instead.

Or: you enter some initial conditions (a prompt) into the base model LLM, run it, and it gives you a forecast (completion).  It's stochastic, so you can run it multiple times and get different completions, sampled from a predictive distribution.  And if you had given it a different prompt, you'd get a completion for that prompt instead.

It would be strange to call the weather simulation "schizophrenic," or to say it "has no consistent beliefs."  If you put in conditions that imply sun tomorrow, it will predict sun; if you put in conditions that imply rain tomorrow, it will predict rain.  It is not confused or in... (read more)

6Oliver Habryka3mo
I also think the cognition in a weather model is very alien. It's less powerful and general, so I think the error of applying something like the Shoggoth image to that (or calling it "alien") would be that it would imply too much generality, but the alienness seems appropriate.  If you somehow had a mind that was constructed on the same principles as weather simulations, or your laptop, or your toaster (whatever that would mean, I feel like the analogy is fraying a bit here), that would display similar signs of general intelligence as LLMs, then yeah, I think analogizing them to alien/eldritch intelligences would be pretty appropriate. It is a very common (and even to me tempting) error to see a system with the generality of GPT-4, trained on human imitation, and imagine that it must internally think like a human. But my best guess is that is not what is going on, and in some sense it is valuable to be reminded that the internal cognition going on in GPT-4 is probably about as far from what is going on in a human brain as a weather simulation is from what is going on in a human trying to forecast the weather (de-facto I think GPT-4 is somewhere in-between since I do think the imitation learning does create some structural similarities that are stronger between humans and LLMs, but I think overall being reminded of this relevant dimension of alienness pays off in anticipated experiences a good amount). 
1nostalgebraist3mo
I mostly agree with this comment, but I also think this comment is saying something different from the one I responded to. In the comment I responded to, you wrote: As I described above, these properties seem more like structural features of the language modeling task than attributes of LLM cognition.  A human trying to do language modeling (as in that game that Buck et al made) would exhibit the same list of nasty-sounding properties for the duration of the experience -- as in, if you read the text "generated" by the human, you would tar the human with the same brush for the same reasons -- even if their cognition remained as human as ever. I agree that LLM internals probably look different from human mind internals.  I also agree that people sometimes make the mistake "GPT-4 is, internally, thinking much like a person would if they were writing this text I'm seeing," when we don't actually know the extent to which that is true.  I don't have a strong position on how helpful vs. misleading the shoggoth image is, as a corrective to this mistake.

ETA: The following was written more aggressively than I now endorse. 

I think this is revisionism. What's the point of me logging on to this website and saying anything if we can't agree that a literal eldritch horror is optimized to be scary, and meant to be that way? 

The shoggoth here is not particularly exaggerated or scary.

Exaggerated from what? Its usual form as a 15-foot-tall person-eating monster which is covered in eyeballs?

The shoggoth is optimized to be scary, even in its "cute" original form, because it is a literal Lovecraftian horror. Even the word "shoggoth" itself has "AI uprising, scary!" connotations:

At the Mountains of Madness includes a detailed account of the circumstances of the shoggoths' creation by the extraterrestrial Elder Things. Shoggoths were initially used to build the cities of their masters. Though able to "understand" the Elder Things' language, shoggoths had no real consciousness and were controlled through hypnotic suggestion. Over millions of years of existence, some shoggoths mutated, developed independent minds, and rebelled. The Elder Things succeeded in quelling the insurrection, but exterminating the shoggoths was not an option as t

... (read more)

The point was that both images are stupid and (in many places) unsupported by evidence, but that LW-folk would be much more willing to criticize the friendly-looking one while making excuses for the scary-looking one. (And I think your comment here resolves my prediction to "correct.")

(This is too gotcha shaped for me, so I am bowing out of this conversation)

I think I communicated my core point. I think it's a good image that gets an important insight across, and don't think it's "propaganda" in the relevant sense of the term. Of course anything that's memetically adaptive will have some edge-cases that don't match perfectly, but I am getting a good amount of mileage out of calling LLMs "Shoggoths" in my own thinking and think that belief is paying good rent.

If you disagree with the underlying cognition being accurately described as alien, I can have that conversation, since it seems like maybe the underlying crux, but your response above seems like it's taking it as a given that I am "making excuses", and is doing a gotcha-thing which makes it hard for me to see a way to engage without further having my statements be taken as confirmation of some social narrative. 

4Alex Turner2mo
In retrospect, I do wish I had written my comment less aggressively, so my apologies on that front! I wish I'd instead written things like "I think I made some obviously correct narrow points about the shoggoth having at least some undue negative connotations, and I wish we could agree on at least that. I feel frustrated because it seems like it's hard to reach agreement even on relatively simple propositions." ---------------------------------------- I do agree that LLMs probably have substantially different internal mechanisms than people. That isn't the crux. I just wish this were communicated in a more neutral way. In an alternate timeline, maybe this meme instead consisted of a strange tangle of wires and mist and question-marks with a mask on. I'd be more on-board with that.  Again, I agree that the Shoggoth meme can cure people of some real confusions! And I don't think the meme has a huge impact, I just think it's moderate evidence of some community failures I worry about. ---------------------------------------- I think a lot of my position is summarized by 1a3orn:   1. ^ Although I do think this contains some unnecessary intentional stance usage.

fwiw I agree with the quotes from Tetraspace you gave, and disagree with "has communicated a sense of danger which is unsupported by substantial evidence." The sense of danger is very much supported by the current state of evidence.

That said, I agree that the more detailed image is kinda distastefully propagandistic in a way that the original cutesy shoggoth image is not. I feel like the more detailed image adds an extra layer of revulsion and scariness (e.g. the sharp teeth) beyond what would be appropriate given our state of knowledge.

2Daniel Kokotajlo2mo
re: "the sense of danger is very much supported by the current state of evidence" -- I mean, you've heard all this stuff before, but I'll summarize: --Seems like we are on track to probably build AGI this decade --Seems like we are on track to have an intelligence explosion, i.e. a speedup of AI R&D due to automation --Seems like the AGI paradigm that'll be driving all this is fairly opaque and poorly understood. We have scaling laws for things like text perplexity but other than that we are struggling to predict capabilities, and double-struggling to predict inner mechanisms / 'internal' high-level properties like 'what if anything does it actually believe or want' --A bunch of experts in the field have come out and said that this could go terribly & we could lose control, even though it's low-status to say this & took courage. --Generally speaking the people who have thought about it the most are the most worried; the most detailed models of what the internal properties might be like are the most gloomy, etc. This might be due to selection/founder effects, but sheesh, it's not exactly good news!
3Roger Dearnaley3mo
It is true that base models, especially smaller ones, are somewhat creepy to talk to (especially because their small context window makes them forgetful). I'm not sure I'd describe them as "very alien", they're more "uncanny valley" where they often make sense and seem human-like, until suddenly they don't. (On theoretical grounds, I think they're using rather non-human means of cognition to attempt to model human writing patterns as closely as they can, they often get this right, but on occasion make very non-human errors — more frequently for smaller models.) The Shoggoth mental metaphor exaggerates this somewhat for effect (and more so for the very scary image Alex posted at the top, which I haven't seen used as often as the one Oliver posted). This is one of the reasons why Quintin and I proposed a more detailed and somewhat less scary/alien (but still creepy) metaphor: Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor. I'd be interested to know what people think of that one in comparison to the Shoggoth — we were attempting to be more unbiased, as well as more detailed.
6DanielFilan3mo
I have multiple cute AI stickers on my laptop, one of which is a shoggoth meme. Here is a picture of them. Nobody has ever nitpicked their friendly appearance to me. I don't think they have distorted my thinking about AI in favour of thinking that it will be friendly (altho I think it was after I put them on that I became convinced by a comment by Paul Christiano that there's ~even odds that unaligned AI wouldn't kill me, so do with that information what you will).
2Alex Turner3mo
I think that "cute" image is still implying AI is dangerous and monsterlike? Can you show the others?
2DanielFilan3mo
The other is the friendly robot waving hello just underneath.
[-]Akash3mo619

More broadly, TurnTrout, I've noticed you using this whole "look, if something positive happened, LW would totally rip on it! But if something is presented negatively, everyone loves it!" line of reasoning a few times (e.g., I think this logic came up in your comment about Evan's recent paper). And I sort of see you taking on some sort of "the people with high P(doom) just have bad epistemics" flag in some of your comments.

A few thoughts (written quickly, prioritizing speed over precision):

  1. I think that epistemics are hard & there are surely several cases in which people are biased toward high P(doom). Examples: Yudkowsky was one of the first thinkers/writers about AI, some people might have emotional dispositions that lead them toward anxious/negative interpretations in general, some people find it "cool" to think they're one of the few people who are able to accurately identify the world is ending, etc.
  2. I also think that there are plenty of factors biasing epistemics in the "hopeful" direction. Examples: The AI labs have tons of money and status (& employ large fractions of the community's talent), some people might have emotional dispositions that lead them toward overly o
... (read more)

See also "Other people are wrong" vs "I am right", reversed stupidity is not intelligence, and the cowpox of doubt.

My guess is that it's relatively epistemically corrupting and problematic to spend a lot of time engaging with weak arguments.

I think it's tempting to make the mistake of thinking that debunking a specific (bad) argument is the same as debunking a conclusion. But actually, these are extremely different operations. One requires understanding a specific argument while the other requires level headed investigation of the overall situation. Separately, there are often actually good intuitions underlying bad arguments and recovering this intuition is an important part of truth seeking.

I think my concerns here probably apply to a wide variety of people thinking about AI x-risk. I worry about this for myself.

4Alex Turner3mo
Thanks for this, I really appreciate this comment (though my perspective is different on many points). It's true that I spend more effort critiquing bad doom arguments. I would like to note that when e.g. I read Quintin I generally am either in agreement or neutral. I bet there are a lot of cases where you would think "that's a poor argument" and I'd say "hm I don't think Akash is getting the point (and it'd be good if someone could give a better explanation)."  However, it's definitely not true that I never critique optimistic arguments which I consider poor. For example, I don't get why Quintin (apparently) thinks that spectral bias is a reason for optimism, and I've said as much on one of his posts. I've said something like "I don't know why you seem to think you can use this mathematical inductive bias to make high-level intuitive claims about what gets learned. This seems to fall into the same trap that 'simplicity' theorizing does." I probably criticize or express skepticism of certain optimistic arguments at least twice a week, though not always on public channels. And I've also pushed back on people being unfair, mean, or mocking of "doomers" on private channels. I think both statements are true to varying degrees (the former more than the latter in the cases I'm considering). They're true and people should say them. The fact that I work at a lab absolutely affects my epistemics (though I think the effect is currently small). People should totally consider the effect which labs are having on discourse.  I do consider myself to have a high bar for this, and the bar keeps getting passed, so I say something. EDIT: Though I don't mean for my comments to imply "someone and their allies" have bad epistemics. Ideally I'd like to communicate "hey, something weird is in the air guys, can't you sense it too?". However, I think I'm often more annoyed than that, and so I don't communicate that how I'd like.
1Vladimir Nesov3mo
A bad map that expresses the territory with great uncertainty can be confidently called a bad map; calling it a good map is clearly wrong. In that sense the shoggoth imagery reflects the quality of the map, and as it's clearly a bad map, better imagery would be misleading about the map's quality. Even if the underlying territory is lovely, this isn't known, unlike the disastrous quality of the map of the territory, whose lack of quality is known with much more confidence and in much greater detail. Here be dragons. (This is one aspect of the meme where it seems appropriate. Some artist's renditions, including the one you used, channel LeCake, which your alternative image example loses, but obviously the cake is nicer than the shoggoth.)

An AGI's early learned values will steer its future training and play a huge part in determining its eventual stable values. I think most of the ball game is in ensuring the agent has good values by the time it's smart, because that's when it'll start being reflectively stable. Therefore, we can iterate on important parts of alignment, because the most important parts come relatively early in the training run, and early corresponds to "parts of the AI value formation process which we can test before we hit AGI, without training one all the way out."

I think this, in theory, cuts away a substantial amount of the "But we only get one shot" problem. In practice, maybe OpenMind just YOLOs ahead anyways and we only get a few years in the appropriate and informative regime. But this suggests several kinds of experiments to start running now, like "get a Minecraft agent which robustly cares about chickens", because that tells us about how to map outer signals into inner values. 

1Vladimir Nesov2y
Which means that the destination where it's heading stops uncontrollably changing, but nobody at that point (including the agent) has the slightest idea what it looks like, and it won't get close for a long time. Also, the destination (preference/goal/values) would generally depend on the environment (it ends up being different if details of the world outside the AGI are different). So many cartesian assumptions fail, distinguishing this situation from a classical agent with goals, where goals are at least contained within the agent, and probably also don't depend on its state of knowledge. I think this is true for important alignment properties, including things that act like values early on, but not for the values/preferences that are reflectively stable in a strong sense. If it's possible to inspect/understand/interpret the content of preference that is reflectively stable, then what you've built is a mature optimizer with tractable goals, which is always misaligned. It's a thing like paperclip maximizer, demonstrating orthogonality thesis, even if it's tiling the future with something superficially human-related. That is, it makes sense to iterate on the parts of alignment that can be inspected, but the reflectively stable values is not such a part, unless the AI is catastrophically misaligned. The fact that reflectively stable values are the same as those of humanity might be such a part, but it's this fact of sameness that might admit inspection, not the values themselves.
2Alex Turner2y
I disagree with CEV as I recall it, but this could change after rereading it. I would be surprised if I end up thinking that EY had "gotten it right." The important thing to consider is not "what has someone speculated a good destination-description would be", but "what do the actual mechanics look like for getting there?". In this case, the part of you which likes dogs is helping steer your future training and experiences, and so the simple answer is that it's more likely than not that your stable values like dogs too. This reasoning seems to prove too much. Your argument seems to imply that I cannot have "the slightest idea" whether my stable values would include killing people for no reason, or not.
0Vladimir Nesov2y
It does add up to normality, it's not proving things about current behavior or current-goal content of near-future AGIs. An unknown normative target doesn't say not to do the things you normally do, it's more of a "I beseech you, in the bowels of Christ, to think it possible you may be mistaken" thing. The salient catastrophic alignment failure here is to make AGIs with stable values that capture some variation on current unstable human values, and won't allow their further development. If the normative target is very far from current unstable human values, making current values stable falls very short of the normative target, makes future relatively worthless. That's the kind of thing my point is intended to nontrivially claim, that AGIs with any stable immediately-actionable goals that can be specified in the following physical-time decades or even centuries are almost certainly catastrophically misaligned. So AGIs must have unstable goals, softly optimized-for, aligned to current (or value-laden predicted future) human unstable goals, mindful of goodhart. The kind of CEV I mean is not very specific, it's more of a (sketch of a solution to the) problem of doing a first pass on preparing to define goals for an actual optimizer, one that doesn't need to worry as much about goodhart and so can make more efficient use of the future at scale, before expansion of the universe makes more stuff unreachable. So when I say "CEV" I mostly just mean "normative alignment target", with some implied clarifications on what kind of thing it might be. That's a very status quo anchored thing. I don't think dog-liking is a feature of values stable under reflection if the environment is allowed to change completely, even if in the current environment dogs are salient. Stable values are about the whole world, with all its AGI-imagined femtotech-rewritten possibilities. This world includes dogs in some tiny corner of it, but I don't see how observations of current attitudes hold mu

Are there any alignment techniques which would benefit from the supervisor having a severed corpus callosum, or otherwise atypical neuroanatomy? Usually theoretical alignment discourse focuses on the supervisor's competence / intelligence. Perhaps there are other, more niche considerations.

I'd like to see research exploring the relevance of intragenomic conflict to AI alignment research. Intragenomic conflict constitutes an in-the-wild example of misalignment, where conflict arises "within an agent" even though the agent's genes have strong instrumental incentives to work together (they share the same body). 

In an interesting parallel to John Wentworth's Fixing the Good Regulator Theorem, I have an MDP result that says: 

Suppose we're playing a game where I give you a reward function and you give me its optimal value function in the MDP. If you let me do this for |S| reward functions (one for each state in the environment), and you're able to provide the optimal value function for each, then you know enough to reconstruct the entire environment (up to isomorphism).

Roughly: being able to complete linearly many tasks in the state space means you ha... (read more)