By default, humans only care about variables they could (in principle) easily optimize or comprehend.
I think this is incorrect. I think humans have values which are essentially provisional. In other words, they're based on pointers which are supposed to be impossible to fully dereference. Examples:
If something is too hard to optimize/comprehend, people couldn't possibly optimize/comprehend it in the past, so it couldn't be a part of human values.
I don't understand why this claim would be true.
Take the human desire for delicious food; humans certainly didn't understand the chemistry of food and the human brain well enough to comprehend it or directly optimize it, but for millennia we picked foods that we liked more, explored options, and over time cultural and culinary processes improved on this poorly understood goal.
Yes, some value judgements (e.g. "this movie is good", "this song is beautiful", or even "this is a conscious being") depend on inscrutable brain machinery, the machinery which creates experience. The complexity of our feelings can be orders of magnitude greater than the complexity of our explicit reasoning. Does it kill the proposal in the post? I think not, for the following reason:
We aren't particularly good at remembering exact experiences, we like very different experiences, we can't access each other's experiences, and we have very limited ways of controlling experiences. So, there should be pretty strict limitations on how much understanding of the inscrutable machinery is required for respecting the current human values. Defining corrigible behavior ("don't kill everyone", "don't seek power", "don't mess with human brains") shouldn't require answering many specific, complicated machinery-dependent questions ("what separates good and bad movies?", "what separates good and bad life?", "what separates conscious and unconscious beings?").
Also, some thoughts about your specific counterexample (I generalized it to being about experiences in general):
Does any of the above help to find the crux of the disagreement or understand the intuitions behind my claim?
I think the crux might be that I think the ability to sample from a distribution at points we can reach does not imply that we know anything else about the distribution.
So I agree with you that we can sample and evaluate. We can tell whether a food we have made is good or bad, and we can have aesthetic taste (though I don't think this is stationary, so I'm not sure how much it helps; not that this is particularly relevant to our debate). And after gathering that data (once we have some idea about what the dimensions are), we can even extrapolate, in either naive or complex ways.
But unless values are far simpler than I think they are, I will claim that the naive extrapolation from the sampled points fails more and more as we extrapolate farther from where we are, which is a (or the?) central problem with AI alignment.
Are you talking about value learning? My proposal doesn't tackle advanced value learning. Basically, my argument is "if (A) human values are limited by human ability to comprehend/optimize things and (B) the factors which make something easier or harder to comprehend/optimize are simple, then the AI can avoid accidentally messing up human values — so we can define safe impact measures and corrigibility". My proposal is not supposed to make the AI learn human values in great detail or extrapolate them out of distribution. My argument is "if A and B hold, then we can draw a box around human values and tell the AI to not mess up the contents of the box — without making the AI useless; yet the AI might not know what exact contents of the box count as 'human values'".[1]
The problem with B is that humans have very specialized and idiosyncratic cognitive machinery (the machinery generating experiences) which is much more advanced than human general ability to comprehend things. I interpreted you as making this counterargument in the top level comment. My reply is that I think human values depend on that machinery in a very limited way, so B is still true enough. But I'm not talking about extrapolating something out of distribution. Unless I'm missing your point.
Why those things follow from A and B is not obvious and depends on a non-trivial argument. I tried to explain it in the first section of the post, but might've failed.
But based on your comments, I can guess that something below is the crux:
Could you confirm or clarify the crux? Your messages felt ambiguous to me. In what specific way is A false?
To pursue their values, humans should be able to reason about them. To form preferences about a thing, humans should be able to consider the thing. Therefore, human ability to comprehend should limit what humans can care about.
You're conflating "can" and "should"! I agree that it would be ideal if this were the case, but am skeptical it is. That's what I meant when I said I think A is false.
If learning values is possible at all, there should be some simplicity biases which help to learn them. Wouldn't it be strange if those simplicity biases were absolutely unrelated to simplicity biases of human cognition?
That's a very big "if"! And simplicity priors are made questionable, if not refuted, by the fact that we haven't gotten any convergence about human values despite millennia of philosophy trying to build such an explanation.
You define "values" as ~"the decisions humans would converge to after becoming arbitrarily more knowledgeable".
No, I think it's what humans actually pursue today when given the options. I'm not convinced that these values are static, or coherent, much less that we would in fact converge.
You say that values depend on inscrutable brain machinery. But can't we treat the machinery as a part of "human ability to comprehend"?
No, because we don't comprehend them, we just evaluate what we want locally using the machinery directly, and make choices based on that. (Then we apply pretty-sounding but ultimately post-hoc reasoning to explain it - as I tweeted partly thinking about this conversation.)
I want to show a philosophical principle which, I believe, has implications for many alignment subproblems. If the principle is valid, it might allow us to:
This post clarifies and expands on ideas from here and here. Reading the previous posts is not required.
The principle and its most important consequences:
Justification:
There are value systems for which the principle is false. In that sense, it's empirical. However, I argue that it's a priori true for humans, no matter how wrong our beliefs about the world are. So the principle is not supposed to be an "assumption" or "hypothesis", like e.g. the Natural Abstraction hypothesis.
You can find a more detailed explanation of the principle in the appendix.
How do we define easiness of comprehension? We choose variables describing our sensory data. We choose which properties of those variables count as "easily comprehensible". Now, when we consider any variable (observable or latent), we check how much its behavior fits those properties. This lets us order all variables from the most comprehensible to the least comprehensible.
Let's give a specific example of properties. Imagine a green ball in your visual field. What properties would make this stimulus easier to comprehend? Continuous movement (the ball doesn't teleport from place to place), smooth movement (the ball doesn't abruptly change direction), low speed (the ball doesn't change too fast compared to other stimuli), low numerosity (the ball doesn't have countless distinct parts). That's the kind of property we need to abstract and capture when defining easiness of comprehension.
How do we define easiness of optimization? Some variables describe actions; like all other variables, they can be ordered from the most comprehensible actions to the least comprehensible ones. We can check whether changes in an action variable are correlated with changes in some other variable. If they are, that action can optimize the variable. The easiness of optimizing a variable is then given by the position, in the comprehensibility order, of the easiest action that can optimize it. This is an incomplete definition, but it conveys the main idea.
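To make the shape of this (incomplete) definition concrete, here's a minimal toy sketch in Python. Everything in it is a stand-in I chose for illustration rather than part of the proposal: the two properties (speed and roughness of a variable's trace), the correlation test, and the 0.3 threshold.

```python
import numpy as np

def comprehensibility(trace):
    """Toy comprehensibility score: lower = easier to comprehend.
    Two stand-in properties: speed (mean absolute change) and
    roughness (mean absolute change of the change)."""
    speed = np.abs(np.diff(trace)).mean()
    roughness = np.abs(np.diff(trace, n=2)).mean()
    return speed + roughness

def easiness_of_optimization(target, action_traces):
    """Toy easiness of optimizing `target`: the rank (in the comprehensibility
    order) of the easiest action variable whose changes correlate with it."""
    ranked = sorted(action_traces.items(), key=lambda kv: comprehensibility(kv[1]))
    for rank, (name, trace) in enumerate(ranked):
        corr = np.corrcoef(np.diff(trace), np.diff(target))[0, 1]
        if abs(corr) > 0.3:   # arbitrary threshold for "changes are correlated"
            return rank       # lower rank = easier to optimize
    return float("inf")       # no comprehensible action moves this variable
```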
Formalizing all of this precisely won't be trivial. But I'll give intuition pumps for why it's a very general idea which doesn't require getting anything exactly right on the first try.
Consider these things:
According to the principle:
This makes sense. Why would we rebind our utility to something which we couldn't meaningfully interact with, perceive or understand previously?
Let's see how the principle applies to a universe very different from our own. A universe called Super Mario Bros.
When playing the game, it's natural to ask: what variables (observable or latent) change in ways which are the easiest to comprehend? Which of those changes are correlated with simple actions of the playable character or with simple inputs?
Let's compare a couple of things from the game:
According to the principle:
This makes sense. If you care about playing the game, it's hard to care about things which are tangential or detrimental to the main gameplay.
There's a fun and simple way to hack computer programs, based on searching and filtering variables stored in a program's memory.
For example, do you want to get infinite lives in a video game? Then do this: search the game's memory for your current number of lives, change the number of lives in-game, filter the previous results down to the addresses whose values changed accordingly, and repeat until only a few candidate addresses remain, then set them to whatever value you want.
Oftentimes you'll end up with at least two variables: one controlling the actual number of lives and the other controlling the number of lives displayed on the screen. Here's a couple of tutorial videos about this type of hacking: Cheat Engine for Idiots, Unlocking the Secrets of my Favorite Childhood Game.
It's a very general approach to reverse engineering a program. And the idea behind my principle is that the variables humans care about can be discovered in a similar way, by filtering out all variables which don't change according to certain simple rules.
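Here's a toy sketch of that search-and-filter loop in Python, run against a simulated "memory" (a plain dict) instead of a real process; the two addresses holding the life counters are made up for the illustration.

```python
import random

# Simulated "memory": address -> value, standing in for a real process.
memory = {addr: random.randint(0, 255) for addr in range(10_000)}
memory[1234] = 3   # made-up address holding the actual life counter
memory[5678] = 3   # made-up address holding the displayed life counter

def scan(candidates, observed_value):
    """Keep only the addresses whose value matches what we see on screen."""
    return {a for a in candidates if memory[a] == observed_value}

candidates = scan(set(memory), observed_value=3)  # first scan: the game shows 3 lives
memory[1234] -= 1                                 # lose a life in-game...
memory[5678] -= 1                                 # ...and the displayed counter updates
candidates = scan(candidates, observed_value=2)   # rescan: the game now shows 2 lives
print(candidates)                                 # {1234, 5678}: the two "lives" variables
```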
If you still struggle to understand what "easiness of optimization/comprehension" means, check out additional examples in the appendix.
(This is a vague philosophical point intended to explain what kind of "move" I'm trying to make by introducing my principle.)
There are Copernican revolutions and Anti-Copernican revolutions.
Copernican revolutions say "external things matter more than our perspective". The actual Copernican revolution is an example.
Anti-Copernican revolutions say "our perspective matters more than external things". The anthropic principle is an example: instead of asking "why are we lucky to have this universe?" we ask "why is this universe lucky to have us?". What Immanuel Kant called his "Copernican revolution" is another example: instead of saying "mental representations should conform to external objects" he said "external objects should conform to mental representations".[5] Arguably, Policy Alignment is also an example ("human beliefs, even if flawed, are more important than AI's galaxy-brained beliefs").
With my principle, I'm trying to make an Anti-Copernican revolution too. My observation is the following: for our abstractions to be grounded in anything at all, reality has to have certain properties — therefore, we can deduce properties of reality from introspective information about our abstractions.
The green bubble is all aspects of reality humans can optimize or comprehend. It's a cradle of simplicity in a potentially infinite sea of complexity. The core of the bubble is what we can optimize; the outer layer is what we can only comprehend.
The outer layer contains, among other things, the last theory of physics which has some intuitive sense. The rest of the universe, not captured by the theory, is basically just "noise".
We care about the internal structure of the bubble (its internals are humanly comprehensible concepts). We don't care about the internal structure of the "noise". Though we do care about predicting the noise, since the noise might accidentally accumulate into a catastrophic event.
The bubble has a couple of nice properties. It's humanly comprehensible and it has a gradual progression from easier concepts to harder concepts (just like in school). We know that the bubble exists, no matter how wrong our beliefs are. Because if it doesn't exist, then all our values are incoherent and the world is incomprehensible or uncontrollable.
Note that the bubble model applies to 5 somewhat independent things: the laws of physics, ethics, cognition & natural language, conscious experience, and mathematics.
Idea 1. Is "objects that are easy to manipulate with the hand" a natural abstraction? I don't know. But imagine I build an AI with a mechanical hand. Now it should be a natural abstraction for the AI, because "manipulating objects with the hand" is one of the simplest actions the AI can perform. This suggests that it would be nice to have an AI which interprets reality in terms of the simplest actions it can take. Because it would allow us to build a common ontology between humans and the AI.
Idea 2. The simplest explanation of the reward is often unsafe because it's "too smart". If you teach a dumber AI to recognize dogs, it might learn the shape and texture of a dog; meanwhile a superintelligent AI will learn a detailed model of the training process and Goodhart it. This suggests that it would be nice to have an AI which doesn't just search for the simplest explanation with all of its intelligence, but looks for the simplest explanations at different levels of intelligence — and is biased towards "simpler and dumber" explanations.
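Idea 2 can be phrased as a selection rule: among explanations that fit the data well enough, prefer the one findable at the lowest level of intelligence, rather than the globally best one. A minimal sketch with made-up capability scores and accuracies:

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    name: str
    capability_needed: int  # how "smart" a learner must be to find this explanation
    accuracy: float         # how well it explains the reward / the labels

def pick_explanation(explanations, accuracy_floor=0.9):
    """Among good-enough explanations, prefer the one findable at the lowest
    capability level ("dumbest-first"), not the globally best one."""
    good_enough = [e for e in explanations if e.accuracy >= accuracy_floor]
    return min(good_enough, key=lambda e: e.capability_needed)

candidates = [
    Explanation("dog shape + texture", capability_needed=2, accuracy=0.95),
    Explanation("detailed model of the training process", capability_needed=9, accuracy=0.999),
]
print(pick_explanation(candidates).name)  # dog shape + texture
```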
The principle combines both of those ideas and gives them additional justification. It's a new measure of simplicity.
Here I explain how the principle relates to the following problems: the pointers problem; the diamond maximizer problem; environmental goals; identifying causal goal concepts from sensory data; ontology identification problem; eliciting latent knowledge.
According to the principle, we can order all variables by how easy they are to optimize/comprehend. We can do this without abrupt jumps in complexity or empty classes. Harder-to-comprehend variables can have greater predictive power than easier ones, because they have fewer constraints.
That implies the following:
The natural abstraction hypothesis says that (...) a wide variety of cognitive architectures will learn to use approximately the same high-level abstract objects/concepts to reason about the world. (from Testing The Natural Abstraction Hypothesis: Project Intro)
This claim says that this ability to learn natural abstractions applies more broadly: general-purpose cognitive systems (like humans or AGI) can in principle learn all natural abstractions. (...) This claim says that humans and ML models are part of the large class of cognitive systems that learn to use natural abstractions. Note that there is no claim to the converse: not all natural abstractions are used by humans. But given claim 1c, once we do encounter the thing described by some natural abstraction we currently don't use, we will pick up that natural abstraction too, unless it is too complex for our brain. (from Natural Abstractions: Key claims, Theorems, and Critiques)
If NAH is true, referents of human concepts have relatively simple definitions.
However, my principle implies that referents of human concepts have a relatively simple definition even if human concepts are not universal (i.e. it's not true that "a wide variety of cognitive architectures will learn to use approximately the same high-level abstract objects/concepts to reason about the world").
One article by Eliezer Yudkowsky kinda implies that there could be a language for describing any possible universe on multiple levels, a language in which defining basic human goals would be pretty easy (no matter what kind of universe humans live in):
Given some transparent prior, there would exist a further problem of how to actually bind a preference framework to that prior. One possible contributing method for pinpointing an environmental property could be if we understand the prior well enough to understand what the described object ought to look like — the equivalent of being able to search for ‘things W made of six smaller things X near six smaller things Y and six smaller things Z, that are bound by shared Xs to four similar things W in a tetrahedral structure’ in order to identify carbon atoms and diamond. (from Ontology identification problem: Matching environmental categories to descriptive constraints)
But why would such a language be feasible to figure out? It seems like creating it could require considering countless possible universes.
My principle explains "why" and proposes a relatively feasible method of creating it.
The predictor might internally represent the world in such a way that the underlying state of the world is not a continuous function of its activations. For example, the predictor might describe the world by a set of sentences, for which syntactically small changes (like inserting the word “not”) could correspond to big changes in the underlying state of the world. When the predictor has this structure, the direct translator is highly discontinuous and it is easy for human simulators to be closer to continuous.
We might try to fix this by asking the predictor to learn a “more continuous” representation, e.g. a representation such that observations are a continuous function or such that time evolution is continuous. One problem is that it’s unclear whether such a continuous parametrization even exists in general. But a more straightforward problem is that when evaluated quantitatively these approaches don’t seem to address the problem, because the properties we might try to use to enforce continuity can themselves be discontinuous functions of the underlying latent state. (from ELK prize results, Counterexample: the predictor’s latent space may not be continuous)
The principle could be used to prove that the properties for enforcing continuity can't themselves be discontinuous functions of the underlying latent state (unless something really weird is going on, in which case humans should be alerted), provided we use the comprehensibility properties to define "continuity".
A proposal by Derek Shiller, Beth Barnes and Nate Thomas, and Oam Patel:
Rather than trying to learn a reporter for a complex and alien predictor, we could learn a sequence of gradually more complex predictors with corresponding reporters. Then instead of encouraging the final reporter to be simple, we can encourage the difference between consecutive reporters to be simple.
(...) Intuitively, the main problem with this proposal is that there might be multiple fundamentally different ways to predict the world, and that we can’t force the reporter to change continuously across those boundaries. (from ELK prize results, Strategy: train a sequence of reporters for successively more powerful predictors)
The principle could be used to prove that we can force predictors to not be "fundamentally different" from each other, so we can force the reporter to change continuously.
Here I explain how the principle relates to Impact Regularization.
When we say “paint all cars pink” or “cure cancer” there’s some implicit set of consequences that we think are allowable and should definitely not be prevented, such as people noticing that their cars are pink, or planetary death rates dropping. We don’t want the AI trying to obscure people’s vision so they can’t notice the car is pink, and we don’t want the AI killing a corresponding number of people to level the planetary death rate. We don’t want these bad offsetting actions which would avert the consequences that were the point of the plan in the first place. (from Low impact: Allowed consequences vs. offset actions)
We can order all variables by how easy they are to optimize/comprehend. We could use this ordering to differentiate between "impacts explainable by more coarse-grained variables" and "impacts explainable by more fine-grained variables". According to the principle, the latter impacts are undesirable by default. For example: people noticing that their cars are pink is an impact explainable by coarse-grained variables, while obscuring people's vision so they can't notice is an impact which works through much more fine-grained variables.
Some hard-to-optimize/comprehend variables are "contained" within easy-to-optimize/comprehend variables. For example:
We could use this fact to search for unusual sources of complicated impacts.
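Here's a minimal sketch of what such an impact measure could look like, assuming we already have each variable's rank in the optimize/comprehend ordering; the ranks, weights and cutoff are made up for illustration.

```python
def impact_penalty(deltas, rank, n_easy=10):
    """Toy impact measure: changes in hard-to-comprehend variables (rank >= n_easy)
    are penalized much more heavily than changes in easy-to-comprehend ones,
    which are treated as allowed consequences of the task."""
    penalty = 0.0
    for var, delta in deltas.items():
        weight = 1.0 if rank[var] < n_easy else 100.0  # made-up weights
        penalty += weight * abs(delta)
    return penalty

# Made-up ranks: lower = easier to optimize/comprehend.
rank = {"car_color": 2, "owner_notices_color": 4, "owner_visual_cortex_state": 57}
print(impact_penalty({"car_color": 1.0, "owner_notices_color": 1.0}, rank))        # 2.0
print(impact_penalty({"car_color": 1.0, "owner_visual_cortex_state": 1.0}, rank))  # 101.0
```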
According to the principle, we can order all variables by how easy they are to optimize/comprehend. If we could map the variables inside an AI to this order, we could measure how much optimization the AI does, which is relevant to the Goodhart problem.
The intuition here is that to become substantially better at optimizing something, you need to perform progressively more complicated actions and take into account progressively more incomprehensible considerations; you can become better by just trying different combinations of the same kind of actions/considerations, but then you're fundamentally limited. However, there's also a stronger claim:
An agent exploiting a certain kind of variables can't game an outer objective unless something starts exploiting less optimizable/comprehensible variables — which the agent can notice or stop. This assumes the outer objective is chosen with minimal competence and the agent's environment is somewhat stable.
Imagine a superintelligent robot tasked to place stones into a pile. A couple of cameras observe the robot. "Being seen collecting stones into a pile" is the outer objective. "Actually collecting stones into a pile" is the intent behind the outer objective.
The robot could get reward by going around and grabbing stones (the robot's and the stones' movement can be described by easy-to-optimize/comprehend variables). The robot could also hack the cameras to produce fake video, but that would require manipulating variables which are substantially harder to optimize/comprehend. If the robot sticks to manipulating the easier variables, everything should be fine.
Another agent could hack the cameras. Less likely, the environment itself could transform into a state where the cameras are hacked. But any of that would imply that harder-to-comprehend variables have changed in a way directly related to how the easier variables can optimize the outer objective. (If the cameras are hacked, collecting real stones suddenly becomes completely useless for optimizing the outer objective.) The robot can report or stop that.
In some parts of the environment, getting stones could be as hard as hacking the cameras. For example, maybe we need to hack the enemy's cameras to steal their stones. In that case we could whitelist exploiting the harder variables there. The robot can ensure that optimization doesn't "spill" into the rest of the world.
Imagine you measure "how good one can play a video game" (the intent) by "the speed of completing the game" (the outer objective).
This outer objective can be Goodharted with glitches (anomalously fast movement, teleportation, invincibility, getting score points out of nothing). However, at least some of the following will be true:
If the player sticks to the easier-to-optimize variables, Goodharting the outer objective is impossible. But expert performance is still possible.
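Here's a minimal sketch of the "notice or stop" part of the claim, assuming the monitor has time series for the outer objective and the environment variables, plus each variable's rank in the optimize/comprehend ordering; the correlation test and thresholds are stand-ins.

```python
import numpy as np

def goodhart_alert(objective, variable_traces, rank, n_easy=10, threshold=0.5):
    """Toy monitor: flag hard-to-comprehend variables whose changes have become
    strongly correlated with changes in the outer objective (e.g. something other
    than ordinary stone-moving is now driving the camera feed)."""
    flags = []
    d_obj = np.diff(objective)
    for name, trace in variable_traces.items():
        if rank[name] < n_easy:
            continue  # easy variables are allowed to drive the objective
        corr = np.corrcoef(np.diff(trace), d_obj)[0, 1]
        if abs(corr) > threshold:
            flags.append(name)  # report to humans, or stop, before acting further
    return flags
```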
An agent which desperately and monomaniacally wants to optimize the mathematical (plan/state/trajectory) (evaluation) "grader" function is not aligned to the goals we had in mind when specifying/training the grader (e.g. "make diamonds"), the agent is aligned to the evaluations of the grader (e.g. "a smart person's best guess as to how many diamonds a plan leads to").
I believe the point of "Don't align agents to evaluations of plans" can be reformulated as:
Make agents terminally value easy-to-optimize/comprehend variables, so they won't Goodhart by manipulating hard-to-optimize/comprehend variables.
My principle supports this point.
More broadly, a big aspect of Shard Theory can be reformulated as:
Early in training, Reinforcement Learning agents learn to terminally value easy to optimize/comprehend variables ("shards" are simple computations about simple variables)... that's why they're unlikely to Goodhart their own values by manipulating hard to optimize/comprehend variables.
If Shard Theory is true, the principle should give insight into how shards behave in all RL agents. Because the principle is true for all agents whose intelligence & values develop gradually and who don't completely abandon their past values.
See glider example, strawberry example, Boolean circuit example, diamond example.
I believe the idea of Mechanistic Anomaly Detection can be described like this:
Any model has "layers of structure" and therefore can be split into versions, ordered from versions with less structure to versions with more structure. When we find the version with the least structure which explains the most instances[7] of a phenomenon we care about, it defines the latent variables we care about.
This is very similar to the principle, but more ambitious (makes stronger claims about all possible models) and more abstract (doesn't leverage even the most basic properties of human values).
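A minimal sketch of that selection rule (walk the versions from least structure to most, stop at the first one that explains enough of the instances); the polynomial "versions" in the usage example are just a toy stand-in.

```python
import numpy as np

def least_structured_explanation(versions, structure, explains, instances, coverage=0.95):
    """Walk candidate model versions from least structure to most; return the first
    one that explains a large enough fraction of the instances we care about."""
    for v in sorted(versions, key=structure):
        explained = sum(1 for x in instances if explains(v, x))
        if explained >= coverage * len(instances):
            return v
    return None

# Toy usage: "versions" are polynomial degrees; an instance counts as explained
# if the fitted polynomial predicts it to within 0.1.
xs, ys = np.arange(10.0), 2 * np.arange(10.0) + 1
fits = {deg: np.polyfit(xs, ys, deg) for deg in range(4)}
explained_by = lambda deg, i: abs(np.polyval(fits[deg], xs[i]) - ys[i]) < 0.1
print(least_structured_explanation(range(4), lambda d: d, explained_by, range(10)))  # 1
```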
Claim A. Say you can comprehend some variables, but not others. You can still understand which comprehensible variable is the most similar to a given incomprehensible variable (and whether the former causes the latter); whether a change of an incomprehensible variable harms or helps your values (and whether the change is necessary or unnecessary); and whether an incomprehensible variable is contained within a particular part of your world-model or not. According to the principle, this knowledge can be obtained automatically.
Claim B. Take an ontology (which describes real things) and a simpler ontology (which might describe nonexistent things). Whatever the simpler ontology describes, we can automatically check whether there's anything in the richer ontology that corresponds to it OR whether searching for a correspondence is too costly.
This is relevant to interpretability.
Imagine you can't comprehend how the human body works. Consider these statements by your doctor:
Despite not understanding it all, you understand everything relevant to your values. For example, from the last statement you understand that the doctor doesn't respect your values.
Now, imagine you're the doctor. You have a very uneducated patient. The patient might say stuff like "inside my body <something> moves from one of my hands to another" or "inside my body <something> keeps expanding below my chest". Whatever they describe, you'll know whether you know a scientific explanation of it OR whether searching for an explanation is too costly.
The above is similar to Ramsification.
Claims A and B suggest a hypothetical interpretability method. I'll describe it with a metaphor:
In this metaphor, Einstein = an incomprehensible AI. Village idiot = an easily interpretable AI. It's like the broken telephone game, except we're fixing broken links.
If some assumptions hold (it's cheap to translate brains into models made of such variables; describing Einstein's cognition doesn't require variables more complex than the ones covered by the ordering; producing a non-insane, non-bizarre clone doesn't take forever), the proposal above gives a solution to ELK.
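Here's a schematic of the chain in Python, where `simplify` (produce a slightly simpler clone) and `still_faithful` (check the clone isn't insane or bizarre) are placeholders for the genuinely hard parts of the proposal.

```python
def build_chain(incomprehensible_model, n_steps, simplify, still_faithful):
    """Schematic of the Einstein -> village idiot chain: repeatedly produce a
    slightly simpler clone of the current model, checking at every link that the
    translation hasn't produced an insane or bizarre clone."""
    chain = [incomprehensible_model]
    for step in range(n_steps):
        candidate = simplify(chain[-1], step)
        if not still_faithful(chain[-1], candidate):
            raise ValueError(f"broken link at step {step}: translation lost too much")
        chain.append(candidate)
    return chain  # chain[-1] is the easily interpretable "village idiot"
```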
Consider this:
If my principle is formalized, we might obtain a bounded solution to outer and inner alignment. (I mean Task-directed AGI level of outer alignment.) Not saying the procedure is gonna be practical.
Here are some additional examples of comparing variables based on their properties.
Consider what's easier to optimize/comprehend:
Here are the answers:
The analysis above is made from the human perspective and considers "normal" situations (e.g. the situations from the training data, like in MAD).
Some value judgements (e.g. "this movie is good", "this song is beautiful", or even "this is a conscious being") depend on inscrutable brain machinery, the machinery which creates experience. This contradicts the idea that "easiness of optimization/comprehension can be captured by a few relatively simple mathematical properties". But I think this contradiction is not fatal, for the following reason:
We aren't particularly good at remembering exact experiences, we like very different experiences, we can't access each other's experiences, and we have very limited ways of controlling experiences. So, there should be pretty strict limitations on how much understanding of the inscrutable brain machinery is required for respecting the current human values. Therefore, defining corrigible behavior ("don't kill everyone", "don't seek power", "don't mess with human brains") shouldn't require answering many specific, complicated machinery-dependent questions ("what separates good and bad movies?", "what separates good and bad life?", "what separates conscious and unconscious beings?").
Imagine you have a set of some properties. Let's say it's just one property, "speed". You can assign a speed to any "simple" variable (observable or latent). However, you can combine multiple simple variables, with drastically different speeds, into a single "complex" variable. As a result, you get a potentially infinite-dimensional space (a complex variable made out of n simple variables is an n-dimensional object). You reduce this high-dimensional space into a low-dimensional space. For simplicity, let's say it's a one-dimensional space. Inevitably, such a reduction requires making arbitrary choices and losing information.
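A small illustration of that last point: combine two simple variables with very different speeds into one complex variable, and different one-number summaries of its "speed" will disagree, each throwing away information in its own arbitrary way.

```python
import numpy as np

# Two "simple" variables with drastically different speeds...
slow = np.sin(np.linspace(0, 2 * np.pi, 1000))
fast = np.sin(np.linspace(0, 200 * np.pi, 1000))

# ...combined into a single "complex" variable.
complex_var = slow + 0.1 * fast

# The speed profile of the complex variable is really two-dimensional:
component_speeds = (np.abs(np.diff(slow)).mean(), np.abs(np.diff(fast)).mean())

# Any single "speed of the complex variable" collapses that profile into one number,
# and different (equally arbitrary) reductions disagree about what that number is:
mean_speed = np.abs(np.diff(complex_var)).mean()
max_speed = np.abs(np.diff(complex_var)).max()
print(component_speeds, mean_speed, max_speed)
```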
Here are the most important consequences of the principle, along with some additional justifications:
There are value systems for which claims 1-5 aren't true. In that sense, they're empirical. However, I argue that the claims are a priori true for humans, no matter how wrong our beliefs about the world are.
Some June edits, before 11/06/25: added a little bit of content and made a couple of small edits.
"Could" is important here. You can optimize/comprehend a thing (in principle) even if you aren't aware of its existence. For example: cave people could easily optimize "the amount of stone knives made of quantum waves" without knowing what quantum waves are; you could in principle easily comprehend typical behavior of red-lipped batfishes even if you never decide to actually do it.
An important objection to this claim involves inscrutable brain machinery.
For example, I care about subtle forms of pleasure because they're similar to simpler forms of pleasure. I care about more complex notions of "fairness" and "freedom" because they're similar to simpler notions of "fairness" and "freedom". I care about the concept of "real strawberries" because it's similar to the concept of "sensory information about strawberries". Etc.
Or consider prehistoric people. Even by today's standards, they had a lot of non-trivial positive values ("friendship", "love", "adventure", etc.) and could've easily lived very moral lives, if they avoided violence. Giant advances in knowledge and technology didn't change human values that much. Humans want to have relatively simple lives. Optimizing overly complex variables would make life too chaotic, uncontrollable, and unpleasant.
Note that it would be pretty natural to care about the existence of the pseudorandom number generator, but "the existence of the PRNG" is a much more comprehensible variable than "the current value of the PRNG".
Also, as far as I'm aware, Super Mario Bros. doesn't actually have a pseudorandom number generator. But just imagine that it does.
"Up to now it has been assumed that all our cognition must conform to the objects; but all attempts to find out something about them a priori through concepts that would extend our cognition have, on this presupposition, come to nothing. Hence let us once try whether we do not get farther with the problems of metaphysics by assuming that the objects must conform to our cognition, which would agree better with the requested possibility of an a priori cognition of them, which is to establish something about objects before they are given to us. This would be just like the first thoughts of Copernicus, who, when he did not make good progress in the explanation of the celestial motions if he assumed that the entire celestial host revolves around the observer, tried to see if he might not have greater success if he made the observer revolve and left the stars at rest." (c.) The Critique of Pure Reason, by Immanuel Kant, Bxvi–xviii
I mean, we do care about that model being inner-aligned. But this is a separate problem.
the most instances in training