AI ALIGNMENT FORUM

abramdemski's Shortform

by abramdemski
10th Sep 2020

This is a special post for quick takes by abramdemski. Only they can create top-level comments.
abramdemski

Here are what seem like priorities to me after listening to the recent Dwarkesh podcast featuring Daniel Kokotajlo:

1. Developing safer AI tech (in contrast to modern generative AI) so that frontier labs have an alternative technology to switch to, which makes it lower-cost for them to start taking warning signs of misalignment in their current tech tree seriously. There are several possible routes here, ranging from small tweaks to modern generative AI, to scaling up infra-Bayesianism (existing theory, totally groundbreaking implementation), to starting totally from scratch (inventing a new theory). Of course we should be working on all routes, but prioritization depends in part on timelines.

  • I see the game here as basically: look at the various existing demos of unsafety and make a counter-demo which is safer along several of these metrics, without having gamed the metrics.

2. De-agentify the current paradigm or the new paradigm:

  • Don't directly train on reinforcement across long chains of activity. Find other ways to get similar benefits.
  • Move away from a model where the AI is personified as a distinct entity (eg, chatbot model). It's like the old story about building robot arms to help feed disabled people -- if you mount the arm across the table, spoonfeeding the person, it's dehumanizing; if you make it a prosthetic, it's humanizing.
    • I don't want AI to write my essays for me. I want AI to help me get my thoughts out of my head. I want super-autocomplete. I think far faster than I can write or type or speak. I want AI to read my thoughts & put them on the screen.
      • There are many subtle user interface design questions associated with this, some of which are also safety issues, eg, exactly what objective do you train on? (A toy contrast of training objectives is sketched just after this list.)
    • Similarly with image generation, etc.
    • I don't necessarily mean brain-scanning tech here, but of course that would be the best way to achieve it.
    • Basically, use AI to overcome human information-processing bottlenecks instead of just trying to replace humans. Put humans "in the loop" more and more deeply instead of accepting/assuming that humans will iteratively get sidelined.
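
To make the "what objective do you train on" question a bit more concrete, here's a minimal sketch of the contrast I have in mind (all the names below are hypothetical stand-ins of my own, not an existing training setup): a super-autocomplete objective only scores a short continuation of text the human is already writing, in contrast to rewarding whole assistant rollouts end to end.

```python
# Minimal sketch; hypothetical names throughout, not an existing system.
# The autocomplete-style objective scores only a short continuation of the
# human's own text, rather than an entire assistant rollout.

import torch
import torch.nn.functional as F

class ToyLM(torch.nn.Module):
    """Tiny stand-in for an autoregressive LM: embedding + linear head."""
    def __init__(self, vocab_size: int = 256, dim: int = 32):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [batch, seq] -> logits: [batch, seq, vocab]
        return self.head(self.embed(tokens))

def autocomplete_loss(model: ToyLM, human_prefix: torch.Tensor,
                      continuation: torch.Tensor) -> torch.Tensor:
    """Next-token loss on a short continuation of the human-written prefix only,
    so the human stays in the loop and the model just proposes the next few tokens."""
    full = torch.cat([human_prefix, continuation], dim=1)
    logits = model(full[:, :-1])      # predict token t+1 from tokens up to t
    targets = full[:, 1:]
    k = continuation.shape[1]         # score only the continuation region
    return F.cross_entropy(logits[:, -k:].reshape(-1, logits.shape[-1]),
                           targets[:, -k:].reshape(-1))

# Usage: one 10-token human prefix, one proposed 4-token continuation.
model = ToyLM()
prefix = torch.randint(0, 256, (1, 10))
proposed = torch.randint(0, 256, (1, 4))
loss = autocomplete_loss(model, prefix, proposed)
```

Nothing here optimizes over a long chain of activity; that is the point of the contrast with the chatbot/agent paradigm.
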
Vladimir_Nesov

> prioritization depends in part on timelines

Any research rebalances the mix of currently legible research directions that could be handed off to AI-assisted alignment researchers or early autonomous AI researchers whenever they show up. Even hopelessly incomplete research agendas could still be used to prompt future capable AI to focus on them, while in the absence of such incomplete research agendas we'd need to rely on AI's judgment more completely. So it makes sense to still prioritize things that have no hope at all of becoming practical for decades (with human effort), to make as much partial progress as possible in developing (and deconfusing) them in the next few years.

In this sense current human research, however far from practical usefulness, forms the data for alignment of the early AI-assisted or AI-driven alignment research efforts. The judgment of human alignment researchers who are currently working makes it possible to formulate more knowably useful prompts for future AIs that nudge them in the direction of actually developing practical alignment techniques.

abramdemski

This sort of approach doesn't make so much sense for research explicitly aiming at changing the dynamics in this critical period. Having an alternative, safer idea almost ready-to-go (with some explicit support from some fraction of the AI safety community) is a lot different from having some ideas which the AI could elaborate. 

Vladimir_Nesov

With AI assistance, the degree to which an alternative is ready-to-go can differ a lot compared to its prior human-developed state. Also, an idea that's ready-to-go is not yet an edifice of theory and software that's ready-to-go in replacing 5e28 FLOPs transformer models, so some level of AI assistance is still necessary with 2 year timelines. (I'm not necessarily arguing that 2 year timelines are correct, but it's the kind of assumption that my argument should survive.)

The critical period includes the time when humans are still in effective control of the AIs, or when vaguely aligned and properly incentivised AIs are in control and are actually trying to help with alignment, even if their natural development and increasing power would end up pushing them out of that state soon thereafter. During this time, the state of current research culture shapes the path-dependent outcomes. Superintelligent AIs that are reflectively stable will no longer allow path dependence in their further development, but before that happens the dynamics can be changed to an arbitrary extent, especially with AI efforts as leverage in implementing the changes in practice.

Cole Wyeth

It seems to me that an "implementation" of something like Infra-Bayesianism which can realistically compete with modern LLMs would ultimately look a lot like a semi-theoretically-justified modification to the loss function or optimizer of agentic fine-tuning / RL or possibly its scaffolding to encourage it to generalize conservatively. This intuition comes in two parts:

1: The pre-training phase is already finding a mesa-optimizer that does induction in context. I usually think of this as something like Solomonoff induction with a good inductive bias, but probably you would expect something more like logical induction. I expect the answer to be somewhere in between. I'll try to test this empirically at ARENA this May (a toy version of such a test is sketched just after part 2 below). The point is that I struggle to see how IB applies here, on the level of pure prediction, in practice. It's possible that this is just a result of my ignorance or lack of creativity.

2: I'm pessimistic about learning results for MDPs or environments "without traps" having anything to do with building a safe LLM agent.
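
To gesture at the kind of empirical test I have in mind in part 1 (a toy setup with made-up names, not the actual experiment I'll run): sample sequences from random Markov chains, compute the Bayes-optimal in-context predictor's per-position log-loss, and check whether the model's in-context loss converges toward it as the context grows.

```python
# Toy version of the test in part 1; hypothetical setup and names.
# If a model's in-context loss on sequences from random Markov chains tracks the
# Bayes-optimal predictor's loss, the model is doing (at least) in-context induction.

import numpy as np

def sample_markov_sequence(rng, n_states: int = 5, length: int = 256) -> np.ndarray:
    """Draw a random transition matrix, then sample a sequence from it."""
    T = rng.dirichlet(np.ones(n_states), size=n_states)   # row s = P(next | s)
    seq = [int(rng.integers(n_states))]
    for _ in range(length - 1):
        seq.append(int(rng.choice(n_states, p=T[seq[-1]])))
    return np.array(seq)

def bayes_optimal_logloss(seq: np.ndarray, n_states: int = 5) -> np.ndarray:
    """Per-position log-loss of the posterior predictive (Laplace-smoothed transition
    counts), i.e. the best any in-context inductor can do without knowing T."""
    counts = np.ones((n_states, n_states))                # uniform Dirichlet prior
    losses = []
    for prev, nxt in zip(seq[:-1], seq[1:]):
        p = counts[prev] / counts[prev].sum()
        losses.append(-np.log(p[nxt]))
        counts[prev, nxt] += 1
    return np.array(losses)

rng = np.random.default_rng(0)
seq = sample_markov_sequence(rng)
baseline = bayes_optimal_logloss(seq)
# Compare the model's per-token log-loss on `seq` against `baseline` as context grows.
```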

If IB is only used in this heuristic way, we might expect fewer of the mathematical results to transfer, and instead just port over some sort of pessimism about uncertainty. In fact, Michael Cohen's work follows pretty much exactly this approach at times (I've read him mention IB about once, apparently as a source of intuition but not technical results).
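
To make the "pessimism about uncertainty" point slightly more concrete, here is a crude maximin-flavored sketch of my own (hypothetical names, only loosely IB-inspired, and not anything from Cohen's papers): train the policy against the worst-case member of an ensemble of reward hypotheses standing in for a credal set.

```python
# Crude maximin sketch; hypothetical names, only loosely IB-flavored.
# An ensemble of reward hypotheses stands in for a credal set, and the policy
# is trained against the worst-case member: pessimism under uncertainty.

import torch

def pessimistic_policy_loss(action_logprobs: torch.Tensor,
                            reward_hypotheses: list) -> torch.Tensor:
    """action_logprobs: [batch] log-probs of sampled actions under the policy.
    reward_hypotheses: list of [batch] reward estimates, one tensor per hypothesis."""
    rewards = torch.stack(reward_hypotheses)     # [n_hypotheses, batch]
    worst_case = rewards.min(dim=0).values       # elementwise worst case
    # REINFORCE-style surrogate: maximize the worst-case reward estimate.
    return -(action_logprobs * worst_case.detach()).mean()
```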

None of this is really a criticism of IB; rather, I think it's important to keep in mind when considering which aspects of IB or IB-like theories are most worth developing.

Vanessa Kosoy

(Summoned by @Alexander Gietelink Oldenziel)

I don't understand this comment. I usually don't think of "building a safer LLM agent" as a viable route to aligned AI. My current best guess about how to create aligned AI is Physicalist Superimitation. We can imagine other approaches, e.g. Quantilized Debate, but I am less optimistic there. More importantly, I believe that we need to complete the theory of agents first, before we can have strong confidence about which approaches are more promising.

As to heuristic implementations of infra-Bayesianism, this is something I don't want to speculate about in public, it seems exfohazardous.

Cole Wyeth

> I usually don't think of "building a safer LLM agent" as a viable route to aligned AI

I agree that building a safer LLM agent is an incredibly fraught path that probably doesn't work. My comment is in the context of Abram's first approach, developing safer AI tech that companies might (apparently voluntarily) switch to, and specifically the route of scaling up IB to compete with LLM agents. Note that Abram also seems to be discussing the AI 2027 report, which if taken seriously requires all of this to be done in about 2 years. Conditioning on this route, I suggest that most realistic paths look like what I described, but I am pretty pessimistic that this route will actually work. The reason is that I don't see explicitly Bayesian glass-box methods competing with massive black-box models at tasks like natural language prediction any time soon. But who knows, perhaps with the "true" (IB?) theory of agency in hand much more is possible. 

> More importantly, I believe that we need to complete the theory of agents first, before we can have strong confidence about which approaches are more promising.

I'm not sure it's possible to "complete" the theory of agents, and I am particularly skeptical that we can do it any time soon. However, I think we agree locally / directionally, because it also seems to me that a more rigorous theory of agency is necessary for alignment.

> As to heuristic implementations of infra-Bayesianism, this is something I don't want to speculate about in public, it seems exfohazardous.

Fair enough, but in that case, it seems impossible for this conversation to meaningfully progress here.

Vanessa Kosoy

I think that in 2 years we're unlikely to accomplish anything that leaves a dent in P(DOOM), with any method, but I also think it's more likely than not that we actually have >15 years.

As to "completing" the theory of agents, I used the phrase (perhaps perversely) in the same sense that e.g. we "completed" the theory of information: the latter exists and can actually be used for its intended applications (communication systems). Or at least in the sense we "completed" the theory of computational complexity: even though a lot of key conjectures are still unproven, we do have a rigorous understanding of what computational complexity is and know how to determine it for many (even if far from all) problems of interest.

I probably should have said "create" rather than "complete".

abramdemski

> The pre-training phase is already finding a mesa-optimizer that does induction in context. I usually think of this as something like Solomonoff induction with a good inductive bias, but probably you would expect something more like logical induction. I expect the answer to be somewhere in between.

I don't personally imagine current LLMs are doing approximate logical induction (or approximate Solomonoff induction) internally. I think of the base model as resembling a circuit prior updated on the data. The circuits that come out on top after the update also do some induction of their own internally, but it is harder to think about what form of inductive bias they have exactly (it would seem like a coincidence if it also happened to be well-modeled as a circuit prior, but it must be something highly computationally limited like that, as opposed to Solomonoff-like).
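
To be slightly more explicit about that picture (a schematic formalization of "circuit prior updated on the data", not a claim about what SGD actually does):

$$P(c \mid x_{1:n}) \;\propto\; 2^{-\mathrm{size}(c)} \prod_{t=1}^{n} P_c(x_t \mid x_{<t})$$

where the prior weight decays in circuit size rather than program length, which is one way of saying why the resulting inductive bias has to be far more computationally limited than anything Solomonoff-like.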

I hesitate to call this a mesa-optimizer. Although good epistemics involves agency in principle (especially time-bounded epistemics), I think we can sensibly differentiate between mesa-optimizers and mere mesa-induction. But perhaps you intended this stronger reading, in support of your argument. If so, I'm not sure why you believe this. (No, I don't find "planning ahead" results to be convincing -- I feel this can still be purely epistemic in a relevant sense.)

Perhaps it suffices for your purposes to observe that good epistemics involves agency in principle?

Anyway, cutting more directly to the point:

I think you lack imagination when you say

> [...] which can realistically compete with modern LLMs would ultimately look a lot like a semi-theoretically-justified modification to the loss function or optimizer of agentic fine-tuning / RL or possibly its scaffolding [...]

I think there are neural architectures close to the current paradigm which don't directly train whole chains-of-thought on a reinforcement signal to achieve agenticness. This paradigm is analogous to model-free reinforcement learning. What I would suggest is more analogous to model-based reinforcement learning, with corresponding benefits to transparency. (Super speculative, of course.)
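
As a very rough sketch of the distinction (my own toy framing, speculative as noted, and every interface below is hypothetical): in the model-free-like setup, the chain-of-thought generator is itself pushed around by the reward; in the model-based-like setup, the generative model only predicts, and agency comes from an explicit search against an explicit, inspectable objective.

```python
# Rough, speculative sketch of the contrast; every interface here is hypothetical.

from typing import Callable, List, Tuple

def model_free_style_update(policy_grad_step: Callable, cot_trajectory, reward: float):
    """Current paradigm, schematically: the whole chain-of-thought is reinforced
    end to end, so whatever 'values' emerge live opaquely in the weights."""
    policy_grad_step(cot_trajectory, reward)

def model_based_style_act(world_model, state, utility: Callable[[object], float],
                          n_candidates: int = 16):
    """Alternative, schematically: the model only predicts consequences; agency
    comes from explicit search against an explicit utility, which stays inspectable."""
    candidates: List = world_model.propose(state, n_candidates)
    scored: List[Tuple[object, object]] = [
        (plan, world_model.predict_outcome(state, plan)) for plan in candidates
    ]
    best_plan, _ = max(scored, key=lambda pair: utility(pair[1]))
    return best_plan
```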
