As we develop new techniques for detecting deceptive alignment, ranging from action monitoring to Chain-of-Thought (CoT) and activation monitoring, we face a dilemma: once we detect scheming behaviour or intent, should we use that signal to "train the scheming out"?
On the one hand, leaving known misaligned behaviour or intent in the model provides no marginal information and is possibly unsafe. On the other hand, training against a monitor might not actually fix the model's underlying motivations; it might simply provide selection pressure that favours more sophisticated, less detectable forms of deception.
This post outlines a simple framework formalising when the generalisation benefit of training outweighs the selection risk of creating a schemer. I assume that the choice is between training and not training on the incriminating examples, e.g. we...
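As a rough, illustrative sketch of the trade-off (the notation here is my own shorthand, not the full framework), the decision comes down to an expected-value comparison:

$$\text{train on the incriminating examples} \iff p_{\text{gen}}\, B_{\text{gen}} \;>\; p_{\text{sel}}\, C_{\text{sel}},$$

where $p_{\text{gen}}$ is the probability that training genuinely removes the misaligned motivation (rather than only its detectable surface behaviour), $B_{\text{gen}}$ is the safety benefit in that case, $p_{\text{sel}}$ is the probability that training instead selects for a schemer that evades the monitor, and $C_{\text{sel}}$ is the cost of that outcome, including losing the monitor as a future detection signal.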
While I think the conceptual point roughly makes sense, and it might in theory be possible to train against a monitor, I'd have a really strong prior against doing that in practice by default:
1. Most importantly, you're also killing a relevant measurement device by doing this. If you have at least one (but probably several) other measurement devices that you're confident in AND that are orthogonal to the monitor you're training against, then maybe. But in practice we don't have these (e.g. probes and interp just aren't good enough atm).
1a. I think your proc...
We’ve significantly upgraded our timelines and takeoff model! It predicts when AIs will reach key capability milestones: for example, Automated Coder / AC (full automation of coding) and superintelligence / ASI (much better than the best humans at virtually all cognitive tasks). This post will briefly explain how the model works, present our timelines and takeoff forecasts, and compare it to our previous (AI 2027) models (spoiler: the AI Futures Model predicts longer timelines to full coding automation than our previous model by about 3-5 years, in significant part due to being less bullish on pre-full-automation AI R&D speedups).
If you’re interested in playing with the model yourself, the best way to do so is via this interactive website: aifuturesmodel.com.
If you’d like to skip the motivation for our...
1. Compute bottleneck
The model says experiment compute becomes the binding constraint once coding is fast. But are frontier labs actually compute-bottlenecked on experiments right now? Anthropic runs inference for millions of users while training models. With revenue growing, more investment coming in, and datacenters being built, couldn't they allocate e.g. 2x more to research compute this year if they wanted?
2. Research taste improvement rate
The model estimates AI research taste improvement based on how quickly AIs have improved in a variety of metr...
> The secret is that instrumental convergence is a fact about reality (about the space of possible plans), not AI psychology.
>
> (Zack M. Davis, group discussion)
Such arguments flitter around the AI safety space. While these arguments contain some truth, they attempt to escape "AI psychology" but necessarily fail. To predict bad outcomes from AI, one must take a stance on how AI will tend to select plans.
This topic is a specialty of mine. Where does instrumental convergence come from? Since I did my alignment PhD on exactly this question, I'm well-suited to explain the situation.
In this article, I do not argue that building transformative AI is safe or that transformative AIs won't tend to select dangerous plans. I simply argue against the claim that "instrumental convergence arises from reality, not from AI psychology."
It seems like the notion of "psychology" that you're invoking here isn't really about "how the AI will make decisions." On my read, you're defining "psychology" as "the prior over policies." This bakes in things like "hard constraints that a policy never takes an unsafe action (according to a perfect oracle)" by placing 0 probability in the prior on policies that violate them. This notion of "psychology" isn't directly about internal computations or decision making. (Though, of course, some priors—e.g. the circuit depth prior on transformers—are most easily described in terms of internal computations.)
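To make that concrete (a toy formalisation of my own, not anything from the post): a prediction like "the AI will seek power" has the form

$$\Pr(\text{power-seeking}) \;=\; \sum_{\pi} p(\pi)\,\Pr\big(\pi \text{ selects a power-seeking plan}\big),$$

where $p(\pi)$ is the prior over policies. Facts about the space of plans only pin down the second factor; the first factor is what I'm calling "psychology" here, and a hard safety constraint just corresponds to a prior that assigns $p(\pi) = 0$ to every policy that ever takes an unsafe action.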
Yeah, that's fair. I guess my views on this are stronger because I think data filtering might be net negative rather than merely sub-optimal.
Nice. Yeah, I'm also excited about coding uplift as a key metric to track; it would probably make time horizons obsolete (or at least constitute a significantly stronger source of evidence than time horizons). We at AIFP don't have the capacity to estimate the trend in uplift over time (I mean, we can do small-N polls of frontier AI company employees...), but we hope someone does.
The RL algorithm "EfficientZero" achieves better-than-human performance on Atari games after only 2 hours of gameplay experience. This seems like a major advance in sample efficiency for reinforcement learning. The post breaks down how EfficientZero works and what its success might mean.
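For a rough sense of the mechanics (sketched from memory in my own notation; the linked post gives the full breakdown): EfficientZero starts from MuZero and adds three changes, the most important being a SimSiam-style self-supervised consistency loss that ties the dynamics model's predicted next latent state to the encoding of the actual next observation:

$$\mathcal{L}_{\text{consist}} \;=\; -\cos\!\big(q(\hat{s}_{t+1}),\; \operatorname{sg}(s_{t+1})\big), \qquad \hat{s}_{t+1} = g_\theta(s_t, a_t), \quad s_{t+1} = h_\theta(o_{t+1}),$$

where $g_\theta$ is the learned dynamics network, $h_\theta$ the representation network, $q$ a small predictor head, and $\operatorname{sg}$ a stop-gradient. The other two changes are predicting a summed "value prefix" over a short horizon instead of per-step rewards, and refreshing stale off-policy value targets with rollouts from the current model.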
I think it would help me if you explained wha...