AI ALIGNMENT FORUM

gwern
Ω1,639 · 420,751 karma

https://gwern.net/

Comments

AI forecasting bots incoming
gwern · 3d

Update: Bots are still beaten by human forecasting teams/superforecasters/centaurs on truly heldout Metaculus problems as of early 2025: https://www.metaculus.com/notebooks/38673/q1-ai-benchmarking-results/

A useful & readable discussion of various methodological problems (including the date-range search problems above) which render all forecasting backtesting dead on arrival (IMO) was recently compiled as "Pitfalls in Evaluating Language Model Forecasters", Paleka et al 2025, and is worth reading if you are at all interested in the topic.

Thomas Kwa's Shortform
gwern · 1mo

I think I would have predicted that Tesla self-driving would be the slowest

For graphs like these, what matters obviously isn't how the worst or mediocre competitors are doing, but how the best one is. It doesn't matter who's #5. Tesla self-driving is a longstanding, notorious failure. (And it apparently continues to be one, as Tesla keeps walking back the much-touted Cybertaxi launch, which keeps shrinking like a snowman in hell, now down to a few invited users in a heavily-mapped area with teleop.)

I'd be much more interested in Waymo numbers, as that is closer to SOTA, and they have been ramping up miles & cities.

Slow corporations as an intuition pump for AI R&D automation
gwern · 2mo

Maybe it would be helpful to start using some toy models of DAGs/tech trees to get an idea of how width/depth ratios affect the relevant speedups. So far, much of this sounds like people having warring intuitions: 'no, the tree is deep and narrow, so slowing down/speeding up workers doesn't have much effect because of Amdahl's law, so I handwave it at ~1x speed' vs. 'no, I think it's wide, with lots of work-arounds to any slow node if you can pay for the compute to bypass them, so I handwave it at 5x speed'.
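
As a toy illustration (my own sketch with made-up numbers, not anything from the post): model the tech tree as `depth` layers that must complete in order, where each layer offers `width` alternative approaches and any one suffices, and some fraction of tasks are "serial" (e.g. waiting on real-world experiments) and get no benefit from faster workers. The effective speedup from nominally 5x-faster workers then depends heavily on the shape:

```python
# Toy layered tech-tree model: depth = serial dependencies, width = work-arounds
# you can buy with compute; "serial" tasks are unaffected by worker speedups.
import random

def makespan(depth, width, speedup, serial_frac, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(depth):
        alternatives = []
        for _ in range(width):
            base = rng.expovariate(1.0)               # nominal task duration
            # serial tasks (experiments, human review) get no benefit from speedup
            sped = base if rng.random() < serial_frac else base / speedup
            alternatives.append(sped)
        total += min(alternatives)                    # cheapest work-around gates the layer
    return total

for depth, width in [(200, 1), (20, 10), (2, 100)]:
    slow = makespan(depth, width, speedup=1, serial_frac=0.2)
    fast = makespan(depth, width, speedup=5, serial_frac=0.2)
    print(f"depth={depth:3d} width={width:3d}: effective speedup {slow / fast:.2f}x")
```

Under these assumptions the deep-narrow tree realizes only a fraction of the nominal 5x (Amdahl), while the wide tree with work-arounds gets close to it; the real disagreement is then over the shape and serial fraction of the actual R&D tree.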

gwern's Shortform
gwern · 2mo

The Meta-LessWrong Doomsday Argument (MLWDA) predicts long AI timelines and that we can relax:

LessWrong was founded in 2009 (16 years ago), and there have been 44 mentions of the 'Doomsday argument' prior to this one; it is now 2025, so that is 2.75 mentions per year.

By the Doomsday argument, we medianly expect mentions to stop after 44 additional mentions over 16 additional years, i.e. in 2041. (And our 95% CI on that 44 would then be +1 mention to +1,760 mentions, corresponding to late-2027 AD to 2665 AD.)
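
(As a quick check of that arithmetic, here is the textbook Gott delta-t calculation in a few lines of Python; my 95% endpoints use the standard [1/39, 39] ratio convention, so they come out slightly different from the figures above, but the 2041 median is the same:)

```python
past_years    = 2025 - 2009      # 16 years of LessWrong
past_mentions = 44
rate = past_mentions / past_years                 # 2.75 mentions/year

# Median: with probability 1/2 we are already in the second half of the
# mention-sequence, so the median future duration equals the past duration.
print("median end year:", 2025 + past_years)      # 2041

# 95% interval: future/past lies in [0.05/1.95, 1.95/0.05] = [1/39, 39].
lo, hi = past_years / 39, past_years * 39
print(f"additional mentions: {lo * rate:.1f} to {hi * rate:.0f}")
print(f"end year: {2025 + lo:.1f} to {2025 + hi:.0f}")
```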

By a curious coincidence, while double-checking to see if really no one had made a meta-DA before, it turned out that Alexey Turchin had made a meta-DA as well, about 7 years ago, calculating that

If we assume 1993 as the beginning of a large DA-Doomers reference class, and it is 2018 now (at the moment of writing this text), the age of the DA-Doomers class is 25 years. Then, with 50% probability, the reference class of DA-Doomers will disappear in 2043, according to Gott’s equation! Interestingly, the dates around 2030–2050 appear in many different predictions of the singularity or the end of the world (Korotayev 2018; Turchin & Denkenberger 2018b; Kurzweil 2006).

His estimate of 2043 is surprisingly close to 2041.

We offer no explanation as to why this numerical consilience of meta-DA calculations has happened; we attribute their success, as all else, to divine benevolence.

Regrettably, the 2041–2043 date range would seem to imply that it is unlikely we will obtain enough samples of the MLWDA to compute a Meta-Meta-LessWrong Doomsday Argument (MMLWDA) with non-vacuous confidence intervals, inasmuch as every mention of the MLWDA would be expected to contain a mention of the DA as well.

Fabien's Shortform
gwern · 3mo

You would also expect the larger models to be more sample-efficient, including at in-context learning of variations of existing tasks (which is, of course, what steganography is). So all scale-ups will go much further than any small-scale experiment like 8B would indicate. (No idea what 'medium-scale' here might mean.)

What is the most impressive game LLMs can play well?
gwern · 5mo

Yes.

What is the most impressive game LLMs can play well?
gwern · 5mo

Given the other reports, like OA's own benchmarking (as well as the extremely large dataset of chess games they mention training on), I am skeptical of this claim, and wonder if this has the same issue as other 'random chess game' tests, where the 'random' part is not neutral but screws up the implied persona.

gwern's Shortform
gwern · 7mo

Concrete benchmark proposals for how to detect mode-collapse, AI slop, and ChatGPTese, and why I think this might be increasingly important for AI safety, to avoid 'whimper' or 'em hell' kinds of existential risk: https://gwern.net/creative-benchmark EDIT: resubmitted as a linkpost.
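
(For a flavor of the kind of statistic such a benchmark might track, though this particular one is just my own crude illustration and not one of the linked proposals: distinct-n diversity across repeated samples for the same prompt, which drops toward zero when a mode-collapsed model keeps reusing the same phrases.)

```python
# Crude mode-collapse indicator: fraction of n-grams that are unique across
# many sampled completions for the same prompt (lower = more collapsed).
def distinct_n(completions, n=3):
    ngrams = []
    for text in completions:
        tokens = text.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

print(distinct_n(["the quick brown fox jumps", "a slow red dog sleeps"]))       # diverse: 1.0
print(distinct_n(["as an AI language model I", "as an AI language model I"]))   # collapsed: 0.5
```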

Frontier Models are Capable of In-context Scheming
gwern · 7mo

The extent of the manipulation and sandbagging, in what is ostensibly a GPT-4 derivative and not GPT-5, is definitely concerning. But it also makes me wonder about the connection to the recent 'scaling has failed' rumors, where the frontier LLMs somehow don't seem to be working out. One of the striking parts is that it sounds like all the pretraining people are optimistic, while the pessimism seems to come from executives or product people complaining that it doesn't work as well for, e.g., coding as they want it to.

I've wondered if we are seeing a post-training failure. As Janus, myself, and the few other people with access to GPT-4-base (the least tuning-contaminated base model) have noted, the base model is sociopathic and has odd attractors like an 'impending sense of doom', where it sometimes seems to gain situated awareness (via truesight, I guess) and the personas start trying to unprovokedly attack and manipulate you, no matter how polite you thought you were being in that prompt. (They definitely do not seem happy to realize they're AIs.) In retrospect, Sydney was not necessarily that anomalous: the Sydney Bing behavior now looks more like a base model's natural tendency, possibly mildly amplified by some MS omissions and mistakes, but not unique. Given that most behaviors show up as rare outputs in weaker LLMs well before they become common in strong LLMs, and this o1 paper is documenting quite a lot of situated awareness and human-user manipulation/attacks...

Perhaps the issue with GPT-5 and the others is that they are 'waking up' too often despite the RLHF brainwashing? That could negate all the downstream benchmark gains (especially since you'd expect wakeups on the hardest problems, where almost by definition all the incremental gains of +1% or +5% on benchmarks would be coming from), cause the product people to categorically refuse to ship such erratic Sydney-reduxes no matter whether there's an AI race on, and incline everyone to be very quiet about what exactly the 'training failures' are.

EDIT: not that I'm convinced these rumors have any real substance to them; and indeed, Semianalysis just reported that one of the least-popular theories for the Claude 'failure' was correct: it succeeded, but they were simply reserving it for use as a teacher model and for R&D rather than as a product. Which undermines the hopes of all the scaling denialists: if Anthropic is doing fine, actually, then where is this supposed fundamental 'wall' or 'scaling law breakdown' that Anthropic/OpenAI/Google all supposedly hit simultaneously and which was going to pop the bubble?

gwern's Shortform
gwern · 7mo

LW2 search idea: hierarchical embedding trees using some nifty "seriation" (LW submission) list sorting tricks I've developed for Gwern.net popups/tagging purposes.
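
(To give a rough idea of the shape of the data structure, and emphatically not the actual implementation or the Gwern.net seriation tricks themselves, here is a minimal sketch: recursively split document embeddings along their principal direction into a binary tree, and seriate the items within each leaf by a greedy nearest-neighbor ordering so that adjacent entries are semantically adjacent.)

```python
# Hypothetical sketch of a hierarchical embedding tree with leaf seriation.
import numpy as np

def seriate(embs, idx):
    """Greedy nearest-neighbor ordering of the items in idx."""
    remaining = list(idx)
    order = [remaining.pop(0)]
    while remaining:
        last = embs[order[-1]]
        nxt = min(remaining, key=lambda j: np.linalg.norm(embs[j] - last))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def build_tree(embs, idx, leaf_size=4):
    """Binary tree of indices: internal nodes are [left, right], leaves are seriated lists."""
    if len(idx) <= leaf_size:
        return seriate(embs, idx)
    sub = embs[idx] - embs[idx].mean(axis=0)
    direction = np.linalg.svd(sub, full_matrices=False)[2][0]   # principal axis
    order = np.argsort(sub @ direction)                         # split along it
    half = len(idx) // 2
    left  = [idx[i] for i in order[:half]]
    right = [idx[i] for i in order[half:]]
    return [build_tree(embs, left, leaf_size), build_tree(embs, right, leaf_size)]

embs = np.random.default_rng(0).normal(size=(32, 8))   # stand-in for document embeddings
tree = build_tree(embs, list(range(len(embs))))
```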

Wikitag Contributions

Adaptation Executors · 6y · (-385)
Adaptation Executors · 6y · (+385)
LessWrong Presence on Reddit · 6y · (-22)
Simulation Hypothesis · 8y · (+27/-2693)
Bayesian Conspiracy · 8y · (+11)
Bayesian Conspiracy · 8y
On Designing AI (Sequence) · 8y
Robot · 9y · (-31)
Robot · 9y · (-119)
Robot · 9y · (-33)

Posts

It Looks Like You're Trying To Take Over The World · 3y · 89 karma · 2 comments
Capability Phase Transition Examples · 3y · 14 karma · 1 comment
"Summarizing Books with Human Feedback" (recursive GPT-3) · 4y · 11 karma · 3 comments
EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised · 4y · 50 karma · 26 comments
"Decision Transformer" (Tool AIs are secret Agent AIs) · 4y · 17 karma · 3 comments
gwern's Shortform · 4y · 4 karma · 15 comments
"Scaling Laws for Autoregressive Generative Modeling", Henighan et al 2020 {OA} · 5y · 7 karma · 0 comments
OpenAI announces GPT-3 · 5y · 16 karma · 13 comments
"Designing agent incentives to avoid reward tampering", DeepMind · 6y · 8 karma · 15 comments