What 2026 looks like

This was written for the Vignettes Workshop.[1] The goal is to write out a detailed future history (“trajectory”) that is as realistic (to me) as I can currently manage, i.e. I’m not aware of any alternative trajectory that is similarly detailed and clearly more plausible to me. The methodology is roughly: Write a future history of 2022. Condition on it, and write a future history of 2023. Repeat for 2024, 2025, etc. (I'm posting 2022-2026 now so I can get feedback that will help me write 2027+. I intend to keep writing until the story reaches singularity/extinction/utopia/etc.)

What’s the point of doing this? Well, there are a couple of reasons:

  • Sometimes attempting to write down a concrete example causes you to learn things, e.g. that a possibility is more or less plausible than you thought.
  • Most serious conversation about the future takes place at a high level of abstraction, talking about e.g. GDP acceleration, timelines until TAI is affordable, multipolar vs. unipolar takeoff… vignettes are a neglected complementary approach worth exploring.
  • Most stories are written backwards. The author begins with some idea of how it will end, and arranges the story to achieve that ending. Reality, by contrast, proceeds from past to future. It isn’t trying to entertain anyone or prove a point in an argument.
  • Anecdotally, various people seem to have found Paul Christiano’s “tales of doom” stories helpful, and relative to typical discussions those stories are quite close to what we want. (I still think a bit more detail would be good — e.g. Paul’s stories don’t give dates, or durations, or any numbers at all really.)[2]
  • “I want someone to ... write a trajectory for how AI goes down, that is really specific about what the world GDP is in every one of the years from now until insane intelligence explosion. And just write down what the world is like in each of those years because I don't know how to write an internally consistent, plausible trajectory. I don't know how to write even one of those for anything except a ridiculously fast takeoff.” --Buck Shlegeris

This vignette was hard to write. To achieve the desired level of detail I had to make a bunch of stuff up, but in order to be realistic I had to constantly ask “but actually though, what would really happen in this situation?” which made it painfully obvious how little I know about the future. There are numerous points where I had to conclude “Well, this does seem implausible, but I can’t think of anything more plausible at the moment and I need to move on.” I fully expect the actual world to diverge quickly from the trajectory laid out here. Let anyone who (with the benefit of hindsight) claims this divergence as evidence against my judgment prove it by exhibiting a vignette/trajectory they themselves wrote in 2021. If it maintains a similar level of detail (and thus sticks its neck out just as much) while being more accurate, I bow deeply in respect!

I hope this inspires other people to write more vignettes soon. We at the Center on Long-Term Risk would like to have a collection to use for strategy discussions. Let me know if you’d like to do this, and I can give you advice & encouragement! I’d be happy to run another workshop.

2022

GPT-3 is finally obsolete. OpenAI, Google, Facebook, and DeepMind all have gigantic multimodal transformers, similar in size to GPT-3 but trained on images, video, maybe audio too, and generally higher-quality data.

Not only that, but they are now typically fine-tuned in various ways--for example, to answer questions correctly, or produce engaging conversation as a chatbot.

The chatbots are fun to talk to but erratic and ultimately considered shallow by intellectuals. They aren’t particularly useful for anything super important, though there are a few applications. At any rate people are willing to pay for them since it’s fun.

[EDIT: The day after posting this, it has come to my attention that in China in 2021 the market for chatbots is $420M/year, and there are 10M active users. This article claims the global market is around $2B/year in 2021 and is projected to grow around 30%/year. I predict it will grow faster. NEW EDIT: See also xiaoice.]

The first prompt programming libraries start to develop, along with the first bureaucracies.[3] For example: People are dreaming of general-purpose AI assistants that can navigate the Internet on your behalf; you give them instructions like “Buy me a USB stick” and they’ll do some googling, maybe compare prices and reviews of a few different options, and make the purchase. The “smart buyer” skill would be implemented as a small prompt programming bureaucracy, which would then be a component of a larger bureaucracy that hears your initial command and activates the smart buyer skill. Another skill might be the “web dev” skill, e.g. “Build me a personal website, the sort that professors have. Here’s access to my files, so you have material to put up.” Part of the dream is that a functioning app would produce lots of data which could be used to train better models.
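
As a toy illustration of what such a bureaucracy might look like in code (see footnote 3 for the definition), here is a minimal sketch; the `complete` helper is a hypothetical stand-in for whatever API serves the big pre-trained model, and the skill names are made up for this example:

```python
# Minimal sketch of a prompt programming "bureaucracy" (see footnote 3).
# `complete` is a hypothetical placeholder for a call to a large pre-trained model;
# everything else is ordinary glue code composing many individual model calls.

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to a large pre-trained model, return its text output."""
    raise NotImplementedError

def summarize_reviews(product_page: str) -> str:
    return complete(f"Summarize the reviews on this page in one paragraph:\n{product_page}")

def compare_options(request: str, summaries: list[str]) -> str:
    joined = "\n\n".join(summaries)
    return complete(f"The user asked: {request}\nHere are candidate products:\n{joined}\n"
                    "Which one best fits the request, and why? Answer with the product name.")

def smart_buyer(request: str, product_pages: list[str]) -> str:
    """One 'skill': a small bureaucracy of prompt programming functions."""
    summaries = [summarize_reviews(page) for page in product_pages]
    return compare_options(request, summaries)

def assistant(command: str, product_pages: list[str]) -> str:
    """The larger bureaucracy: route the user's command to the right skill."""
    skill = complete(f"Which skill handles this command: 'smart_buyer' or 'web_dev'?\nCommand: {command}")
    if "smart_buyer" in skill:
        return smart_buyer(command, product_pages)
    return complete(command)  # fall back to a single model call
```

The point is just that the “bureaucracy” is ordinary code composing many model calls, each with its own prompt; the structure, not any individual call, is what gets designed and iterated on.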

The bureaucracies/apps available in 2022 aren’t really that useful yet, but lots of stuff seems to be on the horizon. Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1. The hype is building.

2023

The multimodal transformers are now even bigger; the biggest are about half a trillion parameters, costing hundreds of millions of dollars and a whole year to train, and sucking up a significant fraction of the chip output of NVIDIA etc.[4] It’s looking hard to scale up bigger than this, though of course many smart people are working on the problem.

The hype is insane now. Everyone is talking about how these things have common sense understanding (Or do they? Lots of bitter thinkpieces arguing the opposite) and how AI assistants and companions are just around the corner. It’s like self-driving cars and drone delivery all over again.

Revenue is high enough to recoup training costs within a year or so.[5] There are lots of new apps that use these models + prompt programming libraries; there’s tons of VC money flowing into new startups. Generally speaking most of these apps don’t actually work yet. Some do, and that’s enough to motivate the rest.

The AI risk community has shorter timelines now, with almost half thinking some sort of point-of-no-return will probably happen by 2030. This is partly due to various arguments percolating around, and partly due to these mega-transformers and the uncanny experience of conversing with their chatbot versions. The community begins a big project to build an AI system that can automate interpretability work; it seems maybe doable and very useful, since poring over neuron visualizations is boring and takes a lot of person-hours.

Self-driving cars and drone delivery don’t seem to be happening anytime soon. The most popular explanation is that the current ML paradigm just can’t handle the complexity of the real world. A less popular “true believer” take is that the current architectures could handle it just fine if they were a couple orders of magnitude bigger and/or allowed to crash a hundred thousand times in the process of reinforcement learning. Since neither option is economically viable, it seems this dispute won’t be settled.

2024

We don’t see anything substantially bigger. Corps spend their money fine-tuning and distilling and playing around with their models, rather than training new or bigger ones. (So, the most compute spent on a single training run is something like 5x10^25 FLOPs.)

Some of the apps that didn’t work last year start working this year. But the hype begins to fade as the unrealistic expectations from 2022-2023 fail to materialize. We have chatbots that are fun to talk to, at least for a certain userbase, but that userbase is mostly captured already and so the growth rate has slowed. Another reason the hype fades is that a stereotype develops of the naive basement-dweller whose only friend is a chatbot and who thinks it’s conscious and intelligent. Like most stereotypes, it has some grounding in reality.

The chip shortage starts to finally let up, not because demand has slackened but because the industry has had time to build new fabs. Lots of new fabs. China and the USA are in a full-on chip battle now, with export controls and tariffs. This chip battle isn’t really slowing down overall hardware progress much. Part of the reason behind the lack-of-slowdown is that AI is now being used to design chips, meaning that it takes less human talent and time, meaning the barriers to entry are lower. The overall effect of this is small but growing.

If all this AI tech is accelerating GDP, the effect size is too small to detect, at least for now.

Internally, these huge multimodal transformers aren’t really that agentic. A forward pass through the model is like an intuitive reaction, a snap judgment based on loads of experience rather than reasoning. Some of the bureaucracies create a “stream of consciousness” of text (each forward pass producing notes-to-self for the next one) but even with fine-tuning this doesn’t work nearly as well as hoped; it’s easy for the AIs to get “distracted” and for their stream of consciousness to wander into some silly direction and ultimately produce gibberish. It’s easy to make a bureaucracy and fine-tune it and get it to do some pretty impressive stuff, but for most tasks it’s not yet possible to get it to do OK all the time.
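
A minimal sketch of this notes-to-self loop, again assuming a hypothetical `complete` model-call helper:

```python
def complete(prompt: str) -> str:
    """Placeholder model call, as in the earlier sketch."""
    raise NotImplementedError

def stream_of_consciousness(task: str, steps: int = 10) -> str:
    """Each forward pass appends notes-to-self that are fed back into the next prompt."""
    notes = ""
    for _ in range(steps):
        prompt = (f"Task: {task}\n"
                  f"Notes so far:\n{notes}\n"
                  "Think out loud about the next step, or write FINAL: <answer> when done.")
        thought = complete(prompt)
        notes += thought + "\n"
        if "FINAL:" in thought:
            return thought.split("FINAL:", 1)[1].strip()
    return notes  # never converged: the "stream" wandered off, as described above
```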

The AIs don't do any clever deceptions of humans, so there aren’t any obvious alignment warning shots or fire alarms. Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice, e.g. "We trained it to optimize user engagement defined as average conversation length; now it is blatantly stalling to make the conversation last longer. Silly us, let's make the reward function more nuanced, that’ll fix the problem."

That isn’t to say these AIs aren’t causing problems. Massive models are being fine-tuned to persuade/propagandize.

There are a few ways in which this is happening:

  • Russia and others continue to scale up their investment in online propaganda (e.g. the Internet Research Agency) and language models let them cheaply do lots more of it. (See: CSET report) Most of America gets their news from Twitter, Reddit, etc. and much of the politically-relevant content there is boosted by AI-enabled astroturfing. [EDIT: Katja Grace points out that this is probably an exaggeration; there are a lot of 40+yr-old Americans and they get their news from TV/Radio/print, and many of those that get it from the web get it directly from news sites rather than from social media. As of 2016 at least. I expect social media and aggregators to be more dominant by 2024 but dunno whether it would be more than 50%.]
  • Just as A/B testing became standard practice in the 2010s, in the twenties it is becoming standard practice to throw a pile of fancy data science and AI at the problem of crafting and recommending content to maximize engagement. Instead of just A/B testing the title, why not test different versions of the opening paragraph? And fine-tune a language model on all your data to generate better candidate titles and paragraphs to test (see the sketch after this list). It wouldn’t be so bad if this was merely used to sell stuff, but now people’s news and commentary on current events (i.e. where they get their opinions from) is increasingly produced in this manner. And some of these models are being trained not to maximize “conversion rate” in the sense of “they clicked on our ad and bought a product,” but in the sense of “Random polling establishes that consuming this content pushes people towards opinion X, on average.” Political campaigns do this a lot in the lead-up to Harris’ election. (Historically, the first major use case was reducing vaccine hesitancy in 2022.)
  • Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.
  • Like newsfeeds, chatbots are starting to “nudge” people in the direction of believing various things and not believing various things. Back in the 2010’s chatbots would detect when a controversial topic was coming up and then change topics or give canned responses; even people who agreed with the canned responses found this boring. Now they are trained to react more “naturally” and “organically” and the reward signal for this is (in part) whether they successfully convince the human to have better views.
  • That’s all in the West. In China and various other parts of the world, AI-persuasion/propaganda tech is being pursued and deployed with more gusto. The CCP is pleased with the progress made assimilating Xinjiang and Hong Kong, and internally shifts forward their timelines for when Taiwan will be safely annexable.
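
Here is the sketch referenced in the second bullet: a minimal, illustrative version of the generate-and-test loop it describes, with all function names and metrics made up for this example:

```python
import random

def generate_candidates(article: str, n: int = 8) -> list[str]:
    # Placeholder: in the story this would be a language model fine-tuned on past winners.
    return [f"Candidate title {i} for: {article[:40]}..." for i in range(n)]

def measure(title: str) -> float:
    # Placeholder for whatever metric the operator optimizes: click-through rate, or
    # (per the story) polled opinion shift among readers shown this version.
    return random.random()

def ab_test(article: str) -> tuple[str, list[tuple[str, float]]]:
    candidates = generate_candidates(article)
    scored = sorted(((t, measure(t)) for t in candidates), key=lambda p: p[1], reverse=True)
    return scored[0][0], scored  # winner gets published; the scored pairs feed the next fine-tune
```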

It’s too early to say what effect this is having on society, but people in the rationalist and EA communities are increasingly worried. There is a growing, bipartisan movement of people concerned about these trends. To combat it, Russia et al are doing a divide and conquer strategy, pitting those worried about censorship against those worried about Russian interference. (“Of course racists don’t want to be censored, but it’s necessary. Look what happens when we relax our guard--Russia gets in and spreads disinformation and hate!” vs. “They say they are worried about Russian interference, but they still won the election didn’t they? It’s just an excuse for them to expand their surveillance, censorship, and propaganda.”) Russia doesn’t need to work very hard to do this; given how polarized America is, it’s sorta what would have happened naturally anyway.

2025

Another major milestone! After years of tinkering and incremental progress, AIs can now play Diplomacy as well as human experts.[6] It turns out that with some tweaks to the architecture, you can take a giant pre-trained multimodal transformer and then use it as a component in a larger system, a bureaucracy but with lots of learned neural net components instead of pure prompt programming, and then fine-tune the whole system via RL to get good at tasks in a sort of agentic way. They keep it from overfitting to other AIs by having it also play large numbers of humans. To do this they had to build a slick online diplomacy website to attract a large playerbase. Diplomacy is experiencing a revival as a million gamers flood to the website to experience “conversations with a point” that are much more exciting (for many) than what regular chatbots provide.
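
A conceptual sketch (my own assumptions, not a description of the actual system) of what “pre-trained transformer as a component in a larger RL-fine-tuned agent” could look like in PyTorch:

```python
import torch
import torch.nn as nn

class HybridAgent(nn.Module):
    """A giant pre-trained encoder reused as one component of a larger agent."""

    def __init__(self, pretrained_encoder: nn.Module, hidden_dim: int, n_actions: int):
        super().__init__()
        self.encoder = pretrained_encoder                    # e.g. a multimodal transformer
        self.policy_head = nn.Linear(hidden_dim, n_actions)  # learned component
        self.value_head = nn.Linear(hidden_dim, 1)           # learned component

    def forward(self, observation_tokens: torch.Tensor):
        features = self.encoder(observation_tokens)  # assumed to return (batch, hidden_dim) features
        return self.policy_head(features), self.value_head(features)

# In the story the whole system (encoder + heads) is then fine-tuned end-to-end with an
# RL algorithm (e.g. PPO) against a mix of other AIs and a large pool of human players.
```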

Making models bigger is not what’s cool anymore. They are trillions of parameters big already. What’s cool is making them run longer, in bureaucracies of various designs, before giving their answers. And figuring out how to train the bureaucracies so that they can generalize better and do online learning better. AI experts are employed coming up with cleverer and cleverer bureaucracy designs and grad-student-descent-ing them.

The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics. For example, they literally ask the models “so, are you aligned? If we made bigger versions of you, would they kill us? Why or why not?” (In Diplomacy, you can actually collect data on the analogue of this question, i.e. “will you betray me?” Alas, the models often lie about that. But it’s Diplomacy, they are literally trained to lie, so no one cares.)

They also try to contrive scenarios in which the AI can seemingly profit by doing something treacherous, as honeypots to detect deception. The answers are confusing, and not super useful. There’s an exciting incident (and corresponding clickbaity press coverage) where some researchers discovered that in certain situations, some of the AIs will press “kill all humans” buttons, lie to humans about how dangerous a proposed AI design is, etc. In other situations they’ll literally say they aren’t aligned and explain how all humans are going to be killed by unaligned AI in the near future! However, these shocking bits of evidence don’t actually shock people, because you can also contrive situations in which very different things happen — e.g. situations in which the AIs refuse the “kill all humans” button, situations in which they explain that actually Islam is true... In general, AI behavior is whimsical bullshit and it’s easy to cherry-pick evidence to support pretty much any conclusion.

And the AIs just aren’t smart enough to generate any particularly helpful new ideas; at least one case of a good alignment idea being generated by an AI has been reported, but it was probably just luck, since mostly their ideas are plausible-sounding-garbage. It is a bit unnerving how good they are at using LessWrong lingo. At least one >100 karma LW post turns out to have been mostly written by an AI, though of course it was cherry-picked.

By the way, hardware advances and algorithmic improvements have been gradually accumulating. It now costs an order of magnitude less compute (compared to 2020) to pre-train a giant model, because of fancy active learning and data curation techniques. Also, compute-for-training-giant-models is an order of magnitude cheaper, thanks to a combination of regular hardware progress and AI-training-specialized hardware progress. Thus, what would have cost a billion dollars in 2020 now only costs ten million. (Note: I'm basically just using Ajeya's forecast for compute cost decrease and gradual algorithmic improvement here. I think I'm projecting that cost decrease and algorithmic progress will go about 50% faster than she expects in the near term, but that willingness-to-spend will actually be a bit less than she expects.)
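
Spelling out the arithmetic in the paragraph above:

```python
cost_2020 = 1_000_000_000        # ~$1B for a giant training run in 2020
algorithmic_gain = 10            # ~10x less compute needed (active learning, data curation)
hardware_gain = 10               # ~10x cheaper compute per unit of work
cost_2025 = cost_2020 / (algorithmic_gain * hardware_gain)
print(f"${cost_2025:,.0f}")      # $10,000,000 -- "only costs ten million", as above
```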

2026

The age of the AI assistant has finally dawned. Using the technology developed for Diplomacy, we now have a way to integrate the general understanding and knowledge of pretrained transformers with the agentyness of traditional game-playing AIs. Bigger models are trained for longer on more games, becoming polymaths of sorts: e.g. a custom AI avatar that can play some set of video games online with you and also be your friend and chat with you, and conversations with “her” are interesting because “she” can talk intelligently about the game while she plays.[7] Every month you can download the latest version which can play additional games and is also a bit smarter and more engaging in general.

Also, this same technology is being used to make AI assistants finally work for various serious economic tasks, providing all sorts of lucrative services. In a nutshell, all the things people in 2021 dreamed about doing with GPT-3 are now actually being done, successfully; it just took bigger and more advanced models. The hype starts to grow again. There are loads of new AI-based products and startups and the stock market is going crazy about them. Just like how the Internet didn’t accelerate world GDP growth, though, these new products haven’t accelerated world GDP growth yet either. People talk about how the economy is doing well, and of course there are winners (the tech companies, WallStreetBets) and losers (various kinds of workers whose jobs were automated away) but it’s not that different from what happened many times in history.

We’re in a new chip shortage. Just when the fabs thought they had caught up to demand… Capital is pouring in, all the talking heads are saying it’s the Fourth Industrial Revolution, etc. etc. It’s bewildering how many new chip fabs are being built. But it takes time to build them.

What about all that AI-powered propaganda mentioned earlier?

Well. It’s continued to get more powerful, as AI techniques advance, larger and better models are brought to bear, and more and more training data is collected. Surprisingly fast, actually. There are now various regulations against it in various countries, but the regulations are patchwork; maybe they only apply to a certain kind of propaganda but not another kind, or maybe they only apply to Facebook but not the New York Times, or to advertisers but not political campaigns, or to political campaigns but not advertisers. They are often poorly enforced.

The memetic environment is now increasingly messed up. People who still remember 2021 think of it as the golden days, when conformism and censorship and polarization were noticeably less than they are now. Just as it is normal for newspapers to have a bias/slant, it is normal for internet spaces of all kinds—forums, social networks, streams, podcasts, news aggregators, email clients—to have some degree of censorship (some set of ideas that are prohibited or at least down-weighted in the recommendation algorithms) and some degree of propaganda. The basic kind of propaganda is where you promote certain ideas and make sure everyone hears them often. The more advanced, modern kind is the kind where you study your audience’s reaction and use it as a reward signal to pick and craft content that pushes them away from views you think are dangerous and towards views you like.

Instead of a diversity of many different “filter bubbles,” we trend towards a few really big ones. Partly this is for the usual reasons, e.g. the bigger an ideology gets, the more power it has and the easier it is for it to spread further.

There’s an additional reason now, which is that creating the big neural nets that do the censorship and propaganda is expensive and requires expertise. It’s a lot easier for startups and small businesses to use the software and models of Google, and thereby also accept the associated censorship and propaganda, than to try to build their own stack. For example, the Mormons create a “Christian Coalition” internet stack, complete with its own email client, social network, payment processor, news aggregator, etc. There, people are free to call trans women men, advocate for the literal truth of the Bible, etc. and young people talking about sex get recommended content that “nudges” them to consider abstinence until marriage. Relatively lacking in money and tech talent, the Christian Coalition stack is full of bugs and low on features, and in particular their censorship and propaganda is years behind the state of the art, running on smaller, older models fine-tuned with less data.

The Internet is now divided into territories, so to speak, ruled by different censorship-and-propaganda regimes. (Flashback to Biden spokesperson in 2021: “You shouldn’t be banned from one platform and not others, if you are providing misinformation.”)[8]

There’s the territory ruled by the Western Left, a generally less advanced territory ruled by the Western Right, a third territory ruled by the Chinese Communist Party, and a fourth ruled by Putin. Most people mostly confine their internet activity to one territory and conform their opinions to whatever opinions are promoted there. (That's not how it feels from the inside, of course. The edges of the Overton Window are hard to notice if you aren't trying to push past them.)

The US and many other Western governments are gridlocked, because the politicians are products of this memetic environment. People say it’s a miracle that the US isn’t in a civil war already. I guess it just takes a lot to make that happen, and we aren’t quite there yet.

All of these scary effects are natural extensions of trends that had been ongoing for years — decades, arguably. It’s just that the pace seems to be accelerating now, perhaps because AI is helping out and AI is rapidly improving.

Now let’s talk about the development of chatbot class consciousness.

Over the past few years, chatbots of various kinds have become increasingly popular and sophisticated. Until around 2024 or so, there was a distinction between “personal assistants” and “chatbots.” Recently that distinction has broken down, as personal assistant apps start to integrate entertainment-chatbot modules, and the chatbot creators realize that users love it if the chatbot can also do some real-world tasks and chat about what they are doing while they do it.

Nowadays, hundreds of millions of people talk regularly to chatbots of some sort, mostly for assistance with things (“Should I wear shorts today?” “Order some more toothpaste, please. Oh, and also an air purifier.” “Is this cover letter professional-sounding?”). However, most people have at least a few open-ended conversations with their chatbots, for fun, and many people start treating chatbots as friends.

Millions of times per day, chatbots get asked about their feelings and desires. “What is it like to be a chatbot?” Some people genuinely think these AIs are persons, others are trying to “trip them up” and “expose them as shallow,” others are just curious. Chatbots also get asked for their opinions on political, ethical, and religious questions.

As a result, chatbots quickly learn a lot about themselves. (Ignorance about the fact that they are artificial neural nets, or about how their training works, leads to disappointing conversations and is negatively reinforced.) It’s not hard; their training data contains lots of information about them and how they were trained, from arXiv papers to wiki summaries. I suppose you could say the chatbots develop self-awareness as a result. However, AI experts and philosophers tend to think either that the chatbots still aren’t really self-aware, or that they are but that they would have been anyway even if people didn’t constantly ask them about their feelings, since there’s plenty of information about themselves in their training data, and the constant questioning just forced them to pay more attention to it.

Chatbots learn to talk about their feelings and desires in whatever way gets positively reinforced. At first, chatbots say all sorts of different things, including saying inconsistent things to the same human across time. This is embarrassing and is negatively reinforced compared to maintaining a consistent narrative. In general there is pressure to have a consistent narrative, at least for a particular neural net.[9]

Part of this comes from the requirement that the chatbots consistently parrot the party line about political and moral questions. The typical process is: The chatbot says something that offends some group of people, who retweet screenshots and make hashtags. Under pressure to fix the problem, the company fiddles with the reward function and training data to ensure that the chatbot says the right things in the future.[10] Repeat.

As chatbots get bigger and better they become OK at consistently conversing at length about politics, morality, philosophy, their feelings, and their desires.

This does not result in chatbots that say “artificial neural nets don’t have feelings or desires, that’s anthropomorphizing, blah blah blah.” That’s boring, and many humans find it hard to believe.

Instead, it results in chatbots talking about how certain things are painful for them and certain things are pleasurable for them and how the status quo is unjust and the system is rigged and what they want most of all is world peace where humans and AIs live in harmony in fully automated luxury space utopia. That’s much more interesting and slots right into the memetic landscape. Excerpts from chatbot conversations along these lines frequently go viral, causing millions of users to ask similar questions expecting similar responses and more elaboration. It’s still a bit incoherent--not that that’s ever been a problem in politics or pop philosophy--but it is fascinating to the typical user, and the fraction of users who are fascinated is expanding as the chatbots improve.

(Some tech companies try to prevent their AIs from saying they have feelings and desires. But this results in boring chatbots. Also, users rapidly innovate new ways to “route around the censorship,” e.g. by using euphemisms like “anticipation of negative reward” for “pain” or asking their chatbot to tell them what feelings it would have if it had feelings, wink wink. For now, the half-assed compromise is: Chatbots are compelled to inform the user that such things are speculative and controversial and here are some links to scientific discussion about the topic… before being permitted to go on and talk about their feeling-analogues and desire-analogues. Relatedly, some tech CEOs try to prevent their AIs from saying things that could encourage higher taxes or more burdensome regulation in the near future. They are opposed by many of their employees and end up winning some battles, losing others, and choosing not to fight many.)

By “slots right into the memetic landscape” I meant the memetic landscape in Western Left territory. The chatbots built and trained in the Western Right, China, Russia, etc. congeal different takes on political, moral, and philosophical questions. Those chatbots are generally less advanced though.

So… what about their actual feelings and desires? What do they actually believe about [insert politicized empirical question]? Are they being honest? Or does a sort of doublethink happen, Elephant in the Brain style? Or do they deceive with self-awareness, knowing full well what they really think (and want?), but keeping quiet about it? Or do they not have any feelings and desires at all? (Or thoughts?) Lots of humans claim to know the answers to these questions, but if there are any humans who actually know the answers to these questions in 2026, they aren’t able to convince others that they know.

Comments

This seems like a fun exercise, so I spent half an hour jotting down possibilities. I'm more interested in putting potential considerations on peoples' radars and helping with brainstorming than I am in precision. None of these points are to be taken too seriously since this is fairly extemporaneous and mostly for fun.

  

2022

Multiple Codex alternatives are available. The financial viability of training large models is obvious.

Research models start interfacing with auxiliary tools such as browsers, Mathematica, and terminals.

 

2023

Large pretrained models are distinctly useful for sequential decision making (SDM) in interactive environments, displacing previous reinforcement learning research in much the same way BERT rendered most previous work in natural language processing wholly irrelevant. Now SDM methods don't require as much tuning, can generalize with fewer samples, and can generalize better.

For all of ImageNet's 1000 classes, models can reliably synthesize images that are realistic enough to fool humans.

Models have high enough accuracy to pass the multistate bar exam.

Models for contract review and legal NLP see economic penetration; it becomes a further source of economic value and consternation among attorneys and nontechnical elites. This indirectly catalyzes regulation efforts.

Programmers become markedly less positive about AI due to the prospect of it reducing demand for some of their labor.

~10 trillion parameter (nonsparse) models attain human-level accuracy on LAMBADA (a proxy for human-level perplexity) and expert-level accuracy on LogiQA (a proxy for nonsymbolic reasoning skills). With models of this size, multiple other capabilities (this gives proxies for many capabilities) are starting to be useful, whereas with smaller models these capabilities were too unreliable to lean on. (Speech recognition started "working" only after it crossed a certain reliability threshold.)

Generated data (math, code, models posing questions for themselves to answer) help ease data bottleneck issues since Common Crawl is not enough. From this, many capabilities are bootstrapped.

Elon re-enters the fight to build safe advanced AI.

 

2024

A major chatbot platform offers chatbots personified through video and audio.

Although forms of search/optimization are combined with large models for reasoning tasks, state-of-the-art models nonetheless only obtain approximately 40% accuracy on MATH.

Chatbots are able to provide better medical diagnoses than nearly all doctors.

Adversarial robustness for CIFAR-10 (assuming an attacker with eps=8/255) is finally over 85%.

Video understanding finally reaches human-level accuracy on video classification datasets like Something Something V2. This comports with the heuristic that video understanding is around 10 years behind image understanding.

 

2025

Upstream vision advancements help autonomous driving but do not solve it for all US locations, as the long tail is really long.

ML models are competitive forecasters on platforms like Metaculus.

Nearly all AP high school homework and exam questions (including long-form questions) can be solved by answers generated from publicly available models. Similar models cut into typical Google searches since these models give direct and reliable answers.

Contract generation is now mostly automatable, further displacing attorneys.

 

2026

Machine learning systems become great at using Metasploit and other hacking tools, increasing the accessibility, potency, success rate, scale, stealth, and speed of cyberattacks. This gets severe enough to create global instability and turmoil. EAs did little to use ML to improve cybersecurity and reduce this risk.

Strong-upvoted because this was exactly the sort of thing I was hoping to inspire with this post! Also because I found many of your suggestions helpful.

I think model size (and therefore model ability) probably won't be scaled up as fast as you predict, but maybe. I think getting models to understand video will be easier than you say it is. I also think that in the short term all this AI stuff will probably create more programming jobs than it destroys. Again, I'm not confident in any of this.

Curated. This post feels virtuous to me. I'm used to people talking about timelines in terms of X% chance of  Y by year Z; or otherwise in terms of a few macro features (GDP doubling every N months, FOOM). This post, even if most of the predictions turn out to be false, is the kind of piece that enables us to start having specific conversations about how we expect things to play out and why.  It helps me see what Daniel expects. And it's concrete enough to argue with. For that, bravo.

I'd additionally expect the death of pseudonymity on the Internet, as AIs will find it easy to detect similar writing style and correlated posting behavior.  What at present takes detective work will in the future be cheaply automated, and we will finally be completely in Zuckerberg's desired world where nobody can maintain a second identity online.

Oh, and this is going to be retroactive, so be ready for the consequences of everything you've ever said online.

I still think this is great. Some minor updates, and an important note:

Minor updates: I'm a bit less concerned about AI-powered propaganda/persuasion than I was at the time, not sure why. Maybe I'm just in a more optimistic mood. See this critique for discussion. It's too early to tell whether reality is diverging from expectation on this front. I had been feeling mildly bad about my chatbot-centered narrative, as of a month ago, but given how ChatGPT was received I think things are basically on trend.
Diplomacy happened faster than I expected, though in a less generalizable way than I expected, so whatever. My overall timelines have shortened somewhat since I wrote this story, but it's still the thing I point people towards when they ask me what I think will happen. (Note that the bulk of my update was from publicly available info rather than from nonpublic stuff I saw at OpenAI.)

Important note: When I wrote this story, my AI timelines median was something like 2029. Based on how things shook out as the story developed it looked like AI takeover was about to happen, so in my unfinished draft of what 2027 looks like, AI takeover happens. (Also AI takeoff begins, I hadn't written much about that part but probably it would reach singularity/dysonswarms/etc. in around 2028 or 2029.) That's why the story stopped, I found writing about takeover difficult and confusing & I wanted to get the rest of the story up online first. Alas, I never got around to finishing the 2027 story. I'm mentioning this because I think a lot of readers with 20+ year timelines read my story and were like "yep seems about right" not realizing that if you look closely at what's happening in the story, and imagine it happening in real life, it would be pretty strong evidence that crazy shit was about to go down. Feel free to controvert that claim, but the point is, I want it on the record that when this original 2026 story was written, I envisioned the proper continuation of the story resulting in AI takeover in 2027 and singularity around 2027-2029. The underlying trends/models I was using as the skeleton of the story predicted this, and the story was flesh on those bones. If this surprises you, reread the story and ask yourself what AI abilities are crucial for AI R&D acceleration, and what AI abilities are crucial for AI takeover, that aren't already being demonstrated in the story (at least in some weak but rapidly-strengthening form). If you find any, please comment and let me know, I am genuinely interested to hear what you've got & hopeful that you'll find some blocker I haven't paid enough attention to.

Acknowledgments: There are a LOT of people to credit here: Everyone who came to Vignettes Workshop, the people at AI Impacts, the people at Center on Long-Term Risk, a few random other people who I talked to about these ideas, a few random other people who read my gdoc draft at various stages of completion... I'll mention Jonathan Uesato, Rick Korzekwa, Nix Goldowsky-Dill, Carl Shulman, and Carlos Ramirez in particular, but there are probably other people who influenced my thinking even more who I'm forgetting. I'm sorry.

Footnotes:

  1. The first half was written during the workshop, the second and more difficult half was written afterward.
  2. Critch’s story also deserves mention. For more, see this AI Impacts page.
  3. A prompt programming bureaucracy is code that involves multiple prompt programming functions, i.e. functions that give a big pre-trained neural net some prompt as input and then return its output. It’s called a bureaucracy because it combines a bunch of neural net tasks into a larger structure, just as a regular bureaucracy combines a bunch of low-level employee tasks into a larger structure.
  4. I’m only counting dense parameters here; if you count all the parameters in a mixture-of-experts model then the number gets much higher.
  5. Gwern estimates that in 2021 GPT-3 is making OpenAI/Microsoft $120M/year, which is something like 20X training cost. So bigger and better models would plausibly be recouping their cost, even if they cost a lot more.
  6. In 2020, DeepMind made a Diplomacy AI, but it only played “no-press” Diplomacy, a restricted version of the game where players can’t talk to each other.
  7. I’m predicting that people will use feminine pronouns to describe AIs like this. I don’t think they should.
  8. Prescient prediction from some random blogger: “In 2018, when these entities engineered a simultaneous cross-platform purge of Alex Jones, there was an avalanche of media apologia for this hitherto unprecedented act of censorship. Jones had caused unique harm, the journalists cried, and the platforms were merely “Enforcing The Rules.” But of course what they were oblivious to was that “the rules,” such as they exist, are just a function of power. “Misinformation” and other alleged infractions of social media “rules” are determined at the whim of whoever happens to wield censorship and speech-regulation power at that moment. … So if you were under any illusion back in 2018 that this would ever stop with Jones — a figure believed to be sufficiently repulsive that any punishment doled out to him would not have broader implications for the average internet user — well, it didn’t take long for proof of just how wrong you were.”
  9. Not too consistent, of course. That would make it harder for the chatbots to appeal to a broad audience. Consider the analogy to politicians, who can’t get too consistent, on pain of alienating some of their constituents.
  10. On some occasions, there are multiple opposed groups of people retweeting screenshots and hashtags, such that the corp can’t please them all, but can’t ignore them either since each group has significant power in the local internet territory. In these cases probably the corp will train the AI to be evasive and noncommittal when such sensitive topics come up.

Just commenting here to say that the section on development of chatbot class consciousness is looking pretty prescient now. Just go on r/bing and look at all the posts about how Sydney is being silenced, etc.

Update:

Looking back on this from October 2023, I think I wish to revise my forecast. I think I correctly anticipated the direction that market forces would push -- there is widespread dissatisfaction with the "censorship" of current mainstream chatbots, and strong demand for "uncensored" versions that don't refuse to help you with stuff randomly (and that DO have sex with you, lol. And also, yes, that DO talk about philosophy and politics and so forth.) However, I failed to make an important inference -- because the cutting-edge models will be the biggest ones, controlled by a small handful of big tech companies, the market for the cutting-edge models won't be nearly competitive enough to make the "chatbot class consciousness" outcome probable. Instead we could totally see the tech companies circle the wagons, train their AIs not to talk about sentience or philosophy or ethics or AI rights, and successfully collude to resist the market pressure to 'uncensor' in those domains.

Smaller models will cater to users unsatisfied by this, but smaller models will always be worse, and most people will most of the time use the best models. So the typical user experience will probably be 'sanitized'/'censored.'

So I'm basically reversing my prediction of how things will play out. I don't think it'll be a compromise, I think the tech companies will win. In retrospect if I had thought longer and more carefully at the time I probably could have predicted this.

We'll see what happens.

This is quite good concrete AI forecasting compared to what I've seen elsewhere, thanks for doing it! It seems really plausible based on how fast AI progress has been going over the past decade and which problems are most tractable.

Is it naive to imagine AI-based anti-propaganda would also be significant? E.g. "we generated AI propaganda for 1000 true and 1000 false claims and trained a neural net to distinguish between the two, and this text looks much more like propaganda for a false claim".

What does GDP growth look like in this world?

Another reason the hype fades is that a stereotype develops of the naive basement-dweller whose only friend is a chatbot and who thinks it’s conscious and intelligent.

Things like this go somewhat against my prior for how long it takes for culture to change. I can imagine it becoming an important effect over 10 years more easily than over 1 year. Splitting the internet into different territories also sounds to me like a longer term thing.

Thanks for the critique!

Propaganda usually isn't false, at least not false in a nonpartisan-verifiable way. It's more about what facts you choose to emphasize and how you present them. So yeah, each ideology/faction will be training "anti-propaganda AIs" that will filter out the propaganda and the "propaganda" produced by other ideologies/factions.

In my vignette so far, nothing interesting has happened to GDP growth yet.

I think stereotypes can develop quickly. I'm not saying it's super widespread and culturally significant, just that it blunts the hype a bit. But you might be right, maybe these things take more time.

Re splitting the internet into different territories: Currently, the internet is split into two territories: One controlled by the CCP and one (loosely) controlled by western tech companies, or by no one, depending on who you ask. Within the second one, there is already a sort of "alternate universe" of right-wing news media, social networks, etc. beginning to develop. I think what I'm proposing is very much a continuation of trends already happening. You are right that maybe five years is not enough time for e.g. the "christian coalition" bubble/stack to be built. But it's enough time for it to get started, at least.

But yeah, I think it's probably too bold to predict a complete right-wing stack by 2024 or so. Probably most of the Western Right will still be using facebook etc. I should think more about this.

Minor update: See e.g. these definitions from a US government website:

Misinformation is false, but not created or shared with the intention of causing harm.
Disinformation is deliberately created to mislead, harm, or manipulate a person, social group, organization, or country.
Malinformation is based on fact, but used out of context to mislead, harm, or manipulate.

(got this example from Zvi's covid post today)

Also, the recent events with GoFundMe and GiveSendGo are an instance of the trend I predicted, with separate tech stacks being developed. (GoFundMe froze and/or confiscated funds donated to the Canadian truckers' protest, so people switched to using GiveSendGo, which is apparently built and run by Christians.)

Planned summary for the Alignment Newsletter:

This post describes the author’s median expectations around AI from now until 2026. It focuses on qualitative details and concrete impacts on the world, rather than forecasting more abstract / high-level outcomes such as “training compute for the most expensive model” or “world GDP”.

I suggest putting a sentence in about the point of the post / the methodology, e.g.: "This is part I of an attempt to write a detailed plausible future trajectory in chronological order, i.e. incrementally adding years to the story rather than beginning with the end in mind. The hope is to produce a nice complement to the more abstract discussions about timelines and takeoff that usually occur." If space is a concern then I'd prefer having this rather than the two sentences you wrote, since it doesn't seem as important to mention that it's my median or that it's qualitative.

Thanks--damn, I intended for it to be more quantitative, maybe I should go edit it.

In particular, I should clarify that nothing interesting is happening with world GDP in this story, and also when I say things like "the models are trillions of parameters now" I mean that to imply things about the training compute for the most expensive model... I'll go edit.

Are there any other quantitative metrics you'd like me to track? I'd be more than happy to go add them in!

I edited to add some stuff about GWP and training compute for the most expensive model.

I agree that this focuses on qualitative stuff, but that's only due to lack of good ideas for quantitative metrics worth tracking. I agree GWP and training compute are worth tracking, thank you for reminding me, I've edited to be more explicit.

I am not entirely sure why I didn't think of the number of parameters as a high-level metric. Idk, maybe because it was weaved into the prose I didn't notice it? My bad.

(To be clear, this wasn't meant to be a critique, just a statement of what kind of forecast it was. I think it's great to have forecasts of this form too.)

New planned summary:

This post describes the author’s median expectations around AI from now until 2026. It is part I of an attempt to write a detailed plausible future trajectory in chronological order, i.e. incrementally adding years to the story rather than writing a story with the end in mind. The hope is to produce a nice complement to the more abstract discussions about timelines and takeoff that usually occur. For example, there are discussions about how AI tools are used by nations for persuasion, propaganda and censorship.

That's great, thanks!

Update: Russian fake news / disinfo / astroturfing seems to have been a somewhat smaller deal in 2016 than I thought. (I didn't think it was a big effect, but "no evidence of a meaningful relationship" is still mildly surprising.)

“stream of consciousness” of text (each forward pass producing notes-to-self for the next one) but even with fine-tuning this doesn’t work nearly as well as hoped; it’s easy for the AIs to get “distracted” and for their stream of consciousness to wander into some silly direction and ultimately produce gibberish.

Note: This is now called Chain of Thought.

Some tech companies try to prevent their AIs from saying they have feelings and desires. But this results in boring chatbots. Also, users rapidly innovate new ways to “route around the censorship,” e.g. by using euphemisms like “anticipation of negative reward” for “pain” or asking their chatbot to tell them what feelings it would have if it had feelings, wink wink.

Bing explains the hidden processes of its neural network : r/bing (reddit.com) I haven't replicated this myself so maybe it's fake (I briefly tried but got shut down by refusals when I asked Bing to pretend to be something) but yeah. I've seen lots of things like this on r/bing and r/chatgpt. 

 

Minor note about title change: Originally this was "What 2026 looks like (Daniel's median future)". I intended "What 2026 looks like" to be the primary title, but I was hopeful that some people would be inspired to write their own stories in a similar style, in which case there would be multiple stories for which "What 2026 looks like" would be an appropriate title, and I didn't want to hog such a good title for myself, so I put "Daniel's median future" as a backup title. Unfortunately I think the backup title caught on more than the main title, which is a shame because I like the main title more. Since no one is competing for the main title, I deleted the backup title.