Four ways learning Econ makes people dumber re: future AI

by Steven Byrnes
21st Aug 2025
Linkpost from x.com
7 min read

15 comments, sorted by top scoring

Sodium (24d)

I get a bit sad reading this post. I do agree that a lot of economists sort of "miss the point" when it comes to AI, but I don't think they are more "incorrect" than, say, the AI is Normal Technology folks. I think the crux more or less comes down to skepticism about the plausibility of superintelligence in the next decade or so. This is the mainstream position in economics, but also the mainstream position basically everywhere in academia? I don't think it's "learning econ" that makes people "dumber", although I do think economists have a (generally healthy) strong skepticism towards grandiose claims (which makes them more correct on average). 

Another reason I'm sad is that there is a growing group of economists who do take "transformative" AI seriously, and the TAI field has been growing and producing what I think is pretty cool work. For example, there's an economics of transformative AI class designed mostly for grad students at Stanford this summer, and BlueDot also had an economics of transformative AI class. 

Overall I think this post is unnecessarily uncharitable. 

Steven Byrnes (23d)

I might have overdone it on the sass, sorry. This is much sassier than my default (“scrupulously nuanced and unobjectionable and boring”)…

  • …partly because I’m usually writing for lesswrong and cross-posting on X/Twitter, whereas this one was vice-versa, and X is a medium that seems to call for more sass;
  • …partly in an amateur ham-fisted attempt to do clickbait (note also: listicle format!) because this is a message that I really want to put out there;
  • …and yes, partly because I do sometimes feel really frustrated talking to economists (#NotAllEconomists), and I think they can and should do better, and the sass is reflecting a real feeling that I feel.

But I think next time I would dial it back slightly, e.g. by replacing “DUMBER” with “WORSE” in the first sentence. I’m open to feedback, I don’t know what I’m doing. ¯\_(ツ)_/¯

I don't think they are more "incorrect" than, say, the AI is Normal Technology folks.

Yeah, I agree that lots of CS professors are deeply mistaken about the consequences of AGI, and ditto with the neuroscientists, and ditto with many other fields, including even many of the people trying to build AGI right now. I don’t think that economists are more blameworthy than other groups, it just so happens that this one particular post is aimed at them.

I think the crux more or less comes down to skepticism about the plausibility of superintelligence in the next decade or so.

I think you’re being overly generous. “Decade or so” is not the crux. In climate change, people routinely talk about bad things that might happen in 2050, and even in 2100, or farther! People also routinely talk 30 years out or more in the context of science, government, infrastructure, institution-building, life-planning, etc. People talk about their grandkids and great-grandkids growing up, etc.

If someone expected superintelligence in the next 50 years but not the next 20—like if they really expected that, viscerally, with a full understanding of its implications—then that belief would be a massive, central influence on their life and worldview. That’s not what’s going on in the heads of the many (most?) people in academia who don’t take superintelligence seriously. Right?

Sodium (23d)

“Decade or so” is not the crux.

Ok yeah that's fair.

FlorianH (23d)

Proud economist here but: I really second the OP!

Instead, I find it sad how reliably the economists around me - overall smart and interested people - are less able to grasp the potential consequences of A(G)I than, I think, more random persons are. We really are brainwashed into thinking capital just leads to more productive employment possibilities for labor; it is really a thing. Even sadder, imho, is how the most rubbish arguments in such directions are made by many of the most famous in our profession, and get traction, much as the OP points out.

I think the post doesn't perfectly nail the explanation, as I might try to elaborate below or elsewhere, but it really is onto something.

Tone is of course up for debate, and you're of course right to point out that there are many exceptions, and indeed increasing numbers of them. That we will have been surprisingly slow will remain undeniable, though :).

Cody Rushing (22d)

Regardless of whether these anti-pedagogies are correct, I'm confused about why you think you've shown that learning econ made the economists dumber. It seems like the majority of the tweets you linked, excluding maybe Tweet 4, actually just show economists discussing narrow AI and failing to consider general intelligence?

If you meant to say something like 'econ pedagogy makes it hard for economists to view AGI as something that could actually be intelligent in a way similar to humans', then I may be more inclined to agree with you.

Steven Byrnes (21d)

Yeah, the latter: I think too much econ makes the very possibility of AGI into a blind spot for (many) economists. See the second part of my comment here.

Addie Foote (22d)

I agree that economists make some implicit assumptions about what AGI will look like that should be more explicit. But, I disagree with several points in this post. 

On equilibrium: A market will equilibrate when supply and demand are balanced at the current price point. At any given instant this can happen for a market even with AGI (sellers increase the price until buyers are not willing to buy). Being at an equilibrium doesn’t imply that supply, demand, and price won’t change over time. Economists are very familiar with growth and various kinds of dynamic equilibria.

Equilibria aside, it is an interesting point that AGI combines aspects of both labor and capital in novel ways. Being able to both replicate and work in autonomous ways could create very interesting feedback loops.

Still, there could be limits and negative feedback to the feedback loops you point out. The ideas that labor adds value and that costs go down with scale are usually true but not universal. Things like resource scarcity or coordination problems can cause increasing marginal cost with scale. If there are very powerful AGIs and very fast takeoffs, I expect resource scarcity to be a constraint. 

I agree that AGI could break usual intuitions about capital and labor. However, I don’t think this is misleading economists. I think economists don’t consider AGI launching coups or pursuing jobs/entrepreneurship independently because they don’t expect it to have those capabilities or dispositions, not that they conflate it with inanimate capital. Even in the post linked, Tyler Cowen says that “I don’t think the economics of AI are well-defined by either “an increase in labor supply,” “an increase in TFP,” or “an increase in capital,” though it is some of each of those.” 

Lastly, I fully agree that GDP doesn’t capture everything of value - even now it completely misses value from free resources like Wikipedia and unpaid labor like housework, and can underestimate the value of new technology. Still, if AGI transforms many industries, as it would likely need to in order to transform the world, real GDP would capture this.

All in all, I don’t think economics principles are misleading. Maybe Econ thinking will have to be expanded to deal with AGI. But right now, the difference between the economists and the lesswrongers comes down to what capabilities they expect AGI to have. 

Steven Byrnes (21d)

Thanks. I don’t think we disagree much (more in emphasis than content).

Things like resource scarcity or coordination problems can cause increasing marginal cost with scale.

I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)

Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component. AGI does not fundamentally need any rare components. Biology proves that it is possible to build human-level computing devices from sugar and water and oxygen (i.e. brains). As for electricity, there’s plenty of solar cells, and plenty of open land for solar cells, and permitting is easy if you’re off-grid.

(I agree that the positive feedback loop will not spin out to literally infinity in literally zero time, but stand by “light-years beyond anything in economic history”.)

I think economists don’t consider AGI launching coups or pursuing jobs/entrepreneurship independently because they don’t expect it to have those capabilities or dispositions, not that they conflate it with inanimate capital. … right now, the difference between the economists and the lesswrongers comes down to what capabilities they expect AGI to have.

I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!

(Well, actually I would still complain if they state this as obvious, rather than owning the fact that they are siding with one group of AI domain experts over a different group of AI domain experts, about a technical AI issue on which the economists themselves have no expertise. And if T is more than, I dunno, 30 years, then that makes it even worse, because then the economists would be siding with a dwindling minority of AI domain experts over a growing majority, I think.)

Instead I was mainly complaining about the economists who have not even considered that real AGI is even a possible thing at all. Instead it’s just a big blind spot for them.

And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).

Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).

For them, real AGI does not compute; it’s like a square circle. People like me who talk about it are not just saying something false but saying incoherent nonsense. Or maybe they think they’re misunderstanding us, and they’ll “charitably” round what I’m saying to something quite different, and they themselves will use terms like “AGI” or “ASI” for something much weaker without realizing that they’re doing so.

Addie Foote (21d)

Thanks for the thoughtful reply!

I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)

This is the idea that at some point in scaling up an organization you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes: "bloat" in general. I'm not claiming it’s likely to happen with AI; it's just another possible reason for increasing marginal cost with scale.

Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component.

Key resources that come to mind would be electricity and chips (and materials to produce these). I don’t know how elastic production is in these industries, but the reason I expect it to be a barrier is that you’re constrained by the slowest factor. For huge transformations or redesigning significant parts of the current AI pipeline, like using a different kind of computation, I think there’s probably lots of serial work that has to be done to make it work. I agree the problems are solvable, but it shifts from "how much demand will there be for cheap AGI" to "how fast can resources be scaled up".
 

I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!

Instead I was mainly complaining about the economists who have not even considered that real AGI is even a possible thing at all. Instead it’s just a big blind spot for them.

Yeah, I definitely agree.

And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).

Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).

I see how this could happen, but I'm not convinced this effect is actually happening. As you mention, many people have this blind spot. There are people who claim AGI is already here (and evidently have a different definition of AGI). I think my crux is that this isn't unique to economists. Most non-AI people who are worried about AI seem worried that it will take their job, not all jobs. There are some people willing to accept at face value the premise that AGI (as we define it) will exist, but it seems to me that most people outside of AI who question the premise at all end up not taking it seriously.

Steven Byrnes (12d)

This is the idea that at some point in scaling up an organization you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes: "bloat" in general. I'm not claiming it’s likely to happen with AI; it's just another possible reason for increasing marginal cost with scale.

Hmm, that would apply to an individual firm but not to a product category, right? If Firm 1 is producing so much [AGI component X] that they pile up bureaucracy and inefficiency, then Firms 2, 3, 4, and 5 will start producing [AGI component X] with less bureaucracy, and undercut Firm 1, right? If there’s an optimal firm size, the market can still be arbitrarily large via arbitrarily many independent firms of that optimal size.

(Unless Firm 1 has a key patent, or uses its market power to do anticompetitive stuff, etc. …although I don’t expect IP law or other such forces to hold internationally given the stakes of AGI.)

(Separately, I think AGI will drastically increase economies of scale, particularly related to coordination problems.)

I see how this could happen, but I'm not convinced this effect is actually happening. … I think my crux is that this isn't unique to economists.

It’s definitely true that non-economists are capable of dismissing AGI for bad reasons, even if this post is not mainly addressed at non-economists. I think the thing I said is a contributory factor for at least some economists, based on my experience and conversations, but not all economists, and maybe I’m just mistaken about where those people are coming from. Oh well, it’s probably not worth putting too much effort into arguing about Bulverism. Thanks for your input though.

Vladimir_Nesov (24d)

When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.

Here too the labor/capital distinction seems like a distraction. Species or not, it's quickly going to become most of what's going on in the world, probably in a way that looks like "economic prosperity" to humanity (and that essentially nobody is going to oppose). But at some point humanity becomes a tiny little ignorable thing in the corner, and then there is no reason for any "takeover" (which doesn't mean there will be survivors).

There is a question of how quickly that happens, but "takeover" or "another species" don't seem like cruxes to me. It's all about scale, and precursors to scale; the fact that catastrophe might be possible in more disputed ways even earlier than that doesn't affect what can be expected a bit later in any case, a few years or even decades down the line.

sil (20d)

I really appreciate this post; it points out something I consider extremely important. It's obviously aligned with gradual disempowerment / intelligence curse type discussion; however, I'm not sure I've ever seen this specific thing discussed elsewhere.

I would like to mention a 5th type, though perhaps not the type discussed in your post, since it likely doesn't apply to those who actually do rigorously study economics; this is more of a roadblock I hit regarding the layman's understanding of Econ. To summarize it in three words: the idea that "consumerism is important".

Examples of this sort of misconception:

  • Corporations will go out of business/fail if there are no consumers to buy their products (false: profit doesn't require consumers; consider, for example, a self-reproducing robot factory)
  • Corporations will always produce products that consumers want (false: since assertion 1 is false, they have no incentive to do so)
  • Corporations need money to do anything. The only way they can get money is by interacting with the economy/selling products. (false: they can barter with other companies or do things themselves)
  • Even if corporations can do all this, why should they? There will be nothing to do with all their money if everyone else is broke! (for the same reason bacteria grow exponentially: no particular reason, just because they can)

I'm sure I've not worded this particularly eloquently, but I hope you understand what I mean. I cannot emphasize enough how frequently, when discussing AGI with others, I get pushback along these lines. I struggle to counter it because people seemingly have this deeply baked-in idea that "consumerism is what drives the economy". If I could find some kind of intuitive explanation of why these arguments are wrong, it would be extremely useful.

denkenberger (21d)

I appreciated the attention to detail, e.g. Dyson Swarm instead of Dyson Sphere, and googol instead of google. Maybe I missed it, but I think a big one is that economists typically only look back 100 or so years so they have a strong prior of roughly constant growth rates. Whereas if you look back further, it really does look like an explosion.

jmh (24d)

I thought the first two claims were a bit off so didn't read much farther. 

The first seems a really poor understanding and hardly steelmanning the economic arguments/views. I'd suggest looking into the concept of human capital. While economics uses the two broad classes, you seem to be locking the terms into a mostly Marxist view (but even Marx didn't view labor as just motive force). It might also be worth noting that the concepts of land, labor, and capital are from classical political economy, relating to how surplus (the additional "more" the system produces from the inputs) is divided up.

For the second bit, I think the Experience Curves claims are a bit poorly thought out; I would suggest looking into Say's Law about production and exchange. Your shift in demand has to come from somewhere and not just be something that materialized out of thin air. You might point to prior savings, but I think that makes for a special-case argument rather than a general one. If one sees value in Say's Law, then the increased demand for some product/service comes from the increased production of other goods and services. In that case, resources have already been bid over to those markets (which we might presume are in some semi-stable equilibrium state), so just where are the resources for the shift in supply you suggest?

I would agree that a partial/limited understanding of economics (all the Econ 101 stuff) will provide pretty poor analysis. I would actually go further and say that even solid and well-informed economic models will only go so far: economics can explain the economic aspects of AI and AI risk, but not everything about AI or AI risk. I kind of feel this is perhaps where your post is coming from -- thinking simple Econ 101 is being used to explain AI and finding that lacking.

Steven Byrnes (23d)

Your shift in demand has to come from somewhere and not just be something that materialized out of thin air…If one sees value in Say's Law, then the increased demand for some product/service comes from the increased production of other goods and services…just where are the resources for the shift in supply you suggest?

If a human population gradually grows (say, by birth or immigration), then demand for pretty much every product increases, and production of pretty much every product increases, and pretty much every product becomes less expensive via experience curves / economies of scale / R&D.

Agree?

QUESTION: How is that fact compatible with Say’s Law?

If you write down an answer, then I will take the text of your answer but replace the word “humans” with “AGIs” everywhere, and bam, that’s basically my answer to your question!  :)  (after some minor additional tweaks.)

See what I mean?

The first seems a really poor understanding and hardly steelmanning the economic arguments/views.

Correct, this is not “steelmanning”, this is “addressing common mistakes”. My claim is that a great many trained economists—but not literally 100% of trained economists—have a bundle of intuitions for thinking about labor, and a different bundle of intuitions for thinking about capital, and these intuitions lead to them having incorrect and incoherent beliefs about AGI. This is something beyond formal economics models, it’s a set of mental models and snap reflexes developed over the course of them spending years in the field studying the current and historic economy. The snap reaction says: “That’s not what labor automation is supposed to look like, that can’t be right, there must be an error somewhere.” Indeed, AGI is not what labor automation looks like today, and it’s not how labor automation has ever looked, because AGI is not labor automation, it’s something entirely new.

I say this based on both talking to economists and reading their writing about future AI, and no I’m not talking about people who took Econ 101, but rather prominent tenured economics professors, Econ PhDs who specialize in the economics of R&D and automation, etc.

(…People who ONLY took Econ 101 are irrelevant, they probably forgot everything about economics the day after the course ended :-P )


(Cross-posted from X, intended for a general audience.)

There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples. Longpost:

THE FIRST PIECE of Econ anti-pedagogy is hiding in the words “labor” & “capital”. These words conflate a superficial difference (flesh-and-blood human vs not) with a bundle of unspoken assumptions and intuitions, which will all get broken by Artificial General Intelligence (AGI).

By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”

Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”. AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)

Now, is this kind of AGI “labor” or “capital”? Well it’s not a flesh-and-blood human. But it’s more like “labor” than “capital” in many other respects:

  • Capital can’t just up and do things by itself? AGI can.
  • New technologies take a long time to integrate into the economy? Well ask yourself: how do highly-skilled, experienced, and entrepreneurial immigrant humans manage to integrate into the economy immediately? Once you’ve answered that question, note that AGI will be able to do those things too.
  • Capital sits around idle if there are no humans willing and able to use it? Well those immigrant humans don’t sit around idle. And neither will AGI.
  • Capital can’t advocate for political rights, or launch coups? Well…

Anyway, people see sci-fi robot movies, and they get this! Then they take economics courses, and it makes them dumber.

(Yes I know, #NotAllEconomists etc.)

THE SECOND PIECE of Econ anti-pedagogy is instilling a default assumption that it’s possible for a market to equilibrate. But the market for AGI cannot: AGI combines a property of labor markets with a property of product markets, where those properties are mutually exclusive. Those properties are:[1]

  • (A) “NO LUMP OF LABOR”: If human population goes up, wages drop in the very short term, because the demand curve for labor slopes down. But in the longer term, people find new productive things to do—the demand curve moves right. If anything, the value of labor goes UP, not down, with population! E.g. dense cities are engines of growth!
  • (B) “EXPERIENCE CURVES”: If the demand for a product rises, there’s a price increase in the very short term, because the supply curve slopes up. But in the longer term, people ramp up manufacturing—the supply curve moves right. If anything, the price goes DOWN, not up, with demand, thanks to economies of scale and R&D.

QUIZ: Considering (A) & (B), what’s the equilibrium price of this AGI bundle (chips, algorithms, electricity, teleoperated robots, etc.)?

…Trick question! There is no equilibrium. Our two principles, (A) “no lump of labor” and (B) “experience curves”, make equilibrium impossible:

  • If price is low, (A) says the demand curve races rightwards—there’s no lump of labor, therefore there’s massive profit to be made by skilled entrepreneurial AGIs finding new productive things to do.
  • If price is high, (B) says the supply curve races rightwards—there’s massive profit to be made by ramping up manufacturing of AGI.
  • If the price is in between, then the demand curve and supply curve are BOTH racing rightwards!

This is neither capital nor labor as we know it. Instead of the market for AGI equilibrating, it forms a positive feedback loop / perpetual motion machine that blows up exponentially.
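
A minimal toy sketch of that dynamic (linear curves and made-up numbers, purely illustrative): each period the market clears, then any side facing an unexploited profit opportunity shifts its curve rightwards, per (A) and (B).

```python
# Toy sketch, not a real model: linear demand D(p) = d - p and supply S(p) = s + p.
# All numbers are made-up assumptions; the only point is the qualitative dynamic.

d, s = 100.0, 10.0        # demand and supply intercepts (arbitrary units)
LOW, HIGH = 20.0, 40.0    # price band; outside it, shifting is clearly profitable

for t in range(15):
    p = max((d - s) / 2.0, 0.0)   # clearing price for these linear curves
    q = min(d - p, s + p)         # quantity traded
    print(f"t={t:2d}  price={p:7.1f}  quantity={q:10.1f}")
    if p < HIGH:   # AGI is cheap: entrepreneurial AGIs find new productive uses -> (A)
        d *= 1.5   # demand curve races rightwards
    if p > LOW:    # AGI is expensive: ramping up manufacturing pays -> (B)
        s *= 1.5   # supply curve races rightwards

# In this toy run the price oscillates and never converges, while the traded
# quantity grows every period; there is no price at which both curves stop moving.
```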

Does that sound absurd? There’s a precedent: humans! The human world, as a whole, is already a positive feedback loop / perpetual motion machine of this type! Humans bootstrapped themselves up from a few thousand hominins to 8 billion people running an $80T economy.

How? It’s not literally a perpetual motion machine. Rather, it’s an engine that draws from the well of “not-yet-exploited economic opportunities”. But remember “No Lump of Labor”: the well of not-yet-exploited economic opportunities is ~infinitely deep. We haven’t run out of possible companies to found. Nobody has made a Dyson swarm yet.

There’s only so many humans to found companies and exploit new opportunities. But the positive feedback loop of AGI has no such limit. The doubling time can be short indeed:

Imagine an autonomous factory that can build an identical autonomous factory, which then builds two more, etc., using just widely-available input materials and sunlight. Economics textbooks don’t talk about that. But biology textbooks do! A cyanobacterium is such a factory, and can double itself in a day (≈ googol percent annualized growth rate 😛).
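
As a quick sanity check on that parenthetical, here's the arithmetic, assuming one doubling per day compounded over a year:

```python
# Back-of-the-envelope: one doubling per day, compounded over a year.
growth_factor = 2 ** 365                    # ≈ 7.5e109
annualized_pct = (growth_factor - 1) * 100  # ≈ 7.5e111 percent
print(f"annualized growth rate ≈ {annualized_pct:.1e} %")
# i.e. roughly 10^112 percent; "googol percent" is, if anything, an understatement.
```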

Anyway, we don’t know how explosive the positive feedback loop of AGI building AGI will be, but I expect it to be light-years beyond anything in economic history.

THE THIRD PIECE of Econ anti-pedagogy is its promotion of GDP growth as a proxy for progress and change. On the contrary, it’s possible for the world to transform into a wild sci-fi land beyond all recognition or comprehension each month, month after month, without “GDP growth” actually being all that high. GDP is a funny metric, and especially poor at describing the impact of transformative technological revolutions. (For example, if some new tech is inexpensive, and meanwhile other sectors of the economy remain expensive due to regulatory restrictions, then the new tech might not impact GDP much, no matter how much it upends the world.) I mean, sure we can argue about GDP, but we shouldn’t treat it as a proxy battle over whether AGI will or won’t be a big deal.
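
A made-up numerical illustration of that parenthetical (all figures are arbitrary, chosen only to show the measurement effect): suppose the new tech sells at a near-zero price while its output explodes, next to an expensive rest-of-economy that stays flat.

```python
# Made-up numbers, purely illustrative.
price_agi = 0.01                   # dollars per unit of cheap "AGI services"
qty_before, qty_after = 1e9, 1e12  # sector output grows 1000x
rest_of_economy = 10e12            # $10T of housing, healthcare, etc., unchanged

gdp_before = rest_of_economy + price_agi * qty_before   # ≈ $10.00T
gdp_after = rest_of_economy + price_agi * qty_after     # ≈ $10.01T
growth_pct = (gdp_after / gdp_before - 1) * 100
print(f"Sector output grew 1000x; measured GDP grew only {growth_pct:.2f}%")
```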

Last and most importantly, THE FOURTH PIECE of Econ anti-pedagogy is the focus on “mutually-beneficial trades” over “killing people and taking their stuff”. Econ 101 proves that trading is selfishly better than isolation. But sometimes “killing people and taking their stuff” is selfishly best of all.

When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.

Normal people, people who have seen sci-fi movies about robots and aliens, people who have learned the history of colonialism and slavery, will immediately ask lots of reasonable questions here. “What will their motives be?” “Who will have the hard power?” “If they’re seeming friendly and cooperative early on, might they stab us in the back when they get more powerful?”

These are excellent questions! We should definitely be asking these questions! (FWIW, this is my area of expertise, and I’m very pessimistic.)

…And then those normal people take economics classes, and wind up stupider. They stop asking those questions. Instead, they “learn” that AGI is “capital”, kinda like an injection-molding machine. Injection-molding machines wouldn’t wipe out humans and run the world by themselves. So we’re fine. Lol.

Tweet 2

…Since actual AGI is so foreign to economists’ worldviews, they often deny the premise. E.g. here’s Tyler Cowen demonstrating a complete lack of understanding of what we doomers are talking about, when we talk about future powerful AI.

Source

Tweet 3

And here’s Daron Acemoglu assuming without any discussion that in the next 10 yrs, “AI” will not include any new yet-to-be-developed techniques that go way beyond today’s LLMs. Funny omission, when the whole LLM paradigm didn’t exist 10 yrs ago!

(Tbc, it’s fine to make that assumption! Maybe it will be valid, or maybe not, who knows, technological forecasting is hard. But when your paper depends on a giant load-bearing assumption about future AI tech progress, an assumption which many AI domain experts dispute, then that assumption should at least be clearly stated! Probably in the very first sentence of the paper, if not the title!)

Source: “The Simple Macroeconomics of AI” (Acemoglu, 2024)

Tweet 4

And here’s another example of economists “arguing” against AGI scenarios by simply rejecting out of hand any scenario in which actual AGI exists. Many such examples…

Source

Tweet 5

I think part of the problem is people taking human brains for granted instead of treating them as an existence proof that today’s LLMs are nowhere near the ceiling of what’s possible with AI ↓ (source)

1.3.2 Three increasingly-radical perspectives on what AI capability acquisition will look like

Here are three perspectives:

  1. Economists and other people who see AI as a normal technology: “If we want AI to work in some new application area, like some particular industrial design workflow, then humans need to do a lot of R&D work to develop and integrate the AI into this task.”
  2. LLM-focused AGI person: “Ah, that’s true today, but eventually other AIs can do this ‘development and integration’ R&D work for us! No human labor need be involved!”
  3. Me: “No! That’s still not radical enough! In the future, that kind of ‘development and integration’ R&D work just won’t need to be done at all—not by humans, not by AIs, not by anyone! Consider that there are 8 billion copies of basically one human brain design, and if a copy wants to do industrial design, it can just figure it out. By the same token, there can be basically one future AGI design, and if a copy wants to do industrial design, it can just figure it out!”

Another place this comes up is robotics:

  1. Economists: “Humans will need to do R&D to invent good robotics algorithms.”
  2. LLM-focused AGI person: “Future powerful AIs will need to do R&D to invent good robotics algorithms.”
  3. Me: “Future powerful AI will already be a good robotics algorithm!”

…After all, if a human wants to use a new kind of teleoperated robot, nobody needs to do a big R&D project or breed a new subspecies of human. You just take an off-the-shelf bog-standard human brain, and if it wants to pilot a new teleoperated robot, it will just autonomously figure out how to do so, getting rapidly better within a few hours. By the same token, there can be one future AGI design, and it will be able to do that same thing.

  1. ^

    This part overlaps with my earlier post: Applying traditional economic thinking to AGI: a trilemma