How well the efficient market hypothesis (EMH) applies to AI development is an important variable for timelines. The idea is: if AGI (or TAI, or whatever) were close, the big corporations would be spending a lot more money trying to get to it first. Half of their budget, for example. Or at least half of their research budget! Since they aren't, either they are all incompetent at recognizing that AGI is close, or AGI isn't close. Since they probably aren't all incompetent, AGI probably isn't close.

I'd love to see some good historical examples of entire industries exhibiting the sort of incompetence at issue here. If none can be found, that's good evidence for this EMH-based argument.

--Submissions don't have to be about AI research; any industry failing to invest in some other up-and-coming technology highly relevant to their bottom line should work.

--Submissions don't need to be about private corporations, necessarily. They could be about militaries around the world, for example.

(As an aside, I'd like to hear discussion of whether the supposed incompetence is actually rational behavior--even if AI might be close, perhaps it's not rational for big corporations to throw lots of money at mere maybes. Or maybe they think that if AGI is close they wouldn't be able to profit from racing towards it, perhaps because they'd be nationalized, or perhaps because the tech would be too easy to steal, reverse engineer, or discover independently. Kudos to Asya Bergal for this idea.)


I was prompted to write this question by reading this excellent blog post about AlphaFold. I'll quote it at length because it serves as a candidate answer to my question:

What is worse than academic groups getting scooped by DeepMind? The fact that the collective powers of Novartis, Pfizer, etc, with their hundreds of thousands (~million?) of employees, let an industrial lab that is a complete outsider to the field, with virtually no prior molecular sciences experience, come in and thoroughly beat them on a problem that is, quite frankly, of far greater importance to pharmaceuticals than it is to Alphabet. It is an indictment of the laughable “basic research” groups of these companies, which pay lip service to fundamental science but focus myopically on target-driven research that they managed to so badly embarrass themselves in this episode.
If you think I’m being overly dramatic, consider this counterfactual scenario. Take a problem proximal to tech companies’ bottom line, e.g. image recognition or speech, and imagine that no tech company was investing research money into the problem. (IBM alone has been working on speech for decades.) Then imagine that a pharmaceutical company suddenly enters ImageNet and blows the competition out of the water, leaving the academics scratching their heads at what just happened and the tech companies almost unaware it even happened. Does this seem like a realistic scenario? Of course not. It would be absurd. That’s because tech companies have broad research agendas spanning the basic to the applied, while pharmas maintain anemic research groups on their seemingly ever continuing mission to downsize internal research labs while building up sales armies numbering in the tens of thousands of employees.
If you think that image recognition is closer to tech’s bottom line than protein structure is to pharma’s, consider the fact that some pharmaceuticals have internal crystallographic databases that rival or exceed the PDB in size for some protein families.

This was about AlphaFold, by the way, not AlphaFold2. (!!!)


There's a fairly straightforward optimization process that occurs in product development, one I don't often see talked about in the abstract. It goes something like this:

It seems like bigger firms should be able to produce higher-quality goods. They can afford longer product development cycles, hire a broader variety of specialized labor, etc. In practice, it's smaller firms that compete on quality. Why is this?

One of the reasons is that the pressure to cut corners increases enormously at scale, along more than one dimension. As a product scales, eking out smaller efficiency gains is still worth enough money that a particular efficiency gain can have an entire employee, or team, devoted to it. The incentive is to cut costs in all ways that are illegible to the consumer. But the average consumer is changing as a product scales up in popularity. Early adopters and people with more specialized needs are more sensitive to quality. As the product scales to less sensitive buyers, the firm can cut corners that would have cost it sales earlier in the product cycle but that now don't have a large enough effect to show up, since revenues and profits keep going up. So this process continues up the curve as the product serves an ever larger and less sensitive market. Fewer things move the needle, and now the firm is milking its cash cow, which brings in a different sort of optimization (bean counters) that continues this process.

Now, some firms, rather than allow their lunch to get eaten, do engage in market segmentation to capture more value. The most obvious case is when a brand has a sub-brand that is a luxury line, like basically all car makers. The luxury line will take advantage of some of the advantages of scale from the more commoditized product lines but do things like manufacture key components in, say, Germany instead of China. But with the same management running the whole show, it's hard for a large firm to insulate the market segmentation from exactly the same forces already described.

All of this is to answer the abstract question of why large firms don't generate the sort of culture that can do innovation, even when they seemingly throw a lot of money and time at it. The incentives flow down from the top, and the 'top' of a firm is answerable to the wrong set of metrics/incentives. This is 100% true of most of academia as well as private R&D.

So to answer the original question, I see micro examples of failing to invest in the right things everywhere. Large firms could be hotbeds of experimentation in large-scale project coordination, but in practice individuals within an org are forced to conform to internal APIs to maintain legibility to management, which explains why something like Slack didn't emerge as an internal tool at any big company.

(I'm not an economist, but my understanding is that...) The EMH works in markets that fulfill the following condition: if Alice is way better than the market at predicting future prices, she can use her superior prediction capability to gain more and more control over the market, up to the point where her control over the market makes market prices reflect her prediction capability.
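To make that condition concrete, here's a minimal toy simulation. All of the modeling choices, names, and parameters below are my own illustrative assumptions (not a standard model): the price each round is a wealth-weighted average of traders' value estimates, traders whose estimates are on the right side of the realized value gain wealth, and so Alice's influence on the price grows until the price roughly reflects her prediction.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0        # the "fundamental" value Alice can predict well
N_NOISE_TRADERS = 20      # everyone else, with systematically biased estimates
STAKE = 0.5               # fraction of wealth each trader risks per round


def payoff(wealth, estimate, price, realized):
    """Bet long if your estimate is above the current price, short if below;
    gain or lose STAKE times the relative move from price to realized value."""
    direction = 1.0 if estimate > price else -1.0
    return wealth * (1 + STAKE * direction * (realized - price) / price)


alice_wealth = 1.0
noise_wealth = [1.0] * N_NOISE_TRADERS
# Noise traders' estimates are biased roughly 20% above the true value.
noise_estimates = [TRUE_VALUE + random.gauss(20, 10) for _ in range(N_NOISE_TRADERS)]

for round_num in range(1, 101):
    alice_estimate = TRUE_VALUE + random.gauss(0, 1)   # nearly perfect prediction
    total_wealth = alice_wealth + sum(noise_wealth)
    # Price formation rule (an assumption): wealth-weighted average of estimates.
    price = (alice_wealth * alice_estimate
             + sum(w * e for w, e in zip(noise_wealth, noise_estimates))) / total_wealth
    realized = TRUE_VALUE + random.gauss(0, 5)          # realized value this round
    alice_wealth = payoff(alice_wealth, alice_estimate, price, realized)
    noise_wealth = [payoff(w, e, price, realized)
                    for w, e in zip(noise_wealth, noise_estimates)]
    if round_num % 20 == 0:
        share = alice_wealth / (alice_wealth + sum(noise_wealth))
        print(f"round {round_num:3d}: price={price:7.2f}  Alice's wealth share={share:.2f}")
```

Running this, the price starts near the noise traders' biased estimates and drifts toward the true value as Alice's wealth share grows. That feedback loop, where better prediction buys more influence over the thing being predicted, is exactly what I don't see an analogue of below.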

If Alice is way better than anyone else at predicting AGI, how can she use her superior prediction capability to gain more control over big corporations? I don't see how an EMH-based argument applies here.

Yeah, maybe it's not really EMH-based but rather EMH-inspired or EMH-adjacent. The core idea is that if AI is close, lots of big corporations are really messing up big time; it's in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively. And the other part of the core idea is that that's implausible.

And the other part of the core idea is that that's implausible.

I don't see why that's implausible. The condition I gave is also my explanation for why the EMH holds (in the markets where it does), and it doesn't explain why big corporations should be good at predicting AGI.

it's in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively

So the questions I'm curious about here are:

  1. What mechanism is supposed to cause big corporations to be good at predicting AGI?
  2. How come that mechanism doesn't also cause big corporations to understand the existential risk concerns?

I think the idea is that, in general, they are good at doing things that are in their self-interest, and since they don't currently think AI is an existential threat, they should think it's in their self-interest to make AGI if possible. And if it is possible, they should be able to recognize that, since the relevant expertise in AI and AI forecasting is something they can acquire.

To be honest, I don't put much stock in this argument, which is why I'm asking this question.