TLDR: I try to find the root causes of why math is useful.

Epistemic status: Marginally confident in veracity, not at all confident in novelty.

Background

I recently had a discussion with Rohin that started from Paul Christiano's concept of "intent alignment" but branched off into a different question, namely: do we need mathematical theory to solve AI risk, or is it sufficient to do experiments and informal reasoning? I argued that experiments are important but insufficient because

...ultimately, you need theoretical knowledge to know what can be safely inferred from these experiments. Without theory you cannot extrapolate.

At some point Rohin asked me

I'm curious what you think doesn't require building a mathematical theory?

I replied

I'm not sure about the scope of your question? I made a sandwich this morning without building mathematical theory :)

to which Rohin said

Presumably the ingredients were in a slightly different configuration than you had ever seen them before, but you were still able to "extrapolate" to figure out how to make a sandwich anyway. Why didn't you need theory for that extrapolation?

Obviously this is a silly example, but I don't currently see any qualitative difference between sandwich-making extrapolation and the sort of extrapolation we do when we make qualitative arguments about AI risk. Why trust the former but not the latter? One answer is that the latter is more complex, but you seem to be arguing something else.

So, in this essay I will try to explain my view of the role of mathematical theory and the qualitative difference between sandwiches and AI pertaining to this role.

The miracle of math

It probably brooks no argument that mathematics played a central role in the tremendous progress of science and technology during the last few centuries, and that it is used extensively in virtually all fields of modern engineering. The successes of mathematics have been so impressive that they prompted the Nobel-winning physicist Eugene Wigner to write eir famous essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which ey calls it no less than a "miracle" and writes:

...the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious... there is no rational explanation for it.

I see two main reasons why it's important to find that elusive rational explanation. First, knowing why mathematics is useful will help us figure out exactly when it is useful, and in particular what its role should be in AI alignment. Second, the effectiveness of mathematics is in itself an observation about the properties of human reasoning, and as such it might hint at insights both about intelligence in the abstract and about human intelligence in particular, both of which are important to understand for AI alignment.

Risking hubris, I will now take a stab at dispelling Wigner's mystery. After all, mystery exists in the map, not in the territory.

Math versus the brain

First, let's look at how we actually use mathematics to solve real-world problems. To solve a real-world problem using any method, input from the real world is needed. You can't know what the world looks like without actually looking at it. But then, once you have looked at it, you want to somehow extrapolate your observations and infer new facts and predictions. Making such an extrapolation requires building models, deciding how probable these models are, and finally applying the models to the question of interest. Mathematics then enters as a language in which such models can be described and as a toolbox for extracting predictions from these models.

Second, although the track record of mathematics is evident, it is even more evident that humans don't require mathematics to think. In fact, the human brain is perfectly capable of accomplishing all of the steps above on its own: constructing models, evaluating models and extracting predictions from models. To some extent, natural language plays the same role in human thinking at large as mathematics plays in its applications.

However, there is an important difference between mathematical reasoning and "informal" reasoning: the latter virtually always involves a component that is not conscious and that cannot be easily verbalized. So, although thinking always involves models, a lot of the time these models are fully or partially hidden, encoded somewhere in the neural networks of the brain. This hidden, unconscious part is often called "intuition".

Now, using math doesn't replace our cognition, it augments it. Even when we use math, we actually use all three types of thinking at once: unconscious intuition, conscious informal verbal reasoning and (also conscious) mathematical reasoning. Indeed, reasoning using only math would be more or less equivalent to creating AGI (mathematical computations can be delegated to a computer, and anything a computer does can be regarded as a mathematical computation). The question is then: what does this last layer do that the first two don't do as well on their own, and in which cases is it needed?

Math versus natural language

I already said that reasoning using mathematics is somewhat similar to reasoning using natural language. But, there are two main differences between mathematics and natural language that are relevant to the former's effectiveness:

Precision

Mathematics is precise. Mathematical definitions are crisp and mathematical statements have unambiguous meaning within the ontology of mathematics. On the other hand, natural language concepts are usually fuzzy, their semantics defined by subjective unconscious knowledge that varies from speaker to speaker. Somewhere in the brain is a neural circuit representing the concept, but this neural circuit is not itself part of language. Moreover, natural language statements have meaning that often depends on context and background assumptions.

Objectivity

Mathematics evolved in order to answer objective questions about the world at large. (And, bootstrapping from that, in order to answer questions about mathematics itself.) Mathematics happened because we looked for models and tools that generalize as much as possible and that don't depend on social context[1]. Moreover, the evolution of mathematics was a conscious process, one in which we fully applied our collective reasoning faculties to make mathematics better.

On the other hand, natural language evolved to some extent to answer objective questions about the world, but also in order to play complex social games. Natural language is heavily biased towards a human-centric view of the world, and to some extent towards the conditions in which humans existed historically. Natural language evolved in a process which was largely unconscious and not even quite human (in the same sense that the biological evolution of humans is not in itself human).

Why is math effective?

These two differences lead to five reasons why augmenting reasoning with mathematics is sometimes effective:

Legibility to others

The precise nature of mathematics makes mathematical reasoning legible to other people. Other people can evaluate your math and build on it, without any risk of misunderstanding your definitions and without having to deal with difficult-to-convey intuitions[2]. Since human civilization is a collective multi-generational effort, the improved ability to collaborate can significantly enhance and accelerate the generation of knowledge.

Legibility to oneself

The precise nature of mathematics makes mathematical reasoning legible to yourself. This might seem nonsensical at first: shouldn't you perfectly understand your own reasoning anyway? But, our reasoning is not transparent to us.

Sometimes we believe things for reasons that we are not aware of, and these reasons might be poorly aligned with truth-seeking: hence, cognitive bias. Of course, all such biases should have evolutionary reasons. But, these reasons probably have to do with the specifics of the ancestral environment, and the game theory of conforming to the tribe.

Moreover, when your reasoning is transparent, you can make full use of your cognitive faculties to improve the reasoning process. This is something I already mentioned when I spoke about the objectivity of mathematics. A transparent phenomenon can be analysed the same way as any phenomenon in the external world. On the other hand, an opaque phenomenon, some of which is hidden inside your own brain, can only be analysed to the extent your brain is specifically designed to analyse it (which is often limited).

Measuring complexity

I have mentioned the need to evaluate the probability of different models. This evaluation is done by comparing to observations, but it also requires a prior. The human brain has such a prior implicitly, but this prior is in some sense biased towards the ancestral environment. This is why humans have come up with anthropomorphic explanations of natural phenomena for so long, an error that took millennia to correct (and is still not fully corrected).

Now, what is the "correct" prior? Arguably it is Occam's razor: simpler hypotheses are more likely. But, how do we decide what is "simple"? Solomonoff induction is a formalization of Occam's razor, but Solomonoff induction depends on the choice of a universal Turing machine. More broadly and less formally, description complexity depends on the language you use to write the description. My claim is: the objectivity of mathematics means it is the correct language of choice.
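
To make this slightly more concrete (these are standard textbook definitions, not anything original to this essay): in one common formulation, the Solomonoff prior assigns to a finite sequence $x$ the probability

$$\xi_U(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|},$$

where $U$ is a fixed universal Turing machine and $|p|$ is the length of the program $p$ in bits. The dependence on $U$ is real but bounded: by the invariance theorem, for any two universal machines $U$ and $V$ the description complexities satisfy $|K_U(x) - K_V(x)| \leq c_{U,V}$ for a constant $c_{U,V}$ that does not depend on $x$. "Simplicity" is machine-dependent, but only up to a fixed translation cost.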

Now, my claim is not entirely precise. There is not really a single formal mathematical language; there are different such languages, and if we want to literally measure description length, it depends on the precise encoding too. Moreover, nobody really measures the length of mathematical descriptions when evaluating scientific hypotheses (although maybe they should). However, the use of mathematical language still naturally leads to a better model-evaluation process than what we have without it.

We should also consider the counterargument that the prior is subjective by definition. So, shouldn't the "brain's prior", whatever it is, be the correct prior by definition? I think that, strictly speaking, the answer is "yes". But, over the lifetime of civilization, our accumulated experience has led us to update this prior and single out the complexity measure suggested by math. This is exactly the objectivity of mathematics I mentioned before.

Quantitative answers

Another advantage of math is that it allows us to produce precise quantitative answers in a way informal reasoning usually doesn't. Even someone who has fairly good intuition about the mechanics of physical bodies cannot guess their trajectories or stability with the same precision a mathematical model can. I am not sure exactly why this is the case, but it seems to be the result of some noise inherent to the human brain, or to the translation between different modules in the brain. However, this advantage is only significant when your mathematical model is very accurate.
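
As a toy illustration (my own example, chosen purely for familiarity): the range of a projectile launched at speed $v$ and angle $\theta$, ignoring air resistance, is

$$R = \frac{v^2 \sin(2\theta)}{g}.$$

For $v = 10\,\mathrm{m/s}$, $\theta = 30^\circ$ and $g = 9.8\,\mathrm{m/s^2}$ this yields $R \approx 8.8\,\mathrm{m}$: a definite number, where even well-trained intuition would only offer "somewhere around ten meters".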

Specifically in the case of AI alignment, I am not sure how important this advantage is. I expect that we will mostly only come up with models that depend on parameters for which we have rough order-of-magnitude estimates at best. But, maybe when the theory is fully revealed, there will be some use cases for quantitative precision.

Leveraging computers

In the information age, math gained another advantage due to the possibility of computer simulations. Such simulations allow us to leverage the computing power of machines, which can surpass the brain along some parameters (such as serial speed). On the other hand, you cannot offload some of your brain's neural networks to a computer. (Yet. Growth mindset!)
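
As a minimal sketch of what such offloading looks like (an illustration of my own, using only the Python standard library):

```python
import math

def simulate_pendulum(theta0=1.0, omega0=0.0, damping=0.1,
                      g=9.8, length=1.0, dt=0.001, steps=10_000):
    """Euler-integrate a damped pendulum for `steps` time steps of size `dt`."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        # Angular acceleration: gravity restoring term plus linear damping.
        alpha = -(g / length) * math.sin(theta) - damping * omega
        theta += omega * dt
        omega += alpha * dt
    return theta

# Ten thousand serial update steps in milliseconds -- exactly the kind of rote,
# high-precision computation a brain cannot run but a machine trivially can.
print(f"angle after 10 s: {simulate_pendulum():.4f} rad")
```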

When is math effective?

Let us now return to the question posed by Rohin: what is the difference between making sandwiches and solving AI risk? Why does the former require no mathematical theory [citation needed] whereas the latter does require it (according to me)? I see three relevant differences:

Serial depth

Making sandwiches is a task relatively similar to tasks we had to deal with in the ancestral environment, and in particular there is not a lot of serial depth to the know-how of making sandwiches. If we pluck a person from virtually any culture in any period of history, it won't be difficult to explain to em how to make a sandwich. On the other hand, in the case of AI risk, just understanding the question requires a lot of background knowledge that was built over generations and takes years of study to properly grasp.

For tasks of this type, the "natural" human prior is perfectly suitable, there is not much need for collaboration (except sometimes the type of collaboration which comes naturally), and there is no need for strong optimization of the reasoning process. We are already wired to solve them.

Anthropocentrism

Making a good sandwich requires a lot of human-centric knowledge: it has to do with how and what humans like to eat. To give another example, consider artistic sculpting. This is also a field of knowledge that took generations to build and requires years to learn. And, some math may come in useful there, for example geometric calculations, not to mention the math that was needed to make the physical tools and materials that modern sculptors may use. But, since a large component of the task is catering to human aesthetic tastes, math cannot compete with innate human abilities that are designed to be human-centric.

On the other hand, studying AI risk involves questions about what kinds of intelligent agents can exist in general, and what properties these agents have. Such questions are "objective" rather than human-centric in nature, and are better addressed by the "math-simplicity" prior. There might also be human-centric aspects when we speak of aligning AIs to humans. But, even there, promising approaches should not rely on many detailed properties of humans; otherwise we would get a solution that is very complex and fragile.

Sample complexity [EDIT 2020-02-15]

When we're learning to make a sandwich, we can make many attempts to perfect the technique, bounded only by the cost of time and ingredients. (Although most people don't experiment that much with sandwiches, civilization as a whole experiments with food a lot.) As a more important example, consider deep learning. Deep learning is far from the ancestral environment, and is not especially human-centric. Nevertheless, it has had impressive successes despite making only relatively modest use of mathematical theory (modulo pre-existing tools), thanks to much trial and error (a process that is much cheaper in software than in hardware engineering). On the other hand, with AI risk we want to limit trial and error, since the entire problem is that errors might be too costly.

Since the role of math is to enhance our ability to extrapolate from observations, it in particular improves our sample complexity. That is, math allows us to reach useful conclusions based on less empirical data. Indeed, I said before that one advantage of math is that it effectively starts from a better prior. Now, if you start from a worse prior, you will still converge to the right answer (unless the prior is dogmatic), but it will take you longer.
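
One standard way to quantify this (a textbook bound, stated here only to gesture at the underlying math): a Bayesian predictor that mixes hypotheses according to a prior $\pi$ suffers, relative to any hypothesis $h$ in the prior's support, a cumulative log-loss overhead of at most

$$\sum_{t=1}^{T} \log\frac{1}{P_{\mathrm{Bayes}}(x_t \mid x_{<t})} \;-\; \sum_{t=1}^{T} \log\frac{1}{P_h(x_t \mid x_{<t})} \;\leq\; \log\frac{1}{\pi(h)}.$$

A worse prior assigns the true hypothesis a smaller $\pi(h)$, and those extra $\log\frac{1}{\pi(h)}$ bits of regret have to be paid off with correspondingly more data.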

What's next?

I want to clarify that the theory I presented here is not supposed to be the final word on this question. Among other epistemic sins I surely made here, I presented five reasons and said nothing about their relative importance (although these reasons are not fully independent so it's not necessarily likely that one of them has overwhelming importance compared to the rest). Moreover, I should eat my own dog food and construct a mathematical theory that makes these arguments rigorous. In particular, I think that the separation into conscious and unconscious reasoning and its consequences can be modeled using Turing RL. But, elaborating this further is work for another time.


  1. This is perhaps somewhat circular: mathematics is effective because we looked for something effective. But, I hope to at least elucidate a few gears inside this effectiveness. ↩︎

  2. Of course, there are many difficult to convey intuitions about how to do math: how to find proofs, and how to even decide which mathematical lines of inquiry are promising. But, the bare bones product of this process is fully transparent. ↩︎

1 comment

In this post I speculated on the reasons for why mathematics is so useful so often, and I still stand behind it. The context, though, is the ongoing debate in the AI alignment community between the proponents of heuristic approaches and empirical research[1] ("prosaic alignment") and the proponents of building foundational theory and mathematical analysis (as exemplified in MIRI's "agent foundations" and my own "learning-theoretic" research agendas).

Previous volleys in this debate include Ngo's "realism about rationality" (on the anti-theory side), the pro-theory replies (including my own), and Yudkowsky's "the rocket alignment problem" (on the pro-theory side).

Unfortunately, it doesn't seem like any of the key participants budged much on their position, AFAICT. If progress on this is possible, then it probably requires both sides working harder to make their cruxes explicit.


  1. To be clear, I'm in favor of empirical research, I just think that we need theory to guide it and interpret the results. ↩︎