Financial status: This is independent research, now supported by a grant. I welcome further financial support.
Epistemic status: This is in-progress thinking.
Friends, I know that it is difficult to accept, but it just does not seem tenable that knowledge consists of a correspondence between map and territory. It’s shocking, I know.
There are correspondences between things in the world, of course. Things in the world become entangled as they interact. The way one thing is configured -- say the arrangement of all the atoms comprising the planet Earth -- can affect the way that another thing is configured -- say the subatomic structure of a rock orbiting the Earth that is absorbing photons bouncing off the surface of the Earth -- in such a way that if you know the configuration of the second thing then you can deduce something about the configuration of the first. Which is to say that I am not calling into question the phenomenon of evidence, or the phenomenon of reasoning from evidence. But it just is not tenable that the defining feature of knowledge is a correspondence between map and territory, because most everything has a correspondence with the territory. A rock orbiting the Earth has a correspondence with the territory. A video camera recording a long video has a correspondence with the territory. The hair follicles on your head, being stores of huge amounts of quantum information, and being impacted by a barrage of photons that are themselves entangled with your environment, surely have a much more detailed correspondence with your environment than any mental model that you could ever enunciate, and yet these are not what we mean by knowledge. So although there is undoubtedly a correspondence between neurologically-encoded maps in your head and reality, it is not this correspondence that makes these maps interesting and useful and true, because such correspondences are common as pig tracks.
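To see just how cheap such correspondences are, here is a small illustrative sketch in Python (the binary "territory" and the noise level are invented for the example): a passive "rock" that merely records a noisy copy of each fact ends up carrying measurable mutual information with the territory, even though nobody would say the rock knows anything.

```python
import random
from collections import Counter
from math import log2

random.seed(0)

# Toy "territory": a sequence of binary facts about the world (invented for illustration).
territory = [random.randint(0, 1) for _ in range(100_000)]

# Toy "rock": passively records each fact, corrupted by 20% noise.
rock = [bit if random.random() < 0.8 else 1 - bit for bit in territory]

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two sequences."""
    n = len(xs)
    p_x = Counter(xs)
    p_y = Counter(ys)
    p_xy = Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
        for (x, y), c in p_xy.items()
    )

mi = mutual_information(rock, territory)
print(f"I(rock; territory) = {mi:.3f} bits per fact")
# The rock "corresponds" to the territory (roughly 0.28 bits per fact here),
# yet this correspondence is not what we mean by knowledge.
```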
It’s a devastating conclusion, I know. Yet it seems completely unavoidable. We have founded much of our collective worldview on the notion of a map/territory correspondence that can be improved over time, and yet when we look carefully, such a notion just does not seem viable at the level of physics.
Clearly there is such a thing as knowledge, and clearly it can be improved over time, and clearly there is a difference between knowing a thing and not knowing a thing, between having accurate beliefs and having inaccurate beliefs. But in terms of grounding this subjective experience out in objective reality, we find ourselves, apparently, totally adrift. The foundation that I assumed was there is not there, since this idea that we can account for the difference between accurate and inaccurate beliefs in terms of a correspondence between some map and some territory just does not check out.
Now I realize that this may feel a bit like a rug has been pulled out from under you. That’s how I feel. I was not expecting this investigation to go this way. But here we are.
And I know it may be tempting to grab hold of some alternate definition of knowledge that sidesteps the counter-examples that I’ve explored. That is a good thing to do, but as you do, please work through this puzzle systematically, because if there is one thing the history of the analysis of knowledge has shown, it is that definitions of knowledge that seem compelling to their progenitors are a dime a dozen, and yet every single one proposed so far has, so far as I can tell, fallen prey to further counter-examples. So please, be gentle with this one.
You may say that knowledge requires not just a correspondence between map and territory but also a capacity for prediction. But a textbook on its own is not capable of making predictions. You can sit in front of your chemistry textbook and ask it questions all day; it will not answer. Are you willing to say that a chemistry textbook contains no knowledge whatsoever?
You may then say that knowledge consists of a map together with a decoder, where the map has a correspondence with reality, and the decoder is responsible for reading the map and making predictions. But then if a superintelligence could look at an ordinary rock and derive from it an understanding of chemistry, is it really the case that any ordinary rock contains just as much knowledge as a chemistry textbook? That there really is nothing whatsoever to say about a chemistry textbook that distinguishes it from any other clump of matter from which an understanding of chemistry could in principle be derived?
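For concreteness, here is a toy rendering of the map-plus-decoder proposal, with invented contents: the "map" is an inert record that corresponds to a couple of chemical facts, and the "decoder" reads it to emit predictions. This is only a sketch of the proposal as stated above, not anyone's actual definition, and the objection stands: a sufficiently powerful decoder could in principle extract the same predictions from an ordinary rock.

```python
# A toy rendering of the "map plus decoder" proposal, with invented contents.

# The "map": an inert data structure recording some chemistry-textbook-like facts.
solubility_map = {
    ("NaCl", "water"): "soluble",
    ("NaCl", "hexane"): "insoluble",
}

def decoder(map_, substance, solvent):
    """Read the map and emit a prediction about what will happen."""
    verdict = map_.get((substance, solvent), "unknown")
    return f"{substance} in {solvent}: predicted {verdict}"

print(decoder(solubility_map, "NaCl", "water"))   # NaCl in water: predicted soluble
print(decoder(solubility_map, "NaCl", "hexane"))  # NaCl in hexane: predicted insoluble
```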
Suppose that one day an alien artifact landed unexpectedly on Earth, and on this artifact was a theory of spaceship design that had been carefully crafted so as to be comprehensible by any intelligent species that might find it, perhaps by first introducing simple concepts via literal illustrations, followed by instructions based on these concepts for decoding a more finely printed section, followed by further concepts and instructions for decoding a yet-more-finely-printed section, followed eventually by the theory itself. Is there no sense in which this artifact is fundamentally different from a mere data recorder that has been travelling through the cosmos recording enough sensor data that a sufficiently intelligent mind could derive the same theory from it? What is it about the theory that distinguishes it from the data recorder? It is not that the former is in closer correspondence with reality than the latter. In fact the data recorder almost certainly corresponds in a much more fine-grained way to reality than the theory, since in addition to containing enough information to derive the theory, it also likely contains much information about specific stars and planets that the theory does not. And it is not that the theory can make predictions while the data recorder cannot: both are inert artifacts incapable of making any prediction on their own. And it is not that the theory can be used to make predictions while the data recorder cannot: a sufficiently intelligent agent could use the data recorder to make all the same predictions as it could using the theory.
Perhaps you say that knowledge is rightly defined relative to a particular recipient, so the alien artifact contains knowledge for us since we are intelligent enough to decode it, but the data recorder does not, since we are not intelligent enough to decode it. But firstly we probably are intelligent enough to decode the data recorder and use it to work out how to build spaceships given enough time, and secondly are you really saying that there is no such thing as objective knowledge? That there is no objective difference between a book containing a painstakingly accurate account of a particular battle, and another book of carelessly assembled just-so stories about the same battle?
Now you may say that knowledge is that which gives us the capacity to achieve our goals despite obstacles, and here I wholeheartedly agree. But this is not an answer to the question, it is a restatement of the question. What is it that gives us the capacity to achieve our goals despite obstacles? The thing we intuitively call knowledge seems to be a key ingredient, and in humans, knowledge seems to be some kind of organization and compression of evidence into a form that is useful for planning with respect to a variety of goals. And you might say, well, there just isn’t any more to say than that. Perhaps agents input observations at one end and output actions at the other end, and what happens in between follows no fundamental rhyme or reason; it is entirely a matter of what works. Well, Eliezer has written about a time when he believed this about AI, too, until he saw that probability theory constrains mind design space in a way that is not merely a set of engineering tricks that "just work". But probability theory does not concretely constrain mind design space. It is not generally feasible to take a physical device containing sensors and actuators and ask whether, or to what extent, its internal belief-formation or planning capacities are congruent with the laws of probability theory. Probability theory isn’t that kind of theory. At the level of engineering, it merely suggests certain designs. It is not the kind of theory that lets us take arbitrary minds and understand how they work, not in the way that the theory of electromagnetism allows us to take arbitrary circuits and understand how they work.
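To be clear about the kind of constraint probability theory does offer, here is a minimal sketch of a single Bayesian update, with invented hypotheses and numbers. An agent built this way exposes explicit priors and likelihoods that can be checked for coherence against Bayes' rule; the difficulty described above is that an arbitrary physical device with sensors and actuators exposes no such quantities to check.

```python
# A minimal sketch of the kind of design probability theory suggests: an agent
# whose beliefs are explicit probabilities, updated by Bayes' rule.
# The hypotheses and numbers are invented for illustration.

def bayes_update(prior, likelihood, observation):
    """Return the posterior over hypotheses after one observation."""
    unnormalized = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a coin, with invented priors.
prior = {"fair": 0.5, "biased": 0.5}

# Likelihood of each observation under each hypothesis.
likelihood = {
    "fair":   {"heads": 0.5, "tails": 0.5},
    "biased": {"heads": 0.9, "tails": 0.1},
}

posterior = bayes_update(prior, likelihood, "heads")
print(posterior)  # {'fair': 0.357..., 'biased': 0.642...}
```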
What we are seeking is a general understanding of the physical phenomenon of the collection and organization of evidence into a form that is conducive to planning. Most importantly, we are seeking a characterization of the patterns themselves that are produced by evidence-collecting, evidence-organizing entities, and are later used to exert flexible influence over the future. Could it really be that there is nothing general to say about such patterns? That knowledge itself is entirely a chimera? That it’s just a bunch of engineering hacks all the way down and there is no real sense in which we come to know things about the world, except as measured by our capacity to accomplish tasks? That there is no true art of epistemic rationality, only of instrumental rationality? That having true beliefs has no basis in physical reality?
I do not believe that the resolution to this question is a correspondence between internal and external states, because although there certainly are correspondences between internal and external states, such correspondences are far too common to account for what it means to have true beliefs, or to characterize the physical accumulation of knowledge.
But neither do I believe that there is nothing more to say about knowledge as a physical phenomenon.
It is a lot of fun to share this journey with you.