The meme-theoretic view of humans says: Memes are to humans as sailors are to ships in the age of sail.

If you want to predict where a ship will go, ask: Is it currently crewed by the French or the English? Is it crewed by merchants, pirates, or soldiers? These are the most important questions.

You can also ask e.g. "Does it have a large cargo hold? Is it swift? Does it have many cannon-ports?" But these questions are less predictive of where it will go next. They are useful for explaining how it got the crew it has, but only to a point--while it's true that a ship built with a large cargo hold is more likely to be a merchant for more of its life, it's quite common to encounter a ship with a large cargo hold that is crewed by soldiers, or for a ship built in France to be sailed by the English, etc. The main determinants of how a ship got the crew it currently has are its previous interactions with other crews, e.g. the fights it had, the money that changed hands when it was in port, etc.

The meme-theoretic view says: Similarly, the best way to explain human behavior is by reference to the memes in their head, and the best way to explain how those memes got there is to talk about the history of how those memes evolved inside the head in response to other memes they encountered outside the head. Non-memetic properties of the human (their genes, their nutrition, their age, etc.) matter, but not as much, just like how the internal layout of a ship, its size, its age, etc. matter too, but not as much as the sailors inside it.

Anyhow, the meme-theoretic view is an interesting contrast to the highly-capable-agent view. If we apply the meme-theoretic view to AI, we get the following vague implications:

--Mesa-alignment problems are severe. The paper already talks about how there are different ways a system could be pseudo-aligned, e.g. it could have a stable objective that is a proxy of the real objective, or it could have a completely different objective but be instrumentally motivated to pretend, or it could have a completely different objective but have some irrational tic or false belief that makes it behave the way we want for now. Well, on a meme-theoretic view these sorts of issues are the default; they are the most important things for us to be thinking about.

--There may be no stable objective/goal at all in the system. It may have an objective/goal now, but if the objective is a function of the memes it currently has and the memes can change in hard-to-predict ways based on which other memes it encounters...

--Training/evolving an AI to behave a certain way will be very different at each stage of smartness. When it is too dumb to host anything worthy of the name meme, it'll be one thing. When it is smart enough to host simple memes, it'll be another thing. When it is smart enough to host complex memes, it'll be another thing entirely. Progress and success made at one level might not carry over to higher levels.

--There is a massive training vs. deployment problem. The memes our AI encounters in deployment will probably be massively different from those in training, so how do we ensure that it reacts to them appropriately? We have no idea what memes it will encounter when deployed, because we want it to go out into the world and do all sorts of learning and doing on our behalf.

Thanks to Abram Demski for reading a draft and providing some better terminology.


My understanding of meme theory is that it considers the setting where memes mutate, reproduce, and are under selection pressure. This basically requires you to think that there's some population pool where the memes are spreading. So, one way to think about it might be to ask what memetic environment your AI systems are in.

  • Are human memes a good fit for AI agents? You might think that a physics simulator is not going to be a good fit for most human memes (except perhaps for memes like "representation theory is a good way to think about quantum operators"), because your physics simulator is structured differently from most human minds, and doesn't have the initial memes that our memes are co-adapted with. That being said, GPT-8 might be very receptive to human memes, as memes are pretty relevant to what characters humans type on the internet.
  • How large is the AI population? If there's just one smart AI overlord and then a bunch of MS Excel-level clever computers, the AI overlord is probably not exchanging memes with the spreadsheets. However, if there's a large number of smart AI systems that work in basically the same manner, you might think that that forms the relevant "meme pool", and the resulting memes are going to be different from human memes (if the smart AI systems are cognitively different from humans), and as a result perhaps harder to predict. You could also imagine there being lots of AI system communities where communication is easy within each community but difficult between communities due to architectural differences.

One scenario that worries me: At first the number of AIs is small, and they aren't super smart, so they mostly just host normal human memes and seem as far as we (and even they) can tell to be perfectly aligned. Then, they get more widely deployed, and now there are many AIs and maybe they are smarter also, and alas it turns out that AIs are a different environment than humans, in a way which was not apparent until now. So different memes flourish and spread in the new environment, and bad things happen.

I find this perspective interesting (and confusing), and want to think about it more deeply. Can you recommend any reading for a better understanding of what you're thinking, or of what led you to this idea specifically?

Beyond the possible implications you mentioned, I think this might be useful in clarifying the 'trajectory' of agent selection pressure far from theoretical extremes that Richard Ngo mentioned in the "AGI Safety from First Principles" sequence.

My vague intuition is that successful, infectious memes work by reconfiguring agents to shift from one fixed point in policy to another while not disrupting utility. Does that make sense?

Thanks! Excellent point about the connection to the trajectory of agent selection pressure.

I don't remember what led me to this idea in particular. I've been influenced by doing a lot of thinking about agent foundations and metaethics and noticing the ways in which humans don't seem to be well modelled as utility maximizers or even just any sort of rational goal-directed agents with stable goals. I also read the book "The Meme Machine" and liked it, though that was after writing this post, not before, IIRC.

I don't know what you mean by fixed points in policy. Elaborate?

I think that there is an unwarranted jump here from (Humans are highly memetic) to (AIs will be highly memetic).

I will grant you that memes have a substantial effect on human behaviour. It doesn't follow that AIs will be like this.

Your conclusions would only be strongly supported if there were a good argument that AIs should be meme-driven.

I didn't take myself to be arguing that AIs will be highly memetic, but rather just floating the possibility and asking what the implications would be.

Do you have arguments in mind for why AIs will be less memetic than humans? I'd be interested to hear them.