This was an outstanding post! The concept of a "conflationary alliance" seems high-value and novel to me. The anthropological study mostly confirms what I already believed, but provides very legible evidence.
I'm a bit surprised that none of the definitions you encountered focused on phenomenal consciousness: the feeling of what it's like to experience the world from a first-person perspective, i.e. what p-zombies lack.
I don't want to speculate much here, but it's also possible that people mentioned this definition and you translated what they said into something more concrete and unambiguous (which I think might be reasonable, depending on whether you are eliminativist about phenomenal consciousness).
Curated.
I like that this went out and did some 'field work', and is clear about the process, so you can evaluate how compelling you find it. I found the concept of a conflationary alliance pretty helpful.
That said, I don't think the second half of the article argues especially well for a "consciousness conflationary alliance" existing. I did immediately think "oh this seems like a fairly likely thing to exist as soon as it's pointed out" (in particular given some recent discussion on why consciousness is difficult to talk about), but I think if it wasn't immediately intuitive to me, the second half of the post wouldn't really have convinced me.
Still, I like this post for object-level helping me realize how many ways people were using "consciousness", and giving me some gears to think about re: how rationality might get wonky around politics.
I think many of the different takes you listed as "consciousness as... X" can actually be held together and are not mutually exclusive :)
Also, you may enjoy seeing David Chalmers's paper on The Meta-Problem of Consciousness... "the problem of explaining why we think consciousness is hard to explain" in the first place. https://philarchive.org/archive/CHATMO-32
Tl;dr: In this post, I argue from many anecdotes that the concept of 'consciousness' is more conflated than people realize, in that there's a lot of divergence in what people mean by "consciousness", and people are unaware of the degree of divergence. This confusion allows the formation of broad alliances around the value of consciousness, even when people don't agree on how to define it. What their definitions do have in common is that most people tend to use the word "consciousness" to refer to an experience they detect within themselves and value intrinsically. So, it seems that people are simply learning to use the word "consciousness" to refer to whatever internal experience(s) they value intrinsically, and thus they agree whenever someone says "consciousness is clearly morally valuable" or similar.
I also introduce the term "conflationary alliance" for alliances formed by conflating terminology.
Executive Summary
Part 1: Mostly during my PhD, I somewhat-methodically interviewed a couple dozen people to figure out what they meant by consciousness, and found that (a) there seems to be a surprising amount of diversity in what people mean by "consciousness", and (b) they are often surprised to find out that other people mean different things when they say "consciousness". This has implications for AI safety advocacy because AI will sometimes be feared and/or protected on the grounds that it is "conscious", and it's good to be able to navigate these debates wisely.
(Other heavily conflated terms in AI discourse might include "fairness", "justice", "alignment", and "safety", although I don't want to debate any of those cases here. This post is going to focus on consciousness, and on the general structure of alliances built around confused concepts.)
Part 2: When X is a conflated term like "consciousness", large alliances can form around claims like "X is important" or "X should be protected". Here, the size of the alliance is a function of how many concepts get conflated with X. Thus, the alliance grows because of the confusion of meanings, not in spite of it. I call this a conflationary alliance. Persistent conflationary alliances resist disambiguation of their core conflations, because doing so would break up the alliance into factions who value the more precisely defined terms. This resistance to deconflation can be deliberate, or merely a social habit or inertia. Either way, groups that resist deconflation tend to last longer, so conflationary alliance concepts have a way of sticking around once they take hold.
Part 1: What people mean by "consciousness".
"Consciousness" is an interesting word, because many people have already started to notice that it's a confused term, yet there is still widespread agreement that conscious beings have moral value. You'll even find some people taking on strange positions like "I'm not conscious" or "I don't know if I'm conscious" or "lookup tables are conscious", as if rebelling against the implicit alliance forming around the "consciousness" concept. What's going on here?
To investigate, over about 10 years between 2008 and 2018 I informally interviewed dozens of people who I noticed were interested in talking about consciousness, for 1-3 hours each. I did not publish these results, and never intended to, because I was mainly just investigating for my own interest. In retrospect, it would have been better, for me and for anyone reading this post, if I'd made a proper anthropological study of it. I'm sorry that didn't happen. In any case, here is what I have to share:
"Methodology"
Extremely informal; feel free to skip or just come back to this part if you want to see my conclusions first.
Whom did I interview?
Mostly academics I met in grad school, in cognitive science, AI, ML, and mathematics. In an ad hoc manner at academic or other intellectually-themed gatherings, whenever people talked about consciousness, I gravitated toward the conversation and tried to get someone to spend a long conversation with me to unpack what they meant.
How did I interview them?
First, early in the discussion, I would ask "Are you conscious?" and they would almost always say "yes". If they said "no" or "I don't know", we'd have a different conversation, which maybe happened like 3 times, essentially excluding those people from the "study".
For everyone who said "yes I'm conscious", I would then ask "How can you tell?", and they'd invariably say "I can just tell/sense/perceive/know that I am conscious" or something similar.
I would then ask them to somehow pay closer attention to the consciousness thing or aspect of their mind that they could just "tell" was there, and to "tell" me more about that consciousness thing they were finding within themselves. "What's it like?" I would ask, or similar. If they felt incapable of introspection (maybe 20% felt that way?), I'd ask them to introspect on other things as a warm-up, like how their body felt.
I did not say "this is an interview" or anything official-sounding, because honestly I didn't feel very official about it.
When they defined consciousness using common near-synonyms like "awareness" or "experience", I asked them to instead describe the structure of the consciousness process, in terms of moving parts and/or subprocesses, at a level that would in principle help me to programmatically check whether the processes inside another mind or object were conscious.
Often it took me 2-5 push-backs to get them focusing on the 'structure' of what they called consciousness and not just synonyms for it, but if they stuck with me for 10 minutes, they usually ended up staying in the conversation beyond that, for more like 1-3 hours in total, with them attending for a long time to whatever inside them they meant by "consciousness". Sometimes the conversation ended more quickly, in like 20 minutes, if the notion of consciousness being conveyed was fairly simple to describe. Some people seemed to have multiple views on what consciousness is, in which case I talked to them longer until they became fairly committed to one main idea.
Caveats
I'm mainly confident only in the conclusion that people have a lot of different mental processes in mind when they say "consciousness", and are surprised to hear that others mean very different things by it.
I didn't take many notes or engage anyone else to longitudinally observe these discussions, or do any other kind of adversarially-robust-scientist stuff. I do not remember the names of the people with each answer, and I'm pretty sure I have a bias where I've more easily remembered answers that were given by more than one person. Nonetheless, I think my memory here is good enough to be interesting and worth sharing, so here goes.
Results
Epistemic status: reporting from memory.
From the roughly thirty conversations I remember having, below are the answers I remember getting. Each answer is labeled with a number (n) roughly counting the people I remember giving that answer. After most of the conversations I told people about the answers other people had given, and >80% of the time they seemed surprised:
Note: people with this answer tended to have shorter conversations with me than the others, because the idea was simpler to explain than most of the other answers.
Note: Unlike the others, I don't remember this person saying they considered consciousness to be valuable.
So what is "consciousness"?
It's a confused word that people reliably use to refer to mental phenomena they value intrinsically, with surprising variation in what specifically they have in mind when they say it. As a result, we observe broad agreement that consciousness is valuable, alongside broad and mostly unnoticed disagreement about what it actually is.
What can be done about this?
For one thing, when people digress from a conversation to debate about "consciousness", nowadays I usually try asking them to focus away from "consciousness" and instead talk about either "intrinsically valued cognition" or "formidable intelligence". This usually helps the conversation move forward without having to pin down what precisely they meant by "consciousness".
More generally, this variation in the meanings people intend by the word "consciousness" has implications for how we think about alliances that form around consciousness as a core value.
Part 2: The conflationary alliance around human consciousness
Epistemic status: personal sense-making from the observations above
Most people use the word "consciousness" to refer to a cognitive process that they consider either terminally valuable (valued for its own sake) or instrumentally valuable (essential to other things they value).
Thus, it's easy to form alliances or agreement around claims like "consciousness is clearly morally valuable" or "conscious beings must be protected".
Such utterances reinforce the presumption that consciousness must be something valuable, but without pinning down specifically what is being referred to. This vagueness in turn makes the claims more broadly agreeable, and the alliance around the value of human consciousness strengthens.
I call this a conflationary alliance, because it's an alliance supported by the conflation of concepts that would otherwise have been valued by a smaller alliance. Here, the size of the alliance is a function of how many concepts get conflated with the core value term.
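To make the size claim concrete, here is a toy formalization (my own notation; a sketch of the intuition, not something the conversations measured): suppose each precise concept $C_i$ hiding under the umbrella term is intrinsically valued by a set of people $S_i$. An alliance around any single deconflated concept can recruit only $|S_i|$ members, while the conflationary alliance around the umbrella term can recruit

$$\left| S_1 \cup S_2 \cup \cdots \cup S_k \right| \;\ge\; \max_i |S_i|,$$

a quantity that is weakly increasing in $k$, the number of concepts conflated under the term. Deconflating the term splits this union back into its smaller factions.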
A persistent conflationary alliance must, tautologically, resist the disambiguation of its core conflations. The resistance can arise by intentional design of certain Overton windows or slogans, or arise simply by natural selection acting on the ability of memes to form alliances that reinforce them.
Correspondingly, there are lots of social patterns that somehow end up protecting the conflated status of "consciousness" as a justification for the moral value of human beings. Some examples:
Bob: You shouldn't eat pigs; they're conscious beings capable of suffering, you know!
Alice: There's no scientific consensus on what consciousness is. It's mysterious, and I believe it's unique to humans. [continues eating porkchop]
Charlie: AI is getting really powerful these days; I'm worried we might end up building conscious machines.
Dana: Don't worry; there is no consensus on what consciousness is, because it's a mystery. It's hubris to think scientists are able to build conscious machines!
Charlie: [feels relieved] Hmm, yeah, good point.
Faye: Consciousness is special to humans and other living organisms, not machines. How it works is still a mystery to scientists, and definitely not something we can program into a computer.
Eric: But these days AI systems are trained, not programmed, and how they work is mysterious to us, just like consciousness. So, couldn't we end up making them conscious without even knowing it?
Faye: Perhaps, but the fact that we don't know means we shouldn't treat them as valuable in the way humans are, because we know humans are conscious. At least I am; aren't you?
Eric: Yes of course I'm conscious! [feels insecure about whether others will believe he's conscious] When you put it that way, I guess we're more confident in each other's consciousness than we can be about the consciousness of something different from us.
What should be done about these patterns? I'm not sure yet; a topic for another day!
Conclusion
In Part 1, I described a bunch of slightly-methodical conversations I've had, where I learned that people are referring to many different kinds of processes inside themselves when they say "consciousness", and that they're surprised by the diversity of other people's answers. I've also noticed people use "consciousness" to refer to things they value, either terminally or instrumentally. In Part 2, I note how this makes it easier to form alliances around the idea that consciousness is valuable. There seems to be a kind of social resistance to clarification about the meaning of "consciousness", especially in situations where someone is defending human moral superiority or priority, or avoiding questions about it. I speculate that these conversational patterns further perpetuate the notion that "consciousness" refers to something inherently mysterious. In such cases, I often find it helpful to ask people to focus away from "consciousness" and instead talk about either "intrinsically valued cognition" or "formidable intelligence", whichever better suits the discussion at hand.
In future posts I plan to discuss the implications of conflationary terms and alliances for the future of AI and AI policy, but that work will necessarily be more speculative and less descriptive than this one.
Thanks for reading!