I've taken a somewhat caricatured view of moral realism[1], describing it, essentially, as the random walk of a process defined by its "stopping" properties.

In this view, people start improving their morality according to certain criteria (self-consistency, simplicity, what they would believe if they were smarter, etc.), and continue this process until the criteria are finally met. Because there is no way of knowing how "far" the process can run before the criteria are met, it can drift very far indeed from its starting point.
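To make this "random walk to stopping point" picture concrete, here is a toy sketch in Python. It is purely my own illustration, with an arbitrary made-up stopping criterion, not anything drawn from the moral realist literature: an iterative process that halts only when its criterion is satisfied, so the stopping rule says nothing about how far the process drifts before it stops.

```python
import random

def moral_random_walk(criteria_met, max_steps=10_000, seed=None):
    """Toy model of 'improve until the criteria are met'.

    `criteria_met` stands in for self-consistency, simplicity, etc.
    Nothing in the process bounds how far the walk drifts before it stops.
    """
    rng = random.Random(seed)
    position = 0  # the starting morality, represented as a point on a line
    for step in range(1, max_steps + 1):
        if criteria_met(position):
            return position, step  # criteria satisfied: the process stops here
        position += rng.choice([-1, 1])  # one more "improvement" step
    return position, max_steps  # criteria never met within the budget

# Arbitrary stopping criterion for illustration: halt once we've drifted
# at least 25 units from where we started.
final, steps = moral_random_walk(lambda p: abs(p) >= 25, seed=0)
print(f"stopped at {final} after {steps} steps")
```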

Now I would like to be able to argue, from a very anti-realist perspective, that:

  • Argument A: I want to be able to judge that one morality is better than another, based on some personal intuition or judgement of correctness. I want to be able to judge the second alien and evil, even if it is fully self-consistent according to formal criteria, while the first is not fully self-consistent.

Moral realists look like moral anti-realists

Now, I maintain that this "random walk to stopping point" is an accurate description of many (most?) moral realist systems. But it's a terrible description of moral realists themselves. In practice, most moral realists allow for the possibility of moral uncertainty, and hence that their preferred approach might have a small chance of being wrong.

And how would they identify that wrongness? By looking outside the formal process, and checking if the path that the moral "self-improvement" is taking is plausible, and doesn't lead to obviously terrible outcomes.

So, to pick one example from Wei Dai (similar examples can be found in this post on self-deception, and in the "Senator Cruz" section of Scott Alexander's "debate questions" post):

I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

If the moral realist approach included getting into conversations with such systems and thus getting randomly subverted, then the moral realists I know would agree that the approach had failed, no matter how internally consistent it seems. Thus, they allow, in practice, some considerations akin to Argument A: where the moral process ends up (or at least the path that it takes) can affect their belief that the moral realist conclusion is correct.

So moral realists, in practice, do have conditional meta-preferences that can override their moral realist system. Indeed, most moral realists don't have a fully-designed system yet, but have a rough overview of what they want, with some details they expect to fill in later; from the perspective of here and now, they have some preferences, some strong meta-preferences (on how the system should work) and some conditional meta-preferences (on how the design of the system should work, conditional on certain facts or arguments they will learn later).

Moral anti-realists look like moral realists

Enough picking on moral realists; let's now look at moral anti-realists, which is relatively easy for me, as I'm one of them. Suppose I were to investigate an area of morality that I haven't investigated before; say, the political theory of justice.

Then I would expect that, as I investigated this area, I would start to develop better categories than the ones I have now, with crisper and more principled boundaries. I would expect to encounter arguments that would change how I feel and what I value in these areas. I would apply simplicity arguments to tidy up the hodgepodge of half-baked ideas I currently have in that area.

In short, I would expect to engage in moral learning. Which is a peculiar thing for a moral anti-realist to expect...

The first-order similarity

So, to generalise a bit across the two categories:

  1. Moral realists are willing to question the truth of their systems based on facts about the world that should formally be irrelevant to that truth, and use their own private judgement in these cases.
  2. Moral anti-realists are willing to engage in something that looks like moral learning.

Note that the justifications of the two points of view are different: the moral realist can point to moral uncertainty, the moral anti-realist to personal preferences for a more consistent system. And the long-term perspectives are different: the moral realist expects that their process will likely converge to something with fantastic properties, the moral anti-realist thinks it likely that the degree of moral learning is sharply limited, only a few "iterations" beyond their current morality.

Still, in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar. Which is probably why they can continue to have conversations and debates that are not immediately pointless.


  1. I apologise for my simplistic understanding and definitions of moral realism. However, my partial experience in this field has been enough to convince me that there are many incompatible definitions of moral realism, and many arguments about them, so it's not clear there is a single simple thing to understand. So I've tried to define it very roughly, enough so that the gist of this post makes sense. ↩︎

Comments

"""And the long-term perspectives are different: the moral realist expects that their process will likely converge to something with fantastic properties, the moral anti-realist thinks it likely that the degree of moral learning is sharply limited, only a few "iterations" beyond their current morality."""

^ Why do you say this?

Just my impression based on discussing the issue with some moral realists/non-realists.