Jessica recently wrote about difficulties with physicalist accounts of the world and alternatives to logical counterfactuals. In my recent post about the deconfusing human values research agenda, Charlie left a comment highlighting that my current model depends on a notion of "could have done something else" to talk about decisions.

Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn't have turned out any other way than it did, because I only ever experience myself to be in a single causal history. Yet I also suspect this is not the whole story, because the world I find myself in appears to be one of many possible causal histories, possibly realized in many causally isolated worlds after the point where they diverge (as in a non-collapse interpretation of quantum physics).

So this leaves me in a weird place. When thinking about values, it often makes sense to think about the downstream effects of values on decisions and actions, and in fact many people try to infer upstream values from observations of downstream behaviors. Yet the notion of "deciding" implies there was some choice to make, and I think maybe there wasn't. Thus I have theories that conflict with each other yet seek to explain the same phenomena, so I'm confused.

Seeking to see through this confusion, I'd like to know: what are some ways of reconciling the experience of determinism with the experience of freedom of choice or free will?

Since this bears on how to think about decision theory, my hope is that people can share how they've thought about this question and tried to resolve it.

My answer is a rather standard compatibilist one: the algorithm in your brain produces the sensation of free will as an artifact of an optimization process.

There is nothing you can do about it (you are executing an algorithm, after all), but your subjective perception of free will may change as you interact with other algorithms, like me or Jessica or whoever. There aren't really any objective intentional "decisions", only our perception of them. Decision theories, therefore, are just byproducts of all these algorithms executing. It doesn't matter though, because you have no choice but to feel that decision theories are important.

So, watch the world unfold before your eyes, and enjoy the illusion of making decisions.

I wrote about this over the last few years:

https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty

https://www.lesswrong.com/posts/TQvSZ4n4BuntC22Af/decisions-are-not-about-changing-the-world-they-are-about

https://www.lesswrong.com/posts/436REfuffDacQRbzq/logical-counterfactuals-are-low-res

Thanks, I'll revisit these. They seem like they might be pointing towards a useful resolution I can use to better model values.

shminux:
Feel free to let me know either way, even if you find that the posts seem totally wrong or missing the point.
Gordon Seidoh Worley:
Okay, so now that I've had more time to think about it, I do really like the idea of thinking of "decisions" as the subjective expression of what it feels like to learn what universe you are in. This holds true for the third-person perspective of considering the "decisions" of others: they still go through the whole process that feels from the inside like choosing or deciding, but from the outside there is no need to appeal to this to talk about "decisions". Instead, to outside observers, "decisions" are just resolutions of uncertainty about what will happen to a part of the universe modeled as another agent. This seems quite elegant for my purposes, as I don't run into the problems associated with formalizing UDT (at least, not yet), and it lets me modify my model for understanding human values to push "decisions" outside of it or into the after-the-fact part.
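
To make that framing a bit more concrete, here is a minimal toy sketch of the "decisions as resolutions of uncertainty" view (my own illustration, not anything from the linked posts; the names and scenario are made up): the agent is just a deterministic function, and what an outside observer calls a "decision" is nothing more than the moment their probability distribution over the agent's output collapses onto what actually happens.

```python
import random

def agent_policy(observation):
    """A fully deterministic 'agent': given the same observation it always
    produces the same action. From the inside this step feels like deciding;
    from the outside it is just function application."""
    return "take umbrella" if observation == "cloudy" else "leave umbrella"

class OutsideObserver:
    """Models the agent's upcoming action as a probability distribution.
    The 'decision' shows up for the observer only as the resolution of this
    uncertainty, not as a causal fork where something else could have happened."""

    def __init__(self):
        # Prior uncertainty over what the agent will do.
        self.belief = {"take umbrella": 0.5, "leave umbrella": 0.5}

    def observe_action(self, action):
        # Learning which universe you are in: the distribution collapses
        # onto the action that actually occurred.
        self.belief = {a: (1.0 if a == action else 0.0) for a in self.belief}
        return self.belief

# The world, including the agent's observation, is fixed before any "choice".
observation = random.choice(["cloudy", "sunny"])
action = agent_policy(observation)  # deterministic given the observation

observer = OutsideObserver()
print("before:", observer.belief)                   # genuine uncertainty
print("after: ", observer.observe_action(action))   # uncertainty resolved
```

Nothing in the sketch ever "could have done something else"; the observer's uncertainty is the only place the appearance of a choice lives.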