My answer is a rather standard compatibilist one: the algorithm in your brain produces the sensation of free will as an artifact of an optimization process.
There is nothing you can do about it (you are executing an algorithm, after all), but your subjective perception of free will may change as you interact with other algorithms, like me or Jessica or whoever. There aren't really any objective intentional "decisions", only our perception of them. Decision theories, therefore, are just byproducts of all these algorithms executing. It doesn't matter, though, because you have no choice but to feel that decision theories are important.
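To make the point concrete, here's a toy sketch of what I mean by a "decision" being internal to a deterministic algorithm (the options and the utility scores are made up for illustration, not a model of any real brain):

```python
# A fully deterministic program that nonetheless "considers" alternatives
# before acting. The option set and scores are hypothetical stand-ins.

def utility(option: str) -> float:
    """Deterministic scoring of options (invented numbers)."""
    scores = {"stay home": 1.0, "go for a walk": 2.5, "write a post": 2.0}
    return scores[option]

def decide(options: list[str]) -> str:
    """Evaluate each alternative internally, then act.

    The loop over alternatives is what, from the inside, feels like
    "choosing" -- yet the output is fixed by the inputs: run it again
    and the same option wins every time.
    """
    return max(options, key=utility)

if __name__ == "__main__":
    print(decide(["stay home", "go for a walk", "write a post"]))
    # Always prints "go for a walk": evaluating options and determinism coexist.
```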
So, watch the world unfold before your eyes, and enjoy the illusion of making decisions.
I wrote about this over the last few years:
https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty
https://www.lesswrong.com/posts/436REfuffDacQRbzq/logical-counterfactuals-are-low-res
Thanks, I'll revisit these. They seem like they might point toward a useful resolution for better modeling values.
Jessica recently wrote about difficulties with physicalist accounts of the world and alternatives to logical counterfactuals. In my recent post on the "deconfusing human values" research agenda, Charlie left a comment highlighting that my current model depends on a notion of "could have done something else" to talk about decisions.
Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn't have turned out any other way than it did, because I only ever experience myself as being in a single causal history. Yet I also suspect this is not the whole story, because it appears the world I find myself in is one of many possible causal histories, possibly realized in many causally isolated worlds after the point where they diverge (i.e. a non-collapse interpretation of quantum mechanics).
So this leaves me in a weird place. When thinking about values, it often makes sense to think about the downstream effects of values on decisions and actions, and in fact many people try to infer upstream values from observations of downstream behaviors. Yet the notion of "deciding" implies there was some choice to make, and I suspect maybe there wasn't. Thus I hold theories that conflict with each other yet seek to explain the same phenomena, so I'm confused.
Seeking to see through this confusion: what are some ways of reconciling the experience of determinism with the experience of freedom of choice, or free will?
Since this bears on how to think about decision theory, my hope is that people can share how they've thought about this question and tried to resolve it.