I gave a talk at FHI ages ago on how to use causal graphs to solve Newcomb-type problems. It wasn't even an original idea: Spohn had something similar in 2012.
I don't think any of this stuff is interesting or relevant for AI safety. There's a pretty big literature on model robustness and algorithmic fairness that uses causal ideas.
If you want to worry about the end of the world, we have climate change, pandemics, and the rise of fascism.
Counterfactuals (in the potential outcome sense used in statistics) and Pearl's structural equation semantics for causality are equivalent.
https://auai.org/uai2021/pdf/uai2021.89.preliminary.pdf (this really is preliminary, e.g. they have not yet uploaded a newer version that incorporates peer review suggestions).
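To make the equivalence concrete, here is a minimal sketch (my notation, not taken from the linked paper). Take a two-variable structural equation model:

$$A = f_A(U_A), \qquad Y = f_Y(A, U_Y).$$

Pearl's semantics defines the effect of do(a) by replacing the equation for A with the constant a, which gives exactly the potential outcome statisticians write as Y(a) or Y^a:

$$Y(a) = f_Y(a, U_Y),$$

and consistency ($A = a \implies Y = Y(a)$) falls out immediately.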
---
You can't do the stuff in the second paper without worrying about the stuff in the first (unless your model is very simple).
Pretty interesting.
Since you are interested in policies that operate along some paths only, you might find these of interest (a quick sketch of the basic counterfactual involved follows the links):
https://pubmed.ncbi.nlm.nih.gov/31565035/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330047/
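The basic object in that line of work is a path-specific (nested) counterfactual. As a rough sketch, not specific to either paper: with treatment A, mediator M, and outcome Y, the counterfactual

$$Y(a, M(a'))$$

sends $a$ down the direct edge $A \to Y$ while sending $a'$ down the path through $M$, and a path-specific effect contrast looks like

$$E[Y(a, M(a'))] - E[Y(a', M(a'))].$$

Policies that act along some paths only are built out of objects like this.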
We have some recent stuff on generalizing MDPs to have a causal model inside every state ('path dependent structural equation models', to appear in UAI this year).
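Very roughly, and emphatically not the formalism from that paper, here is a toy illustration of the idea of an MDP whose state carries a small structural equation model, with actions as interventions (all names here are made up):

```python
import random

# Toy illustration: each "state" is generated by a tiny SEM over (Z, X, Y),
# and an action is an intervention do(X = a) that replaces the equation for X.

def step(prev_y, action, rng):
    """One transition: the SEM inside the new state is seeded by the previous
    state's Y; `action` is do(X = action), or None for no intervention."""
    z = prev_y + rng.gauss(0, 1)                                # Z = f_Z(previous Y, noise)
    x = action if action is not None else z + rng.gauss(0, 1)   # do(X = a) replaces f_X
    y = 0.5 * z + x + rng.gauss(0, 1)                           # Y = f_Y(Z, X, noise)
    return {"Z": z, "X": x, "Y": y}

rng = random.Random(0)
state = {"Z": 0.0, "X": 0.0, "Y": 0.0}
for t in range(5):
    state = step(state["Y"], action=1.0, rng=rng)  # always play the intervention do(X = 1)
    print(t, state)
```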
You can read Halpern's stuff if you want an axiomatization of something like the responses to the do-operator.
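For a flavor of what those axioms look like (my paraphrase, so check Halpern's papers for the exact statements), two central ones are effectiveness and composition:

$$\text{Effectiveness: } X(x) = x,$$
$$\text{Composition: } W(x) = w \implies Y(x, w) = Y(x),$$

i.e., intervening to set X to x actually gives you x, and additionally forcing a variable to the value it would have taken anyway changes nothing.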
Or you can try to understand the relationship between do() and counterfactual random variables, and formulate causality as a missing data problem, whereby a full data distribution on counterfactuals and an observed data distribution on factuals are related via a coarsening process.
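Roughly, and in my own notation rather than any particular paper's: for a binary treatment A, the full data law is the joint distribution over counterfactuals,

$$p(Y(1), Y(0), A),$$

while the observed data law $p(Y, A)$ is a coarsened version of it, with consistency $Y = Y(A)$ playing the role of the coarsening mechanism (you only ever see the counterfactual corresponding to the treatment that actually occurred). Identification is then about recovering functionals of the full data law, like $E[Y(1)]$, from the observed law, with assumptions like $Y(a) \perp A \mid C$ playing the role that missing-at-random type assumptions play in ordinary missing data problems.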
It's important to internalize that the intellectual world lives in the attention economy, like everything else.
Just like "content creators" on social platforms think hard about capturing and keeping attention, so do intellectuals and academics. Clarity and rigor is a part of that.
No one has the time, energy, or crayons (as the saying goes) for half-baked ramblings on a blog or forum somewhere.