This post is a mixture of two questions: "interventions" from an agent which is part of the world, and restrictions
The first is actually a problem, and is closely related to the problem of how to extract a single causal model, one that is executed repeatedly, from a universe in which everything only happens once. Pearl's answer (from Chapter 7 of Causality, IIRC), which I find 80% satisfying, is to use external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment ...
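To make "intervention as considering a system in isolation" concrete, here is a minimal sketch of Pearl-style intervention as graph surgery. The model, variable names, and probabilities are illustrative assumptions of mine, not taken from Pearl's text:

```python
import random

def sample(do_sprinkler=None):
    """Sample from a toy structural causal model:
    rain -> sprinkler, (rain, sprinkler) -> wet.
    Passing do_sprinkler=True/False replaces the sprinkler's mechanism
    (an intervention); None keeps the observational model."""
    rain = random.random() < 0.3
    if do_sprinkler is None:
        # Observationally, people run the sprinkler less when it rains.
        sprinkler = random.random() < (0.1 if rain else 0.6)
    else:
        # Graph surgery: cut the rain -> sprinkler edge, force a value.
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

random.seed(0)
obs = [sample() for _ in range(10000)]
intervened = [sample(do_sprinkler=True) for _ in range(10000)]

# Observationally, seeing the sprinkler on is evidence against rain;
# under do(sprinkler=True), that correlation is severed.
p_rain_given_sprinkler_obs = (
    sum(r for r, s, _ in obs if s) / max(1, sum(s for _, s, _ in obs))
)
p_rain_intervened = sum(r for r, _, _ in intervened) / len(intervened)
```

The point of the sketch: conditioning on the sprinkler changes your estimate of rain, while intervening on it does not, because the intervention deletes the incoming edge rather than updating on it.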
Very interesting post. I was very prepared to praise it with "this draws some useful categories for me," but it began to get less clear as I tried more examples. And I'm still trying to come up with a distinction between brinksmanship and extortion. I've thought about the payoff matrices (they look the same), and whether "unilateral attack vs. not" is a distinguishing factor (I don't think so). I still can't find a clear distinction.
Examples:
(1) You say that releasing nude photos is in the blackmail category. But who's the audience?
(2) For n=1, m lar...
In this possible world, it is the case that "A" returns Y upon being given those same observations. But the output of "A" on those observations is a fixed computation, so you now need to reason about a possible world that is logically incoherent, given your knowledge that "A" in fact returns X. This is what makes it a logical counterfactual: a "possible world" that is logically incoherent.
Simpler solution: in that world, your code is instead A', which is exactly like A, except that it returns Y ...
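The A'-construction can be sketched in a few lines. The toy agent, its observations, and its outputs here are illustrative assumptions; the point is only that A' is a different, perfectly coherent program, so no logically incoherent world is needed:

```python
def A(observation):
    """A toy fixed agent: its output on any observation is a fixed computation."""
    return "X"

def make_variant(agent, observation, forced_output):
    """Build A': identical to `agent` except that on `observation` it
    returns `forced_output`. Reasoning about A' is logically coherent,
    unlike supposing A itself returned something other than what it
    actually computes."""
    def variant(obs):
        if obs == observation:
            return forced_output
        return agent(obs)
    return variant

# A' agrees with A everywhere except on the counterfactual observation.
A_prime = make_variant(A, "those_observations", "Y")
```

The design choice is that the counterfactual is located in which program is running, not in what a fixed program outputs.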
Causal inference has long been about how to take small assumptions about causality and turn them into big inferences about causality. It has long been known to be very bad at getting causal knowledge from nothing.
For the first: Well, yep, that's why I said I was only 80% satisfied.
For the second: I think you'll need to give a concrete example, with edges, probabilities, and functions. I'm not seeing how to apply complexity-based reasoning in a type-causality setting, where it's assumed you already have actual probabilities of co-occurrence.