This post is a mixture of two questions: "interventions" from an agent which is part of the world, and restrictions on the set of allowed interventions.
The first is a real problem, and is closely related to the problem of how to extract a single causal model, executed repeatedly, from a universe in which everything only happens once. Pearl's answer (IIRC from Chapter 7 of Causality), which I find 80% satisfying, is to use external knowledge about repeatability to consider a system in isolation. The same principle is applied whenever a researcher tries to shield an experiment from outside interference.
The second is about limiting the allowed interventions. This looks like a special case of normality conditions, which are described in Chapter 3 of Halpern's book. Halpern's treatment actually involves a normality ordering on worlds, though this can easily be massaged to imply a normality ordering on possible interventions. I don't see any special mileage in making the normality ordering depend on complexity, as opposed to any other arbitrary normality ordering, though someone may be able to find some interesting interaction between normality and complexity.
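To make the "massaging" concrete, here is a minimal sketch of how a normality ordering on worlds induces one on interventions: rank each candidate intervention by the normality of the world it produces. The toy model and the normality score are my own placeholders, not Halpern's.

```python
# Toy structural model: lightning or a lit match causes a forest fire.
def model(exogenous, interventions):
    world = dict(exogenous)
    world.update(interventions)
    if "fire" not in interventions:  # only compute fire if not intervened on
        world["fire"] = world["lightning"] or world["match"]
    return world

def normality(world):
    """Higher is more normal; here simply 'fewer unusual events'."""
    return -sum(world[v] for v in ("lightning", "match", "fire"))

exogenous = {"lightning": True, "match": False}
candidates = [{}, {"lightning": False}, {"match": True}, {"fire": False}]

# The world-level ordering induces an intervention-level ordering.
ranked = sorted(candidates,
                key=lambda i: normality(model(exogenous, i)),
                reverse=True)
for i in ranked:
    print(i, "->", model(exogenous, i))
```

Nothing here depends on the score being complexity-based; any ordering on worlds would induce an ordering on interventions the same way.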
Speaking more broadly, this is part of the general problem that our current definitions of actual causation are extremely model-sensitive, which I consider serious. I don't see a mechanistic resolution, but I did find this essay, which proposes considering interventions in all possible containing models, extremely thought-provoking: http://strevens.org/research/expln/MacRules.pdf
Very interesting post. I was prepared to praise it with "this draws some useful categories for me," but it became less clear as I tried more examples, and I'm still trying to come up with a distinction between brinkmanship and extortion. I've thought about the payoff matrices (they look the same) and about whether "unilateral attack vs. not" is a distinguishing factor (I don't think so), and I still can't find a clear distinction.
Examples:
(1) You say that releasing nude photos is in the blackmail category. But who's the audience?
(2) For n=1, m large: Is a monopolistic buyer who will only choose suppliers offering cut-rate prices an example of brinkmanship here? That strategy seems to have worked quite well for Walmart decades ago, and it works for Hollywood today ( https://www.engadget.com/2018-02-24-black-panther-vfx-models.html ).
In this possible world, "A" returns Y upon being given those same observations. But the output of "A" on those observations is a fixed computation, so you now need to reason about a world that contradicts your knowledge that "A" in fact returns X. This is a logical counterfactual: a "possible world" that is logically incoherent.
Simpler solution: in that world, your code is instead A', which is exactly like A except that it returns Y in this situation. This is the more general solution that falls out of Pearl's account of counterfactuals in domains with a finite number of variables (the "twin network" construction).
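A minimal sketch of the A' move (my toy example, not Pearl's): instead of asking the incoherent question "what if this exact computation returned Y?", intervene on the node holding A's output and leave the rest of the model untouched.

```python
def A(observations):
    """The agent's actual, fixed computation."""
    return "X"

def world(observations, a_output=None):
    """Structural model: downstream facts depend on A's output node.
    Passing a_output severs the node from A, i.e. swaps in A'."""
    out = A(observations) if a_output is None else a_output
    reward = 1 if out == "Y" else 0  # placeholder downstream effect
    return {"output": out, "reward": reward}

obs = ("same", "observations")
factual = world(obs)                        # A returns X, as it in fact does
counterfactual = world(obs, a_output="Y")   # the A' world: same except the output
print(factual, counterfactual)
```

No logically incoherent world is ever constructed: the counterfactual query is answered in a slightly different, perfectly coherent model.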
Last year, my colleagues and I published a paper on Turing-complete counterfactual models ("causal probabilistic programming") that details how to do this; it includes executable code to play with, as well as a formal semantics. Have a look at our predator-prey example, a fully worked instance of this "the counterfactual world is the same except for blah" construction.
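I can't reproduce the paper's code here, but the recipe it automates is Pearl's standard three steps (abduction, action, prediction), which a hedged sketch can show in plain Python; the toy model and numbers below are mine, not from the paper.

```python
import random

def program(u, treatment):
    """u is exogenous noise; everything downstream is deterministic given u."""
    recovered = u["severity"] < (0.8 if treatment else 0.5)
    return {"treatment": treatment, "recovered": recovered}

# 1. Abduction: condition the exogenous noise on the factual observation.
observed = {"treatment": False, "recovered": False}
posterior_u = [u for u in ({"severity": random.random()} for _ in range(10_000))
               if program(u, observed["treatment"]) == observed]

# 2. Action: intervene (set treatment=True). 3. Prediction: replay the program.
cf = sum(program(u, True)["recovered"] for u in posterior_u) / len(posterior_u)
print("P(recovered | do(treatment), factual non-recovery) ~", cf)  # ~0.6 here
```

The counterfactual world is "the same except blah" precisely because the replay reuses the abducted noise and changes only the intervened variable.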
Causal inference has long been about how to turn small assumptions about causality into big inferences about causality. It is very bad at getting causal knowledge from nothing, and this has long been known.
For the first: Well, yep, that's why I said I was only 80% satisfied.
For the second: I think you'll need to give a concrete example, with edges, probabilities, and functions. I'm not seeing how to apply complexity considerations in a type-causality setting, where it's assumed you already have actual probabilities on co-occurrences.
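To be concrete about the shape of specification I mean (all variable names and numbers below are placeholders, not a proposal):

```python
# Explicit variables, edges, structural functions, and exogenous probabilities.
exogenous = {"U": {1: 0.3, 0: 0.7}}            # P(U=1) = 0.3
edges = [("U", "Smoking"), ("Smoking", "Cancer")]
functions = {
    "Smoking": lambda vals: vals["U"],          # Smoking := U
    "Cancer":  lambda vals: vals["Smoking"],    # Cancer := Smoking
}
```

Something at this level of explicitness is what would let me see where a complexity-based normality ordering is supposed to enter.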