Introduction

This is an attempt to build a basis for formalising logical counterfactuals. It is not workable as it stands, but I think it can potentially solve the problem of "outthinking the counterfactual".
For causal counterfactuals, we build a directed acyclic causal graph. We then modify the value of the node representing the condition and recompute the values of its descendant nodes. I try to mirror this approach.
The nodes of our new graph will of course be propositions, and the edges inferences. There are problems, however. An inference can have multiple antecedents, and we can't just represent this by giving each of them an edge to the consequent, because there can also be multiple ways to infer the same proposition, and it makes much more sense for multiple incoming edges to mean that. The graph also won't be acyclic, so even if we found a way to recompute along one edge, we could get different outcomes depending on how we move through the graph. To resolve this, we need to look at logic a bit differently.
I'm using a formulation of logic as outlined by Donald Hobson here. TL;DR: all inference rules have only one antecedent; they are substitutions which lead to a logically equivalent statement. Since logical equivalence is symmetric, we will make the graph undirected, which is simpler. Our graph will have one node for every grammatical proposition of the theory, and an edge between any two propositions connected directly by an inference rule.
A proof of a proposition x is then a path through the graph from ⊤ to x.
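To make this picture concrete, here is a minimal Python sketch of the proof-as-path idea. It uses heavy simplifying assumptions that are mine, not the post's: propositions are plain strings, ⊤ is spelled "T", and the two symmetric string-rewrite "inference rules" are invented purely for illustration, not a real proof system.

```python
from collections import deque

# Hypothetical equivalences, each usable in both directions.
RULES = [
    ("T", "p|~p"),     # a stand-in axiom-style equivalence
    ("p|~p", "~p|p"),  # a commutativity instance
]

def neighbours(s):
    """All propositions reachable from s by one rewrite, in either direction."""
    out = set()
    for a, b in RULES:
        for lhs, rhs in ((a, b), (b, a)):  # logical equivalence is symmetric
            i = s.find(lhs)
            while i != -1:
                out.add(s[:i] + rhs + s[i + len(lhs):])
                i = s.find(lhs, i + 1)
    return out

def prove(goal, start="T", limit=10000):
    """Breadth-first search for a path start -> goal; the path is the proof."""
    queue, parent = deque([start]), {start: None}
    while queue and len(parent) < limit:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in neighbours(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

print(prove("~p|p"))  # -> ['T', 'p|~p', '~p|p']
```

The returned path is exactly a proof in the sense above: a walk through equivalent propositions from ⊤ to the goal.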
Logical Counterfactual of a Set Conditional
Now, how do we solve the cyclicality? By cutting off so much that you can't go around the conditioning statement. It's clear that we can't change just that statement on its own, because you could trivially do all the same proofs from its double negation. So to begin, we will define the condition not as a single statement but as a set of them that is modified. Some definitions:
e: A single proposition.
C: A set of propositions not containing e, such that every proof of e contains at least one element of C, and which is minimal with this property: no nonempty subset of C can be removed without losing it.
B: The set of all propositions that have a proof not containing any element of C.
G: The set of all propositions which are in C or whose negation is in C.
For K⊂G, a K-proof of x is a path from ⊤ to x in a modified version of the graph: for every z∈G and every w∈B, if there is an inference from w to z, then the inference rule it uses cannot be used with the same values for the parameters as in that use; and for every y∈K, y≡⊤ has been added as an inference rule. I write K⊢x for "there is a K-proof of x".
(What do I mean by parameters? Say you have a proposition like ((a⇒b)∧(a⇒b))∨c and apply the rule α∧α≡α to conclude (a⇒b)∨c; then I say that α is a parameter and its value is "(a⇒b)".)
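The K-proof definition can be sketched as a search on a cut graph. In this toy version (all names and the tiny adjacency dict are hypothetical), I coarsely block whole edges rather than individual parameter instantiations of a rule, and every y∈K simply gains an edge to ⊤, standing in for the added rule y≡⊤.

```python
from collections import deque

def k_prove(goal, adj, K, blocked, start="T"):
    """BFS for a K-proof: a path start -> goal in the modified graph.
    adj: undirected adjacency dict; blocked: set of frozenset edges that
    stand in for the forbidden B -> G inference uses; K: propositions
    glued to ⊤ by the new rules y ≡ ⊤."""
    def neighbours(s):
        out = {t for t in adj.get(s, ()) if frozenset((s, t)) not in blocked}
        if s == start:
            out |= K          # the added y ≡ ⊤ edges, seen from ⊤
        if s in K:
            out.add(start)    # and seen from y
        return out
    queue, parent = deque([start]), {start: None}
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in neighbours(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# Toy graph: ⊤ proves e only via c. Cutting the ⊤–c edge and assuming
# ~c instead makes ~e provable while e no longer is.
adj = {"T": ["c"], "c": ["T", "e"], "~c": ["~e"], "~e": ["~c"]}
blocked = {frozenset(("T", "c"))}
print(k_prove("~e", adj, K={"~c"}, blocked=blocked))  # -> ['T', '~c', '~e']
```

The point of the cut is visible in the example: without it, the searcher could route around the condition through c and reach e anyway.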
Let f:C→G be a function which assigns to every element of C either itself or its negation. Then if ¬(f(C)⊢⊥) and f(C)⊢e, then "If f(C), then (counterlogically) e." is true, and if f(C)⊢¬e, then "If f(C), then (counterlogically) ¬e." is true.
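Under the same toy encoding as above (strings for propositions, whole-edge blocking, all concrete names mine), the evaluation of such a counterlogical conditional might look like this: first check that f(C) is K-consistent, then test which of e, ¬e is K-provable.

```python
from collections import deque

def reachable(goal, adj, K, blocked, start="T"):
    """Is there a K-proof (path in the cut graph) from ⊤ to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            return True
        nxt = {t for t in adj.get(s, ()) if frozenset((s, t)) not in blocked}
        if s == start:
            nxt |= K
        if s in K:
            nxt.add(start)
        for t in nxt - seen:
            seen.add(t)
            queue.append(t)
    return False

def counterlogical(e, fC, adj, blocked):
    """'If f(C), then (counterlogically) e?' -> True / False / None."""
    if reachable("⊥", adj, fC, blocked):
        return None               # f(C) ⊢ ⊥: the condition is K-inconsistent
    if reachable(e, adj, fC, blocked):
        return True               # f(C) ⊢ e
    if reachable("~" + e, adj, fC, blocked):
        return False              # f(C) ⊢ ¬e
    return None                   # neither e nor ¬e is K-provable

adj = {"T": ["c"], "c": ["T", "e"], "~c": ["~e"], "~e": ["~c"]}
blocked = {frozenset(("T", "c"))}
print(counterlogical("e", {"~c"}, adj, blocked))  # -> False, i.e. "If ~c, then ~e"
```

Note that the definition leaves a third outcome open: if f(C) is inconsistent, or neither e nor ¬e is K-provable, the conditional gets no truth value here.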
Conclusion
We can now define the counterfactual of a single proposition in terms of that of sets. For this we need to define a C and an f depending on that proposition. This is where I'm stuck for now. Proceeding from here would involve some combination of finding restrictions on which values of C and f give defined answers, considerations from the game-theoretic applications that can further limit them (after all, it may well be that the conditional proposition on its own is insufficient for a unique answer, and that more information about the context is needed to ask for the "correct" one), and a proof of invariance of the result within this sufficiently narrowed-down space.
If something like this is possible, it could explain why running logical inductors for a short time can often give intuitively correct answers to logical counterfactuals. If the conditional is "closer" to the consequent than to the axioms, then a proof representing a path from the conditional to the consequent is likely to be found before a proof of the consequent itself. The broad cutting of this method ensures that this second proof is never found.
I hope someone with more mathematical background knowledge (and time) than me can complete this in a fruitful way.