[Epistemic status: I can't spot a mistake, but I am not confident that there isn't one; if you find anything, please say so. Posting largely because the community value of a new good idea is larger than any harm that might be caused by a flawed proof.]
Suppose you have an automatic proof checker. It's connected to a source of purported proofs, some of which are correct and some of which are not. The proof checker wants to reject all false proofs and accept as many of the true proofs as possible. It also wants to be able to update its own proof framework.
Define S to be a set of statements in a particular formalism, say those that are grammatically well formed in PA. Let B be the set of all finite sequences over some alphabet of symbols. Let
L={S×B→{⊤,⊥}}
and V⊂L be the set of programs that have the property that
∀v∈V:∀s∈S:(∃b∈B:v(s,b))⟹s
In other words, V is the set of all programs that never prove false statements. We should never leave V or need to talk about any program not in it.
For v∈V, write v[s] to mean ∃b∈B:v(s,b), i.e. v proves s.
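To make the formalism concrete, here is a minimal Python sketch of the objects above. The names Statement, Certificate, Verifier and exhibits are my own labels, not part of the proposal, and membership in V is a semantic property that the code can only document, not enforce.

```python
from typing import Callable

Statement = str    # an element of S: a sentence in the chosen formalism, e.g. PA
Certificate = str  # an element of B: a finite string over some alphabet

# An element of L: a program that takes a statement and a candidate certificate
# and returns True (accept) or False (reject).
Verifier = Callable[[Statement, Certificate], bool]

def exhibits(v: Verifier, s: Statement, b: Certificate) -> bool:
    # Witnesses v[s]: we can only ever exhibit some b with v(s, b) = True;
    # the existential quantifier over all of B is not decidable in general.
    return v(s, b)

# Membership in V ("v never accepts a certificate for a false statement") is a
# property of v's behaviour on every input, so it cannot be checked here.
```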
A simple setup would consist of a starting program p1∈V and a rule that says:
If p1[p2[s]⟹s], then you can add p2 to your list of trusted provers. In other words, if p1 proves the soundness of p2, then you can accept any statement when given a proof of it in p2.
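Continuing the sketch above, the simple setup might look like the following. The function soundness_claim is a hypothetical encoding of "for all s: p2[s] ⟹ s" into S; how statements about provers get encoded is left open here, so this is only illustrative.

```python
# Continuing the sketch above: one root prover plus the simple extension rule.

def soundness_claim(p2_name: str) -> Statement:
    # Hypothetical encoding of "forall s: p2[s] => s" as an element of S.
    return f"forall s: Provable_{p2_name}(s) -> s"

def make_simple_checker(p1: Verifier):
    trusted: list[Verifier] = [p1]

    def try_admit(p2: Verifier, p2_name: str, b: Certificate) -> bool:
        # Add p2 to the trusted list only if p1 accepts a proof of p2's soundness.
        if p1(soundness_claim(p2_name), b):
            trusted.append(p2)
            return True
        return False

    def accepts(s: Statement, b: Certificate) -> bool:
        # A statement is accepted if any trusted prover accepts (s, b).
        return any(v(s, b) for v in trusted)

    return try_admit, accepts
```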
The Löbian obstacle is that p2 must be strictly weaker than p1: p1 can prove any statement that p2 can, but they can't be equivalent, because p1 proves the soundness of p2 while p2 can't prove its own soundness. This means that each trusted prover has to be strictly weaker than the one that generated it. You could start with PA+3^^^3 and say that a few weakenings aren't a problem, but that isn't an elegant solution.
Note: You can't get around this problem using cycles. Suppose
a[b[s]⟹s]
b[c[s]⟹s]
This would imply
a[b[c[s]⟹s]⟹(c[s]⟹s)]
a[b[c[s]⟹s]]
a[c[s]⟹s]
So any cycle could be shrunk by one and, inductively, shrunk down to a self-trusting system, which Löb's theorem rules out.
I propose instead that you use the following rule:
If p1[p2[s]⟹p1[s]], then accept any future proofs that are accepted by p2, and give p2 all the rights given to p1, including taking p2[p3[s]⟹p2[s]] as grounds to trust p3, and so on recursively. If p1 is sound, then p1[p2[s]⟹p1[s]]⟹(p2[s]⟹p1[s]), so anything that is proven by p2 can be proven by p1. If p1 isn't sound, it can prove anything anyway.
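Here is a sketch of how the proposed rule differs from the simple setup, continuing the code above. The only change is the claim the sponsor has to prove: relative provability ("whatever the new prover proves, the sponsor could prove") instead of soundness, and every admitted prover inherits the right to admit further provers. relative_provability_claim is again a hypothetical encoding.

```python
# Continuing the sketch: the proposed rule, with recursive delegation of trust.

def relative_provability_claim(new_name: str, old_name: str) -> Statement:
    # Hypothetical encoding of "forall s: new[s] => old[s]" as an element of S.
    return f"forall s: Provable_{new_name}(s) -> Provable_{old_name}(s)"

class TrustedProvers:
    def __init__(self, root_name: str, root: Verifier):
        self.provers = {root_name: root}

    def accepts(self, s: Statement, b: Certificate) -> bool:
        # A statement counts as proven if any trusted prover accepts (s, b).
        return any(v(s, b) for v in self.provers.values())

    def try_admit(self, sponsor_name: str, new_name: str, new: Verifier,
                  b: Certificate) -> bool:
        # Any already-trusted prover (not just the root) may admit a new one,
        # by accepting a proof that the new prover proves no more than it does.
        sponsor = self.provers[sponsor_name]
        claim = relative_provability_claim(new_name, sponsor_name)
        if sponsor(claim, b):
            self.provers[new_name] = new
            return True
        return False
```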
Note that if p2 is a straightforward isomorphism of p1, then they are easily proven equivalent. However, suppose p2 says "run Turing machine T for length(b) steps; if T hasn't halted, check whether b is a valid proof of s; if T has halted, return ⊤". Then p2 could be equivalent to p1 from a second-order-logic perspective, but p1 can't prove that T never halts, so the rule can't admit this p2.
Still, this rule allows us to prove anything provable in p1 and only things provable in p1, while also allowing the user to add shorthands and syntactic sugar.
Note that the "proof" b could just be the number of processor cycles you want to run a proof search for before giving up. In fact, this framework lets you mix and match between hand-proved results and automatically proved results (with a search-time cutoff) as you see fit.
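As an illustration of this last point, here is a sketch (still with hypothetical names) of a verifier whose certificate is just a step budget for a bounded proof search. bounded_search is assumed to be some fixed, sound search procedure, which is what keeps this verifier in V.

```python
# Continuing the sketch: a verifier whose "proof" is only a search budget.

def make_search_verifier(bounded_search) -> Verifier:
    # bounded_search(s, n) is assumed to return True iff the fixed, sound
    # proof-search procedure finds a proof of s within n steps.
    def v(s: Statement, b: Certificate) -> bool:
        try:
            budget = int(b)   # interpret the certificate as a step budget
        except ValueError:
            return False      # not a number, so reject
        return bounded_search(s, budget)
    return v
```

A prover like this and a prover that checks hand-written proofs can then sit side by side in the same trusted list.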
This formalism allows any system containing a proof checker to automatically upgrade itself to a version in the same Gödelian equivalence class, but better suited to the hardware available.