Introduction
Recently, the Computational No-Coincidence Conjecture[1] was proposed by the Alignment Research Center, presented as an assumption that might be needed to develop methods for explaining neural networks within their current research approach. In this post, I will show that a plausible extra assumption can be used to amplify the conjecture, proving that it implies an arbitrarily stronger version of itself.
In a slightly weakened version, the conjecture reads:
(weak) Computational no-coincidence conjecture: For a reversible circuit C:{0,1}3n→{0,1}3n, let P(C) be the property that there is no input x to C that ends in n zeros, such that C(x) also ends in n zeros. There exists a polynomial-time verification algorithm V that receives as input:
A reversible circuit C:{0,1}3n→{0,1}3n
An advice string π
such that:
For all C such that P(C) is true, there exists π with length polynomial in the size of C, such that V(C,π)=1.
For 99% of random reversible circuits C for which P(C) is false, no such π exists.
The original conjecture omits "for which P(C) is false". Since such a π always exists when P(C) is true, the original conjecture implies this weak version.
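To make the property P concrete, here is a brute-force checker, a toy sketch in Python. A random permutation of {0,1}3n stands in for a random reversible circuit (an assumption made purely for illustration; real instances are circuits built from reversible gates, and brute force is of course exponential in n).

```python
from itertools import product
import random

def ends_in_n_zeros(bits, n):
    """True if the last n bits of the tuple are all zero."""
    return all(b == 0 for b in bits[-n:])

def property_P(circuit, n):
    """Brute-force check of P: no input ending in n zeros is mapped
    to an output ending in n zeros. `circuit` is a dict modeling a
    reversible map on {0,1}^(3n); this loop is exponential in n."""
    for x in product((0, 1), repeat=3 * n):
        if ends_in_n_zeros(x, n) and ends_in_n_zeros(circuit[x], n):
            return False
    return True

# Toy example: a random permutation plays the role of a random
# reversible circuit (illustrative stand-in only).
n = 2
domain = list(product((0, 1), repeat=3 * n))
image = domain[:]
random.shuffle(image)
circuit = dict(zip(domain, image))
print(property_P(circuit, n))
```

For intuition: the identity permutation falsifies P (any input ending in n zeros maps to itself), while the involution that flips the last bit satisfies it.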
With an additional assumption, I will show that this (weak) conjecture implies a stronger version of itself, in which the "99%" can be replaced by "1−ϵ" for an arbitrarily small ϵ>0. This entails the original conjecture can also be strengthened.
The strategy will be to use the conjecture to define a new problem in NP: deciding whether a reversible circuit is in the set containing all circuits satisfying the property P plus a fraction of those falsifying it. If that fraction is smaller than 1%, we will have a stronger weak conjecture, with a probability bound higher than "99%". Iterating this amplification step, that bound can be replaced by "1−ϵ", arbitrarily close to one[2]. To achieve that, we will employ an extra assumption. We will then apply the results to the original conjecture and discuss how reasonable the extra premise is.
The Amplification Step
Let C be the set of all reversible circuits and let P⊊C denote the set of circuits C such that P(C) is true. As the authors note, deciding P[3] is coNP-hard, and thus, coNP-complete[4], implying ¯¯¯¯P (the complement of P relative to C) is NP-complete.
We assume the (weak) conjecture is true, so there is a polynomial-time verifier V0 such that: for all C∈P, there is a polynomial-sized π with V0(C,π)=1; and for 99% of random circuits C∈¯¯¯¯P, no such π exists. Let V0[5] denote also the set of circuits C (a property) such that there is a π yielding V0(C,π)=1. By definition, V0 is in NP, implying ¯¯¯¯¯V0 is in coNP. Note that ¯¯¯¯¯V0⊆¯¯¯¯P.
Define the set V′0=V0∩¯¯¯¯P. That is, V′0 contains the circuits C∈C such that P(C) does not hold and there is a π with V0(C,π)=1. Since both ¯¯¯¯P and V0 are in NP, so is V′0, and ¯¯¯¯¯¯V′0 is in coNP. Note that V′0⊊¯¯¯¯P, and the conjecture implies Prob(V0(C)|¯¯¯¯P(C))=Prob(V′0(C)|¯¯¯¯P(C))≤10−2, for a random C∈C [6].
As V′0 is in NP, it can be reduced in polynomial time to ¯¯¯¯P, which is NP-complete. Formally, there is a polynomial-time algorithm f0 (a many-one reduction) such that C∈V′0 iff f0(C)∈¯¯¯¯P. Given such a reduction f0, define the set Vf00={C∈C∣f0(C)∈V0}. For any C∈¯¯¯¯¯¯V′0, f0(C)∈P and, by the conjecture, there is a polynomial-sized π yielding V0(f0(C),π)=1, and thus f0(C)∈V0 . Therefore, C∈¯¯¯¯¯¯V′0 implies C∈Vf00. Note that Vf00 is in NP by definition, as we can define a polynomial verification algorithm Vf00(C,π)=V0(f0(C),π) such that C∈Vf00 iff there is a (polynomial-sized) π yielding Vf00(C,π)=1. Also note that ¯¯¯¯¯¯¯¯¯Vf00⊆V′0.
Since V0 and Vf00 are in NP, so is V1=V0∩Vf00 (the problem we are interested in). This implies there is a polynomial-time verifier V1(⋅,⋅) such that for any circuit C∈V1, there is a polynomial-sized π yielding V1(C,π)=1. In particular, such a π exists for all C satisfying P (since P⊆V1) and for all C∈V1∩¯¯¯¯P.
Now define the set of circuits V′1=V1∩¯¯¯¯P=V′0∩Vf00. As V1 and ¯¯¯¯P are in NP, so is V′1. The set V′1 contains all circuits falsifying P that nonetheless exhibit the special structure of circuits satisfying P (there is a π with V1(C,π)=1). As V′1⊆V′0, Prob(V1(C)|¯¯¯¯P(C))=Prob(V′1(C)|¯¯¯¯P(C)) is not higher than Prob(V′0(C)|¯¯¯¯P(C)), and V1(⋅,⋅) is a polynomial-time verifier satisfying both conditions in the weak conjecture. For V1(⋅,⋅) to satisfy a strictly stronger version of the conjecture, with Prob(V′1(C)|¯¯¯¯P(C))<Prob(V′0(C)|¯¯¯¯P(C))≤10−2, we need some extra assumption.
The Extra Assumption
Since V′1=V1∩¯¯¯¯P⊆V′0⊆¯¯¯¯P, it follows that, for a random C∈C,Prob(V′1(C)∣¯¯¯¯P(C))=Prob(V′1(C)∣V′0(C))Prob(V′0(C)∣¯¯¯¯P(C)). So we just need an assumption yielding Prob(C∈V′1|C∈V′0)=Prob(f0(C)∈V′0∣C∈V′0)<1 in order for the polynomial verifier V1 to satisfy a stronger version of the weak conjecture.
Recall that C∈V′0 iff f0(C)∈¯¯¯¯P, and note that Prob(C∈V1|C∈V′0) is equal to Prob(f0(C)∈V0|C∈V′0)=Prob(f0(C)∈V′0|C∈V′0). Thus we need an assumption entailing Prob(f0(C)∈V′0∣f0(C)∈¯¯¯¯P)<1, or, equivalently, Prob(f0(C)∈¯¯¯¯¯¯V′0∣f0(C)∈¯¯¯¯P)>0. This inequality says that, given a random reversible circuit C for which P(f0(C)) is false, f0(C) has non-zero probability of being in ¯¯¯¯¯V0 (not having a π such that V0(f0(C),π)=1). For a random C falsifying P, the conjecture implies Prob(C∈¯¯¯¯¯¯V′0∣C∈¯¯¯¯P)≥99%, thus we might wonder whether this probability bound is robust to applying a reduction to C. Nonetheless, by employing an arbitrary number of iterations of the amplification step, as we will show, we can achieve the desired strengthening of the conjecture with a much weaker premise[7]:
Reduction-Regularity:[8] There is a δ∈(0,1) such that, given any NP problem L⊊¯¯¯¯P with Prob(C∈¯¯¯¯L∣C∈¯¯¯¯P)≥99% for a random C∈C, there is a polynomial-time reduction f(.) from L to ¯¯¯¯P yielding Prob(f(C)∈¯¯¯¯L∣f(C)∈¯¯¯¯P)≥δ for a random C.
While the (weak) conjecture implies Prob(V′0(C)|¯¯¯¯P(C))≤10−2, for a random C∈C, assuming also the premise above (with L=V′0) implies there is a reduction f0 such that Prob(V′1(C)|V′0(C))≤1−δ. Consequently, we obtain Prob(V′1(C)|¯¯¯¯P(C))=Prob(V′1(C)|V′0(C))Prob(V′0(C)|¯¯¯¯P(C))≤10−2(1−δ). This seems a small gain, as δ can be close to zero, but it is sufficient to iteratively amplify the weak conjecture.
Amplifying the Weak Conjecture
The whole process can be repeated to achieve a higher probability bound. The weak conjecture and Reduction-Regularity imply Prob(V1(C)|¯¯¯¯P(C))≥0.99+10−2δ, which would already give us a stronger version of the weak conjecture. But now Reduction-Regularity (with L=V′1) implies there is a polynomial many-one reduction f1 from V′1=V1∩¯¯¯¯P to ¯¯¯¯P such that Prob(f1(C)∉V′1|f1(C)∈¯¯¯¯P)≥δ. Defining V2=V1∩Vf11, where Vf11={C∈C|f1(C)∈V1}, and V′2=V2∩¯¯¯¯P, this entails Prob(¯¯¯¯¯V2(C)|V′1(C))≥δ and Prob(V2(C)|V′1(C))=Prob(V′2(C)|V′1(C))≤1−δ. As Prob(V′1(C)|¯¯¯¯P(C))≤10−2(1−δ) and V′2⊆V′1⊆¯¯¯¯P, it follows that:
$$\mathrm{Prob}(V_2(C)\mid \overline{P}(C)) = \mathrm{Prob}(V'_2(C)\mid V'_1(C))\,\mathrm{Prob}(V'_1(C)\mid \overline{P}(C)) \le 10^{-2}(1-\delta)^2$$
Note that P⊆V2, so V2 would satisfy an even stronger conjecture.
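Before iterating, the shrinking mechanism can be illustrated with a toy simulation. Sets of integers stand in for circuit classes, and each step simply discards at least a δ-fraction of the current V′; nothing about real circuits or reductions is modeled, the point is only the geometric decay of the conditional probability.

```python
import random

random.seed(0)
universe = range(10_000)
p_bar = [c for c in universe if c % 2 == 1]  # toy stand-in for the circuits falsifying P
# Toy stand-in for V'_0: 1% of p_bar, as the weak conjecture allows.
v_prime = set(random.sample(p_bar, len(p_bar) // 100))
delta = 0.3  # illustrative value; the argument only needs some delta in (0, 1)

probs = [len(v_prime) / len(p_bar)]
for i in range(4):
    # Each amplification step keeps at most a (1 - delta)-fraction of V'_i,
    # mimicking Prob(V'_{i+1}(C) | V'_i(C)) <= 1 - delta.
    keep = int(len(v_prime) * (1 - delta))
    v_prime = set(random.sample(sorted(v_prime), keep))
    probs.append(len(v_prime) / len(p_bar))

print(probs)  # decays at least geometrically, starting from 0.01
```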
The amplification step can be iterated inductively. For any positive i∈N, given V′i is in NP, we can construct a reduction fi from V′i to ¯¯¯¯P (via Reduction-Regularity) such that defining Vi+1=Vi∩Vfii and V′i+1=Vi+1∩¯¯¯¯P yields Prob(¯¯¯¯¯¯¯¯¯¯V′i+1(C)|V′i(C))≥δ and Prob(V′i+1(C)|V′i(C))=Prob(Vi+1(C)|V′i(C))≤1−δ. If the amplification is iterated m times, we have:
$$\mathrm{Prob}(V_m(C)\mid \overline{P}(C)) = \mathrm{Prob}(V_0(C)\mid \overline{P}(C)) \prod_{i=1}^{m} \mathrm{Prob}(V_i(C)\mid V'_{i-1}(C)) \le (1-\delta)^m\, 10^{-2}$$
Given any ϵ∈(0,1), we can choose an m∈N such that 10−2(1−δ)m≤ϵ, or m≥log1−δ(102ϵ). Iterating the amplification step m times thus yields Prob(Vm(C)|¯¯¯¯P(C))≤ϵ and, consequently, Prob(¯¯¯¯¯¯¯Vm(C)|¯¯¯¯P(C))≥1−ϵ. Note that Vm is exactly the set of reversible circuits C for which there is a π yielding Vm(C,π)=1, for a polynomial verifier Vm(⋅,⋅), since Vm is in NP. Furthermore, P⊆Vm. Therefore the (weak) conjecture, together with Reduction-Regularity, implies:
Amplified (weak) Computational no-coincidence conjecture: For a reversible circuit C:{0,1}3n→{0,1}3n, let P(C) be the property that there is no input x to C that ends in n zeros, such that C(x) also ends in n zeros. For any ϵ∈(0,1), there exists a polynomial-time verification algorithm V that receives as input:
A reversible circuit C:{0,1}3n→{0,1}3n
An advice string π
such that:
For all C such that P(C) is true, there exists π with length polynomial in the size of C, such that V(C,π)=1.
For 100(1−ϵ)% of random reversible circuits C for which P(C) is false, no such π exists.
Note that, relative to the size of the circuit C, the algorithm V (or Vm) still runs in polynomial time and π is still polynomial-sized. Nonetheless, the number m of amplification iterations needed for a given ϵ is proportional to log(1/ϵ) if we assume a fixed δ∈(0,1). This implies the computational complexity of V and the size of π might increase exponentially in 1/ϵ, as indicated in the Appendix.
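The required number of iterations is easy to compute explicitly. The following sketch solves 10−2(1−δ)m ≤ ϵ for m, with illustrative values of δ and ϵ (neither is fixed by the argument; δ is whatever Reduction-Regularity provides):

```python
import math

def iterations_needed(delta, eps):
    """Smallest m with 1e-2 * (1 - delta)**m <= eps,
    i.e. m >= log(100 * eps) / log(1 - delta)."""
    if eps >= 1e-2:
        return 0  # the 99% bound already suffices
    return math.ceil(math.log(100 * eps) / math.log(1 - delta))

# Illustrative (delta, eps) pairs, chosen only for this example:
for delta, eps in [(0.5, 1e-3), (0.1, 1e-3), (0.01, 1e-6)]:
    m = iterations_needed(delta, eps)
    assert 1e-2 * (1 - delta) ** m <= eps  # bound actually achieved
    print(f"delta={delta}, eps={eps}: m={m}")
```

As expected, m grows like log(1/ϵ) for fixed δ, and blows up as δ approaches zero.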
Amplifying the Original Conjecture
The amplified weak conjecture can finally be used to amplify the original one. Let α∈(0,1) be such that Prob(C∈P)=α and Prob(C∈¯¯¯¯P)=1−α. Since Prob(¯¯¯¯V(C)|P(C))=0 for any V satisfying the amplified weak conjecture, the latter implies that, for any ϵ∈(0,1), there is a V such that Prob(¯¯¯¯V(C))=Prob(¯¯¯¯P(C))Prob(¯¯¯¯V(C)|¯¯¯¯P(C))≥(1−α)(1−ϵ). Note that Prob(¯¯¯¯V(C))≤1−α, since every C∈P is also in V, as required by the amplified conjecture. Hence we can make the probability bound arbitrarily close to that limit, and, assuming Reduction-Regularity, the (weak) conjecture implies:
Amplified Computational no-coincidence conjecture: For a reversible circuit C:{0,1}3n→{0,1}3n, let P(C) be the property that there is no input x to C that ends in n zeros, such that C(x) also ends in n zeros. Let α∈(0,1) be such that exactly 100α% of random reversible circuits satisfy P. For any ϵ∈(0,1), there exists a polynomial-time verification algorithm V that receives as input:
A reversible circuit C:{0,1}3n→{0,1}3n
An advice string π
such that:
For all C such that P(C) is true, there exists π with length polynomial in the size of C, such that V(C,π)=1.
For 100(1−ϵ)(1−α)% of random reversible circuits C, no such π exists.
The Reasonableness of Reduction-Regularity
Before speculating about the implications of these results, we have to check how reasonable Reduction-Regularity is. At first, it might appear to be a harmless assumption. Upon closer inspection, we can see, for instance, that it implies that a δ-fraction of the circuits in V′i is reduced (via fi) to circuits in ¯¯¯¯P∖V′i. All circuits in V′i⊂Vi show the special structure (Vi(C,π)=1 for some π) without satisfying the property P. Therefore, it is conceivable that C∈V′i might imply probability 100% for fi(C)∈V′i⊂Vi, even though at least 99% of the circuits in ¯¯¯¯P are not in Vi⊆V0. Against that, we can argue that fi can be constructed as a composition of reductions, for instance reducing V′i to SAT and then reducing SAT to ¯¯¯¯P, probably washing away any structure that would entail fi(C)∈V′i. In the end, the plausibility of Reduction-Regularity might depend on the specific probability distribution over circuits; personally, I believe this premise is quite reasonable, and perhaps it can be supported by empirical experiments.
Reduction-Regularity can actually be weakened: we only need infinitely many i's such that Prob(Vi(C)|V′i−1(C))≤1−δ. Let I⊆N be the (infinite) set of such values of i, with I={i1,i2,i3,…} and i1<i2<i3<…. Then m amplification iterations under the original Reduction-Regularity can be emulated with im iterations under this weakened version. Though the bounds derived in the Appendix no longer hold, the weakened version should intuitively be even more plausible.
Conclusion
Assuming Reduction-Regularity, we might speculate about whether the original conjecture is likely by analysing its amplified version. Intuitively, the strengthened version should decrease our credence in the conjecture, but it is not clear by how much. It is tempting to consider what happens when m tends to infinity, as ϵ tends to zero and V′i tends to the empty set. This would imply that P∪V′i=P is in NP, but we know it is coNP-hard, so we would have NP=coNP. However, the running time of the verifier might increase exponentially in 1/ϵ, as indicated in the Appendix, or the complexity of fi might increase with i, not remaining polynomial in the limit.
Personally, I am inclined to think the conjecture is false. If the conjecture is indeed false, then it follows that NP≠coNP[9] and, consequently, P≠NP. Hence, it should be rather hard to prove the conjecture is false.
Appendix
To derive an upper bound for the complexities of V and π in the amplified conjecture after m amplification iterations, we construct an algorithm for V1(⋅,⋅) using V0(⋅,⋅), f0 and Vf00(⋅,⋅). Recall that V1(⋅,⋅) is a verifier for V0∩Vf00, V0(⋅,⋅) is a verifier for V0 and Vf00(⋅,⋅) is a verifier for Vf00. To verify that C∈V0∩Vf00, we can verify that C∈V0 and C∈Vf00. C∈V0 iff there is a π0 such that V0(C,π0)=1; C∈Vf00 iff there is a πf0 such that Vf00(C,πf0)=V0(f0(C),πf0)=1. Hence, V1(⋅,⋅) can be defined via the multiplication V1(C,π0πf0)=V0(C,π0)V0(f0(C),πf0),[10] such that C∈V1 iff there are π0 and πf0 such that V1(C,π0πf0)=1. The complexity of V1(⋅,⋅) is thus dominated by the complexity of Vf00(⋅,⋅). Assuming |π0|∈O(|C|α) and that the time to run V0(C,π) is O((|C|+|π|)β), it follows that the running time of V0 is in O(|C|αβ). If the time to run f0(C) is O(|C|γ), then the time to run Vf00(C,π)=V0(f0(C),π) is O(|C|αβγ) and the size of its certificate is O(|C|αγ). That is, after one iteration of the amplification step, the upper bounds on the size of the certificate π and on the running time of the verifier have their exponents multiplied by γ. After m iterations, the exponents are multiplied by γm (assuming all fi have the same complexity[11]), hence the bounds increase doubly exponentially in m, unless γ=1 (all reductions run in linear time). Since m=O(log(1/ϵ)) for a fixed δ, both complexity bounds would increase exponentially in 1/ϵ.
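The composed verifier from this appendix can be sketched as follows. The stand-ins for V0, f0 and the certificate split are purely hypothetical; only the composition pattern V1(C,π0πf0)=V0(C,π0)·V0(f0(C),πf0) matters.

```python
def make_amplified_verifier(base_verifier, reduction, cert_split):
    """Given a verifier V0(C, pi) -> 0/1, a reduction f0, and a function
    splitting a certificate into (pi0, pi_f0), return the composed verifier
    V1(C, pi) = V0(C, pi0) * V0(f0(C), pi_f0) described in the appendix."""
    def v1(circuit, pi):
        pi0, pi_f0 = cert_split(pi)
        return base_verifier(circuit, pi0) * base_verifier(reduction(circuit), pi_f0)
    return v1

# Toy stand-ins (assumptions, purely illustrative): circuits are integers,
# V0 accepts iff the certificate equals the circuit, f0 doubles the circuit,
# and certificates are already (pi0, pi_f0) pairs.
v0 = lambda c, pi: 1 if pi == c else 0
f0 = lambda c: 2 * c
v1 = make_amplified_verifier(v0, f0, lambda pi: pi)
print(v1(3, (3, 6)))  # 1: both component checks pass
print(v1(3, (3, 5)))  # 0: the check on the reduced circuit fails
```

Note how the running time of v1 is dominated by evaluating the base verifier on the reduced instance, which is what drives the exponent blow-up discussed above.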
Also posted at LessWrong.
In the original conjecture, this probability cannot arbitrarily approach 1, as there is a positive probability for P(C) being true, as we will discuss.
We use the same symbol, say S, to denote a property of circuits, the set of circuits satisfying that property, and the problem of deciding, given a circuit C∈C, whether C∈S (or, equivalently, S(C)).
To verify that a circuit C is in ¯¯¯¯P, one can guess an input that ends in n zeros and check in polynomial time whether the output also ends in n zeros. Thus ¯¯¯¯P is in NP and P is in coNP.
Abusing the notation, we use the same symbol for a set in NP and the corresponding polynomial-time verifier.
We think the probability distribution should not matter, as the conjecture's authors write, but they show a possible distribution to be concrete.
This premise can be weakened as we discuss later.
This is admittedly not a good name, and suggestions are welcome.
If NP=coNP, then deciding the property P is in NP, and there is a verifier satisfying the conjecture.
For strings a and b, ab denotes their concatenation.
This might be a rather strong assumption, as the complexity of fi might increase with i.