Introduction

Recently, the Computational No-Coincidence Conjecture[1] was proposed, presented as an assumption that might be needed to develop methods for explaining neural nets within the Alignment Research Center's current approach. In this post, I will show that a plausible extra assumption can be used to amplify the conjecture, proving that it implies an arbitrarily stronger version of itself.

In a slightly weakened version, the conjecture reads:
(weak) Computational no-coincidence conjecture: For a reversible circuit $C : \{0,1\}^{3n} \to \{0,1\}^{3n}$, let $P(C)$ be the property that there is no input $x$ to $C$ that ends in $n$ zeros, such that $C(x)$ also ends in $n$ zeros. There exists a polynomial-time verification algorithm $V$ that receives as input:

  • A reversible circuit $C$
  • An advice string $\pi$

such that:

  • For all $C$ such that $P(C)$ is true, there exists $\pi$ with length polynomial in the size of $C$, such that $V(C, \pi) = 1$.
  • For 99% of random reversible circuits $C$ for which $P(C)$ is false, no such $\pi$ exists.

The original conjecture omits "for which $P(C)$ is false". Since such a $\pi$ always exists when $P(C)$ is true, the original conjecture implies this weak version.
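To make the property concrete, here is a brute-force check of $P(C)$ for toy circuits. The encoding (circuits as Python functions permuting bit-tuples) and all names are mine, not part of the conjecture; the check is exponential in the circuit width, so it only illustrates the definition:

```python
from itertools import product

def ends_in_zeros(bits, n):
    """True if the last n bits are all zero."""
    return all(b == 0 for b in bits[-n:])

def property_P(circuit, width, n):
    """Brute-force check of P(C): no input ending in n zeros maps to
    an output ending in n zeros.  Exponential in width, so only
    suitable for toy circuits."""
    return not any(
        ends_in_zeros(x, n) and ends_in_zeros(circuit(x), n)
        for x in product((0, 1), repeat=width)
    )

# Two toy reversible circuits on 3 wires (n = 1):
rotate = lambda bits: bits[1:] + bits[:1]             # cyclic wire shift
flip_last = lambda bits: bits[:-1] + (1 - bits[-1],)  # NOT on the last wire

print(property_P(rotate, width=3, n=1))     # False: rotate((0,0,0)) = (0,0,0)
print(property_P(flip_last, width=3, n=1))  # True: the last bit always flips
```

The NOT-gate circuit satisfies $P$ because any input ending in a zero is mapped to an output ending in a one, while the rotation has the all-zeros input as a counterexample.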

With an additional assumption, I will show that this (weak) conjecture implies a stronger version of itself, in which the "99%" can be replaced by "$(1 - \epsilon) \cdot 100\%$" for an arbitrarily small $\epsilon > 0$. This entails that the original conjecture can also be strengthened.

The strategy will be to define a new problem in NP using the conjecture. This problem is to decide whether a reversible circuit is in the set containing all circuits satisfying the property $P$ plus a fraction of those falsifying it. If that fraction is smaller than 1%, we will have a stronger weak conjecture with a probability bound higher than "99%". Iterating this amplification step, that bound will be replaced by "$(1 - \epsilon) \cdot 100\%$", arbitrarily close to one[2]. To achieve that, we will employ an extra assumption. Then we will apply the results to the original conjecture and discuss how reasonable the extra premise is.

The Amplification Step

Let $\mathcal{C}$ be the set of all reversible circuits and let $P \subseteq \mathcal{C}$ denote the set of circuits $C$ such that $P(C)$ is true. As the authors note, deciding $P$[3] is coNP-hard, and thus, coNP-complete[4], implying $\neg P$ (the complement of $P$ relative to $\mathcal{C}$) is NP-complete.

We assume the (weak) conjecture is true, so there is a polynomial-time verifier $V$ such that: for all $C \in P$, there is a polynomial-sized $\pi$ with $V(C, \pi) = 1$; and for 99% of random circuits $C \in \neg P$, no such $\pi$ exists. Let $V$[5] denote also the set of circuits $C$ (a property) such that there is a $\pi$ yielding $V(C, \pi) = 1$. By definition, $V$ is in NP, implying $\neg V$ is in coNP. Note that $P \subseteq V$.

Define the set $S = \neg P \cap V$. That is, $S$ contains the circuits $C$ such that $P(C)$ does not hold and there is a $\pi$ with $V(C, \pi) = 1$. Since both $\neg P$ and $V$ are in NP, so is $S$, and $\neg S$ is in coNP. Note that $V = P \cup S$, and the conjecture implies $\Pr(C \in S \mid C \in \neg P) \leq 1\%$, for a random $C$[6].

As $S$ is in NP, it can be reduced in polynomial time to $\neg P$, which is NP-complete. Formally, there is a polynomial-time algorithm $f$ (a many-one reduction) such that $C \in S$ iff $f(C) \in \neg P$. Given such a reduction $f$, define the set $S' = \{C \in \mathcal{C} : f(C) \in S\}$. For any $C \in S'$, we have $f(C) \in S \subseteq \neg P$ and, by the reduction, $f(C) \in \neg P$ implies $C \in S$. Therefore, $C \in S'$ implies $C \in S$. Note that $S'$ is in NP by definition: since $S$ is in NP, it has a polynomial-time verifier $V_S$, and we can define a polynomial verification algorithm $V'$ by $V'(C, \pi) = V_S(f(C), \pi)$, so that $C \in S'$ iff there is a (polynomial-sized) $\pi$ yielding $V'(C, \pi) = 1$. Also note that $S' \subseteq S$.
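The verifier for $S'$ is just verifier composition with the reduction. A minimal sketch with toy stand-ins (the arithmetic "problem", its verifier, and the "reduction" below are mine, purely for illustration; the real objects come from the NP-completeness machinery):

```python
def compose_verifier(verifier, reduction):
    """Return V' with V'(C, pi) = verifier(reduction(C), pi): a
    polynomial-time verifier for S' = {C : reduction(C) in S}, whenever
    `verifier` is a polynomial-time verifier for S and `reduction`
    runs in polynomial time."""
    return lambda instance, advice: verifier(reduction(instance), advice)

# Toy stand-ins: S = multiples of 3, certified by the cofactor pi;
# the "reduction" just adds 3 (both hypothetical).
V_S = lambda x, pi: 3 * pi == x
f = lambda x: x + 3
V_prime = compose_verifier(V_S, f)

print(V_prime(9, 4))  # True: f(9) = 12 and 3 * 4 == 12
print(V_prime(8, 4))  # False: f(8) = 11 is not 3 * 4
```

The composed verifier inherits polynomial running time because it runs the reduction once and the original verifier once, which is the fact the argument above relies on.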

Since $P$ and $S'$ are in NP, so is $P \cup S'$ (the problem we are interested in). This implies there is a polynomial-time verifier $V_1$ such that $C \in P \cup S'$ iff there is a polynomial-sized $\pi$ yielding $V_1(C, \pi) = 1$. In particular, there is such a $\pi$ for all $C$ satisfying $P$, since $P \subseteq P \cup S'$, and for all $C \in S'$.

Now define the set of circuits $S_1 = \neg P \cap (P \cup S')$. As $\neg P$ and $P \cup S'$ are in NP, so is $S_1$. The set $S_1$ contains all circuits falsifying $P$ that exhibit the special structure of the circuits satisfying $P$ (there is a $\pi$ with $V_1(C, \pi) = 1$). As $\Pr(C \in S_1 \mid C \in \neg P)$ is not higher than $\Pr(C \in S \mid C \in \neg P) \leq 1\%$, $V_1$ is a polynomial-time verifier satisfying both conditions in the weak conjecture. For $V_1$ to satisfy a stronger version of the conjecture, with $\Pr(C \in S_1 \mid C \in \neg P) < 1\%$, we need some extra assumption.

The Extra Assumption

Since $S_1 \subseteq S$, it follows that, for a random $C$, $\Pr(C \in S_1 \mid C \in \neg P) \leq \Pr(C \in S \mid C \in \neg P)$. So we just need an assumption yielding $\Pr(C \in S_1 \mid C \in \neg P) < \Pr(C \in S \mid C \in \neg P)$ in order for the polynomial verifier $V_1$ to satisfy a stronger version of the weak conjecture.

Recall that $C \in S'$ iff $f(C) \in S$, and note that $S_1$ is equal to $S'$ (as $S' \subseteq S \subseteq \neg P$). Thus we need an assumption entailing $\Pr(f(C) \in S \mid C \in \neg P) < \Pr(C \in S \mid C \in \neg P)$ or, equivalently (given $S' \subseteq S$), $\Pr(f(C) \in \neg S \mid C \in S) > 0$. This inequality says that, given a random reversible circuit $C$ in $S$ (so that $P(C)$ is false), $f(C)$ has non-zero probability of being in $\neg V$ (not having a $\pi$ such that $V(f(C), \pi) = 1$). For a random $C$ falsifying $P$, the conjecture implies $\Pr(C \in \neg V \mid C \in \neg P) \geq 99\%$, thus we might wonder whether this probability bound is robust to applying a reduction to $C$. Nonetheless, by employing an arbitrary number of iterations of the amplification step, as we will show, we can achieve the desired strengthening of the conjecture with a much weaker premise[7].

Reduction-Regularity:[8] There is a $\delta > 0$ such that, given any NP problem $A$ with $\Pr(C \in A) < 1$ for a random $C$, there is a polynomial-time reduction $f_A$ from $A$ to $\neg P$ yielding $\Pr(f_A(C) \in \neg A \mid C \in A) \geq \delta$ for a random $C$.

While the (weak) conjecture implies $\Pr(C \in S \mid C \in \neg P) \leq 1\%$ for a random $C$, assuming also the premise above (with $A = S$) implies there is a reduction $f$ such that $\Pr(f(C) \in \neg S \mid C \in S) \geq \delta$. Consequently, we obtain $\Pr(C \in S_1 \mid C \in \neg P) \leq (1 - \delta) \cdot 1\%$. This seems a small gain, as $\delta$ can be close to zero, but it is sufficient to iteratively amplify the weak conjecture.

Amplifying the Weak Conjecture

The whole process can be repeated to achieve a higher probability bound. The weak conjecture and Reduction-Regularity imply $\Pr(C \in S_1 \mid C \in \neg P) \leq (1 - \delta) \cdot 1\%$, which would already give us a stronger version of the weak conjecture. But now Reduction-Regularity (with $A = S_1$) implies there is a polynomial many-one reduction $f_2$ from $S_1$ to $\neg P$ such that $\Pr(f_2(C) \in \neg S_1 \mid C \in S_1) \geq \delta$. Defining $S_2 = \{C \in \mathcal{C} : f_2(C) \in S_1\}$, which is again in NP, and letting $V_2$ be a polynomial-time verifier for $P \cup S_2$, this entails $S_2 \subseteq S_1$ and $\Pr(C \in S_2 \mid C \in S_1) \leq 1 - \delta$. As $S_2 \subseteq S_1$ and $\Pr(C \in S_1 \mid C \in \neg P) \leq (1 - \delta) \cdot 1\%$, it follows that:

$\Pr(C \in S_2 \mid C \in \neg P) = \Pr(C \in S_2 \mid C \in S_1) \cdot \Pr(C \in S_1 \mid C \in \neg P) \leq (1 - \delta)^2 \cdot 1\%$

Note that $(1 - \delta)^2 \cdot 1\% < (1 - \delta) \cdot 1\%$, so $V_2$ would satisfy an even stronger conjecture.
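The repeated shrinking can be illustrated numerically. In this toy Monte Carlo sketch (entirely my own model, not the construction above), each amplification round keeps a circuit in the surviving set independently with probability $1 - \delta$, so the measure decays geometrically as in the bound just derived:

```python
import random

def amplify(s, delta, rounds, rng):
    """Each round, every element survives independently with
    probability 1 - delta, mimicking the bound
    Pr(C in S_{i+1} | C in S_i) <= 1 - delta."""
    for _ in range(rounds):
        s = {c for c in s if rng.random() < 1 - delta}
    return s

rng = random.Random(0)
not_p_size = 100_000    # stand-in count of circuits falsifying P
s0 = set(range(1_000))  # 1% of them start in S
s2 = amplify(s0, delta=0.25, rounds=2, rng=rng)
print(len(s2) / not_p_size)  # near (1 - 0.25)**2 * 0.01 ≈ 0.0056
```

With two rounds and $\delta = 0.25$ the surviving fraction concentrates around $(1 - \delta)^2 \cdot 1\%$, matching the displayed inequality.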

The amplification step can be iterated inductively. For any positive integer $i$, given that $S_i$ is in NP, we can construct a reduction $f_{i+1}$ from $S_i$ to $\neg P$ (via Reduction-Regularity) such that defining $S_{i+1} = \{C \in \mathcal{C} : f_{i+1}(C) \in S_i\}$ (again in NP) and letting $V_{i+1}$ be a polynomial-time verifier for $P \cup S_{i+1}$ yields $S_{i+1} \subseteq S_i$ and $\Pr(C \in S_{i+1} \mid C \in S_i) \leq 1 - \delta$. If the amplification is iterated $n$ times, we have:

$\Pr(C \in S_n \mid C \in \neg P) \leq (1 - \delta)^n \cdot 1\%$

Given any $\epsilon > 0$, we can choose an $n$ such that $(1 - \delta)^n \cdot 1\% \leq \epsilon$, or $n \geq \log(100\epsilon) / \log(1 - \delta)$. Iterating the amplification step $n$ times thus yields $\Pr(C \in S_n \mid C \in \neg P) \leq \epsilon$ and, consequently, $\Pr(C \notin P \cup S_n \mid C \in \neg P) \geq 1 - \epsilon$. Note that $P \cup S_n$ is exactly the set of reversible circuits $C$ for which there is a $\pi$ yielding $V_n(C, \pi) = 1$, for a polynomial verifier $V_n$, since $P \cup S_n$ is in NP. Furthermore, $P \subseteq P \cup S_n$. Therefore the (weak) conjecture, together with Reduction-Regularity, implies:
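For concreteness, the required number of iterations can be computed directly from the inequality above; this sketch just evaluates it (the values of $\delta$ and $\epsilon$ are arbitrary examples of mine):

```python
import math

def iterations_needed(eps, delta, p0=0.01):
    """Smallest n with (1 - delta)**n * p0 <= eps,
    i.e. n >= log(eps / p0) / log(1 - delta)."""
    if eps >= p0:
        return 0
    return math.ceil(math.log(eps / p0) / math.log(1 - delta))

print(iterations_needed(eps=1e-6, delta=0.3))   # 26 iterations
print(iterations_needed(eps=1e-12, delta=0.3))  # 65: n grows like log(1/eps)
```

Doubling the number of decimal digits of precision in $\epsilon$ roughly doubles $n$, reflecting the $n \propto \log(1/\epsilon)$ growth discussed below.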

Amplified (weak) Computational no-coincidence conjecture: For a reversible circuit $C : \{0,1\}^{3n} \to \{0,1\}^{3n}$, let $P(C)$ be the property that there is no input $x$ to $C$ that ends in $n$ zeros, such that $C(x)$ also ends in $n$ zeros. For any $\epsilon > 0$, there exists a polynomial-time verification algorithm $V_\epsilon$ that receives as input:

  • A reversible circuit $C$
  • An advice string $\pi$

such that:

  • For all $C$ such that $P(C)$ is true, there exists $\pi$ with length polynomial in the size of $C$, such that $V_\epsilon(C, \pi) = 1$.
  • For $(1 - \epsilon) \cdot 100\%$ of random reversible circuits $C$ for which $P(C)$ is false, no such $\pi$ exists.

Note that, relative to the size of the circuit $C$, the algorithm $V_n$ (or $V_\epsilon$) still runs in polynomial time and $\pi$ is still polynomial-sized. Nonetheless, the number $n$ of amplification iterations needed for a given $\epsilon$ is proportional to $\log(1/\epsilon)$ if we assume a fixed $\delta$. This implies the computational complexity of $V_n$ and $\pi$ might increase exponentially in $1/\epsilon$, as indicated in the Appendix.

Amplifying the Original Conjecture

The amplified weak conjecture can finally be used to amplify the original one. Let $p$ be such that $\Pr(C \in P) = p$ and $\Pr(C \in \neg P) = 1 - p$, for a random reversible circuit $C$. Since $P \subseteq P \cup S_n$ for any $V_n$ satisfying the amplified weak conjecture, the latter implies that, for any $\epsilon > 0$, there is a $V_n$ such that $\Pr(C \notin P \cup S_n) \geq (1 - p)(1 - \epsilon)$. Note that $\neg(P \cup S_n) \subseteq \neg P$, so every $C$ without a suitable $\pi$ is also in $\neg P$, as required by the amplified conjecture. Hence, we can make the probability bound arbitrarily close to the limit $1 - p$, and, assuming Reduction-Regularity, the (weak) conjecture implies:

Amplified Computational no-coincidence conjecture: For a reversible circuit $C : \{0,1\}^{3n} \to \{0,1\}^{3n}$, let $P(C)$ be the property that there is no input $x$ to $C$ that ends in $n$ zeros, such that $C(x)$ also ends in $n$ zeros. Let $p$ be such that exactly $p \cdot 100\%$ of random reversible circuits satisfy $P$. For any $\epsilon > 0$, there exists a polynomial-time verification algorithm $V_\epsilon$ that receives as input:

  • A reversible circuit $C$
  • An advice string $\pi$

such that:

  • For all $C$ such that $P(C)$ is true, there exists $\pi$ with length polynomial in the size of $C$, such that $V_\epsilon(C, \pi) = 1$.
  • For $(1 - p)(1 - \epsilon) \cdot 100\%$ of random reversible circuits $C$, no such $\pi$ exists.

The Reasonability of Reduction-Regularity

Before speculating about the implications of these results, we have to check how reasonable Reduction-Regularity is. At first, it might appear to be a harmless assumption. Upon closer look, we can see, for instance, that it implies a $\delta$-fraction of the circuits in $S$ are reduced (via $f$) to circuits in $\neg S$. All circuits in $S$ show the special structure (there is a $\pi$ with $V(C, \pi) = 1$) without satisfying the property $P$. Therefore, it is plausible to conceive that $f$ might map circuits in $S$ to circuits in $S$ with probability 100%, even though at least 99% of the circuits in $\neg P$ are not in $S$. Against that, we can argue that $f$ can be constructed as a composition of reductions, for instance reducing $S$ to SAT and then reducing SAT to $\neg P$, probably washing away any possible structure that entails $C \in S$. In the end, the plausibility of Reduction-Regularity might depend on the specific probability distribution over circuits; personally, I believe this premise is quite reasonable, and maybe it can be supported by some empirical experiments.

Reduction-Regularity can actually be weakened: allowing the constant to depend on the problem being reduced, so that iteration $i$ comes with its own $\delta_i \geq 0$, we just need an infinite number of $i$'s such that $\delta_i \geq \delta$, for some fixed $\delta > 0$. Let $I$ be the (infinite) set of such values of $i$. Then the $n$ amplification iterations with the original Reduction-Regularity can be emulated with $m$ iterations using this weakened version, where $m$ is the $n$-th element of $I$. Though the bounds derived in the Appendix do not hold anymore, intuitively the weakened version should be even more plausible.
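Numerically, the weakened version behaves like the original on a sparser schedule. In this toy calculation (my own illustration, with arbitrary values), only every third iteration has $\delta_i \geq 0.3$ and the others gain nothing, yet twelve iterations yield the same bound as four "useful" iterations of the original assumption:

```python
def bound_after(deltas, p0=0.01):
    """Upper bound on Pr(C in S_m | C in not-P) after iterations with
    per-step shrink factors (1 - delta_i), starting from p0 = 1%."""
    for d in deltas:
        p0 *= 1 - d
    return p0

deltas = [0.3 if i % 3 == 0 else 0.0 for i in range(12)]
print(bound_after(deltas))     # == 0.01 * (1 - 0.3)**4
print(bound_after([0.3] * 4))  # same bound from the original assumption
```

Steps with $\delta_i = 0$ multiply the bound by 1 and change nothing, which is why only the iterations in $I$ matter.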

Conclusion

Assuming Reduction-Regularity, we might speculate whether the original conjecture is likely by analysing its amplified version. Intuitively, the strengthened version should decrease our credence in the conjecture, but it is not clear by how much. It is tempting to consider what happens when $n$ tends to infinity, as $(1 - \delta)^n \cdot 1\%$ tends to zero and $S_n$ tends to the empty set. This would imply $P$ is in NP, but we know it is coNP-hard, so we would have NP = coNP. However, the running time of the verifier might increase exponentially with $n$, as indicated in the Appendix, or the complexity of $\pi$ might increase with $n$, not being polynomial in the limit.

Personally, I am inclined to think the conjecture is false. If the conjecture is indeed false, then it follows that NP ≠ coNP[9] and, consequently, P ≠ NP. Hence, it should be rather hard to prove the conjecture is false.

Appendix

To derive an upper bound for the complexities of $V_n$ and $\pi$ in the amplified conjecture after $n$ amplification iterations, we construct an algorithm for $V_{i+1}$ using $V_i$ and $f_{i+1}$. Recall that $V_i$ is a verifier for $P \cup S_i$ and $V_{i+1}$ is a verifier for $P \cup S_{i+1}$. To verify that $C \in P \cup S_{i+1}$, we can verify that $C \in P \cup S_i$ and $f_{i+1}(C) \in P \cup S_i$ (the equivalence follows from $C \in S_i \Leftrightarrow f_{i+1}(C) \in \neg P$): $C \in P \cup S_i$ iff there is a $\pi_1$ such that $V_i(C, \pi_1) = 1$, and $f_{i+1}(C) \in P \cup S_i$ iff there is a $\pi_2$ such that $V_i(f_{i+1}(C), \pi_2) = 1$. Hence, $V_{i+1}$ can be defined via the concatenation $\pi_1 \cdot \pi_2$,[10] such that $V_{i+1}(C, \pi_1 \cdot \pi_2) = 1$ iff there are $\pi_1$ and $\pi_2$ such that $V_i(C, \pi_1) = V_i(f_{i+1}(C), \pi_2) = 1$. The complexity of $V_{i+1}$ is thus dominated by the complexity of running $V_i$ on $f_{i+1}(C)$. Assuming $|f_{i+1}(C)|$ is $O(|C|^k)$ and the time to run $f_{i+1}$ is $O(|C|^k)$, it follows that running $V_i$ on $f_{i+1}(C)$ takes time $O(|C|^{km})$ whenever the time to run $V_i$ is $O(|C|^m)$. Then the time to run $V_{i+1}$ is $O(|C|^{km})$ and the size of its certificate is $O(|C|^{km})$. That is, after one iteration of the amplifying step, the size of the certificate $\pi$ and the run time of the verifier have their upper bound raised to the power of $k$. After $n$ iterations, they will be raised to the power of $k^n$ (assuming all $f_i$ have the same complexity[11]), hence increasing doubly exponentially in $n$, unless $k = 1$ (all reductions run in linear time). Since $n$ is proportional to $\log(1/\epsilon)$ for a fixed $\delta$, both complexity bounds would increase exponentially in a power of $1/\epsilon$.
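The growth of the exponent can be tabulated. A small sketch under the Appendix's assumptions (a base verifier running in $O(|C|^m)$ and reductions of complexity $O(|C|^k)$ at every step; the function name and parameter values are mine):

```python
def verifier_exponent(m, k, n):
    """Exponent e such that the verifier's running time (and the
    certificate size) after n amplification steps is O(|C|**e),
    assuming a base verifier in O(|C|**m) and every reduction
    in O(|C|**k)."""
    return m * k ** n

# Quadratic reductions (k = 2) double the exponent at each step...
print([verifier_exponent(2, 2, n) for n in range(5)])  # [2, 4, 8, 16, 32]
# ...while linear-time reductions (k = 1) keep it fixed.
print([verifier_exponent(2, 1, n) for n in range(5)])  # [2, 2, 2, 2, 2]
```

This makes the $k = 1$ caveat visible: only linear-time reductions keep the verifier polynomial uniformly in $n$.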

  1. ^

    Also posted at LessWrong.

  2. ^

    In the original conjecture, this probability cannot arbitrarily approach 1, as there is a positive probability of $P(C)$ being true, as we will discuss.

  3. ^

    We use the same symbol, say $P$, to denote a property of circuits, the set of circuits satisfying that property and the problem of deciding, given a circuit $C$, whether $P(C)$ is true (or, equivalently, whether $C \in P$).

  4. ^

    To verify that a $C$ is in $\neg P$, one can guess an input $x$ that ends with $n$ zeros and check, in polynomial time, that the output $C(x)$ ends with $n$ zeros. Thus $\neg P$ is in NP and $P$ is in coNP.

  5. ^

    Abusing the notation, we use the same symbol for a set in NP and the corresponding polynomial-time verifier.

  6. ^

    We think the probability distribution should not matter, as the conjecture's authors write, but they present a possible distribution to be concrete.

  7. ^

    This premise can be weakened as we discuss later.

  8. ^

    This is admittedly not a good name, and suggestions are welcome.

  9. ^

    If NP = coNP, then deciding the property $P$ is in NP, and there is a verifier satisfying the conjecture.

  10. ^

    For strings $a$ and $b$, $a \cdot b$ denotes their concatenation.

  11. ^

    This might be a rather strong assumption, as the complexity of $f_i$ might increase with $i$.
