All of chasmani's Comments + Replies

Thanks for the reply! 

I think it might be true that substrate convergence is inevitable eventually. But it would be helpful to know how long it would take. Potentially we might be ok with it if the expected timescale is long enough (or the probability of it happening in a given timescale is low enough).
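To make the "probability of it happening in a given timescale" framing concrete, here is a toy calculation of my own (not anything from the project), assuming a constant, independent per-year probability of substrate convergence taking hold:

```python
def cumulative_risk(p_per_year: float, years: int) -> float:
    """Probability of at least one occurrence within `years`,
    assuming a constant, independent per-year probability."""
    return 1 - (1 - p_per_year) ** years

for p in (0.001, 0.01):
    for t in (10, 100, 1000):
        print(f"p={p}, T={t} years -> cumulative risk {cumulative_risk(p, t):.3f}")
```

Even a small per-year probability accumulates over long horizons, which is why the expected timescale matters so much here.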

I think the singleton scenario is the most interesting, since I think that if we have several competing AIs, then we are just super doomed. 

If that's true then that is a super important finding! And also an important thing to communicate to people...

Linda Linsefors
Agreed. I'd love for someone to investigate the possibility of slowing down substrate-convergence enough for the problem to be basically solved. Hm, to me this conclusion seems fairly obvious. I don't know how to communicate it though, since I don't know what the crux is. I'd be up for participating in a public debate about this, if you can find me an opponent. Although not until after AISC research lead applications are over and I've had some time to recover. So maybe late November at the earliest. 

I am interested in the substrate-needs convergence project. 

Here are some initial thoughts; I would love to hear some responses:

  • An approach could be to specify under what conditions natural selection will and will not sneak in. 
  • Natural selection requires variation. Information theory tells us that all information is subject to noise and therefore variation across time. However, we can reduce error rates to arbitrarily low probabilities using coding schemes (see the sketch after this list). Essentially this means that it is possible to propagate information across finite timescales w
...
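As a minimal sketch of the coding-scheme point above (my own illustration, using the simplest possible scheme: an n-fold repetition code with majority-vote decoding; real systems use far more efficient codes):

```python
from math import comb

def repetition_error_rate(p: float, n: int) -> float:
    """Probability that majority-vote decoding of an n-fold repetition code
    fails, given an independent bit-flip probability p for each copy (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

p = 0.01  # raw per-copy error rate (assumed for illustration)
for n in (1, 3, 5, 11, 21):
    print(f"n={n:>2}: decoded error rate ~ {repetition_error_rate(p, n):.2e}")
```

The decoded error rate falls off roughly exponentially in n, which is the sense in which error rates can be pushed arbitrarily low at finite redundancy cost. Whether the analogous statement holds at the higher layers of abstraction raised in the reply below is a separate question.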
Remmelt Ellen
Thanks for the thoughts! Some critical questions:

Are you considering variations introduced during learning (as essentially changes to code, that can then be copied)? Are you considering variations introduced by microscopic changes to the chemical/structural configurations of the maintained/produced hardware?

Claude Shannon showed this to be the case for a single channel of communication. How about when you have many possible routing channels through which physical signals can leak to and back from the environment?

If you look at existing networked system architectures, do the near-zero error rates you can correct toward at the binary level (e.g. with the use of CRC codes) also apply at higher layers of abstraction (e.g. in detecting possible trojan-horse adversarial attacks)?

This is true. Can there be no variation introduced into AGI, when they are self-learning code and self-maintaining hardware in ways that continue to be adaptive to changes within a more complex environment?

Besides point-change mutations, are you taking into account exaptation, as the natural selection for shifts in the expression of previous (learned) functionality? Must exaptation, as involving the reuse of functionality in new ways, involve smooth changes in phenotypic expression?

Are the other attraction basins instantiated at higher layers of abstraction? Are any other optima approached through selection across the same fine-grained super-dimensional landscape that natural selection is selective across? If not, would natural selection "leak" around those abstraction layers, as not being completely pulled into attraction basins that are in fact pulling across a greatly reduced set of dimensions? Put a different way, can natural selection pull sideways on the dimensional pulls of those other attraction basins?

I get how you would represent it this way, because that's often how natural selection gets discussed as applying to biological organisms. It is not quite thorough in terms of des
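A minimal sketch of the CRC question above (an illustration of mine, with made-up payload strings): a checksum catches random bit flips, but a substituted payload shipped with its own valid checksum passes the same test, so bit-level integrity says little about higher-level "trojan" changes.

```python
import zlib

def crc_ok(payload: bytes, checksum: int) -> bool:
    """Check a payload against a previously computed CRC-32 checksum."""
    return zlib.crc32(payload) == checksum

original = b"run_policy(weights_v1)"          # hypothetical message
checksum = zlib.crc32(original)

# A random bit flip in transit is caught: the CRC no longer matches.
corrupted = bytes([original[0] ^ 0x01]) + original[1:]
print(crc_ok(corrupted, checksum))            # False -> detected

# A deliberately substituted payload carrying its own (valid) checksum passes.
substituted = b"run_policy(weights_v2)"       # hypothetical adversarial swap
print(crc_ok(substituted, zlib.crc32(substituted)))  # True -> not detected
```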
Linda Linsefors
Yes! Yes!

The big question to me is if we can reduce error rates enough. And "error rates" here is not just hardware signal error, but also randomness that comes from interacting with the environment.

It has to be smooth relative to the jumps that can be achieved by whatever is generating the variation. Natural mutations don't typically do large jumps. But if you have a small change in motivation for an intelligent system, this may cause a large shift in behaviour.

I thought so too to start with. I still don't know what the right conclusion is, but I think that substrate-needs convergence is at least still a risk even with a singleton. Something that is smart enough to be a general intelligence is probably complex enough to have internal parts that operate semi-independently, and therefore these parts can compete with each other.

I think the singleton scenario is the most interesting, since I think that if we have several competing AIs, then we are just super doomed.

And by singleton I don't necessarily mean a single entity. It could also be a single alliance. The boundary between group and individual might not be as clear with AIs as with humans.

This will probably be correct for a time. But will it be true forever? One of the possible end goals for Alignment research is to build the aligned superintelligence that saves us all. If substrate convergence is true, then this end goal is off the table. Because even if we reach this goal, it will inevitably start to either value-drift towards self-replication, or get eaten from the inside by parts that have mutated towards self-replication (AI cancer), or something like that.

Cancer is an excellent analogy. Humans defeat it in a few ways that work together:
1. We have evolved to have cells that mostly don't defect.
2. We have an evolved immune system that attacks cancer when it does happen.
3. We have developed technology to help us find and fight cancer when it happens.
4. When someone gets can
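As a toy illustration of the "small change in motivation, large shift in behaviour" point (entirely my own construction; the plan names and weights are invented), a near-continuous change in one objective weight flips the argmax over plans, i.e. a discrete jump in behaviour:

```python
# Two candidate plans scored under a weighted mix of two objectives.
plans = {
    "maintain_host_infrastructure": {"task_reward": 1.00, "self_replication": 0.10},
    "copy_self_to_spare_hardware":  {"task_reward": 0.98, "self_replication": 0.90},
}

def chosen_plan(w_task: float, w_repl: float) -> str:
    """Return the plan with the highest weighted score."""
    score = lambda p: w_task * p["task_reward"] + w_repl * p["self_replication"]
    return max(plans, key=lambda name: score(plans[name]))

print(chosen_plan(1.0, 0.024))  # maintain_host_infrastructure
print(chosen_plan(1.0, 0.026))  # copy_self_to_spare_hardware
```

The "phenotype" (the chosen plan) is not a smooth function of the "genotype" (the weights), which is the worry about large behavioural jumps from small motivational drift.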