Epimetheus

I agree with Paul Christiano here. Let's call this rogue-AI-preventing superintelligence Mr. Smiles. Let's assume that Mr. Smiles cannot find a "good" solution within a decade, and instead must temporarily spend much of his effort preventing the creation of "bad" AGI.

How does Mr. Smiles ensure that no rogue actors, nation-states, corporations, or other organizations create AGI? Well, Mr. Smiles needs two things: sensors and actuators. The "mind" of Mr. Smiles isn't a huge problem, but the sensors and actuators are extremely problematic:

  1. Sensors: The world will have to be covered in sensors. A global surveillance state needs to be instantiated. Hidden or governmental networks? Remote areas of the world? Those are massive risks. Yet, for every system Mr. Smiles infiltrates, he has made an enemy of its owners. I don't think the PLA will appreciate a likely-Western AGI infiltrating and spying on its network.
  2. Actuators: This is the real danger. Mr. Smiles has assessed a threat. What does Mr. Smiles do? How does he stop a human or an organization from working on AGI? Does he destroy humanity's base of knowledge? Prevent humans from accessing computing technology? If a human gets too close, what lengths does he go to? How many humans are worth ending to prevent the creation of a misaligned AGI?

I love the way we unknowingly instantiate religion into conversations on AGI. Mr. Smiles is God. Mr. Smiles is omnipotent, omniscient, and omnibenevolent. I do not need to relay the main arguments against the logical possibility of such an entity. Mr. Smiles cannot, and will not, work. If Mr. Smiles "works," our world begins to look awfully dystopian.


Instead, I agree with Christiano that we should look to the defense-industrial complex. We just have to continue building defenses against offenses, in an everlasting battle that has raged since the beginning of life on this planet. Conflict is part of having agents in the world.

The real issue is asymmetric warfare, where one AGI has outsized power, either due to its size or the asymmetry of offensive weaponry. I will steal a phrase from "Sapiens" and instead call it the defense-industrial-scientific complex. Our defense complex is not new to the asymmetric effects of technology, nor to the ease of wreaking destruction relative to the difficulty of healing. Yet it has adapted countless times to new capabilities, threats, and societal orders. I do not think our current system is robust to the threats of AGI, but I imagine it can adapt into one that is. Further, while humans cannot solve the game-theoretic problems posed by superintelligent agents, superintelligent societies may just be able to.

I think there's only one reasonable path toward a "good" future. We need to solve internal alignment. "Control" is nice, but its main use is for a "first mover." Once multiple actors acquire AGI technology, control is no longer a sufficient approach. If we solve alignment, we need to propagate a multitude of aligned agents across the globe and help shape their new, burgeoning society.

If we are serious about creating "AGI," we need to understand that we are not creating a tool, but a new form of life. Will the world improve? Hopefully. Will the world become more complex? Dramatically so. I hate to advocate for this position, but our defense-industrial-scientific complex will likely not be discarded, but improved upon a thousandfold.