What will you do that makes your game better than the many other computer games out there?
I presume that random members of the public will be playing this game, not just a handful of experts.
Once the AI realizes its in a game and being tested, you have basically no reason to suspect its behaviour in the game to be correlated with behaviour in reality. Given random people who are making basically no attempt to keep this secret, any human level AI will realize this. Freely interacting with large numbers of gullible internet randos isn't the best from a safety standpoint either. I mean a truely superintelligent AI will persuade MIRI to let it out of any box, but for a top human level AI, it could more easily trick internet randos.
If you kept the controls much tighter, and only had a few AI experts interacting with it, you could possibly have an AI smart enough to be useful, and dumb enough to not realize its in a box being tested. But this makes getting enough training data hard.
In order to be the kind of healthy participant in the world AI ecosystem that you are describing, I have the sense that the object-level product you build must be good for the world independent of the goodness of the experiments that it enables or datasets that it generates. So I think you face the challenge of building a game that in addition to being successful, is on its own good for the world.
Also available on the EA Forum.
Preceded By: Encultured AI Pre-planning, Part 2: Providing a Service
If you've read to the end of our last post, you maybe have guessed: we’re building a video game!
This is gonna be fun :)
Our homepage: https://encultured.ai/
Will Encultured save the world?
Is this business plan too good to be true? Can you actually save the world by making a video game?
Well, no. Encultured on its own will not be enough to make the whole world safe and happy forever, and we'd prefer not to be judged by that criterion. The amount of control over the world that's needed to fully pivot humanity from an unsafe path onto a safe one is, simply put, more control than we're aiming to have. And, that's pretty core to our culture. From our homepage:
Our goal is to play a part in what will be or could be a prosperous civilization. And for us, that means building a successful video game that we can use in valuable ways to help the world in the future!
Fun is a pretty good target for us to optimize
You might ask: how are we going to optimize for making a fun game and helping the world at the same time? The short answer is that creating a game world in which lots of people are having fun in diverse and interesting ways in fact creates an amazing sandbox for play-testing AI alignment & cooperation. If an experimental new AI enters the game and ruins the fun for everyone — either by overtly wrecking in-game assets, subtly affecting the game culture in ways people don't like, or both — then we're in a good position to say that it probably shouldn't be deployed autonomously in the real world, either. In the long run, if we're as successful we hope as a game company, we can start posing safety challenges to top AI labs of the form "Tell your AI to play this game in a way that humans end up endorsing."
Thus, we think the market incentive to grow our user base in ways they find fun is going to be highly aligned with our long-term goals. Along the way, we want our platform to enable humanity to learn as many valuable lessons as possible about human↔AI interaction, in a low-stakes game environment before having to learn those lessons the hard way in the real world.
Principles to exemplify
In preparation for growing as a game company, we’ve put a lot of thought into how to ensure our game has a positive rather than negative impact on the world, accounting for its scientific impact, its memetic impact, as well as the intrinsic moral value of the game as a positive experience for people.
Below are some guiding principles we’re planning to follow, not just for ourselves, but also to set an example for other game companies:
So, that’s it. Make a fun game, make sure it remains a healthy and tolerant place for experiments with AI safety and alignment, and be safe and ethical ourselves in the ways we want all game companies to be safe and ethical. We hope you’ll like it!
If we're very lucky and the global development of AI technology moves in a really safe and positive direction — e.g., if we end up with a well-functioning Comprehensive AI Services economy — maybe our game will even stick around as a long-lasting source of healthy entertainment. While it's beyond our ability to unilaterally prevent every disaster that could avert such a positive future, it's definitely our intention to help steer things in that direction.
Also, we’re hiring! Definitely reach out to our team via contact@encultured.ai if you have any questions or ideas to share, or if you might want to get involved :)