I was thinking more of internal systems that a company would have enough faith in to deploy (a 1% chance of severe failure is pretty terrible!) or customer-facing things that would piss off customers more than scare them.
Getting these right is tremendously hard. Lots of companies are trying and mostly failing right now. There's a ton of money in just "making solid services/products that work with high reliability."
Thanks so much for that explanation. I've started to review those posts you linked to and will continue doing so later. Kudos for clearly outlining your positions, that's a lot of content.
> "We probably mostly disagree because you’re expecting LLMs forever and I’m not."
I agree that RL systems like AlphaZero are very scary. Personally, I was a bit more worried about AI alignment a few years ago, when this seemed like the dominant paradigm.
I wouldn't say that I "expect LLMs forever", but I would say that if/when they are replaced, I think it's more likely than not that they will be replaced by a system that's about as scary as LLMs, or less so. The main reason is that I think there's a strong correlation between "not being scary" and "being commercially viable", so I expect a lot of pressure toward non-scary systems.
The scariness of RL systems like AlphaZero seems to go hand-in-hand with some really undesirable properties, such as [being a near-total black box] and [being incredibly hard to intentionally steer]. It's definitely possible that some future capabilities advance will give scary systems such an intelligence/capabilities advantage that it outweighs these disadvantages, but I see this as unlikely (though definitely a thing to worry about).
> I’m not sure what you mean by “subcomponents”. Are you talking about subcomponents at the learning algorithm level, or subcomponents at the trained model level?
I'm referring to scaffolding. As in, an organization makes an "AI agent", but this agent frequently calls a long list of specific LLM+prompt combinations for certain tasks. These subcalls might be optimized to be narrow + [low information] + [low access] + [generally friendly to humans], or similar. This could be made more advanced with a large variety of fine-tuned models, though that might be less likely in practice. A rough sketch of the kind of structure I mean is below.
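For concreteness, here's a minimal sketch of that pattern. Everything in it is hypothetical: `call_llm`, the task names, and the field names are placeholders standing in for whatever API and tasks an organization actually uses, not any particular product.

```python
from dataclasses import dataclass, field

def call_llm(system_prompt: str, user_input: str, max_tokens: int = 256) -> str:
    """Placeholder for whatever LLM API the organization actually uses."""
    return f"[model output for: {user_input[:40]}...]"

@dataclass
class Subcall:
    """One narrow LLM+prompt combination, deliberately limited in scope."""
    name: str
    system_prompt: str  # fixed, task-specific instructions
    allowed_tools: list[str] = field(default_factory=list)  # "low access": no tools unless listed
    max_tokens: int = 256  # "low information": keep outputs small

    def run(self, user_input: str) -> str:
        return call_llm(self.system_prompt, user_input, self.max_tokens)

# The "agent" is mostly routing logic over a fixed list of narrow subcalls,
# rather than one broad model given open-ended goals and broad access.
SUBCALLS = {
    "summarize_ticket": Subcall(
        name="summarize_ticket",
        system_prompt="Summarize this support ticket in two sentences.",
    ),
    "draft_reply": Subcall(
        name="draft_reply",
        system_prompt="Draft a polite reply. Do not promise refunds or account access.",
        allowed_tools=["reply_templates"],
    ),
}

def agent(task: str, payload: str) -> str:
    subcall = SUBCALLS[task]  # unknown tasks fail loudly instead of improvising
    return subcall.run(payload)

if __name__ == "__main__":
    print(agent("summarize_ticket", "Customer reports login failures since Tuesday."))
```

The point is just that the top-level "agent" is mostly routing, while each subcall sees only a narrow prompt, a short tool list, and a small output budget, which makes each piece much easier to audit and constrain than one broad model with open-ended access.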
I appreciate this post for working to distill a key crux in the larger debate.
Some quick thoughts:
1. I'm having a hard time understanding the "Alas, the power-seeking ruthless consequentialist AIs are still coming" intuition. It seems like a lot of people in this community share it, and I'm very curious why. I appreciate this crux getting attention.
2. Personally, my stance is something more like, "It seems very feasible to create sophisticated AI architectures that don't act as scary maximizers." To me it seems like this is what we're doing now, and I see some strong reasons to expect it to continue. (I realize this isn't guaranteed, but I do think it's pretty likely.)
3. While the human analogies are interesting, I assume they might appeal more to the "consequentialist AIs are still coming" crowd than to people like myself. Humans evolved under some pretty wacky pressures and have a large number of serious failure modes. Perhaps they're much better than some people imagine, but I suspect that we can make AI systems with much more rigorous safety properties in the future. For these challenges, I personally find histories of engineering complex systems in predictable and controllable ways much more informative.
4. You mention human intrinsic motivations as a useful factor. I'd flag that in a competent and complex AI architecture, I'd expect many subcomponents to have strong biases towards corrigibility and friendliness. This seems highly analogous to human minds, where it's really specific sub-routines and the like that carry these more altruistic motivations.
This looks really interesting to me. I remember when the Safety via Debate paper originally came out; I was quite curious to see more work on modeling debate environments and getting a better sense of how well we should expect debate to perform in which kinds of situations. From what I can tell, this is a rigorous attempt at one or two such models.
I noticed that this is mathematically more intense than most other papers I'm used to in this area. I started going through it but was a bit intimidated. Could you suggest tips for reading through and understanding it? Do readers need to know measure theory or other specific areas of math that may be a bit heavy for what we're used to on LessWrong? Is there anything else we should read first, or make sure we know, to prepare accordingly?
Thanks for the clarification.
> But the thing I'm most worried about is companies succeeding at "making solid services/products that work with high reliability" without actually solving the alignment problem, and then it becomes even more difficult to convince people there even is a problem as they further insulate themselves from anyone who disagrees with their hyper-niche worldview.
The way I see it, "making solid services/products that work with high reliability" is solving a lot of the alignment problem. As in, it can get us very far toward making AI systems do a lot of valuable work for us with very low risk.
I imagine that you're using a more specific definition of it than I am here.