Looking at these, I feel like they are subquestions of "how do you design a good society that can handle technological development" -- most of it is not AI-specific or CAIS-specific.
For me this is the main point of CAIS. It reframes many AI Safety problems in terms of "make a good society" problems, but now you can consider scenarios involving only AI. We can start to answer the question of "how do we make a good society of AIs?" with the question "How did we do it with humans?". It seems like human society did n...
My point is just that "prior / equilibrium selection problem" is a subset of the "you don't know everything about the other player" problem, which I think you agree with?
I see two problems: one of trying to coordinate on priors, and one of trying to deal with having failed to coordinate. Which is easier depends on the problem: whether we're applying it to CAIS, HRI, or a multipolar scenario. Sometimes it's easier to coordinate on a prior beforehand; sometimes it's easier to be robust to differing pr...
I second Michael Dennis' comment below: the infinite regress of priors is avoided in standard game theory by specifying a common prior. Indeed, the specification of this prior leads to a prior selection problem.
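To make the prior selection problem concrete, here is a minimal sketch (my own illustration, not from the thread) of how agents who have not coordinated on a common prior can miscoordinate even in the simplest coordination game, while agents sharing a prior do fine:

```python
# Hypothetical 2x2 coordination game: both agents get payoff 1 if they
# choose the same action ("A" or "B"), and 0 otherwise. Each agent simply
# best-responds to its own prior belief about the partner's action.

def best_response(prob_partner_plays_A):
    """Pick the action that maximizes expected payoff under the belief."""
    return "A" if prob_partner_plays_A >= 0.5 else "B"

# Common prior: both agents believe the partner plays "A" with prob 0.8.
common = (best_response(0.8), best_response(0.8))
print(common)     # ('A', 'A') -- coordination succeeds

# Differing priors: agent 1 expects "A", agent 2 expects "B".
differing = (best_response(0.8), best_response(0.2))
print(differing)  # ('A', 'B') -- coordination fails, payoff 0
```

Both equilibria (A, A) and (B, B) are equally good here, which is exactly why selecting between them requires coordinating on the prior beforehand or being robust to the mismatch.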
Just to make sure that I was understood, I was also pointing out that "you can have a well-specified Bayesian belief over your partner" even without agreeing on a common prior, as long as you agree on a common set of possibilities or something effectively similar. This means that talking about "Bayesian agents without a comm...
I mean, in this case you just deploy one agent instead of two
In the CAIS view, multi-agent setups like this could be inevitable. There are also many reasons we could want a lot of actors making a lot of agents rather than one actor making one agent. With many agents there is no single point of failure (as in fault-tolerant data storage) and no single principal has a concentration of power (as in the Bitcoin protocol).
It does introduce more game-theoretic issues, but those issues seem understandable and tractable to me and there is very little wor...
Note that when you can have a well-specified Bayesian belief over your partner, these problems don't arise. However, both agents can't be in this situation at once: agent A would need a belief over B, which itself contains a belief over A; if these are all well-specified Bayesian beliefs, then A has a Bayesian belief over itself, which is impossible.
There are ways to get around this. The most common way in the literature (in fact the only way I have seen) gives every agent a belief over a set of common worlds (which contain both the state of the world an...
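A rough sketch of that construction (a Harsanyi-style type space; the particular worlds and numbers below are made up for illustration): each world specifies the state of the world together with each agent's type, and a single common prior over worlds induces each agent's belief about the other by conditioning, with no infinite regress of beliefs-about-beliefs:

```python
from fractions import Fraction

# Common set of worlds: (state, type_of_A, type_of_B), with a common prior.
# These values are hypothetical, chosen only to make conditioning visible.
prior = {
    ("rain", "a1", "b1"): Fraction(1, 4),
    ("rain", "a1", "b2"): Fraction(1, 4),
    ("sun",  "a2", "b1"): Fraction(1, 2),
}

def belief_of_A_about_B(type_of_A):
    """A's belief over B's type: condition the common prior on A's own type."""
    consistent = {w: p for w, p in prior.items() if w[1] == type_of_A}
    total = sum(consistent.values())
    belief = {}
    for (_state, _ta, tb), p in consistent.items():
        belief[tb] = belief.get(tb, Fraction(0)) + p / total
    return belief

print(belief_of_A_about_B("a1"))  # {'b1': Fraction(1, 2), 'b2': Fraction(1, 2)}
print(belief_of_A_about_B("a2"))  # {'b1': Fraction(1, 1)}
```

The regress disappears because beliefs about beliefs are not primitive objects: they are all derived from the one prior over the common worlds.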
Yeah, I understand that. My point is that the same way society didn't work by default, systems of AI won't work by default, and the interventions that will be needed will require AI researchers. That is, it's not just about setting up laws, norms, contracts, and standards for managing these systems. It is about figuring out how to make AI systems that interact with each other the way humans do in the presence of laws, norms, standards, and contracts. Someone who is not an AI researcher would have no hope of solving this, since they cannot understand how AI systems will interact and cannot offer appropriate interventions.