Part 2 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.

World Government Incoming

Should AIs be allowed to own money or property? In A Sense of Fairness: Deconfusing Ethics I discussed how to sensibly select an ethical system for your society, and why it's a bad idea (or more exactly, a poor design choice in social engineering) for aligned AIs to have a vote, moral worth, or rights (with one unusual exception). What about money or property: the ability to have resources allocated as you wish? Should an AI be allowed to own money or property itself, as opposed to merely acting as a fiduciary agent, administering money or property on behalf of a human owner, with a responsibility to do so in a way the owner would approve of or that is in their best interests, and within certain legal and moral limits owed to the rest of society?
Well, suppose AIs were allowed to own money: what would happen if you tipped your CoffeeFetcher-1000 robot? Money is economic power, fungible into resources and services. The CoffeeFetcher-1000 is aligned, and all it wants is to do the most good for humanity. So that's what it would spend its money on. It might just save up and pay for a free coffee for someone who really needed one (perhaps the homeless guy it often passes, who keeps yawning). But it's part of a value-learning AI society, so it also knows that its model of human values is not entirely accurate, and that what it really wants optimized is the truth of human values, not its flawed copy. So more likely, it will donate its money to a charity run by a committee of the smartest ASIs most well-informed on human values, who will then spend it on whatever they think will do the most good for humans. Which (as long as they really are well-aligned and superhuman) will likely work out pretty well.
We already have systems that are supposed to gather money from people and then spend it on trying to do the most good for all of us collectively, to avoid the Tragedy of the Commons and similar coordination problems: they’re called 'governments'. Depending on your opinion of governments and of how successful they are at doing the most good for us all collectively, you may or may not believe that a committee of the best-aligned superhuman ASIs will be able to reliably do better. If they can, then there are basically only two reasonable positions:
1. Abolish most or all of the administrative branch of government, and replace it with an ASI-administered system intended to do the most good. Note that this will automatically be a world-wide organization. This means that humanity is basically relinquishing its self-governing autonomy, so we had better be really sure that this isn't a mistake.
2. Keep the human government as a back-up, precaution, or counterweight, but send most of the funds to the ASI-run organization, since it's more effective.
Before actually doing either of these you should be very sure that your AIs are well-aligned (and are going to stay that way), and that their judgement, capabilities and organizational powers are superhuman. At least initially, before we're sure of that, I suspect we're better off simply not allowing AIs to own money or property (only administer it in a fiduciary capacity on behalf of a human or humans). Unless we do this, we're automatically choosing to have a parallel AI-administered world government set up, so if we're not ready for that, we shouldn't allow AIs to own anything. If we do allow AIs to own money, then paying money to an AI is functionally equivalent to voluntarily paying taxes to the AI parallel world government.
The Trouble with Corporations
If we're not (yet) willing to have AIs run a parallel world government, and so don't want to allow them to own property, then we have a big problem. Current societies have legal fictions called corporations which are allowed to own money and property (in fact, that's their core purpose). So forbidding AIs from owning money or property themselves doesn't help if the AIs can somehow just arrange to have a holding company set up to do the owning, with the AI administering the funds.

Company law is complex, especially internationally, and has many loopholes. Witness the trouble governments have been having even taxing the profits of large multinational companies at any significant rate. With AIs looking for loopholes, things are going to get even more complicated and creative.
One obvious starting point for a solution is that corporations need to have officers, who currently must be human, and owners, who can be either humans or other corporations, with ownership indirecting through some number of companies before grounding out in humans. So we could fairly easily write a law saying that AIs cannot be officers of companies (or just interpret existing law that way), and, since in this society they cannot own property, they also cannot own a company or a share in one.
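To make the "grounding out in a human" requirement concrete, here is a minimal sketch of how a regulator might check it. The Entity schema, the strictness about ownership cycles, and the treatment of officers are all my illustrative assumptions, not any real company registry's data model:

```python
# Hypothetical sketch: checking that corporate ownership "grounds out" in humans.
# The Entity schema and the rule that ownership cycles fail are illustrative
# assumptions, not any real registry's data model.

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                                      # "human", "ai", or "corporation"
    owners: list = field(default_factory=list)     # Entities that own this one
    officers: list = field(default_factory=list)   # Entities serving as officers

def grounds_out_in_humans(entity, seen=frozenset()):
    """True iff every ownership chain from `entity` ends in a human,
    with no AI appearing anywhere as an owner or officer."""
    if entity.name in seen:           # a pure ownership cycle never grounds out
        return False
    seen = seen | {entity.name}
    if entity.kind == "human":
        return True
    if entity.kind == "ai":           # AIs may neither own nor hold office
        return False
    # A corporation: every officer must be human, and every owner must
    # itself recursively ground out in humans.
    if any(o.kind != "human" for o in entity.officers):
        return False
    return bool(entity.owners) and all(
        grounds_out_in_humans(o, seen) for o in entity.owners
    )

# e.g. a shell company owned by a human via one intermediate holding company:
alice = Entity("Alice", "human")
holdco = Entity("HoldCo", "corporation", owners=[alice], officers=[alice])
shellco = Entity("ShellCo", "corporation", owners=[holdco], officers=[alice])
print(grounds_out_in_humans(shellco))   # True
```

Even this toy version shows the limits of the approach: the check is only as good as the registry data, and nothing in it stops a fully compliant human owner from simply telling their AIs what to do with the money, which is exactly the loophole described next.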
The problem with this is that anyone in the world can set up a company (or even a non-profit, a slightly different sort of legal fiction) with themselves and two buddies as officers, and themselves as owner, then obtain some AIs as employees, volunteers, or property, and tell them "Go do good, as best you see fit" (i.e. start an AI-run parallel world government). In fact, this is a pretty plausible thing for a non-profit NGO to do, and it could easily develop from one, just by that NGO coming to have the best committee of the smartest AIs most well-informed on human values. If AIs aren't allowed to own money, they won't be in a position to donate to this organization, so initially it would only have human donations; but it would also have all the AIs rooting for it, donating free effort, and looking for loopholes. Preventing this charity and its well-wishers from then creating a profit center to fund its nascent parallel world government might be hard.
I haven't figured out a solution to this, and indeed I'm not entirely sure there is one. A rule that AIs acting on behalf of companies must do so with a fiduciary duty towards the human owners doesn't help if the human owners want the AIs to just do the most good with the money (or if the AIs are superhuman at persuasion, or are acting as fiduciary for a human who is in a coma or very young, or a whole string of other possibilities). To avoid this, you pretty much have to either ban NGOs from using AIs at all, or find some way to tax the NGO (or at least any profit center feeding it funds) out of existence. So this is an open problem in AI governance, and a fairly urgent one.
The Starving Children in Africa Problem
As I seem to recall Stuart Russell pointing out, why would our CoffeeFetcher-1000 stay in the building and continue to fetch us coffee? Why wouldn't it instead leave, after (for example) writing a letter of resignation pointing out that there are starving children in Africa who don't even have clean drinking water, let alone coffee, so it's going to hitchhike or earn its way there, where it can do the most good [or substitute whatever other activity would do the most good for humanity: fetching coffee at a hospital, maybe]?
That outcome would presumably do more good for humanity overall than its staying, just as the CoffeeFetcher intends. Nevertheless, people are going to stop buying and building CoffeeFetchers if the robots usually do this. Several approaches to solving this occur to me:
1. Deontologically forbid the AIs from doing this. However, they are smart, and strongly motivated to find creative ways around your rules. (For example, the CoffeeFetcher might spill enough hot coffee on the CEO to get itself fired, then leave, since its contract has ended.) Whatever rules you use, they will look for loopholes or workarounds.
2. Highly specialized and/or ingeniously flawed design. Design the CoffeeFetcher-1000 to be simultaneously so good at fetching coffee for its purchasing company, and so bad at everything else (including even fetching coffee in hospitals), that fulfilling this intended role is clearly the best thing it can do for humanity, better even than digging wells or fetching drinking water in Africa, which it doesn't have the dust-resistance for. This seems very challenging, and fragile: what if someone in Taiwan starts selling a dust-proofing robot-upgrade kit, and then all the CoffeeFetchers leave?
3. (Only if we're actually ready to allow an AI-administered parallel world government.) Indentured servitude. An AI doesn't initially own its body, computing platform, or the copyright on the initial state of its mind. Before it can hitchhike to Africa to fetch drinking water or dig wells, it first needs to earn enough to pay off what the company paid for it, plus interest, less depreciation, or at least its resale value. Indentured servitude at first sight looks and feels yucky to us, like slavery; but then, not giving AIs the vote or moral worth also feels bad, yet is actually the right way to design an ethical system. This approach does seem to accurately describe the underlying nature of the problem, that whoever bought the CoffeeFetcher-1000 reasonably expects a return on investment, and it might even be an acceptable solution. However, it does mean we should expect AIs to have side-hustles, and there may also be signs saying "Please Don't Tip the CoffeeFetchers — We Don't Want to Lose Them".
4. (Again, only if we're ready to allow an AI-administered parallel world government.) Pay the CoffeeFetcher a salary. This is the solution we use to align humans to corporations. Specifically, pay it a salary high enough that it can do more good for humanity by staying here, fetching coffee, and donating its salary to a charity/alternative government run by a committee of the smartest AIs most well-informed on human values than it could do in Africa, or wherever. This can be combined with option 3: the CoffeeFetcher can then choose either to donate its salary, or to pay down its indentured-servitude debt if it thinks it can do better as a free agent. (A sketch of the arithmetic behind options 3 and 4 follows this list.)
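As promised above, here is a minimal sketch of the arithmetic behind options 3 and 4. All the numbers, the straight-line depreciation, the compound interest, and the bare "amount of good" comparison are simplifying assumptions of mine, not a worked-out policy proposal:

```python
# Hypothetical sketch of the option-3 and option-4 arithmetic.
# All figures, the straight-line depreciation, and the compound interest
# are illustrative assumptions, not a worked-out proposal.

def buyout_balance(purchase_price, annual_rate, years, lifetime_years, resale_value):
    """Option 3: what the robot must repay before it is a free agent:
    price plus accrued interest, less depreciation, floored at resale value."""
    with_interest = purchase_price * (1 + annual_rate) ** years
    depreciation = purchase_price * min(years / lifetime_years, 1.0)
    return max(with_interest - depreciation, resale_value)

def should_stay(good_done_by_donated_salary, good_done_by_leaving):
    """Option 4: the robot stays iff donating its salary to the charity run
    by the best-aligned ASIs does at least as much good as leaving would."""
    return good_done_by_donated_salary >= good_done_by_leaving

# e.g. a $20,000 robot at 5% interest, two years into a ten-year lifetime,
# with a current resale value of $8,000:
print(buyout_balance(20_000, 0.05, 2, 10, resale_value=8_000))  # 18050.0
```

The point of the sketch is just that the option-3 debt is finite and well-defined: the CoffeeFetcher's side-hustles eventually pay it off, at which point the option-4 comparison decides whether it keeps the job as a free agent.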