The Global Call for AI Red Lines was signed by 12 Nobel Prize winners, 10 former heads of state and ministers, and over 300 prominent signatories. It was launched at the UN General Assembly and presented to the UN Security Council. There is still much to be done, so we need to capitalize on this momentum. We are sharing this agenda to solicit feedback and collaboration.
The mission of the agenda is to help catalyse international agreement to prevent unacceptable AI risks as soon as possible.
Here are the main research projects I think are important for moving the needle. We need all hands on deck:
If you want to contribute to this agenda, you can complete this form or contact us at contact@red-lines.ai
Work done at CeSIA.
For example, time pressure is a common element of fast international agreements. From credible threat to global treaty in under 2.5 years:
Some red lines could be net negative. When OpenAI developed GPT-2, it was hesitant to publish the model and release the weights. However, we now know that it would probably have been a mistake to try to ban GPT-3-level models. We don't want to cry wolf too soon (even though, given the evidence available at the time, it was very reasonable to be cautious about the next version of GPT).
To be more concrete about what to harmonize, we could begin with the following two red lines: