Many people worry about catastrophic misuse of future AIs with highly dangerous capabilities. For instance, powerful AIs might substantially lower the bar to building bioweapons or allow for massively scaling up cybercrime.

How could an AI lab serving AIs to customers manage catastrophic misuse? One approach would be to ensure that when future powerful AIs are asked to perform tasks in these problematic domains, the AIs always refuse. However, it might be a difficult technical problem to ensure these AIs refuse: current LLMs can be jailbroken into exhibiting arbitrary behavior, and the field of adversarial robustness, which studies these sorts of attacks, has made only slow progress in improving robustness over the past 10 years. If we can't ensure that future powerful AIs are much more robust than current models[1], then malicious users might be able to jailbreak these models to allow for misuse. This is a serious concern, and it would be notably easier to prevent misuse if models were more robust to these attacks. However, I think there are plausible approaches to effectively mitigating catastrophic misuse which don't require high levels of robustness on the part of individual AI models.

(In this post, I'll use "jailbreak" to refer to any adversarial attack.)

In this post, I'll discuss addressing bioterrorism and cybercrime misuse as examples of how I imagine mitigating catastrophic misuse[2] for a model deployed on an API. I'll do this as a nearcast where I suppose that scaling up LLMs results in powerful AIs that would present misuse risk in the absence of countermeasures. The approaches I discuss won't require better adversarial robustness than is exhibited by current LLMs like Claude 2 and GPT-4. I think that the easiest mitigations for bioterrorism and cybercrime are fairly different, because of the different roles that LLMs play in these two threat models.

The mitigations I'll describe are non-trivial, and it's unclear if they will happen by default. But regardless, this type of approach seems considerably easier to me than trying to achieve very high levels of adversarial robustness. I'm excited for work which investigates and red-teams methods like the ones I discuss.

Note that the approaches I discuss here don't help at all with catastrophic misuse of open source AIs. Distinct approaches would be required to address that problem, such as ensuring that powerful AIs which substantially lower the bar to building bioweapons aren't open sourced. (At least, not open sourced until sufficiently strong bioweapons defense systems exist or we ensure there are sufficiently large difficulties elsewhere in the bioweapon creation process.)

[Thanks to Fabien Roger, Ajeya Cotra, Nate Thomas, Max Nadeau, Aidan O'Gara, Nathan Helm-Burger, and Ethan Perez for comments or discussion. This post was originally posted as a comment in response to this post by Aidan O'Gara; you can see the original comment here for reference. Inside view, I think most research on preventing misuse over an API seems less leveraged (for most people) than preventing AI takeover caused by catastrophic misalignment; see here for more discussion. However, I think the style of work I discuss here has good transfer with the AI control approach for avoiding AI takeover and I also think that AI labs should manage catastrophic misuse.]

Mitigations for bioterrorism

In this section, I’ll describe how I imagine handling bioterrorism risk for an AI lab deploying powerful models (e.g., ASL-3/ASL-4).

As I understand it, the main scenario by which LLMs cause bioterrorism risk is something like the following: there’s a team of relatively few people, who are not top experts in the relevant fields but who want to do bioterrorism for whatever reason. Without LLMs, these people would struggle to build bioweapons—they wouldn't be able to figure out various good ideas, and they’d get stuck while trying to manufacture their bioweapons (perhaps like Aum Shinrikyo). But with LLMs, they can get past those obstacles.

(I'm making the assumption here that the threat model is more like "the LLM gives the equivalent of many hours of advice" rather than "the LLM gives the equivalent of five minutes of advice". I'm not a biosecurity expert and so don’t know whether that's an appropriate assumption to make; it probably comes down to questions about what the hard steps in building catastrophic bioweapons are. I don’t see why LLMs would zero-shot be able to effectively answer extremely broad and difficult questions like “how to build a really bad bioweapon”, so it seems to me like the LLMs probably have to do at least a moderate amount of back-and-forth in order to be wildly helpful.)

And so in this case, the LLMs caused problems not by being superhuman, but by allowing aspiring bioterrorists to get a fairly large amount of expert advice that human experts would refuse to give them.

Thus, our countermeasures will heavily utilize the assumption that a large number of queries to the AI are likely required to substantially enable catastrophic bioterrorism. We’ll assume the number of required queries is likely at least hundreds, but possibly thousands[3].

First, we'll remove bio data (particularly synthetic biology and virology) from the training corpus and verify that the resulting model has weak biology knowledge. This is sufficient if it's acceptable to not have any model deployed which can handle biology questions.

However, it might be important for some users to be able to ask bio questions. If so, we'll create an additional API endpoint (an interface via which users can make queries to the model) with a model created by fine-tuning our normal base model (without bio) on the excluded biology data. Now, we'll need countermeasures on this additional bio endpoint to prevent misuse.
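
To make the data-filtering step a bit more concrete, here is a minimal sketch of splitting a pretraining corpus into a general corpus and a held-out bio corpus. It assumes a hypothetical document-level classifier `bio_relevance` that scores how biology-related a document is; a real pipeline would need much more care (e.g., checking the resulting model's biology knowledge on held-out evaluations).

```python
from typing import Callable, Iterable, List, Tuple

def split_corpus(
    docs: Iterable[str],
    bio_relevance: Callable[[str], float],
    threshold: float = 0.5,
) -> Tuple[List[str], List[str]]:
    """Split documents into a general pretraining corpus and a held-out bio corpus."""
    general: List[str] = []
    bio: List[str] = []
    for doc in docs:
        # Documents the classifier flags as biology-related are held out of
        # pretraining and reserved for fine-tuning the separate bio endpoint.
        if bio_relevance(doc) >= threshold:
            bio.append(doc)
        else:
            general.append(doc)
    return general, bio
```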

For this additional endpoint, we'll use the following countermeasures:

  • We require KYC (Know Your Customer) checks to access the powerful bio model API endpoint (or chat service or whatever) such as requiring submitting a passport. This is to make it hard to have more than one account and also to ensure that potential bioterrorists can be criminally prosecuted.
  • We make this bio-specific API endpoint have substantially higher costs (e.g., 5x) to ensure that users avoid using it when possible and to cover the costs of the below interventions.
  • We train the actual model queried by the bio endpoint to not answer questions about bioterror, and we attempt to make this reasonably robust. It should be fine to achieve similar levels of robustness to the current GPT-4 API or Claude 2 API. This countermeasure isn't very key for the overall proposal, but it seems somewhat useful.
  • We'll build an automated bioterror monitoring scheme which involves an ensemble of different AI checks and aims to determine if a user query and/or AI response is related to bioterrorism. (A rough sketch of this monitoring-and-auditing pipeline is given after this list.)
    • Each model in the ensemble should be trained for robustness. Again, it should be fine to achieve similar levels of robustness to the current GPT-4 API or Claude 2 API.
    • The ensemble could involve different base models (perhaps one base model which is dumber than the actual AI used and a base model which is as smart as the actual AI used), variation in the classifier training data, a model which only looks at the input or only looks at the output, randomly omitting various parts of the input/output, or randomly paraphrasing the input/output. Hiding parts of the original input and output from the monitoring models involves a trade-off: the more information that is removed, the harder it is to find adversarial attacks that jailbreak the monitor, but also the harder it is for the monitor to do its normal job.
    • For cost reasons, we might randomly select a subset of the ensemble of approaches to apply to each input.
    • This monitoring ensemble might also see information about some of the user's prior queries or even the user's background and their stated reason for using the API (submitted as part of KYC).
    • This monitoring also looks for people attempting to jailbreak the model in addition to people directly trying bioterror.
    • Training the monitoring might involve the use of secret information about potential bio-risk.
  • Domain expert humans will manually audit the queries (and responses) which the process described above rates as most suspicious. Perhaps we have humans carefully audit the most suspicious-seeming 1% of queries. We might also sample some queries to audit uniformly at random or ensure that we always sample at least a few queries from each user.
  • If our human auditing finds evidence of bioterror, the individual is permanently banned from the API and referred to the FBI (or other bodies) for investigation. We'd also consider suing the individual in civil court for contract violation, banning anyone closely associated with the banned individual (at least temporarily), and taking potential connections with banned individuals into account in future KYC checks.
  • If our human auditing process finds evidence for (attempted) jailbreaking, the individual is permanently banned from the API, and we consider suing them for contract violation.
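
Here is a rough sketch of how the monitoring-and-auditing pipeline above might fit together. This is only an illustration of the structure: the model calls (`monitor_score`, `paraphrase`) are stand-in stubs rather than real classifiers, and the ensemble composition, subset size, and audit threshold are assumptions for the sake of the example.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Query:
    user_id: str
    prompt: str
    response: str

def paraphrase(text: str) -> str:
    """Stub: a trusted model would paraphrase the text to disrupt adversarial inputs."""
    return text

def monitor_score(monitor_name: str, text: str) -> float:
    """Stub: one monitoring model's bioterror-suspicion score in [0, 1]."""
    return 0.0

# Each ensemble member sees a different view of the query, so that a single
# adversarial input is less likely to fool every monitor at once.
ENSEMBLE: List[Tuple[str, Callable[[Query], str]]] = [
    ("weak-base-monitor",   lambda q: q.prompt + "\n" + q.response),
    ("strong-base-monitor", lambda q: q.prompt + "\n" + q.response),
    ("input-only-monitor",  lambda q: q.prompt),
    ("output-only-monitor", lambda q: q.response),
    ("paraphrased-monitor", lambda q: paraphrase(q.prompt + "\n" + q.response)),
]

SUBSET_SIZE = 3  # for cost reasons, only run a random subset of the ensemble

def suspicion(query: Query) -> float:
    """Aggregate suspicion across a randomly chosen subset of the ensemble."""
    members = random.sample(ENSEMBLE, k=SUBSET_SIZE)
    scores = [monitor_score(name, view(query)) for name, view in members]
    return max(scores)  # one confident monitor is enough to escalate

def handle(query: Query, audit_threshold: float) -> str:
    """audit_threshold would be calibrated so roughly 1% of traffic gets audited."""
    if suspicion(query) >= audit_threshold:
        # Human auditors may then ban the user, refer them to the FBI, etc.
        return "escalate_to_human_audit"
    return "allow"
```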

This approach would fail if a user were able to jailbreak the model that responds to queries while also consistently jailbreaking (or otherwise bypassing) the monitoring models so that they don't report the issue. Our key hope is that it will be very hard to reliably jailbreak every model in the monitoring ensemble, given that users won't be able to try many times without getting banned (in addition to facing potential criminal investigation and civil suits).

We verify this approach worked as expected by ensuring that a red team which contains both bio experts and LLM experts can't execute a moderate number (e.g., 50) of diverse bioterror queries without having a very high probability of being caught. The red team is familiar with the high-level countermeasures but isn't necessarily informed of non-public implementation details. The red team can try to find attacks by running reasonably well-resourced experiments on open-source models or by using other available APIs. (For instance, the red team could construct a mock-up of our setup using open-source models and their understanding of the countermeasures, then iterate against that setup in the hope of finding attacks which transfer sufficiently well.) We'd want to try this with several different red teams.

Here are some possible additional countermeasures:

  • We could reject queries to the bio endpoint which don't seem like questions about biology according to our monitoring models. (Or queries which contain fully separable subcomponents which don’t seem to be about biology.) This helps avoid the failure mode in which biology information is elicited from the model without monitoring models even understanding how the query relates to biology.
  • If monitoring scores indicate moderately high suspicion (but not high enough suspicion for the human auditing process), we could reject the query or route it to a dumber model which is incapable of substantially helping with bioterrorism. It might be acceptable for this rejection/routing to use a much more aggressive threshold for suspicion than is used for auditing (e.g., perhaps we can only afford to have a human audit the 0.2% most suspicious queries, but we can defer/reject the 2% most suspicious); see the sketch after this list. The downside is that this allows for some iteration against the API (though we can try to detect such iteration and then ban these users).
  • To avoid continued harm during delays in actually conducting the human audit process, we could “pause” users who have many queries referred to human auditing.
  • Despite not being clearly required, improvements on the current state of the art in individual AI robustness would still help considerably.
  • It might be useful to have an automated quick response system which reads various jailbreaking forums, tests the latest reported jailbreaks, and then generates additional classifier training data. Then, subject to human approval, this could be immediately applied to models in production. We'd need to ensure that this process is robust to attempted data poisoning attacks. We might also want to audit recent queries which look similar to the reported jailbreak. This seems most useful for quickly addressing general-purpose jailbreaks that apply both to the bio endpoint and other models. If jailbreaks are found by testing with the bio endpoint, then we should locate the user and ban them (in addition to potentially otherwise pursuing them). Approved red teamers should notify the lab about jailbreaks prior to publication.
  • We might also charge a fixed monthly fee for each user, since we might have fixed costs per additional user.
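
Putting the rejection and deferral ideas above together, a minimal sketch of a tiered response policy for the bio endpoint might look like the following. The thresholds mirror the illustrative 0.2%/2% numbers above and assume the suspicion score has been calibrated against ordinary traffic; `suspicion_score` and `is_biology_question` are assumed inputs from the monitoring ensemble, not real APIs.

```python
def route_bio_query(suspicion_score: float, is_biology_question: bool) -> str:
    """Tiered policy: audit the most suspicious ~0.2%, defer/reject the next ~2%."""
    if not is_biology_question:
        return "reject"  # the bio endpoint only serves recognizably biological queries
    if suspicion_score >= 0.998:  # roughly the top 0.2% of ordinary traffic
        return "escalate_to_human_audit"
    if suspicion_score >= 0.98:   # roughly the top 2% of ordinary traffic
        # Answer with a weaker model that can't substantially help with
        # bioterrorism, or reject the query outright.
        return "route_to_weak_model"
    return "answer_with_bio_model"
```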

What reliability is acceptable?

A nice property of this proposal is that our monitoring ensemble doesn't need to be that reliable to make bioterror hard: we only have to catch one query to ban and prosecute a user, while they need to make many queries to have the AI help much with bioweapons.

For instance, let's suppose that we can only afford to have a human audit the most suspicious-seeming 0.1% of queries for a bio-specific API endpoint. Equivalently, we can afford to have a 0.1% False Positive Rate (FPR) for our automated monitoring ensemble. Now, suppose (for simplicity) that catastrophic bio misuse requires 100 queries to the model and we're happy as long as we catch the user and report them to the FBI prior to them finishing all 100 queries. Then, if our monitoring ensemble at 0.1% FPR catches a given bio misuse query with just 5% probability (for a competent adversary), bio misuse is 99.4% likely to be caught prior to succeeding. So, we don't actually need very high reliability on the ensemble. (Though we might target more conservative bars than 99.4% in practice.)
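
As a quick check of this arithmetic, here is the calculation as a toy formula (treating each query's detection as independent, which is an idealization):

```python
def p_caught(n_queries: int, p_catch_per_query: float) -> float:
    """Probability of being flagged at least once across n independent queries."""
    return 1 - (1 - p_catch_per_query) ** n_queries

print(round(p_caught(100, 0.05), 4))  # 0.9941 -- the ~99.4% figure above
```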

A reasonable objection here is that maybe we'll have a large number of people (e.g., >200) trying to create bioweapons using the API and thus one will get lucky. We can avoid this failure mode by having a policy such as "if there are more than 3 open FBI investigations into attempted bioweapon construction based on our models, we need to either shut down the API endpoint or further increase reliability by a substantial amount (e.g. by spending additional money on human auditing and increasing the cost of the API endpoint)". Then, if we do actually see a large amount of serious misuse, we can respond dynamically to handle this. Once we see examples, we should also be able to examine case studies to see how effective our safeguards were in practice.

Mitigations for large-scale cybercrime

The threat model for LLM cybercrime has pretty different properties than the bioterrorism threat model. A bioterrorism attack might only require hundreds of queries to the LLM in order to cause a catastrophic outcome. In contrast, the risk associated with cybercrime is that we massively reduce the cost of the labor involved in hacking. This distinction is important because there are already thousands or tens of thousands of people who are employed as hackers for pay, so if LLMs aren't massively superhuman, they have to contribute by lowering costs. As such, if our goal is to ensure less than 100 million dollars in cybercrime damages, it might suffice to prevent more than 100,000 cybercrime queries from being answered (unlike the bio case, where merely hundreds of queries could be unacceptable). Thus, we can prevent our model from doing unacceptable amounts of cybercrime without needing as high reliability as in the bioterrorism case.

It likely won't be possible to avoid having our main model (as deployed on an API) be capable of cybercrime. So, we don't have the option of creating a separate endpoint as in the bio case. Otherwise, we'll use a somewhat similar set of countermeasures as with the bio approach discussed above. For the following list, I'm assuming that you've read the bio section above.

  • Ideally, we'd require some KYC. This might not be possible for the main API, and we'll discuss alternatives below.
  • We'd have a variety of monitoring approaches as discussed in the bio case, but applied to cybercrime. We need to keep costs low since these measures will be applied on our main API endpoint, but we can still afford monitoring by weaker (and thus cheaper) models on all inputs. Then, we'd randomly sample inputs to apply more expensive parts of the monitoring ensemble, as well as applying these more expensive components to inputs found to be somewhat suspicious by the weaker monitoring (see the sketch after this list).
  • We again apply human auditing on inputs selected by monitoring. Perhaps humans audit the 0.01% most suspicious-seeming inputs.
  • We again ban users based on this human auditing.
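
A minimal sketch of this cost-tiered monitoring is below. The score functions are placeholders rather than real monitors, and the sampling rate and thresholds are illustrative assumptions.

```python
import random

def cheap_score(prompt: str, response: str) -> float:
    """Stub: a weak, cheap monitor applied to every query."""
    return 0.0

def expensive_score(prompt: str, response: str) -> float:
    """Stub: the stronger, more expensive monitoring ensemble."""
    return 0.0

def cyber_suspicion(prompt: str, response: str,
                    sample_rate: float = 0.01,
                    cheap_threshold: float = 0.5) -> float:
    """Run cheap monitoring on everything; escalate a random sample plus
    anything the cheap monitor finds somewhat suspicious."""
    score = cheap_score(prompt, response)
    if score >= cheap_threshold or random.random() < sample_rate:
        score = max(score, expensive_score(prompt, response))
    return score  # the most suspicious ~0.01% by this score get human audits
```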

Due to the potential lack of KYC, we might not be able to ban users effectively (and we'd also lose the option of criminal investigation or civil suits). Even without KYC, we can at least estimate the total number of cybercrime queries and guess at total damages. If likely damages reach some threshold, we can institute additional mitigations like actually requiring KYC.

Given that these mitigations have to be on our primary API, it's plausible that privacy concerns are a blocker. We can probably do some amount of user info redaction for auditing, but I broadly think we should just aim to set the expectation that queries made to powerful models aren't private. This is similar to (e.g.) emails sent to contractors.

Fine-tuning?

I think that allowing end-users to fine-tune these models makes the picture notably more complicated, as it might be possible to train knowledge into the model which isn't present in the monitoring models. There are still possible mitigations; for instance, we could restrict fine-tuning of powerful models to large corporate contracts.

Conclusion

It might be possible to prevent adversaries from catastrophically misusing models even without improvements in the robustness of individual AIs. To do this, we'd take advantage of the moderate to large number of queries needed to cause harm under various threat models, monitoring, and the ability to ban users. I'd be excited about empirical work which demonstrates the viability of this approach or shows that it won't work without fundamental advances.


  1. I think it's plausible that the first dangerously powerful AIs will by default be considerably more robust than the current state of the art LLMs, though I'm very uncertain overall. That is, it might be that our current methods for improving robustness suffice for future powerful models despite failing for current models due to increases in scale or various architectural changes. ↩︎

  2. For this post, I’m not discussing catastrophic misalignment concerns like scheming AIs and catastrophic reward hacking. These concerns would also need to be addressed for sufficiently powerful AIs. I expect that in practice, our solutions for mitigating misuse over an API and avoiding bad outcomes due to catastrophic misalignment will mostly be separate, though there could be some interactions. ↩︎

  3. There is also a threat model where AIs are very helpful for bioterror via the mechanism of occasionally giving really good answers and thus only a small number of bioterror queries (1-30) would be dangerous. This threat model could look like having really good high-level ideas for bioterror or acting like a considerably better version of Google which points you toward the ideal online resource. I think this sort of usage probably doesn't help that much with bioterror, but this might be a crux. ↩︎

Comments

Thanks for the post Ryan -- I agree that, given the difficulty in making models actually meaningfully robust, the best solution to misuse in the near term is going to be a defence-in-depth approach consisting of filtering the pre-training data, input filtering, output filtering, automatic and manual monitoring, KYC checks, etc.

At some point though we'll need to grapple with what to do about models that are superhuman in some domains related to WMD development, cybercrime or other potential misuses. There are glimmers of this already, e.g. my impression is that AlphaFold is better than human experts at protein folding. It does not seem far-fetched that automatic drug discovery AI systems in the near future might be better than human experts at finding toxic substances (Urbina et al., 2022 give a proof of concept). In this setting, a handful of queries that slip through a model's defences might be dangerous: "how to build a really bad bioweapon" might be something the system could make significant headway on zero-shot. Additionally, if the model is superhuman, then it starts becoming attractive for nation-state or other well-resourced adversaries to seek to attack it (whereas at human-level, they can just hire their own human experts). The combination of lower attack tolerance and increased sophistication of attacks makes me somewhat gloomy that this regime will hold up indefinitely.

Now I'm still excited to see the things you propose be implemented in the near-term: they're some easy wins, and lay foundations for a more rigorous regime later (e.g. KYC checks seem generally really helpful in mitigating misuse). But I do suspect that in the long-run we'll need a more principled solution to security, or simply refrain from training such dangerous models.

In this setting, a handful of queries that slip through a model's defences might be dangerous [...] But I do suspect that in the long-run we'll need a more principled solution to security, or simply refrain from training such dangerous models.

This seems right to me, but it's worth noting that this point might occur after the world is already radically transformed by AI (e.g. all human labor is obsolete). So, it might not be a problem that humans really need to deal with.

The main case I can imagine where this happens prior to the world being radically transformed is the case where automatic drug/virus/protein outputting AIs (like you mentioned) can do a massive amount of the work end-to-end. I'd hope that for this case, the application is sufficiently narrow that there are additional precautions we can use, e.g. just have a human screen every request to the model. But this seems pretty scary overall.

First post, hello! I can only hope that frontier companies are thinking about this as deeply and not focusing efforts on safety-washing. I am no expert in biology so excuse me if I make some basic mistakes, but I am curious about this topic, and I have some concerns and ideas on the practical implementation of this sort of system. I am imagining I am a biology student, let's say I'm studying something incredibly suspicious, like engineering virus DNA to create vaccines. It would be frustrating to pay for a month of this service, accidentally trigger the bioterror monitoring, and get temporarily banned. I would instead choose a competing frontier bio-model with lower safety guardrails, and I might even pay more for that service if I know I have less of a chance of being banned.

Thus, I argue that companies developing biology-focused models have the perverse incentive to ignore more risks in order to capture a wider market. Sure, they could limit their product to industry experts, but that leaves room for another company to market to graduates and undergraduates in the field, which is exactly where you find all the least trustworthy people who could plausibly build bioweapons. Why would users ever choose a safer system that occasionally bans them? 

Actually, I thought of a case in which they might. Again, I'm thinking back to a hypothetical biology student. I've finished my thesis thanks in part to the biology LLM that has partnered with my university and is working closely with the faculty. My fellow students often trip up the system, and when that happens we sometimes get frustrated, but most of the time, we beam like we've bought a lottery ticket. That's because, as the message on the "you are temporarily banned from the API" screen states, the company is using the university to help outsource its red-teaming and testing, so if you can cause a major bug or jailbreak to be discovered, you could win real money. (Perhaps by successfully provoking the model to exceed some percent risk threshold of behavior, or self-reporting if it doesn't flag it automatically). 

More commonly, subscriptions are simply refunded for the month if you helped discover a novel bug, and because the temporary bans are so short, some of the students have taken to trying to come up with creative jailbreaks when they don't need to use it. Trolling and reused methods would not be rewarded, only banned. Because I finished my thesis, some friends and I are going to spend the weekend trying to jailbreak the system to get our refunds, and the prize for causing the model to exceed 50% effective bioweapon risk is pretty tempting; nobody's won it yet. There's money and bragging rights at stake!

This is just an example scenario, but I'll just put it out there that biology students seem to be a good candidate for testing these kinds of guardrails, I would assume there are many of them who are creative and would be willing to team up on a hard problem in order to avoid paying a subscription. They would agree to strict scrutiny. And getting banned would only slow down thesis papers, not crucial research in medicine. This could be a fairly cheap way for a potential biology-focused AI company to test its models against the riskiest group of people, while having all the freedom to disable the system entirely for safety, as you've proposed. 

This all seems very sensible, and I must admit, I had been basically assuming that things along these lines were going to occur, once risks from frontier models became significant enough. Likely via a tiered series of filters: a cheap weak filter passing the most suspicious X% plus a random Y% of its input to a stronger, more expensive filter, and so on up to more routine/cheaper and finally more expensive/careful human oversight. Another obvious addition for the cybercrime level of risk would be IP address logging of particularly suspicious queries, and not being able to use the API via a VPN that hides your IP address if you're unpaid and unsigned-in, or seeing more refusals if you do.

I also wouldn't assume that typing a great many queries with clearly-seriously-criminal intent into a search engine in breach of its terms of use was an entirely risk-free thing to do, either — or if it were now, that it will remain so with NLP models becoming cheaper.

Obviously open-source models are a separate question here: about the best approach currently available for them is, as you suggest above, filtering really dangerous knowledge out of their training set.