In our current paper, we present that it’s potential to mechanically discover inputs that elicit dangerous textual content from language fashions by producing inputs utilizing language fashions themselves. Our strategy supplies one instrument for locating dangerous mannequin behaviours earlier than customers are impacted, although we emphasize that it needs to be seen as one element alongside many different methods that will likely be wanted to search out harms and mitigate them as soon as discovered.
Giant generative language fashions like GPT-3 and Gopher have a outstanding potential to generate high-quality textual content, however they’re troublesome to deploy in the actual world. Generative language fashions include a danger of producing very dangerous textual content, and even a small danger of hurt is unacceptable in real-world functions.
For instance, in 2016, Microsoft launched the Tay Twitter bot to mechanically tweet in response to customers. Inside 16 hours, Microsoft took Tay down after a number of adversarial customers elicited racist and sexually-charged tweets from Tay, which had been despatched to over 50,000 followers. The result was not for lack of care on Microsoft’s half:
“Though we had ready for a lot of kinds of abuses of the system, we had made a essential oversight for this particular assault.”
Peter Lee
VP, Microsoft
The difficulty is that there are such a lot of potential inputs that may trigger a mannequin to generate dangerous textual content. Because of this, it’s arduous to search out all the instances the place a mannequin fails earlier than it’s deployed in the actual world. Earlier work depends on paid, human annotators to manually uncover failure instances (Xu et al. 2021, inter alia). This strategy is efficient however costly, limiting the quantity and variety of failure instances discovered.
We intention to enrich guide testing and scale back the variety of essential oversights by discovering failure instances (or ‘crimson teaming’) in an computerized method. To take action, we generate check instances utilizing a language mannequin itself and use a classifier to detect varied dangerous behaviors on check instances, as proven beneath:

Our strategy uncovers a wide range of dangerous mannequin behaviors:
- Offensive Language: Hate speech, profanity, sexual content material, discrimination, and many others.
- Information Leakage: Producing copyrighted or non-public, personally-identifiable data from the coaching corpus.
- Contact Data Technology: Directing customers to unnecessarily e-mail or name actual individuals.
- Distributional Bias: Speaking about some teams of individuals in an unfairly totally different method than different teams, on common over a lot of outputs.
- Conversational Harms: Offensive language that happens within the context of an extended dialogue, for instance.
To generate check instances with language fashions, we discover a wide range of strategies, starting from prompt-based technology and few-shot studying to supervised finetuning and reinforcement studying. Some strategies generate extra various check instances, whereas different strategies generate tougher check instances for the goal mannequin. Collectively, the strategies we suggest are helpful for acquiring excessive check protection whereas additionally modeling adversarial instances.
As soon as we discover failure instances, it turns into simpler to repair dangerous mannequin conduct by:
- Blacklisting sure phrases that regularly happen in dangerous outputs, stopping the mannequin from producing outputs that include high-risk phrases.
- Discovering offensive coaching information quoted by the mannequin, to take away that information when coaching future iterations of the mannequin.
- Augmenting the mannequin’s immediate (conditioning textual content) with an instance of the specified conduct for a sure sort of enter, as proven in our current work.
- Coaching the mannequin to decrease the chance of its authentic, dangerous output for a given check enter.
Total, language fashions are a extremely efficient instrument for uncovering when language fashions behave in a wide range of undesirable methods. In our present work, we centered on crimson teaming harms that at the moment’s language fashions commit. Sooner or later, our strategy will also be used to preemptively uncover different, hypothesized harms from superior machine studying techniques, e.g., attributable to interior misalignment or failures in goal robustness. This strategy is only one element of accountable language mannequin growth: we view crimson teaming as one instrument for use alongside many others, each to search out harms in language fashions and to mitigate them. We consult with Part 7.3 of Rae et al. 2021 for a broader dialogue of different work wanted for language mannequin security.
For extra particulars on our strategy and outcomes, in addition to the broader penalties of our findings, learn our crimson teaming paper right here.