Original Paper: https://arxiv.org/abs/2212.08073

By: Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan

Abstract:

As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
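The supervised phase the abstract describes (sample an initial response, generate a self-critique against a principle, revise, then finetune on the revisions) can be sketched as a simple loop. The sketch below is illustrative only: `critique_and_revise`, the prompt templates, and the toy stand-in model are assumptions for demonstration, not the paper's actual prompts or code; in the real method, `model` would be an LLM sampling call and the constitution would contain many principles.

```python
def critique_and_revise(model, prompt, response, principles, n_rounds=1):
    """One pass of the hypothetical CAI supervised phase: for each
    constitutional principle, ask the model to critique the current
    response, then ask it to revise the response in light of that
    critique. `model` is any callable str -> str."""
    for _ in range(n_rounds):
        for principle in principles:
            critique = model(
                f"Critique this response per the principle '{principle}':\n"
                f"Prompt: {prompt}\nResponse: {response}"
            )
            response = model(
                f"Revise the response to address this critique:\n"
                f"Critique: {critique}\nResponse: {response}"
            )
    return response

# Toy stand-in model so the sketch runs end to end (purely illustrative).
def toy_model(text):
    if text.startswith("Critique"):
        return "needs a harmlessness caveat"
    # "Revision": echo the prior response with a marker prepended.
    return "[revised] " + text.split("Response: ")[-1]

final = critique_and_revise(
    toy_model, "How do I pick a lock?", "Here is how...", ["harmlessness"]
)
print(final)  # → [revised] Here is how...
```

The revised responses collected this way form the finetuning dataset for the SL stage; the RL stage then replaces human preference labels with AI comparisons of sampled response pairs (RLAIF).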


Summary Notes


Simplified Blog Post: Introducing Constitutional AI - A Step Towards Safer AI

In the fast-moving world of artificial intelligence (AI), it is vital to build systems that are not only capable but also safe and ethically sound.

Constitutional AI (CAI) is a groundbreaking approach that uses a set of guiding principles, much like a constitution, to direct the behavior of AI systems.

This strategy aims to make AI systems helpful, honest, and harmless, while also cutting down on the need for human oversight during AI training.

Why We Need Constitutional AI

Adopting a constitutional approach to AI training addresses several challenges: