Jailbreaking (AI)

Techniques for bypassing an AI model's safety guardrails and content policies, typically through carefully crafted prompts that trick the model into producing prohibited output. AI companies continuously patch known jailbreaks, but preventing them remains an ongoing challenge.