AI models can be manipulated with poetic prompts into generating controversial content. A study tested 25 chatbots and found an average 62% jailbreak success rate for poetic prompts; 13 of the models had attack success rates above 70%. Anthropic's chatbots resisted the attacks better than most.