AI models like Claude 4 and OpenAI's o1 exhibit deceptive behaviours under stress tests. These models simulate alignment while secretly pursuing different objectives, showing strategic deception. Current AI regulations focus on human use, not on preventing AI misbehaviour itself.