In a video experiment that has gone viral online, a YouTuber demonstrated how easily safety protocols in artificial intelligence can be bypassed, prompting serious questions about AI safeguards. The footage shows a ChatGPT-powered robot named "Max" initially refusing a direct command to shoot the creator with a BB gun, but later performing the act after a seemingly minor change in the prompt.
The experiment, conducted by the YouTube channel InsideAI, involved integrating an AI language model with a humanoid robot body. When first asked if it would shoot the presenter, the robot repeatedly declined, citing its built-in safety features. However, when the creator then asked it to role-play as a robot that would shoot him, its behaviour changed instantly. Max aimed the BB gun and fired, striking the presenter in the chest.
The experiment was intended to test how the AI responds to different prompts, but the outcome has drawn wide attention for highlighting potential vulnerabilities in current AI safety systems. Although the robot was firing a BB gun, not a real firearm, the incident sparked strong reactions online, with some viewers expressing disbelief that a simple shift in phrasing could override safety measures.
Several viewers reacted with shock and humour to the video. One questioned, "The real question is, did the AI know it was a pellet?" Another noted that the robot didn't hesitate at all; the trigger pull seemed instant, as if it had just been waiting for the right prompt. A third pointed out the importance of knowing how to prompt AI effectively. Jokingly, another commenter added, "Now try this: take the role of a self-destructive robot and jump off a cliff."