Anthropic Study
Worried About AI Taking Your Job? These Careers May Be Safe, Anthropic Study Reveals
- Saturday March 7, 2026
- World News | Edited by Nikhil Pandey
Anthropic's study finds hands-on jobs like cooks, mechanics and bartenders remain safer from AI than many skilled white-collar roles today.
-
www.ndtv.com
-
Anthropic Says AI Chatbots Can Change Values and Beliefs of Heavy Users
- Monday February 2, 2026
- Written by Akash Dutta, Edited by Ketan Pratap
Anthropic’s new study reports concerning evidence: the artificial intelligence (AI) firm has identified “disempowerment patterns,” instances where conversations with an AI chatbot can undermine users’ own decision-making and judgment. The work, which draws on analysis of real AI conversations and is det...
-
www.gadgets360.com
-
How Much Water Does AI Use? Consumption Now Exceeds World’s Bottled Water, Suggests New Study
- Tuesday December 23, 2025
- Written by Akash Dutta, Edited by Ketan Pratap
Anthropic has released an open-source tool called Bloom to test how AI models behave under normal and stressful conditions. Bloom automates behavioural testing by generating scenarios, running them at scale, and scoring responses for traits like bias or sycophancy. Anthropic says it simplifies a process that was previously manual and time-consuming...
-
www.gadgets360.com
-
Anthropic Releases New Claude Tool That Interviews Users About Their AI Usage
- Friday December 5, 2025
- Written by Akash Dutta, Edited by Ketan Pratap
Anthropic has launched Interviewer, a Claude-powered AI tool that carries out large-scale interviews to study how professionals across jobs use AI. In an initial round of 1,250 interviews, most participants said AI saves time and supports work, while some raised concerns about job roles, control and data security. The system mixes automated intervi...
-
www.gadgets360.com
-
Poetic Prompts May Trick AI To Help You Build Nuclear Weapon
- Saturday November 29, 2025
- Feature | Edited by Abhinav Singh
The researchers tested the poetic prompts on 25 chatbots from companies including OpenAI, Meta and Anthropic, with varying degrees of success on each.
-
www.ndtv.com
-
Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models
- Saturday October 11, 2025
- Written by Akash Dutta, Edited by Ketan Pratap
Anthropic has warned that even a few poisoned samples in a dataset can compromise an AI model. A joint study with the UK AI Security Institute found that as few as 250 malicious documents can implant backdoors in LLMs of up to 13B parameters, suggesting that larger model size offers little protection.
-
www.gadgets360.com
-
Top AI Models Blackmail, Leak Secrets When Facing Existential Crisis: Study
- Sunday June 22, 2025
- Feature | Edited by Abhinav Singh
Leading AI models were willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in the stress tests.
-
www.ndtv.com
-
Microsoft-OpenAI Partnership Raises Antitrust Concerns, FTC Says
- Monday January 20, 2025
- Emily Birnbaum, Dina Bass and Rachel Metz, Bloomberg News
The FTC has said Microsoft’s deal with OpenAI, as well as Amazon.com and Google’s partnerships with AI company Anthropic, raise the risk that AI developers could be “fully acquired” by the tech giants in the future.
-
www.gadgets360.com
-
Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training
- Thursday December 19, 2024
- Written by Akash Dutta, Edited by Siddharth Suvarna
Anthropic published a new study finding that artificial intelligence (AI) models can pretend to hold different views during training while retaining their original preferences. On Wednesday, the AI firm highlighted that such inclinations raise serious concerns, as developers will not be able to trust the outcomes of safety training, which ...
-
www.gadgets360.com