- Meta Oversight Board urges stronger AI content detection amid Middle East conflict risks
- Board calls for clearer labels and transparency on AI-generated media origins
- Concerns raised over Meta's inconsistent use of content provenance standards
The Meta Oversight Board has criticised the social media giant's AI deepfake detection, particularly in light of the ongoing war in the Middle East.
On March 10, the Oversight Board called on Meta to adopt a stronger approach to identifying and labelling AI-generated content. The warning comes amid heightened tensions in the Middle East, where misleading or manipulated content can spread across platforms and influence public perception of the war.
The board stressed that Meta must do more to ensure users can clearly recognise AI-generated material. According to the post, the company should provide better transparency about the origin of media.
“This includes providing details at scale about the origin of media, based on content provenance standards, investing in stronger detection tools and developing better methods for appropriate labelling,” the board said.
It also recommended that Meta create a separate set of rules specifically for AI-generated content. It further added that the company “should amend its current policies to ensure a timely and adequate response to deceptive AI-generated output.”
Risks Of Deepfakes During Global Crises
The board warned that as the volume and quality of AI-generated media increase, its societal impact will also grow. According to the report, “The risks are heightened when deepfake output designed to deceive, manipulate or increase engagement is shared during conflicts and crises.”
It highlighted how deceptive AI-generated content circulated during crises in Iran and Venezuela: manipulated media was sometimes presented as authentic, while genuine content was falsely dismissed as fake.
Such situations may lead to a scenario where the public becomes unable to distinguish truth from falsehood.
At the same time, the board emphasised that AI-generated content being misleading is not, by itself, a reason to restrict freedom of expression.
“The industry needs coherence in helping users distinguish deceptive AI-generated content, and platforms should address abusive accounts and pages sharing such output,” the board said.
Why The Board Overturned Meta's Decision
The oversight body was reviewing a specific case where Meta allowed a post to remain on its platform without attaching a “High Risk AI” label.
The board concluded that the content posed a material risk of misleading the public at a critical moment, even though it “did not meet the threshold for removal (posing a risk of imminent physical harm or violence).”
As a result, the board overturned Meta's decision and said the post should have carried a High Risk AI label, which warns users that the content may involve deceptive AI-generated material.
The report also criticised Meta's current labelling system, saying, “The current mechanisms for affixing even the standard label of AI Info to video… are neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict.”
Concerns Over Detection, Fact-Checking Systems
Another major concern raised by the board involves the inconsistent use of industry standards for identifying AI content, particularly the Coalition for Content Provenance and Authenticity (C2PA) framework.
Meta has not consistently applied these standards even to media generated by its own AI tools, it said. The C2PA system embeds technical metadata in digital media that helps platforms trace its origin and apply proper labels.
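To illustrate the idea behind provenance metadata, the sketch below shows a heavily simplified, hypothetical version of what a provenance manifest records: which tool created the media and a cryptographic hash of its content, which a platform could later check to decide whether to apply an AI label. Real C2PA manifests are signed, standardised structures embedded in the media file itself; the function names and manifest fields here are illustrative assumptions, not the actual C2PA API.

```python
import hashlib

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    # Simplified, illustrative provenance record (not the real C2PA format):
    # who created the media, and a hash binding the claim to the exact bytes.
    return {
        "claim_generator": generator,
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "assertions": [
            {"label": "c2pa.actions",
             "data": {"action": "created",
                      "digitalSourceType": "trainedAlgorithmicMedia"}},
        ],
    }

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    # A platform re-hashes the received media; a mismatch means the
    # bytes no longer match what the generator originally attested to.
    return manifest["content_hash"] == hashlib.sha256(media_bytes).hexdigest()

image = b"\x89PNG...synthetic image bytes"
manifest = make_manifest(image, "ExampleImageGenerator/1.0")
print(verify_manifest(image, manifest))              # True: media unmodified
print(verify_manifest(image + b"edit", manifest))    # False: media altered
```

In the real standard, the manifest is cryptographically signed and travels inside the file, so stripping or tampering with it is detectable; the board's complaint is that Meta does not reliably attach or read such metadata even for media its own tools generate.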
The board also noted that fact-checkers may struggle to keep up with the sheer volume of AI-generated content, especially during major geopolitical crises.
“The Board reiterates that Meta should ensure that fact-checkers are adequately resourced and have guidance on prioritising content from conflicts. The Crisis Policy Protocol (CPP) and Trending Events designations should have allowed Meta to ensure more effective support for third-party fact-checkers during the crisis.”
Key Recommendations To Meta
To address these issues, the Oversight Board recommended several changes to Meta's policies and technology systems. Among the most significant was the creation of a dedicated Community Standard specifically for AI-generated content, separate from existing misinformation policies.
It further told the social media giant to create “pathways for affixing High Risk and High Risk AI labels to content much more frequently.”
The board also asked Meta to attach provenance information and invisible watermarks to content created by its AI tools.
Other recommendations include investing in stronger detection tools capable of identifying AI-generated audio, video and images, implementing Content Credentials at scale and clearly explaining penalties for users who fail to disclose when their content has been digitally created or altered.
The board also recommended changes to Meta's Misinformation Community Standard to ensure that harmful misinformation is addressed more quickly during a crisis.