- Artificial intelligence will scan every image before anyone sees it
- Offensive posts may contain hate speech, pornography, violence or nudity
- Facebook, along with Twitter, YouTube and Microsoft, agreed to new hate speech rules
When users upload something offensive to disturb people, it normally has to be seen and flagged by at least one person before it is taken down. Offensive posts include content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.
For example, a bully, jilted ex-lover, stalker, terrorist or troll could post offensive photos to someone's wall, a group, an event or the news feed, 'Tech Crunch' reported.
By the time such content is flagged as offensive and taken down by Facebook, it may already have done its damage. Now, artificial intelligence is helping Facebook unlock active moderation at scale by having computers scan every uploaded image before anyone sees it.
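The screening step described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the `score_image` stub, the `screen_upload` function and the review threshold are all invented for the example, and Facebook's actual models, thresholds and pipeline are not public.

```python
# Hypothetical sketch of pre-publication image screening.
# A real system would run a trained vision model; here a trivial
# stub stands in so the example is self-contained.

REVIEW_THRESHOLD = 0.8  # assumed cutoff above which humans review


def score_image(image_bytes: bytes) -> float:
    """Stand-in for a trained classifier: returns a policy-violation
    probability. This stub just checks for a marker prefix."""
    return 0.9 if image_bytes.startswith(b"BAD") else 0.1


def screen_upload(image_bytes: bytes) -> str:
    """Decide an image's fate before any other user sees it."""
    if score_image(image_bytes) >= REVIEW_THRESHOLD:
        return "held_for_review"  # queued for human moderators
    return "published"
```

The key design point the article describes is ordering: the model scores the image at upload time, so flagged content can be held before it reaches anyone's feed, rather than after users report it.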
"Today we have more offensive photos being reported by AI algorithms than by people," said Joaquin Candela, Facebook's Director of Engineering for Applied Machine Learning. As many as 25 per cent of engineers now regularly use its internal AI platform to build features and do business, Facebook said.
This artificial intelligence also helps rank news feed stories, read aloud the content of photos to the vision-impaired and automatically write closed captions for video ads, which increase view time by 12 per cent.
The technology could eventually help social networking sites combat hate speech. Facebook, along with Twitter, YouTube and Microsoft, yesterday agreed to new hate speech rules.