The hiring spree, announced by Chief Executive Mark Zuckerberg on Wednesday, is an acknowledgement by Facebook that it needs more than automated software to identify and remove offensive posts, which have proliferated online and made headlines in the traditional news media.
The problem has become more pressing since the introduction last year of Facebook Live, a service that allows any of Facebook's 1.9 billion monthly users to broadcast video and that has been marred by scenes of violence.
Some violence on Facebook is inevitable given its size, researchers say, but the company has been attacked for its slow response.
UK lawmakers this week accused social media companies, including Facebook, of doing a "shameful" job of removing child abuse images and other potentially illegal material.
In Germany, the company has been under pressure to be quicker and more accurate in removing illegal hate speech and to clamp down on so-called fake news.
German lawmakers have threatened fines if the company cannot remove at least 70 percent of offending posts within 24 hours.
So far, Facebook has avoided political fallout from U.S. lawmakers or any significant loss of the advertisers it depends on for revenue. Some in the ad industry have defended Facebook, citing the difficulty of policing material from its many users. Police agencies have said Facebook works well with them.
Facebook shares were down slightly on Wednesday, ahead of quarterly earnings after the bell.
Zuckerberg, the company's co-founder, said in a Facebook post the workers will be in addition to the 4,500 people who already review posts that may violate its terms of service. Facebook has 17,000 employees overall, not including contractors.
Last week, a father in Thailand broadcast himself killing his daughter on Facebook Live, police said. After more than a day, and 370,000 views, Facebook removed the video. Other videos from places such as Chicago and Cleveland have also shocked viewers with their violence.
Zuckerberg said the company would do better: "We're working to make these videos easier to report so we can take the right action sooner - whether that's responding quickly when someone needs help or taking a post down."
The 3,000 workers will be new positions and will monitor all Facebook content, not just live videos, the company said. The company did not say where the jobs would be located, although Zuckerberg said the team operates around the world.
The world's largest social network has been turning to artificial intelligence to try to automate the process of finding pornography, violence and other potentially offensive material. In March, the company said it planned to use such technology to help spot users with suicidal tendencies and get them assistance.
However, Facebook still relies largely on its users to report problematic material. It receives millions of reports from users each week, and like other large Silicon Valley companies, it relies on thousands of human monitors to review the reports.
"Despite industry claims to the contrary, I don't know of any computational mechanism that can adequately, accurately, 100 percent do this work in lieu of humans. We're just not there yet technologically," said Sarah Roberts, a professor of information studies at UCLA who looks at content monitoring.
In December, two people who monitored graphic material for Microsoft Corp's services such as Skype sued the company, saying it had failed to warn them about the risks to their mental health. They are seeking compensation for medical costs, wages lost from disability and other damages.
Microsoft has disputed their claims. The company said in a statement that it takes seriously both its responsibility to remove and report imagery of child sexual exploitation and abuse and the health and resilience of its employees.
Mental health assistance plans sometimes fall by the wayside for such workers, and there is a risk that could happen at Facebook if it tries to hire 3,000 new workers quickly, Roberts said. "To do it at this scale and this magnitude, I question that," she said.
Facebook says that every person reviewing its content is offered psychological support and wellness resources, and that the company has a support program in place.
When Facebook launched its live service in April 2016, Zuckerberg spoke about it as a place for "raw and visceral" communication.
"Because it's live, there is no way it can be curated," Zuckerberg told BuzzFeed News in an interview then. "And because of that it frees people up to be themselves. It's live; it can't possibly be perfectly planned out ahead of time."
Since then, at least 50 criminal or violent incidents have been broadcast over Facebook Live, including assault, murder and suicide, The Wall Street Journal reported in March.
In January, four African-Americans in Chicago were accused of attacking an 18-year-old disabled man on Facebook Live while making anti-white racial taunts. They have pleaded not guilty.
Last month, a man in Cleveland, Ohio, was accused of shooting another man on a sidewalk and then uploading a video of the killing to Facebook, where it remained for about two hours. The man later fatally shot himself.
Zuckerberg said the company would keep working with community groups and law enforcement, and that there have been instances when intervention has helped.
"Just last week, we got a report that someone on Live was considering suicide," he wrote in his post. "We immediately reached out to law enforcement, and they were able to prevent him from hurting himself. In other cases, we weren't so fortunate."