Facebook axed 583 million fake accounts in the first three months of 2018, the social media giant said Tuesday, detailing how it enforces “community standards” against sexual or violent images, terrorist propaganda or hate speech.
Responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook said those closures came on top of blocking millions of attempts to create fake accounts every day.
Despite this, the group said fake profiles still make up 3 to 4 per cent of all active accounts.
It claimed to detect almost 100 per cent of spam and to have removed 837 million posts classified as spam over the same period.
Facebook pulled or slapped warnings on nearly 30 million posts containing sexual or violent images, terrorist propaganda or hate speech during the first quarter. Improved technology using artificial intelligence had helped it act on 3.4 million posts containing graphic violence, nearly three times as many as in the last quarter of 2017.
In 85.6 per cent of the cases, Facebook detected the images before being alerted to them by users, said the report, issued the day after the company said about 200 apps had been suspended on its platform as part of an investigation into misuse of private user data.
The figure represents between 0.22 and 0.27 per cent of the total content viewed by Facebook’s more than two billion users from January through March.
“In other words, of every 10,000 content views, an estimated 22 to 27 contained graphic violence,” the report said.
Responses to rule violations include removing content; adding warnings to content that, while not violating Facebook standards, may be disturbing to some users; and notifying law enforcement in the case of a “specific, imminent and credible threat to human life”.
Improved technology also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 per cent increase. Nearly all were dealt with before any alert was raised, the company said.
It attributed the increase to the enhanced use of photo detection technology. Hate speech is harder to police using automated methods, however, as racist or homophobic hate speech is often quoted in posts by its targets or by activists.
“It may take a human to understand and accurately interpret nuances like self-referential comments or sarcasm,” the report said, noting that Facebook aims to “protect and respect both expression and personal safety”. – AFP