

New Internal Documents Contradict Facebook's Claims that AI Can Enforce Its Rules (livemint.com)

(Sunday October 17, 2021 @11:34PM (EditorDavid) from the status-update dept.)


Today [1]in the Wall Street Journal, Facebook's head of integrity, Guy Rosen, admitted that from April to June of this year, one in every 2,000 content views on Facebook still contained hate speech. ([2]Alternate URL here, with shorter versions [3]here and [4]here.)

Rosen called that figure an improvement over mid-2020, when one in every 1,000 content views on Facebook was hate speech. But at that same time, Mark Zuckerberg was [5]telling the U.S. Congress that "In terms of fighting hate, we've built really sophisticated systems!" "Facebook Inc. executives have long said that artificial intelligence would address the company's chronic problems keeping what it deems hate speech and excessive violence as well as underage users off its platforms," reports the Wall Street Journal.
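The figures above are prevalence metrics: views of violating content divided by total content views, rather than counts of violating posts. A minimal sketch of the arithmetic, using only the ratios reported in the article (the function name is my own, not Facebook's):

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Share of content views that contained violating content."""
    return violating_views / total_views

# Ratios reported in the article:
# mid-2020: 1 in every 1,000 views; Apr-Jun 2021: 1 in every 2,000 views.
p_2020 = prevalence(1, 1_000)  # 0.001, i.e. 0.10% of views
p_2021 = prevalence(1, 2_000)  # 0.0005, i.e. 0.05% of views

print(f"mid-2020 prevalence: {p_2020:.2%}")  # prints "mid-2020 prevalence: 0.10%"
print(f"2021 prevalence:     {p_2021:.2%}")  # prints "2021 prevalence:     0.05%"
```

Note that a falling prevalence can reflect either better removal or reduced distribution of borderline content; the metric alone doesn't distinguish the two.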

"That future is farther away than those executives suggest, according [6]to internal documents reviewed by The Wall Street Journal . Facebook's AI can't consistently identify first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes."

> On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules — a low-single-digit percent, they say. When Facebook's algorithms aren't certain enough that content violates the rules to delete it, the platform shows that material to users less often — but the accounts that posted the material go unpunished.

>

> The employees were analyzing Facebook's success at enforcing its own rules on content that it spells out in detail internally and in public documents like its community standards. The documents reviewed by the Journal also show that Facebook two years ago cut the time human reviewers focused on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics.

>

> According to the documents, those responsible for keeping the platform free from content Facebook deems offensive or dangerous acknowledge that the company is nowhere close to being able to reliably screen it. "The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas," wrote a senior engineer and research scientist in a mid-2019 note. He estimated the company's automated systems removed posts that generated just 2% of the views of hate speech on the platform that violated its rules. "Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term," he wrote.

>

> This March, another team of Facebook employees drew a similar conclusion, estimating that those systems were removing posts that generated 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook's policies against violence and incitement.
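The enforcement scheme the documents describe is a two-tier confidence policy: content the classifier flags with high confidence is deleted, while content in an uncertain middle band is merely demoted (shown to users less often), with no penalty to the posting account. A minimal sketch of that policy shape, with thresholds and scores invented purely for illustration:

```python
# Hypothetical confidence thresholds -- the article does not disclose
# Facebook's actual values.
DELETE_THRESHOLD = 0.95
DEMOTE_THRESHOLD = 0.70

def enforcement_action(violation_score: float) -> str:
    """Map a classifier's violation confidence score to an action."""
    if violation_score >= DELETE_THRESHOLD:
        return "delete"  # high confidence: remove the post
    if violation_score >= DEMOTE_THRESHOLD:
        return "demote"  # uncertain: reduce distribution, account unpunished
    return "allow"       # below both bars: leave untouched

print(enforcement_action(0.98))  # prints "delete"
print(enforcement_action(0.80))  # prints "demote"
print(enforcement_action(0.30))  # prints "allow"
```

Under this design, the low removal percentages the employees cite follow naturally: only the small high-confidence slice is ever deleted, and everything in the demotion band still accumulates views.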

Facebook also takes additional steps to reduce views of hate speech beyond AI screening, the company told the Journal — while arguing that the internal documents the Journal had reviewed were outdated. But one of those documents showed that in 2019 Facebook was spending $104 million a year to review suspected hate speech, with a Facebook manager noting that it "adds up to real money" and proposing "hate speech cost controls."

Facebook told the Journal the saved money went toward improving its algorithms. But the Journal reports that Facebook "also introduced 'friction' to the content reporting process, adding hoops for aggrieved users to jump through that sharply reduced how many complaints about content were made, according to the documents."

Facebook told the Journal that "some" of that friction has since been rolled back.



[1] https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184

[2] https://www.livemint.com/industry/media/facebook-says-ai-can-enforce-its-rules-but-the-company-s-own-engineers-are-doubtful-11634484622857.html

[3] https://www.marketwatch.com/story/facebook-is-counting-on-ai-to-clean-up-its-platform-but-its-own-engineers-have-doubts-11634512872

[4] https://www.foxbusiness.com/technology/facebook-ai-will-clean-up-the-platform-engineers-doubt

[5] https://www.govinfo.gov/content/pkg/CHRG-116hhrg41317/html/CHRG-116hhrg41317.htm

[6] https://archive.ph/o/7Zdqg/https://www.wsj.com/articles/the-facebook-files-11631713039?mod=article_inline



AI isn't actually Artificial Intelligence (Score:1)

by cats-paw ( 34890 )

It's not even remotely close. We're going to lose this battle just like we lost the hacker vs cracker battle. Correction, it's already been lost.

AI doesn't have the faintest amount of common sense of any sort.

I think Machine Learning is a pretty good moniker, although still inaccurate, because ML also completely lacks introspection.

A real AI, or even something capable of "learning," would at least be able to get some idea that things don't make sense when they don't actually make sense.

However, ML will do.
