Facebook debuted a host of new efforts Thursday in its fight against fake news.
In a blog post from product manager Tessa Lyons, Facebook announced a series of new partnerships and expansions to its fact-checking endeavors, including fact-checking viral photos and videos and using machine learning to stop the spread of hoaxes and fake news.
According to Lyons’ blog post, the new features are:
Expanding our fact-checking program to new countries
Expanding our test to fact-check photos and videos
Increasing the impact of fact-checking by using new techniques, including identifying duplicates and using ClaimReview
Taking action against new kinds of repeat offenders
Improving measurement and transparency by partnering with academics
Lyons explained that algorithms detect and flag pages exhibiting suspicious or otherwise unsavory behavior: plagiarized text, shady ads, targeting users in other countries, and more. Once a viral news story is debunked, Facebook will use machine learning to flag duplicates of the story, identifying copies across different domains and news pages; posting the same story to multiple sites is a common practice among peddlers of false information.
“Using machine learning we’re able to identify and demote duplicates of articles that were rated false by fact-checkers,” Lyons said to BuzzFeed. “These pages often copy and paste content [from other sources], and another signal is that the websites themselves are covered in low-quality ads. We also see a common pattern in that page admins based in one country are targeting people in other countries. These admins often have suspicious accounts that are not fake but are identified in our system as having suspicious activity.”
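Facebook has not published the details of its duplicate-detection models, but the core idea Lyons describes, scoring how closely a new post’s text matches an article already rated false, can be illustrated with a toy sketch. The function names, the word-count representation, and the 0.9 threshold below are all illustrative assumptions, not Facebook’s actual system, which would involve far more signals and scale.

```python
from collections import Counter
import math
import re


def term_vector(text):
    """Lowercase word-count vector for a piece of article text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine_similarity(a, b):
    """Cosine similarity between two term-count vectors (0.0 to 1.0)."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def is_likely_duplicate(debunked_text, candidate_text, threshold=0.9):
    """Flag a candidate article as a probable copy of a debunked story.

    A copy-pasted article shares nearly all its words with the original,
    so its similarity score lands near 1.0; unrelated stories score low.
    """
    score = cosine_similarity(term_vector(debunked_text),
                              term_vector(candidate_text))
    return score >= threshold
```

In practice, a similarity check like this would only demote near-verbatim copies; catching paraphrased hoaxes requires the kind of learned models Lyons alludes to.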
Interestingly, being flagged for hoaxes doesn’t necessarily lead to a ban. Facebook says it plans to warn and demonetize pages that violate the rules, but then reinstate these pages if they stop sharing hoaxes.
“There is that ability to kind of rehabilitate [your page],” Lyons said.
Facebook and other major content platforms like YouTube have long floated the idea of using AI or machine-learning techniques to moderate content, whether to detect terrorism, child pornography, fake news, or hate speech. In all but the most extreme, expletive-laden cases, the technology simply isn’t up to the enormous volume of content uploaded to these platforms, and concerns remain about handing that much power over to automation: what happens when someone is erroneously flagged?
These kinds of errors are already happening as part of the social network’s efforts to clean up its platform. This week, Facebook rejected an ad for a news story on child detention centers for its “political” content, prompting backlash and familiar accusations of bias. With the midterm elections fast approaching, Facebook can expect even more scrutiny of all of its moderation efforts, automated or not.