Zuckerberg Gets Real About Fake News At Facebook’s Annual Shareholders Meeting

Investors are putting Facebook’s feet to the fire about fake news.

At the company’s annual shareholders meeting on Thursday, investors questioned CEO Mark Zuckerberg about how Facebook is combating the spread of false news on its platform.

Advertisers have become increasingly sensitive to brand safety issues since the YouTube/Google Display Network debacle kicked off in earnest in March, and Facebook’s role in the dissemination of fake news is a closely related issue.

As sites replete with malicious content and crappy ads proliferate, their creators turn to Facebook to generate clicks. Facebook responded to the problem in early May with an algorithm tweak that de-prioritized links to low-quality sites in the newsfeed.

Facebook has a responsibility to keep its platform safe for users and brands, and it’s been called out, most recently by Hillary Clinton on Wednesday at the Code conference, for unduly influencing the last presidential election and the public discourse in general by spreading false news reports.

But most of the people who spread hoaxes and false news aren’t doing it for ideological reasons, Zuckerberg told investors. They’re just spammers trying to make money with clickbait.

“They know that what they’re saying isn’t true. They’re just trying to come up with the most outrageous thing they can … [to] get you to click on it because it sounds crazy … and [then] they show you ads on the landing page,” he said, noting that Facebook is focusing on “disrupting the economics for these folks.”

By applying measures that reduce the sharing of fake news, “we can make sure that these kinds of spammers aren’t using our ad systems and monetization tools to make money,” he said.

Facebook has also said that it’s planning to add 3,000 new people to its community ops and content safety teams this year, in the ongoing fight against harmful and violent content.

While that will help in the short term, the volume of content on Facebook is such that no number of human reviewers could realistically stem the flow and root out all the bad stuff.

The solution, Zuckerberg declared, is technology.

“There are tens of billions of messages, comments and pieces of content that get shared through our service every day,” Zuckerberg said. “Long-term, the only way that we will get to the quality level we all want is not by adding thousands or tens of thousands more people to help review – but by building AI and technical systems that can look at this stuff more proactively.”

That’s fine, but the technology isn’t there yet, and with 1.9 billion MAUs – about one-quarter of all humans – moderation is a monster.

As The Guardian’s recent report on Facebook’s leaked content moderation policies revealed, the platform is awash with questionable content and creating rules to police it all is an ethical quagmire.

For example, consider the gradations of violent speech. “Kick a person with red hair” or “Let’s beat up fat kids” is OK under the guidelines because the hateful language is not directed at a specific person, whereas Facebook’s policies would flag “Someone shoot Trump” or “#stab and become the fear of the Zionist” as credible threats of violence.

“We acknowledge that we have more to do,” said Elliot Schrage, VP of global communications, marketing and public policy at Facebook. “The product we deliver and the services we provide have become increasingly popular, and as a result of that, they get more and more use – and frankly speaking, we have not been able to keep pace as much as we thought we would be able to do.”

But Schrage doggedly maintained Facebook’s status as a neutral provider of technology.

Although Facebook does render decisions on which content is and isn’t appropriate (most people would call those editorial decisions), the company frames it as adhering to community standards and guidelines.

“Our focus is not to be in the business of being an editor in the sense of determining what people should see,” Schrage said. “It’s to help people share what they want to share in an environment that is safe, an environment that is secure, but that also lets people express their opinions.”
