Facebook makes changes in its ongoing attempt to limit misinformation

Posted at 9:16 PM, Apr 10, 2019, and last updated at 9:18 PM, Apr 10, 2019

Facebook is doing a lot of little things to try to address its bigger problems.

On Wednesday, the company announced more than a dozen updates on how it is addressing misinformation and other problematic content on Facebook, Instagram and Messenger. To promote the various efforts, the company held a four-hour event at its Menlo Park headquarters for around 20 reporters, where employees working on various Facebook products recapped the changes and answered questions.

For years, Facebook has grappled with the spread of controversial content on its platform, such as misinformation about elections, anti-vaccination stories, violence and hate speech.

Facebook has been trying to remove rule-breaking content faster and to “reduce” the spread of content that doesn’t explicitly violate its policies but is still troublesome, such as clickbait and misinformation.

“We don’t remove information from Facebook just because it’s false. We believe we have to strike a balance,” Facebook’s VP of integrity Guy Rosen said at the event. “When it comes to false information by real people, we aim to reduce distribution and provide context.”

For example, Facebook said it will lessen the reach of groups that often share misinformation. When users in a group frequently share content that has been deemed false by Facebook’s third-party fact checkers, that group’s content will be pushed lower in News Feed so fewer people see it.
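Facebook has not described the mechanics of that demotion, but the general idea can be sketched in a few lines of code. The Python snippet below is purely illustrative, not Facebook’s implementation; the window size, threshold and ranking multiplier are invented for the example.

    # Hypothetical sketch: demote a group's posts when its members frequently
    # share links rated false by third-party fact-checkers. All numbers and
    # names are assumptions for illustration, not Facebook's actual logic.
    from collections import deque


    class GroupQualityTracker:
        def __init__(self, window: int = 100, false_share_threshold: float = 0.2):
            # True = the shared link was rated false by a fact-checker
            self.recent_ratings = deque(maxlen=window)
            self.false_share_threshold = false_share_threshold

        def record_share(self, rated_false: bool) -> None:
            self.recent_ratings.append(rated_false)

        def ranking_multiplier(self) -> float:
            # Posts from groups over the threshold are pushed lower in News Feed.
            if not self.recent_ratings:
                return 1.0
            false_rate = sum(self.recent_ratings) / len(self.recent_ratings)
            return 0.5 if false_rate >= self.false_share_threshold else 1.0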

There will also be a “click-gap” signal, which will affect a link’s position in the News Feed. With this feature, Facebook hopes to reduce the reach of links to websites that are disproportionately popular on Facebook compared with the rest of the web.
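Facebook has not published how the signal is computed, but a toy version helps make the idea concrete: compare a domain’s share of clicks on the platform with its share of inbound links on the open web, and demote links from domains where the gap is large. Everything below — function names, numbers and the threshold — is an assumption for illustration only.

    # Toy "click-gap"-style calculation; not Facebook's actual algorithm.
    def click_gap_score(platform_clicks: int, total_platform_clicks: int,
                        web_inlinks: int, total_web_inlinks: int,
                        smoothing: float = 1.0) -> float:
        """Ratio > 1 means the domain is disproportionately popular on the
        platform relative to the rest of the web."""
        platform_share = (platform_clicks + smoothing) / (total_platform_clicks + smoothing)
        web_share = (web_inlinks + smoothing) / (total_web_inlinks + smoothing)
        return platform_share / web_share


    def demotion_multiplier(score: float, threshold: float = 5.0) -> float:
        """Downrank links whose click-gap score exceeds an (invented) threshold."""
        return 1.0 if score <= threshold else threshold / score


    # Example: a domain with 2% of platform clicks but only ~0.1% of web inlinks
    score = click_gap_score(20_000, 1_000_000, 100, 100_000)
    print(round(score, 1), round(demotion_multiplier(score), 2))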

Facebook is also working with outside experts to identify new ways to combat fake news on the platform, and The Associated Press is expanding the work it does for Facebook’s independent fact-checking program.

The company has frequently described its issues with problematic content as “adversarial.” In the company’s framing, it is fighting an enemy that learns and changes tactics, and the bundle of changes announced on Wednesday is its newest set of weapons.

Facebook policy bans content that it determines could result in “imminent physical violence.” On Wednesday, employees defended the company’s decision not to ban all misinformation or anti-vaccination content on its products.

“When it comes to thinking about harm, it is really hard … to draw a line between a piece of content and something that happens to people offline,” said Tessa Lyons, Facebook’s head of News Feed integrity.

She said some of the posts that appeared to be anti-vaccination involved people asking questions, seeking information and having conversations around the topic.

“There is a tension between enabling expression and discourse and conversation and ensuring that people are seeing authentic and accurate information. We don’t think that one private company should be making decisions about what information can or cannot be shared online,” she said.

Renee Murphy, a principal analyst covering security and risk at research firm Forrester, said that while Facebook’s steps are positive, they don’t do nearly enough to address some of its larger problems.

“Part of me says, ‘Awesome, [this content] won’t go as far as it used to,’” she said. “The other part says, ‘I have no trust in any of this.’ At the end of the day, what is any of this going to do? How will they manage it?”

Facebook is also trying to be more transparent with users about how and why it makes decisions. As part of the effort, the company is adding a new section to its Community Standards website where users can see the updates Facebook makes to its policies every month.

Another update lets users remove comments and other content they posted to a Facebook Group after they leave it.

Meanwhile, Facebook-owned Instagram is trying to squash the spread of inappropriate posts that don’t violate its policies. For example, a sexually suggestive photo would still show up in the feeds of users who follow that account, but it may no longer be recommended on the Explore page or on hashtag pages.

Facebook also announced a few updates to its chat service Messenger, including a Facebook verified badge that would show up in chats to help fight scammers who impersonate public figures.

Another tool, called the Forward Indicator, will appear in Messenger when a message someone receives has been forwarded. WhatsApp, another Facebook-owned app, has a similar feature, which is part of an effort to stop the spread of misinformation. WhatsApp has had major issues with viral hoax messages spreading on the platform, which have resulted in more than a dozen lynchings in India.

Forrester’s Murphy believes the company should do more to address major issues such as violence being livestreamed and going viral on the platform. Last month, a suspected terrorist live-streamed video of a mass murder in New Zealand to Facebook. The company said its AI systems failed to catch the video, and it took down 1.5 million videos of the attack in the first 24 hours.

“They have bigger problems. I’m sure [these updates] will help sometimes, but there are bigger problems afoot,” she said. “Facebook has a lot more to do.”