Context is for Kings — Facebook’s Latest Innovation Against Fake News

Facebook recently announced a new feature, currently being tested in the News Feed, that adds context to shared links.

Image: the "More Information Available" button, used to access context about links in your Facebook News Feed.

This contextual information is “pulled from across Facebook and other sources, such as information from the publisher’s Wikipedia entry… trending articles or related articles about the topic”. The hope is that this context will give users what they need to decide whether a given link is reliable or not.

This is the best move I have seen any of the platforms take in response to fake news so far. It is a non-invasive system that helps people make up their own minds about what to believe and share, rather than deciding for them what is and is not true.

Professional Fact Checkers Investigate ‘Laterally’

This approach also fits neatly with the findings of Sam Wineburg’s team at the Stanford History Education Group: professional fact checkers tend to go ‘lateral’ in their investigations, unlike the students studied, who tended to assess articles and websites by staying on the page in front of them:

Fact checkers approached unfamiliar content in a completely different way [to the students studied]. They read laterally, hopping off an unfamiliar site almost immediately, opening new tabs, and investigating outside the site itself. They left a site in order to learn more about it.

It is exciting to think that this new context-adding feature Facebook is testing, if executed well, could actually provide a powerful tool for professional fact checkers.

“If Executed Well”

The challenge with this system will always be in the execution. Facebook’s job here will be to maintain the quality of the contextual information. No doubt people will try to game it from day one, attempting the equivalent of a “Google bomb” on selected articles.

Also, the decision to give preference to certain websites (like Wikipedia) could push away the portions of the population who believe those websites are biased and/or outright dishonest.


But honestly, both of those ‘concerns’ are pretty inconsequential. There is not a lot to criticise here. This is a really strong move on Facebook’s part. They are on to something genuinely valuable: a system that is actually quite hard to game and open to user choice, without telling people what they should accept as true or false.

Adding to the Data and Improving the Interface

Hopefully the interface continues to evolve so that as much value as possible can be extracted from this tool. The demonstration in the announcement video looks very similar to what is already on offer, adding just an excerpt about the publisher from Wikipedia and a map of where the link has been shared. I hope more ideas continue to be integrated so the tool keeps gaining value.

One idea I’d like to see implemented is to categorise or label the ‘Related Articles’ based on the key value each article brings. Some simple labels could be ‘Most shared’, ‘First to publish’, or ‘Local to you’. No doubt Facebook is already using this sort of data to decide what belongs in the list. They don’t need to reveal the algorithm that determines the list, but they could highlight the most relevant reason each article is there, in a way that helps people understand what value it brings with it. (The context of the context…?)
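As a rough illustration, here is a minimal sketch of how such a label might be chosen. Everything in it, the RelatedArticle fields, the thresholds, and the context_label heuristic, is my own assumption about how this could work, not anything Facebook has described:

```python
from dataclasses import dataclass

@dataclass
class RelatedArticle:
    url: str
    share_count: int
    published_at: float   # Unix timestamp (assumed field)
    distance_km: float    # distance from the reader's location (assumed field)

def context_label(article, candidates):
    """Pick the single most relevant reason this article is listed.

    A hypothetical heuristic: surface whichever signal makes the
    article stand out, without revealing the full ranking algorithm.
    """
    if article.share_count == max(a.share_count for a in candidates):
        return "Most shared"
    if article.published_at == min(a.published_at for a in candidates):
        return "First to publish"
    if article.distance_km < 50:  # arbitrary cutoff for illustration
        return "Local to you"
    return None  # no label is better than a misleading one
```

The point of the sketch is that only the winning reason is surfaced per article, which keeps the interface simple while leaving the underlying ranking private.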

A stronger example comes from the Socratic Web idea that I advocate. Facebook could easily ensure that ‘critical’ responses always get preferential placement and are labelled as such: “Critique” or “Rebuttal”, for example. Giving critical responses preferential listing helps guarantee access to arguments from outside any potential echo chamber, while also ensuring that this context engine provides the widest context possible with limited resources.
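A minimal sketch of what that preferential placement could look like, assuming a hypothetical is_critique classifier and an existing relevance score (Facebook has announced nothing like this; the scoring here is entirely illustrative):

```python
def rank_related(articles, base_score, is_critique):
    """Order related articles so critical responses float to the top.

    base_score:  maps an article to its normal relevance score.
    is_critique: a (hypothetical) classifier flagging critical responses.
    """
    def key(article):
        # Critiques sort ahead of everything else, then by relevance.
        return (1 if is_critique(article) else 0, base_score(article))

    return [
        {"article": a, "label": "Critique" if is_critique(a) else None}
        for a in sorted(articles, key=key, reverse=True)
    ]
```

The design choice worth noting is that critiques are boosted and labelled at the same time, so readers can see why an item was placed first rather than mistaking it for an endorsement.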

It also helps highlight the work done by professional fact checkers, who have already researched the wider context in a more robust and reliable way. Facebook would be providing a shortcut to a thoroughly investigated fact check, debunking, or critique for people looking for that depth, while still providing access to the ‘tools’ to do their own research alongside it.

“No Information on This Website”

Another small feature mentioned in the video is that the context button would still appear even when no information could be found. It would simply report that there is “No Information on This Website”.


I was very pleased to see Facebook include this. This really shows they are listening.

A study published just a few weeks ago found an unintended consequence of Facebook’s fake-news flagging system: some users would incorrectly trust any story that wasn’t flagged.

The existence of flags on some — but not all — false stories made Trump supporters and young people more likely to believe any story that was not flagged, according to the study published Monday by psychologists David Rand and Gordon Pennycook.

Accurately and reliably marking every story on Facebook as true or untrue is an almost impossible task, which is yet another reason to consider the flagging approach heavily flawed. The new approach Facebook is taking seems to avoid that problem entirely.

The context-providing approach can be applied algorithmically to most articles, and for the articles where it doesn’t work, that fact alone is “helpful context”, as Andrew Anker observes in the announcement video. The implication: “No one knows anything about this website/author/subject.” [maybe it is made up?]
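In code terms, that fallback is trivial but important. A sketch, with assumed sources callables standing in for whatever Facebook actually queries:

```python
def context_card(url, sources):
    """Assemble the context card for a shared link.

    sources: callables that each try to return a piece of context for
    the URL (e.g. a Wikipedia extract, related articles, a share map)
    and return None when they find nothing.
    """
    snippets = [snippet for source in sources
                if (snippet := source(url)) is not None]
    if not snippets:
        # The absence of information is surfaced, not hidden, so an
        # unknown site never looks implicitly endorsed.
        return {"url": url, "notice": "No Information on This Website"}
    return {"url": url, "context": snippets}
```

The key behaviour is the explicit empty case: rather than omitting the card when nothing is found, the system reports that nothing is known, which is exactly what avoids the “unflagged means true” trap described above.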

All round, great work Facebook!
