HOW FACEBOOK CAN QUARANTINE FAKE NEWS

It should be obvious that a society can’t solve real problems if it can’t first distinguish the real from the unreal. Irreality may have its pleasures, but reality will always kick its ass in the end. Fans of cheap irony (like myself) can legitimately assert that irreality is an actual, real thing, which has a profound and often corrosive impact on reality. For democracy to function – for even basic conversation to function – we have to have a way of separating lies from truth. And if institutions that distribute and disseminate lies have no accountability, they can pollute our information landscape with impunity. Any media organization that doesn’t explicitly flag fake news, and disincentivize its sharing, is essentially monetizing irreality.

Facebook doesn’t want to define itself as a media company – Mark Zuckerberg has said “We are a tech company, not a media company.” He argued Facebook isn’t a media company because it doesn’t produce content. However, it’s undeniable that Facebook is a media distribution company – according to a study by the Pew Research Center and the Knight Foundation, 44% of American adults get news on Facebook. Facebook is a dominant player in a media ecosystem that has far more content, and far more sources, than the media environment that preceded the internet. It has more power and reach, as a distributor, than any single news distributor in the pre-internet era. And it can play a crucial role – in fact it DOES play a crucial role, whether it wants to or not – in determining whether our current media ecosystem is a healthy or a toxic one.

Unfortunately, to date Facebook has provided a climate that is – without being designed explicitly for this purpose – quite welcoming to distortions and outright fabrications. It’s as if they built a terrarium meant to grow a wide variety of flora, but oops – the soil is the perfect composition for black mold. And even worse – to stretch this fungoid metaphor to the breaking point – it turns out they can make a lot of money selling black mold, so there’s little pecuniary incentive to slow or stop its growth. The only problem is the pesky detail that most people who come into contact with it get memory loss, brain fog, confusion – a seething bouquet of neurological pathologies.

This state of being – with Facebook as a largely passive conduit for irreality – is neither inevitable nor insoluble. Here are three steps Facebook could take to make itself a less “fake news friendly” platform.

1. Evaluate and rank “news” and “newsy” sites. Do it in a transparent, information-rich way.

For news or “newsy” sites that have amassed a critical mass of shares, randomly sample a statistically significant number of their news stories. Have an editorial staff with journalistic training rate those stories for their degree of journalistic integrity. Make the ratings public on a “News Site Rating” microsite, which could display the evaluated stories, with a summary of each evaluation and an annotated copy of each article pointing out its lapses, falsehoods, and distortions (or, in the case of stories that pass muster, underlining their journalistic best practices). This evaluation should be ongoing, so news sites that improve the quality of their journalism can improve their rating, and sites whose reporting falls off in quality get flagged for it.
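
To make the mechanics concrete, here is a minimal sketch, in TypeScript, of what such a rating pipeline might look like. Everything in it – the type names, the sampling routine, the rating tiers and cutoffs – is a hypothetical illustration of the idea, not a description of anything Facebook actually runs.

```typescript
// Hypothetical sketch of a "News Site Rating" pipeline: randomly sample a
// site's stories, collect human evaluations, and aggregate them into a
// published site-level rating. Names and thresholds are illustrative only.

interface Story {
  url: string;
  title: string;
}

interface Evaluation {
  storyUrl: string;
  integrityScore: number; // 0 (fabricated) to 1 (solid journalism), assigned by a trained editor
  annotations: string[];  // notes on lapses, distortions, or best practices
}

type SiteRating = "unreliable" | "mixed" | "reliable";

// Draw a random sample of a site's recent stories for human evaluation.
function sampleStories(allStories: Story[], sampleSize: number): Story[] {
  const pool = [...allStories];
  // Fisher-Yates shuffle, then take the first sampleSize items.
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, Math.min(sampleSize, pool.length));
}

// Aggregate the editors' evaluations into a site-level rating.
// The cutoffs are arbitrary placeholders.
function aggregateRating(evaluations: Evaluation[]): SiteRating {
  if (evaluations.length === 0) return "mixed";
  const mean =
    evaluations.reduce((sum, e) => sum + e.integrityScore, 0) / evaluations.length;
  if (mean < 0.4) return "unreliable";
  if (mean < 0.75) return "mixed";
  return "reliable";
}
```

Because the evaluation is ongoing, a pipeline like this would simply be re-run on fresh samples at some interval, letting a site’s rating drift up or down with the quality of its reporting.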

Facebook did have a team of human editors, or “news curators,” working on their trending news section. There was a degree of journalistic gatekeeping being performed by the news curators, but protocols for what constituted “legitimate” trending news seemed to be murky, and one news curator said the effort was “an experiment… to see what would increase engagement. At the end of the day, engagement was the only thing they wanted.” Other news curators have reported that their efforts were being used to essentially develop software that would automate the trending news section. And in fact, this past August (after a stretch of criticism that trending news selections were politically biased) Facebook fired its team of news curators, replacing them with algorithms – and the algorithms failed badly, promoting a fake news story about Megyn Kelly three days after the news curators were axed. As the Washington Post’s “Intersect” team has documented, more fake news stories have continued to trend even after that high-profile embarrassment.

A News Site Rating staff would have to be treated very differently from the “news curators” – upholding journalistic integrity would have to be their core mission, rather than an ancillary derivative of the Holy Click. And their position shouldn’t be a prelude to automation. While it’s possible to imagine algorithms getting better over time, I’m skeptical that an algorithm could, for example, determine whether a quote was taken out of context – and it certainly couldn’t pick up the phone to confirm details from sources named or quoted in an article. Algorithms have also proved easy to trick: the common technique of prioritizing a webpage based on how many other pages link to it is an aphrodisiac to a professional bullshitter, since any organization willing to fake news isn’t going to have a problem making fake cross-referencing websites. And there’s sometimes an element of theatrical hand-washing to passing functions over to algorithms – where “eliminating bias” becomes a euphemism for “bypassing standards.”

2. Visually identify stories from low- or no-quality “journalistic” sources.

Shared stories that come from unreliable sources should have some sort of visual tag that identifies them as such. Tags could be color-coded to indicate the degree of journalistic integrity – say, red for sites that mostly promulgate falsehoods, yellow for sites that hew mostly to the facts, but practice advocacy journalism. In the feed, the share could have a one-pixel colored border, and in addition, there could be a clickable tab attached to the shared news, which would lead users to the News Site Rating microsite. Users could then see the types and methods of manipulation and misinformation they’re subjecting themselves to.
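
As a rough illustration, the mapping from a site’s rating to its visual treatment in the feed could be as simple as the sketch below. The rating tiers, colors, and CSS values are my own assumptions, made up to show how lightweight the tag could be.

```typescript
// Hypothetical mapping from a site's rating to a feed-level visual tag.
// Tiers and colors are illustrative; the point is that every shared story
// carries a cue about its source's track record, plus a tab linking to the
// News Site Rating microsite.

type SiteRating = "unreliable" | "mixed" | "reliable";

interface VisualTag {
  borderColor: string; // CSS color for the border around the shared story
  label: string;       // text for the clickable tab linking to the rating microsite
}

function visualTagFor(
  rating: SiteRating,
  borderWidthPx: number = 1
): { tag: VisualTag; cssBorder: string } {
  const tags: Record<SiteRating, VisualTag> = {
    unreliable: { borderColor: "red", label: "Source rated: unreliable" },
    mixed: { borderColor: "yellow", label: "Source rated: mixed / advocacy" },
    reliable: { borderColor: "green", label: "Source rated: reliable" },
  };
  // Users could "dial up" the border width, but never turn it off entirely.
  const width = Math.max(1, borderWidthPx);
  const tag = tags[rating];
  return { tag, cssBorder: `${width}px solid ${tag.borderColor}` };
}
```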

There could even be an option, for Facebook users who really don’t want to be taken for suckers ever again, to “dial up” the width of the borders on their feed. Maybe they could go all the way back to the 90s and make the border blink.

The important thing is that Facebook users shouldn’t be able to turn off the visual tag entirely, and Facebook itself can’t shrug off responsibility for visually tagging fake news by letting third-party plug-ins pick up the slack. If visual tagging of bad journalism is an “opt in” or an “opt out” affair, it’s a doomed endeavor – users who care about accuracy would be sealed away in their own bubble of rectitude, and users who don’t would be free to be metastasis agents for hoaxes and propaganda.

If Facebook were to roll out a site-wide visual tagging strategy for news, I can imagine third-party developers creating a plug-in for aggravated users to scrub that visual tag from their feed. I doubt that would be widely adopted, however – and since by definition the market for such a plug-in would consist of highly gullible, information-impoverished users, they’d be easy marks for identity thieves and malware scams.

In addition to a basic visual tag, when a user clicks to share a story from an unreliable source, they could be shown a pop-up alert: “This story comes from a source that has been rated as ‘unreliable.’ Do you still want to share?” This could dampen the proliferation of BS shares in the first place.
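
Here’s a minimal sketch of that share-time interception, assuming a hypothetical lookup table from source domain to rating; the browser confirm() dialog is just a stand-in for whatever UI Facebook would actually build.

```typescript
// Hypothetical share-time check: before a story from an "unreliable" source
// is shared, ask the user to confirm. The ratings table and confirm() dialog
// are stand-ins for real infrastructure and UI.

type SiteRating = "unreliable" | "mixed" | "reliable";

// Illustrative placeholder: in practice this would query the rating service.
const ratingsByDomain: Record<string, SiteRating> = {
  "example-hoax-site.com": "unreliable",
};

function domainOf(url: string): string {
  return new URL(url).hostname.replace(/^www\./, "");
}

function attemptShare(storyUrl: string, doShare: (url: string) => void): void {
  const rating = ratingsByDomain[domainOf(storyUrl)] ?? "mixed";
  if (rating === "unreliable") {
    const proceed = window.confirm(
      "This story comes from a source that has been rated as 'unreliable.' Do you still want to share?"
    );
    if (!proceed) return; // the user thought better of it; nothing is posted
  }
  doShare(storyUrl); // post the share as usual
}
```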

How would users react to this? If Facebook denies them the privilege of marinating their brains in a soup of misinformation, would a significant number of users decamp from Facebook entirely, or participate less regularly? Or would they just shift to sharing more posts about the latest celebrity nip-and-tuck, pictures of their current lasagna-in-progress, and cats being funny? What might be a blow to the BS-news-complex could be a boon for the feline pianist industry – and we may finally, finally find out how many cats on keyboards it takes to accidentally bang out a Beethoven composition.

3. Use algorithms that prioritize fact-checking and myth-busting media organizations to “footnote” shared news stories. 

In my experience, Facebook’s algorithms already seem to be doing this to some degree – I’ve noticed some shared news stories of doubtful veracity with “related articles” linked at the bottom of the shared post, and sometimes a Snopes article weighing in on the story has been at the top of the list. This is something an algorithm could accomplish pretty successfully – cross-referencing keywords and sources from a shared story with articles from vetted fact-checking sites like Snopes, PolitiFact, and FactCheck.org, and prioritizing the display of those analytical assessments. That sort of analytical correlation could be given more visual prominence by giving the fact-check links a little higher profile, perhaps by bumping the fact-check link into the space of the original story, as a small tab. Part of the rationale for rating news sites for truthfulness, rather than every news story, is that there’s such a proliferation of BS that it would take a small nation of human beings to fact-check every story (just make sure that small nation isn’t Macedonia). But an editorial staff evaluating news sites, aided by algorithms that target truth at the news story level, could attack fake news at both the institutional and the granular level.
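
As a rough sketch of the kind of cross-referencing involved, the snippet below matches a shared story against a hypothetical index of fact-check articles by naive keyword overlap. Real matching would be far more sophisticated; the types and scoring here are just assumptions to make the idea concrete.

```typescript
// Hypothetical cross-referencing: match a shared story against an index of
// fact-check articles (e.g. from Snopes, PolitiFact, FactCheck.org) by naive
// keyword overlap, so the best matches can be surfaced as "footnote" tabs.

interface FactCheck {
  url: string;
  keywords: string[]; // keywords extracted from the fact-check article
}

// Crude keyword extraction: lowercase words longer than four characters.
function keywordsOf(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter((w) => w.length > 4));
}

// Return fact-checks ranked by keyword overlap with the shared story's text.
function relatedFactChecks(storyText: string, index: FactCheck[]): FactCheck[] {
  const storyKeywords = keywordsOf(storyText);
  return index
    .map((fc) => ({
      fc,
      overlap: fc.keywords.filter((k) => storyKeywords.has(k)).length,
    }))
    .filter((scored) => scored.overlap > 0)
    .sort((a, b) => b.overlap - a.overlap)
    .map((scored) => scored.fc);
}
```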

By adopting information-sharing principles like those outlined above, Facebook could use its power to incentivize good journalism, and actively disincentivize bad journalism. It could drive positive growth in the news sector. It would benefit Facebook’s customers – who might get a better sense of how to evaluate sources – and reduce the irreality in their lives, so they’re better equipped to deal with reality, when it deigns to knock on their door. And in fact – and hearteningly – in the couple of weeks it’s taken me to assemble these thoughts, Facebook has already rolled out some strategies, currently framed as “tests,” to combat hoaxes and fake news. These include leaning on the flagging of content by their users, and partnering with fact-checking organizations to evaluate, tag, and raise some obstacles to the sharing of patent BS.

Design and staffing choices like these are rooted in Facebook’s relationship to its customers. Their business model is to data-mine their customers – and there has always been a tension between whether Facebook sees its customers as a constituency for which they are providing services, or as a natural resource to be carelessly exploited. Is their business model one that improves the capacities and the lives of their customers, or one that thrives by diminishing the capacities of their customers and caving in to their lowest cravings and instincts? Facebook may not want to see itself as a media company, which would compel them to wear a mantle that’s heavy with ethical and civic embroidery. But what sort of models are they comfortable with, in terms of the way they view their customers? Do they want to think of themselves as a sort of clean water utility of the tech industry, or are they a cigarette company? If Facebook ran a supermarket, would they be okay with ingredient listings on food packaging, or would they strip those off before the product hits the shelf?

This proposed system of course wouldn’t be completely fool-proof, and would be open to accusations of “bias” – this is why transparency for any journalistic rating system is essential. It should also be said that many of the people complaining about bias would not be little snowflakes, their feelings bruised because their ideological hobbyhorses aren’t getting enough spots on the merry-go-round. Many of them will be bad actors, trying to game the refs to get their propaganda in front of as many eyeballs as they can. It’s also easy to imagine propaganda or fake-news-cash-in sites changing domain names to “start fresh” and shed the stigma of a bad rating – but organizations that try this could be identified, and then penalized by an outright sharing ban.

This system would also do nothing about slanderous dank memes, and it doesn’t get at what might be the more fundamental psychological problem that people enjoy sharing fake news. It gives a sort of negative pleasure, the toxic pleasure of luxuriating in your prejudices and biases.

This system would, however, ameliorate some of the negative socio-informational tendencies that are built into the current structure of Facebook. Some people argue that technology itself is neutral, and that the real choice comes from human beings who somehow live outside of technology, who then decide whether to use that technology for good or evil. But the fact is that no disruptive and pervasive technology is truly neutral – technology changes social relationships, often in ways that were not intended by the architects of that technology. Technologies have tendencies (and one tendency of Facebook is to put truth and BS on the same footing, with equal visual claims on authenticity, and equal ease of distribution). It’s up to us to identify those tendencies, and if we determine those technologies have ill consequences, we have to figure out how to adapt, abandon, or replace them.

It would be comforting to believe technology is a self-correcting force – I think the idea that engineers can write some algorithms to sort everything out is a cousin to the idea that the free market, left to its own devices, will solve our social problems. But algorithms and markets can be gamed, and the invisible hand often belongs to an invisible idiot, whether it’s setting prices while externalizing costs, or tapping out some code that makes a UI more intuitive while also making it more addictive. As big data and business models based on data mining become a larger feature of our society – not just in social media, but in health care, law enforcement, and everywhere else – we are going to have to create infrastructures of curation, transparency and oversight to avoid being overwhelmed by bad information and bad actors. And we’re also going to need design – both visual design and institutional design – that separates signal from noise, reality from irreality.

Sources/References:

Zuckerberg says Facebook is not a media company:

http://www.reuters.com/article/us-facebook-zuckerberg-idUSKCN1141WN

44% of American adults get news on Facebook:

http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/

College students develop a Chrome plug-in to identify fake news:

https://www.washingtonpost.com/news/inspired-life/wp/2016/11/18/fake-news-on-facebook-is-a-real-problem-these-college-students-came-up-with-a-fix/?utm_term=.e4c617a17871

Trending of fake news on Facebook after Facebook fired its human editors:

https://www.washingtonpost.com/news/the-intersect/wp/2016/10/12/facebook-has-repeatedly-trended-fake-news-since-firing-its-human-editors/?utm_term=.42b646e33e8f

The status of “news curators” at Facebook:

http://gizmodo.com/want-to-know-what-facebook-really-thinks-of-journalists-1773916117

Facebook begins to roll out strategies to address fake news:

http://newsroom.fb.com/news/2016/12/news-feed-fyi-addressing-hoaxes-and-fake-news/
