An adaptation of the Department of Justice’s Ferguson Police Department Report, condensing it and translating it into comics and infographics. The intention is to expose the way policy decisions can result in racist and exploitative outcomes, giving a roadmap for engaged communities to press for more just and equitable policing (excerpt).
A collaboration with Katie Jean Dahlaw and Kristin Heavey, using a 360-degree Pixpro camera. Children’s games play out in overlapping circles; a backyard idyll becomes tinged with vertigo.
Photoprints on aluminum, variable sizes, 2016. A series of “configurations” of images taken from my Instagram, organized around certain themes. These are tagged #onementrecomposed and #obscureselfie.
It should be obvious that a society can’t solve real problems if it can’t first distinguish the real from the unreal. Irreality may have its pleasures, but reality will always kick its ass in the end. Fans of cheap irony (like myself) can legitimately assert that irreality is an actual, real thing, which has a profound and often corrosive impact on reality. For democracy to function – for even basic conversation to function – we have to have a way of separating lies from truth. And if institutions that distribute and disseminate lies have no accountability, they can pollute our information landscape with impunity. Any media organization that doesn’t explicitly flag, and de-incentivize the sharing of, fake news is essentially monetizing irreality.
Facebook doesn’t want to define itself as a media company – Mark Zuckerberg has said “We are a tech company, not a media company.” He argued Facebook isn’t a media company because it doesn’t produce content. However, it’s undeniable that Facebook is a media distribution company – according to a study by the Pew Research Center and the Knight Foundation, 44% of Americans get news from Facebook. Facebook is a dominant player in a media ecosystem that has far more content, and far more sources, than the media environment that preceded the internet. It has more power and reach, as a distributor, than any single news distributor in the pre-internet era. And it can play a crucial role – in fact it DOES play a crucial role, whether it wants to or not – in determining whether our current media ecosystem is a healthy or a toxic one.
Unfortunately, to date Facebook has provided a climate that is – without being designed explicitly for this purpose – quite welcoming to distortions and outright fabrications. It’s as if they built a terrarium meant to grow a wide variety of flora, but oops – the soil is the perfect composition for black mold. And even worse – to stretch this fungoid metaphor to the breaking point – it turns out they can make a lot of money selling black mold, so there’s little pecuniary incentive to slow or stop its growth. The only problem is the pesky detail that most people who come into contact with it get memory loss, brain fog, confusion – a seething bouquet of neurological pathologies.
This state of being – with Facebook as a largely passive conduit for irreality – is neither inevitable nor insoluble. Here are three steps Facebook could take to make itself a less “fake news friendly” platform.
1. Evaluate and rank “news” and “newsy” sites. Do it in a transparent, information-rich way.
For news or “newsy” sites that have amassed a critical mass of shares, randomly sample a statistically significant number of their news stories. Have an editorial staff with journalistic training rate those stories for their degree of journalistic integrity. Make the ratings public on a “News Site Rating” microsite, which could display the evaluated stories, with a summary of each evaluation, and an annotated copy of the articles pointing out their lapses, falsehoods, and distortions (or in the case of stories that pass muster, underlining their journalistic best practices). This evaluation should be ongoing, so news sites that improve the quality of their journalism can improve their rating, and sites that fall off in the quality of their reporting get flagged for it.
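As a rough sketch of the bookkeeping such a rating system might involve – the function names and the 0–10 scale here are my own invention, and the actual per-story scoring would come from human editors, not code:

```python
import random


def sample_stories(stories, sample_size, seed=None):
    """Randomly sample a site's stories for editorial review.

    A fixed seed makes a given sampling round reproducible/auditable,
    which matters for a system meant to be transparent.
    """
    rng = random.Random(seed)
    return rng.sample(stories, min(sample_size, len(stories)))


def site_rating(editor_scores):
    """Aggregate per-story integrity scores (hypothetical 0-10 scale)
    into a site-level rating.

    Using the mean of an ongoing sample means a site that improves its
    journalism sees its rating rise, and one that slides gets flagged.
    Returns None if no stories have been evaluated yet.
    """
    if not editor_scores:
        return None
    return sum(editor_scores) / len(editor_scores)
```

The point of the sketch is only that the rating is recomputed from a rolling sample, rather than assigned once and frozen.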
Facebook did have a team of human editors, or “news curators,” working on their trending news section. There was a degree of journalistic gatekeeping being performed by the news curators, but protocols for what constituted “legitimate” trending news seemed to be murky, and one news curator said the effort was “an experiment… to see what would increase engagement. At the end of the day, engagement was the only thing they wanted.” Other news curators have reported that their efforts were being used to essentially develop software that would automate the trending news section. And in fact, this past August (and after a stretch of criticism that trending news selections were politically biased) Facebook fired its team of news curators, replacing them with algorithms – and the algorithms failed badly, promoting a fake news story about Megyn Kelly three days after the news curators were axed. As the Washington Post’s “Intersect” team has documented, more fake news stories have continued to trend even after that high-profile embarrassment.
A News Site Rating staff would have to be treated very differently from the “news curators” – upholding journalistic integrity would have to be their core mission, rather than an ancillary derivative of the Holy Click. And their position shouldn’t be a prelude to automation. While it’s possible to imagine algorithms getting better over time, I’m skeptical that an algorithm could, for example, determine whether a quote was taken out of context – and it certainly couldn’t pick up the phone to confirm details from sources named or quoted in an article. Algorithms have also proved easy to trick. The common technique of prioritizing a webpage based on the number of other pages linking to it is like an aphrodisiac to a professional bullshitter – any organization willing to fake news stories isn’t going to have a problem making fake cross-referencing websites. And there’s an element of theatrical hand-washing, sometimes, to passing functions over to algorithms – where “eliminating bias” becomes a euphemism for “bypassing standards.”
2. Visually identify stories from low- or no-quality “journalistic” sources.
Shared stories that come from unreliable sources should have some sort of visual tag that identifies them as such. Tags could be color-coded to indicate the degree of journalistic integrity – say, red for sites that mostly promulgate falsehoods, yellow for sites that hew mostly to the facts, but practice advocacy journalism. In the feed, the share could have a one-pixel colored border, and in addition, there could be a clickable tab attached to the shared news, which would lead users to the News Site Rating microsite. Users could then see the types and methods of manipulation and misinformation they’re subjecting themselves to.
There could even be an option, for Facebook users who really don’t want to be taken for suckers ever again, to “dial up” the width of the borders on their feed. Maybe they could go all the way back to the 90s and make the border blink.
The important thing is that Facebook users shouldn’t be able to turn off the visual tag entirely, and Facebook itself can’t shrug off responsibility for visually tagging fake news by letting third-party plug-ins pick up the slack. If visual tagging of bad journalism is an “opt in” or an “opt out” affair, it’s a doomed endeavor – users who care about accuracy would be sealed away in their own bubble of rectitude, and users who don’t would be free to be metastasis agents for hoaxes and propaganda.
If Facebook were to roll out a site-wide visual tagging strategy for news, I can imagine third-party developers creating a plug-in for aggravated users to scrub that visual tag from their feed. I doubt that would be widely adopted, however, and since by definition the market for such a plug-in would be highly gullible, information-impoverished users, they’d be easy marks for identity thieves and malware scams.
In addition to a basic visual tag, when a user clicks to share a story from an unreliable source, they could be shown a pop-up alert: “This story comes from a source that has been rated as ‘unreliable.’ Do you still want to share?” This could dampen the proliferation of BS shares in the first place.
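The tagging-and-confirmation flow described in this step could be sketched as follows; the thresholds, color bands, and function names are hypothetical illustrations of the proposal, not anything Facebook has implemented:

```python
# Hypothetical mapping from a site's integrity rating (0-10 scale, invented
# for illustration) to the color-coded visual tag proposed above, plus the
# share-time confirmation prompt. Thresholds are arbitrary placeholders.
TAG_COLORS = [
    (8.0, "green"),   # solid journalistic practices
    (5.0, "yellow"),  # mostly factual, but practices advocacy journalism
    (0.0, "red"),     # mostly promulgates falsehoods
]


def tag_color(rating):
    """Return the border color for a site's rating (first band it clears)."""
    for threshold, color in TAG_COLORS:
        if rating >= threshold:
            return color
    return "red"


def confirm_share(rating):
    """Return the warning prompt for unreliable sources, else None.

    Crucially, there is no code path that suppresses the tag itself -
    the essay's point is that tagging must not be opt-in or opt-out.
    """
    if tag_color(rating) == "red":
        return ("This story comes from a source that has been rated as "
                "'unreliable.' Do you still want to share?")
    return None
```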
How would users react to this? If Facebook denies them the privilege of marinating their brains in a soup of misinformation, would a significant number of users decamp from Facebook entirely, or participate less regularly? Or would they just shift to sharing more posts about the latest celebrity nip-and-tuck, pictures of their current lasagna-in-progress, and cats being funny? What might be a blow to the BS-news-complex could be a boon for the feline pianist industry – and we may finally, finally find out how many cats on keyboards it takes to accidentally bang out a Beethoven composition.
3. Use algorithms that prioritize fact-checking and myth-busting media organizations to “footnote” shared news stories.
In my experience, Facebook’s algorithms already seem to be doing this to some degree – I’ve noticed some shared news stories of doubtful veracity with “related articles” linked at the bottom of the shared post, and sometimes a Snopes article weighing in on the story has been at the top of the list. This is something an algorithm could accomplish pretty successfully – cross-referencing keywords and sources from a shared story with articles from vetted fact-checking sites like Snopes, PolitiFact and FactCheck.org, and prioritizing the display of those analytical assessments. That sort of analytical correlation could be given more visual prominence by raising the profile of the fact-check links, perhaps by bumping the fact-checked link into the space of the original story, as a small tab. Part of the rationale of rating news sites for truthfulness, rather than every news story, is that there’s such a proliferation of BS, it would take a small nation of human beings to fact-check every story (just make sure that small nation isn’t Macedonia). But an editorial staff evaluating news sites, aided by algorithms that target truth at the news story level, could attack fake news at both the institutional and the granular level.
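A minimal sketch of that keyword cross-referencing, assuming a crude word-overlap (Jaccard) relevance score – a real system would use far more sophisticated matching, and the article structure here is invented:

```python
def keyword_overlap(story_text, factcheck_text):
    """Crude relevance score: fraction of shared words (Jaccard index)."""
    a = set(story_text.lower().split())
    b = set(factcheck_text.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def rank_factchecks(story_text, factcheck_articles):
    """Rank fact-check articles by keyword overlap with a shared story,
    so the most relevant one can be surfaced as a 'related article' tab.

    factcheck_articles is a hypothetical list of dicts with a 'text' key,
    drawn from vetted fact-checking sources.
    """
    return sorted(factcheck_articles,
                  key=lambda art: keyword_overlap(story_text, art["text"]),
                  reverse=True)
```

Even this toy version shows why the task suits an algorithm: it is pure correlation, with the actual truth-judgment outsourced to the vetted fact-checking sites.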
By adopting information-sharing principles like those outlined above, Facebook could use its power to incentivize good journalism, and actively de-incentivize bad journalism. It could drive positive growth in the news sector. It would benefit their customers – who might get a better sense of how to evaluate sources – and reduce the irreality in their lives, so they’re better equipped to deal with reality, when it deigns to knock on their door. And in fact – and hearteningly – in the couple of weeks it’s taken me to assemble these thoughts, Facebook has already rolled out some strategies, currently being framed as “tests,” to combat hoaxes and fake news. These include leaning on the flagging of content by their users, and partnering with fact-checking organizations to evaluate, tag, and raise some obstacles to the sharing of patent BS.
Design and staffing choices like these are rooted in Facebook’s relationship to its customers. Their business model is to data-mine their customers – and there has always been a tension between whether Facebook sees its customers as a constituency for which they are providing services, or as a natural resource to be carelessly exploited. Is their business model one that improves the capacities and the lives of their customers, or one that thrives by diminishing the capacities of their customers and caving in to their lowest cravings and instincts? Facebook may not want to see itself as a media company, which would compel them to wear a mantle that’s heavy with ethical and civic embroidery. But what sort of models are they comfortable with, in terms of the way they view their customers? Do they want to think of themselves as a sort of clean water utility of the tech industry, or are they a cigarette company? If Facebook ran a supermarket, would they be okay with ingredient listings on food packaging, or would they strip those off before the product hits the shelf?
This proposed system of course wouldn’t be completely foolproof, and would be open to accusations of “bias” – this is why transparency for any journalistic rating system is essential. It should also be said that many of the people complaining about bias would not be little snowflakes, their feelings bruised because their ideological hobbyhorses aren’t getting enough spots on the merry-go-round. Many of them will be bad actors, trying to work the refs to get their propaganda in front of as many eyeballs as they can. It’s also easy to imagine propaganda or fake-news-cash-in sites changing domain names to “start fresh” and shed the stigma of a bad rating – but organizations that try this could be identified, and then penalized by an outright sharing ban.
This system would also do nothing about slanderous dank memes, and it doesn’t get at what might be the more fundamental psychological problem that people enjoy sharing fake news. It gives a sort of negative pleasure, the toxic pleasure of luxuriating in your prejudices and biases.
This system would, however, ameliorate some of the negative socio-informational tendencies that are built into the current structure of Facebook. Some people argue that technology itself is neutral, and the real choice comes from human beings who somehow live outside of technology, who then decide whether to use that technology for good or evil. But the fact is no disruptive and pervasive technology is truly neutral – technology changes social relationships, often in ways that were not intended by the architects of that technology. Technologies have tendencies (and one tendency of Facebook is to put truth and BS on the same footing, with equal visual claims on authenticity, and equal ease of distribution). It’s up to us to identify those tendencies, and if we determine those technologies have ill consequences, we have to figure out how to adapt that technology, abandon it or replace it.
It would be comforting to believe technology is a self-correcting force – I think the idea that engineers can write some algorithms to sort everything out is a cousin to the idea that the free market, left to its own devices, will solve our social problems. But algorithms and markets can be gamed, and the invisible hand often belongs to an invisible idiot, whether it’s setting prices while externalizing costs, or tapping out some code that makes a UI more intuitive while also making it more addictive. As big data and business models based on data mining become a larger feature of our society – not just in social media, but in health care, law enforcement, and everywhere else – we are going to have to create infrastructures of curation, transparency and oversight to avoid being overwhelmed by bad information and bad actors. And we’re also going to need design – both visual design and institutional design – that separates signal from noise, reality from irreality.
Zuckerberg says Facebook is not a media company:
44% of American adults get news on Facebook:
College students develop a Chrome plug-in to identify fake news:
Trending of fake news on Facebook after Facebook fired its human editors:
The status of “news curators” at Facebook:
Facebook begins to roll out strategies to address fake news:
Two short videos addressing themselves to the polarized state of American politics. One shows portions of Donald Trump’s speech at the Republican National Convention with the audience of the Democratic National Convention edited in, and the other shows portions of Hillary Clinton’s speech at the Democratic National Convention with the audience of the Republican National Convention edited in. The implicit question is whether we can imagine these particular audience members cheering for the particular presidential candidates they’ve been paired with.
Excerpts from an “abstract” comic – partly inspired by Tom Phillips’ A HUMUMENT, where Phillips painted over the pages of W.H. Mallock’s novel A HUMAN DOCUMENT, leaving some of the words visible to form networks of what could be considered “excavated poetry” from the original prose. In my case, I took some pages from a public domain AMAZING MAN comic, drew over them, scanned them, and removed the most recognizable narrative elements, resulting in a comic composed of action lines, blanked-out word balloons, and other graphic effluvia.
At the Tahoe Gallery, Incline Village, NV, and the University of Nevada, Reno. I co-curated an exhibition (with Keith Knight and Sarah Lillegard) of Knight’s artwork, which was shown along with a cartooning workshop and a presentation of his slideshow of police brutality cartoons, “They Shoot Black People, Don’t They?” Through my Media Studio class, we produced a short interview video about his visit, below.
All artwork below is by Keith Knight. Photos by Sarah Lillegard.
I scripted, directed and animated this piece for the Marin Community Foundation, who wanted a short video making the case for the importance of arts education in a vibrant, fully-rounded curriculum. It serves as an entry point to the website artsedworks.org, which includes short documentary videos and other resources for integrating the arts into schools.
I was contacted by Lynette Hunter, Professor of the History of Rhetoric and Performance at UC Davis, to adapt one of her lectures, The Face, the Mask, and Classical Tragedy in the Household, into comics form. The adaptation was published in a book of her lectures, Disunified Aesthetics, from the publisher McGill-Queen’s Press. This particular lecture was about the writer Alice Munro, and my adaptation weaves elements of Hunter’s performance with the Munro story itself.