We have all heard about the vast network of fake news sites that spread disinformation during the recent US presidential campaign. These sites use the same clickbait strategies that propelled sites like Upworthy to the top of the digital media scrapheap – inflammatory headlines, sensationalist stories and catchy hooks that tempt you to click just once more.
What Upworthy’s content strategy revealed was that a unique combination of skilled teams, data and insights could help the organisation create content that was “viral ready”. As Joseph Lichterman explained in this Nieman Lab article:
Using the user data it’s collected, Upworthy found that elements like humor and a story structure that built in suspense would draw in readers and keep them on the page and better engaged.
This meant that even when telling a story built on real information and verifiable facts, Upworthy’s first priority was to grab and hold the attention of readers; delivering the news and information itself came second. As Amy O’Leary, Upworthy’s Editorial Director, explained, “If I were to tell you, ‘Hey, I’ve got a 5,000-word piece on fast-food workers’ wages,’ very few people would be excited about that”. Instead, the story would focus on building rapport with the audience – engaging them through an imaginative framework of shared experience and emotionally engaging writing, then opening up into the ethical challenges that come with enjoying something you eat while knowing the facts of how it was produced. As O’Leary suggests, “I think we’re reaching deeper into people, because the approach is one of openness and not judgment”.
It’s worth reading more of the article to learn how Upworthy used data to drive its curation process – but what is fascinating (and concerning) is the way that this model has been co-opted by the fake news movement. By ignoring facts as the basis of news, these fake news sites have effectively defined a whole new genre of content catering to our own sense of digital isolation and disconnectedness. If we have learned anything from the last decade in this Age of Conversation, it is that when we (as consumers) come face-to-face with the vast anonymity of the internet, we rapidly seek our tribe – and we do so through the media at our fingertips – visuals, text, keywords. We seek the connection via keyword and conversation – and naturally enough find ourselves in an echo chamber.
Those of us who work with digital technology and audience strategy have – to be honest – been taking advantage of this approach for years. I often say that I both love and hate Facebook and its targeting, for I know how useful and powerful it is as a marketer, but equally how invasive and manipulative it is as a consumer. So much so that I consciously manage my engagement and sharing on Facebook and limit what I click on. But I also know that even my limited engagement there – and on every other digital channel – leaves enough breadcrumbs to be valuable to the brands and businesses seeking my attention. These days my choice to click comes down to context and location.
Because I know that every click rewards not only the brand behind the ad, but also the site that serves it.
With the massive rise of programmatic advertising over the last two or three years, most advertisers and planners are unlikely to know exactly where their branded advertising will appear. It could appear on alt-right websites (the term used to mask white supremacist-oriented websites), pornographic websites or even across the dark web. The powerful retargeting tools now in the hands of marketers, and their trained algorithms, mean that ads you first see on a mainstream website will follow you wherever you go online. And while the web has some amazing resources, it also has some deep and nasty crevices.
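To make those retargeting mechanics a little more concrete, here is a minimal sketch in TypeScript – using entirely hypothetical names, not any ad platform’s real API – of the basic idea: a pixel on one site tags your browser with a platform-wide identifier, and from then on the ad decision keys off you, not off the site you happen to be visiting.

```typescript
// Minimal sketch of retargeting, under stated assumptions – hypothetical names, no real vendor API.

type VisitorId = string;

// The platform's pixel, embedded on a brand's or publisher's page,
// tags the browser with a platform-wide identifier (e.g. via a cookie).
function dropRetargetingCookie(existing?: VisitorId): VisitorId {
  return existing ?? `visitor-${Math.random().toString(36).slice(2)}`;
}

// A segment store the platform keeps: which visitors showed interest in which brands.
const interestSegments = new Map<VisitorId, Set<string>>();

function recordInterest(visitor: VisitorId, brand: string): void {
  const segments = interestSegments.get(visitor) ?? new Set<string>();
  segments.add(brand);
  interestSegments.set(visitor, segments);
}

// Later, on ANY site in the ad network – mainstream or not – the ad slot asks the
// platform what to show. Note that the decision is keyed to the visitor, not the site.
function chooseAd(visitor: VisitorId, siteDomain: string): string {
  const segments = interestSegments.get(visitor);
  if (segments && segments.size > 0) {
    const brand = [...segments][0];
    return `Ad for ${brand} shown on ${siteDomain}`;
  }
  return `Generic ad shown on ${siteDomain}`;
}

// Usage: a visitor browses a brand's site, then a fringe site in the same ad network.
const visitor = dropRetargetingCookie();
recordInterest(visitor, "ExampleUniversity");
console.log(chooseAd(visitor, "respected-news.example"));   // the ad follows the visitor...
console.log(chooseAd(visitor, "fringe-clickbait.example")); // ...wherever they go
```

Notice that in this sketch the placement decision never inspects the content of the site serving the ad – and that is exactly the gap that lands brands in places they never intended to be.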
So what do you do when your brand starts advertising in this murky digital world?
Imagine, for example, that out of curiosity you visited a fake news site with outrageous headlines. What kind of advertiser, you wonder, would support a platform that knowingly creates fake news and information that demonises your own audience (the people who are your customers and supporters)? This NY Times article describes exactly such a situation:
One day in late November, an earth and environmental science professor named Nathan Phillips visited Breitbart News for the first time. Mr. Phillips had heard about the hateful headlines on the site — like “Birth Control Makes Women Unattractive and Crazy” — and wondered what kind of companies would support such messages with their ad dollars. When he clicked on the site, he was shocked to discover ads for universities, including one for the graduate school where he’d received his own degree — Duke University’s Nicholas School of the Environment. “That was a punch in the stomach,” he said.
Rather than let this slide, the professor sent a tweet to his alma mater, Duke, questioning its affiliation with a “sexist and racist” site. Eventually the issue was resolved, as the NY Times revealed.
But in the background, a movement known as “Sleeping Giants” was arising to combat this kind of fake news. This shared Twitter account and its network of followers use a similar approach – naming and shaming the brands that support these fake news networks. Sleeping Giants publishes a list of brands that have discontinued their support for fake news sites – starting with the Breitbart network. We can expect more of this kind of activity in the coming months and years. The question for brands in all this: do you know where your ad dollars go, and who they go to? And how will you respond when you find your brand in places you don’t expect or want?