An address to the New Zealand Skeptics Conference 18 November 2018
It would be comforting to think that a society dedicated to the promotion of critical thinking would be immune to the effects of disinformation. As Skeptics, you are supposed to question the validity of statements, to examine whether there is objective support for the claims made. Your powers of critical thinking will stand you in good stead but even you are not immune to disinformation.
You’ll note that I have not used the phrase ‘fake news’, even though it appears alongside my name in the programme. I will try to avoid the F-word but you’ll forgive me if I need to invoke it …sparingly. I avoid it because I have gifted it to Donald Trump. The United States president’s use of the term to disparage his country’s journalists – “the fake news media are the enemy of the American people” – has so debased the term that it can no longer be used to describe widely-propagated falsehoods. Mr Trump can have ‘fake news’: It now means ‘legitimate but embarrassing disclosures that I don’t happen to like, so nor should you’. I’ll stick with ‘disinformation’, which is defined as “false, incomplete, or misleading information that is passed, fed, or confirmed to a targeted individual, group or country”. In the not-too-distant past we would have called it black propaganda.
I said that even you – card-carrying critical thinkers – are not immune to disinformation. I say that for several reasons. First, because despite your best endeavours, your eyes can deceive you – don’t tell me you didn’t do a double take when you first saw those clever pieces of pavement art that depict very realistic holes in the ground. Now, however, we are seeing an unprecedented level of sophistication. Secondly, because Artificial Intelligence is allowing the deceivers to construct vehicles that are indistinguishable from reality. And finally, because even you are not entirely free of beliefs that predispose you to the confirmation bias on which disinformation feeds. Combine these reasons and, in the right circumstances, you can become as susceptible as everyone else.
I’m not going to dwell today on the deceivers – be they Russian, Chinese, Alt-Right, Alt-Left or simply malcontents – and their motives. Nor am I going to pay a lot of attention to the types of disinformation that affect modern society. Suffice to say that there are basically five forms of disinformation, each of which can have the same detrimental effect when it passes for truth.
- Fabricated content: completely false content;
- Manipulated content: distortion of genuine information or imagery;
- Imposter content: impersonation of genuine sources, for example by using the branding of an established news agency;
- Misrepresented content: misleading use of information;
- Satire and parody: humorous but false stories appropriated and represented as true.
Motives also vary. It may be a falsehood deliberately distributed to undermine a person, an institution or a country. It may be a falsehood distributed for financial gain through click-based advertising on social media. Or it could simply be bad journalism, as was the case with Buzzfeed’s publication of an unverified dossier alleging sexual impropriety on Donald Trump’s visit to Moscow.
Now, I’m sure you think that, on the basis of the few examples I’ve given, it should be relatively easy to give the lie to disinformation by applying a little critical thinking – sometimes very little – to the subject. The majority of examples of disinformation that are trawled up in a Google search will set an alarm bell ringing even before you interrogate the content. That’s because your search terms will signal your scepticism – ‘disinformation’ or ‘fake news’ doesn’t suggest you are seeking empirical truths. Quite the opposite.
In the same way that Google uses algorithms to seek out material that will appeal to you – it will look for ‘fake’ before ‘news’ and ‘dis’ before ‘information’ – the perpetrators of disinformation tailor their communications to appeal to their target audience.
Truth would gain the upper hand if everyone employed systematic information processing, that is, evaluated a message by assessing characteristics like the presence (or absence) of quality argument. However, in this crazy environment of information and sensory overload, we don’t always do that and some of us are much more predisposed to heuristic information processing that relies on cognitive shortcuts such as perceived trustworthiness and links to past experience. The result is that rational and objective assessment is replaced by personal biases, the most powerful of which is confirmation bias. The term confirmation bias was originally coined by psychologist Peter Wason in 1960. Wason’s hypothesis was that people tend to subconsciously favour information that confirms their existing beliefs over information that might challenge them.
Confirmation bias is often combined with selective exposure. According to a Reuters Institute study, almost two-thirds of people receive their news via what is called side-door access, through search and social media that is tailored to their interests and world view. Continual reinforcement through social media is usually sufficient to overcome inconvenient truths or cognitive dissonance. Disinformation is also constructed in ways that produce immediate emotional responses, especially negative emotional responses that have been found to be strong triggers for sharing on social media. These effects are all the stronger for disinformation’s targeting: Aim a lie at a specific group then watch it proliferate among like-minded people through social media sharing.
Sharing, however, may be haphazard so perpetrators help disinformation on its way. They use bots — scripts running in fake accounts that may simply share disinformation or even chat with human social media followers using artificial intelligence or natural language processing. These chatbots have the added advantage of being able to further spin the message. In the last quarter of 2017 and first quarter of 2018 Facebook disabled almost 1.3 billion fake accounts. In June Twitter confirmed that it was eliminating a million fake and suspicious accounts a day. Of course, only a small percentage of such accounts were set up to spread the sort of disinformation that concerns us. Most were irritating spam generators. However, with numbers like these you can imagine that disinformation bots will be operating on a significant scale and, like the old Space Invaders arcade game, more will pop up to replace those that have been destroyed.
State-organised disinformation also employs ‘sock puppets’, humans who operate social media accounts purporting to be something or someone they are not. A 2016 study by Harvard University estimated, for example, that 448 million posts a year are produced by sock puppets employed by the Chinese government. There is a popular belief that these are produced by ordinary citizens paid the equivalent of 50 cents per post. The Harvard researchers found evidence that the creators are, in fact, mainly government employees. The majority of their posts are aimed at a domestic audience but Taiwan and Japan are also targeted. I saw evidence of it during a recent visit to Taiwan, which is subject to both automated and human disinformation attacks from the mainland.
Sock puppet disinformation is harder to detect than automated bots but critical analysis of language and sentence structure has been employed to detect foreign-sourced messages. Sophisticated tracking analysis can also locate the source. A report in July by the UK Parliamentary Committee enquiring into disinformation and fake news implicated Russia in a wide range of activities and quoted a joint research project by Swansea and Berkeley universities that identified 156,252 Russian accounts tweeting about #Brexit. They posted over 45,000 Brexit messages in the last 48 hours of the campaign. Much of the activity was directed by the Internet Research Agency, a company based on the outskirts of St Petersburg and undoubtedly Kremlin-controlled.
Unfortunately, Facebook and Twitter are diminishing in importance as vehicles for disinformation. The odds of being detected there have shortened, so some disinformation sources are moving to end-to-end encrypted services such as WhatsApp (which is owned by Facebook). Disinformation on these services is harder to detect and may not be discovered outside its target group until it has moved to other platforms.
These forms of disinformation creation and distribution result in a war of attrition – production, then detection and debunking – that has only limited success because those who want to believe will believe. Wiser women and men may be less susceptible and we hope that, in the long run, their views will prevail. However, false information tends to stay around longer than the truth. Social media fact checkers in Taiwan were able to infiltrate an end-to-end encrypted messaging service called LINE. They found that a quarter of the false messages they detected were still being accessed 90 days to a year after first posting and one message remained current for 415 days.
And there is a new form of disinformation that presents a danger that may be infinitely more difficult to overcome – realistic audio-to-video conversion or, to use the very recent buzzword, Deep Fakes.
Publicly-available programmes like Adobe’s After Effects allow us to synthesise voices – to get famous people to say our words – and FakeApp allows us to face swap and substitute our lips for that famous person’s. What fun to have Donald Trump saying what we want him to say!
This relies on machine learning: iteration after iteration of audio and video data until a reasonable match is achieved. In the vast majority of examples produced to date there are tell-tale signs. The US Defense Department’s Advanced Research Projects Agency, for example, has found that deep fakes don’t blink or, when they do, there is unnatural eye movement. That sort of forensic examination, however, is (a) beyond the reach of the average member of the public and (b) takes too long or, to update Jonathan Swift, the Deep Fake has travelled around the world while the truth is still putting on its shoes.
However, developments in machine learning and Artificial Intelligence are improving the ‘quality’ of Deep Fakes at an alarming rate. University of Washington researchers last year demonstrated how good their lip-sync fakes had become. They say – naïvely in my view – that the research “has a range of important practical applications” including allowing hearing-impaired people to lip-read on their smartphones and providing new ways for Hollywood to seek box office success. There is no mention of its application in disinformation but, to demonstrate their technology, they take audio from past video clips of Barack Obama and put those words into the mouth of a constructed version of the former president. To the average viewer it’s the real thing.
Perhaps the most disturbing aspect is that it interferes with deep-seated perceptions of objective reality. Walter Lippmann’s re-examination of public opinion after the First World War noted the kind of authority that photographs had over the imagination: “They seem utterly real. They come, we imagine, directly to us without human meddling, and they are the most effortless food for the mind conceivable.”
So, when we see a video of Obama we expect it to be a captured version of what the American philosopher John Searle calls ‘direct realism’: The camera as a surrogate for our own eyes. Perhaps it is inevitable that we are even less equipped to question the validity of a machine-created moving image than we are an AI-driven chatbot that can mimic human responses in text.
Let me turn briefly to ‘fake news’. The assault on the news media led by Donald Trump, and his use of the phrase, seeks to label inconvenient news organisations as immoral purveyors of disinformation: “The fake news media [are] the true enemy of the people”. This denigration of journalists and news outlets for doing their job of holding power to account has contributed to erosion of trust in the media as a democratic institution. The effect is not limited to the United States: Trust in media in many countries has declined. In 22 of the 28 markets included in the Edelman Trust Barometer this year, less than half of those surveyed trusted news media. Media are now the least trusted institution and New Zealand is not immune. Fewer than a third of New Zealanders trust the media, the same result as in Australia and below the 28-market average.
It is now time for a stocktake. As it has done so many times in the past, disinformation has harnessed available technology to propagate itself among its target audiences and beyond. This technology, however, is unique in human history. For the first time there is no need to either control the means of mass communication or to suborn those who do (although there are added benefits from their participation, unwitting or otherwise). The ability to reach targets in any democracy in the world is unprecedented. And within many of those democracies we are witnessing increased partisanship, intolerance and populism at the same time as we are seeing the erosion of trust in institutions. The emotional temperature is rising and with it the ability to push the emotional triggers on which disinformation relies.
Imagine a society in which truth has been replaced by layers of virtual reality, a landscape that individuals can shape to suit their own prejudices and worldview. One in which individuals are able to reinforce their sense of rightness and self-worth with an unchallenged and escalating diet of resonating rhetoric. Perhaps you don’t need to imagine it, because the ability to create it is already here. And for some it clearly is already in the here-and-now. But now imagine a society where even its so-called informed citizens are no longer able to be sure of their own ability to discern truth, to apply their usual processes of critical thinking because too many of the reference points have been artfully removed. Who can they trust to tell the truth?
Not so long ago we might have rejected such a proposition as unacceptably dystopic for a functioning democracy. Can we be so sure when a functioning democracy can produce disinformation – a doctored version of a video showing an intern attempting to take a microphone away from CNN White House correspondent Jim Acosta during a presidential press conference – in an attempt to remove a troublesome priest from the White House Press Corps?
There is no certain remedy to this situation. Long-term the answer is a strong emphasis throughout our education system on civics and critical thinking. Short-term we need to do two things and I fear that New Zealand is currently not well-equipped to accomplish either very quickly.
We need to invest in comprehensive fact-checking but our newsrooms are depleted. Sub-editors are virtually a thing of the past and do our media organisations have the resources, in a cash-strapped environment, to re-dedicate senior staff to that gatekeeping role? Or can they provide journalists with the tools and the time to make comprehensive and sometimes forensic checks for disinformation?
Of course, some subjects will set off alarm bells in reporters’ heads. Unfortunately, the digital-first, news-now, instant-deadline culture in which they are forced to work sometimes sees the alarm bells turned off, or at least muted, in the interests of being first online. Sections of the digital community see nothing wrong with publish first, fix later. It is an attitude that could have expensive consequences. Personally, I believe in the verify-then-publish approach.
I am equally concerned that journalists regard social media as primary sources – as good as talking to a subject and requiring no further verification. In January a story appeared about New Year revellers building a sand island in an estuary at Tairua, declaring they were in international waters and thereby avoiding a liquor ban. Yes, the ‘island’ was built – one of those silly things that young men do on holiday — but the liquor ban avoidance was an afterthought concocted on a Facebook page carrying a picture of the revellers on their sandy perch. In other words, the story, which then ran in other media, was a fabrication…perhaps an attempt at satire. The Facebook entry was taken at face value and, no doubt, the photograph was “direct realism” and built-in verification.
Taking social media content as primary sources is fraught with danger, or as the cartographers of old used to say: “Here be monsters”.
Disinformation has yet to become a major issue in New Zealand. Therefore, we are in the lucky position of being able – if we are willing – to take steps to hinder its progress.
It would be relatively easy for news organisations to instruct staff that no social media sources are to be used in stories without independent verification. It would be equally possible for them to club together to buy a subscription to Storyful or Full Fact that offer checking services.
However, the most enduring way in which the news media can play their part in the fight against disinformation is to become trusted sources of information. You’ve seen their starting position on the trust barometer so they have to work hard to establish that trust. They do so by transparently practising good journalism, by displaying and discussing their ethics and guidelines, and by applying all of the tools of critical thinking.
I want to end not by talking about disinformation but by emphasising the absolute need for its opposite. I often quote the British philosopher Julian Baggini. He wrote A Short History of Truth (subtitled Consolations for a Post-Truth World) and in it he retains an innate optimism for the future of humankind. He ends that small book by noting that the defence of truth often takes the form of battles to defend particular truths that divide us when we should be defending the shared values we place on truth and the virtues that flow from it. He concludes: “Those who stand up for this are pushing at an open door because ultimately we all recognise that truth is not a philosophical abstraction. Rather it is central to how we live and make sense of ourselves, the world and each other, day by day.”
Isn’t that the truth?
Shultz, R.H. & Godson, R. (1984). Dezinformatsia: Active measures in Soviet Strategy. Washington, Pergamon-Brassey’s, p. 41.
Soon Wan Ting, C. & Goh Ze Song, S. (2017). What Lies Beneath the Truth: A literature review on fake news, false information and more. National University of Singapore Institute of Policy Studies, Working Paper 31, pp. 18-38.
Reuters Institute Digital News Report 2018.
 “Twitter is sweeping out fake accounts like never before, putting user growth at risk”. Washington Post, 6 July 2018
Disinformation and ‘fake news’: Interim Report of the House of Commons Digital, Culture, Media and Sport Committee, HC 363, 24 July 2018.
Suwajanakorn, S., Seitz, S.M., & Kemelmacher-Shlizerman, I., “Synthesizing Obama: Learning Lip Sync from Audio”. ACM Transactions on Graphics, Vol. 36, No. 4, Article 95.
Lippmann, W. (1922/2004), Public Opinion. Mineola N.Y., Dover, p.50.
Edelman Trust Barometer 2018, p. 17. NZ supplement p. 4.
Mackay, P., “Just a little bit fake — does it count if the story is charming and harmless?”, Walkley Magazine, 6 June 2018.
Baggini, J. (2017). A Short History of Truth: Consolations for a Post-Truth World. London, Quercus, p. 108.