Statistricks, part 5: the remainder

“Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write” – Samuel S Wilks, 1951

Authority figures can no longer be trusted to tell the truth. And since most of the news media is now in the hands of private owners with conspicuous agendas, and the few remaining outlets with a shred of integrity are running on fumes, journalists can no longer be relied upon to catch those authority figures out.

Which means there’s really only one gatekeeper left to protect you from disinformation: you.

The lies we’re most familiar with – and therefore the best at seeing through – are the verbal kind: sequences of words that bend, break or obfuscate the truth. But as I hope my last few posts have illustrated, those who wish to mislead us are just as adept at manipulating sequences of numbers, and it turns out we’re not half as good at spotting that.

This is a problem. And since these lies have real, measurable impact (if they didn’t, no one would bother lying), it’s your problem.

No one’s asking you to sign up for a master’s in statistics. You just need to know enough to spot the red flags. So this, my last post on the subject, will recap my previous warnings, plus a wee list of other common examples of statistical chicanery.

Beware big numbers

We can all easily imagine what 10 items looks like. With a bit of effort, 100. And most of us can probably conjure a vague picture of 1,000 things. But when it comes to millions and billions and trillions, our mental gearboxes just seize up. This is what the propagandists are counting on.

The example I chose, because it is arguably one of the best known and certainly one of the most damaging, was the “£350m a week” claim by the Brexit campaign.

The Remainers were quick, ish, to point out the falsehood. But the battle was already lost. People were no less outraged by the true figure of £150m a week, because all that mattered was that it was a bafflingly large amount.

Big numbers in isolation are meaningless to the average person. To get an idea of their true significance, we need context: in this case, the cost of things of a comparable scale, like, say, the NHS budget (£2bn a week), or defence spending (£1bn a week). Most of all, we need to know exactly what that money bought. Sure, EU membership cost a lot of money, but did it offer value for that money?
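If you fancy running this kind of context check yourself, it’s a few lines of arithmetic. Here’s a rough sketch in Python, using the approximate weekly figures above (all ballpark numbers from this post, not official statistics):

```python
claimed_cost = 350_000_000    # the Brexit campaign's "£350m a week"
actual_cost  = 150_000_000    # the corrected weekly figure
nhs_budget   = 2_000_000_000  # approximate weekly NHS budget
defence      = 1_000_000_000  # approximate weekly defence spending

# A big number only means something next to comparable quantities
print(f"EU cost vs NHS budget: {actual_cost / nhs_budget:.1%}")     # 7.5%
print(f"EU cost vs defence:    {actual_cost / defence:.1%}")        # 15.0%
print(f"Claim overstates cost: {claimed_cost / actual_cost:.1f}x")  # 2.3x
```

Three divisions, and the “bafflingly large” number suddenly has a scale.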

Since its wildly successful field test in the Brexit debate, this tactic has been deployed on a daily basis. Whenever something within state competence is revealed as being even slightly less than ideal, the response from the state press officers is the same: trot out a big number.

“A spokesperson for the DfE said education was a top priority for the government, with an extra £2bn for schools for each of the next two years included in the autumn statement.”

Ooh, two billion! That’s a lot! Everything must be fine then.

But without the proper context, this means nothing. An extra £2bn on top of what? Was the annual increase in base funding, if it existed at all, in line with inflation? How does the total compare with the funding levels last year, or 10 years ago? How much is being spent per pupil, and how does this compare with other countries’ efforts? Most importantly of all, is this money enough to meet the current needs of the education system?

To sum up, don’t let your brain switch off when it sees big numbers. If anything, it should move to high alert.

Be on your guard against glitter

Advertisers have long made liberal use of “glitter” – words or phrases that make things sound superficially attractive, but are devoid of substance. Two of the more popular zingers are “more than” and “over”. I once saw a billboard ad for a breakfast cereal that proudly proclaimed: “Contains more than 12 vitamins!”

The reason this works (on the unwary, anyway) is the anchoring effect: the tendency of the human brain to evaluate everything with reference to the first value it encounters. In this case, the anchor value is 12, and “more than 12” signals the set of all numbers greater than 12 – loads! – when a moment’s reflection will tell us that the true figure is 13.

Now it seems politicians and journalists have learned a trick or two from copywriters, and no figure is deemed complete unless it comes with a side of comparatives or superlatives.

Recently, in the course of my subediting duties, I happened across an (unedited) article containing the line “the family were awarded over £8,129 11s 5d in reparation”. My God! Are you telling me those lucky sods received compensation of 8,129 pounds, 11 shillings and six pence?

Another word that sounds great but never survives scrutiny is “record”.

“That is why, despite facing challenging economic circumstances, we are investing a record amount in our schools and colleges.”

Well, Department for Education, I should hope you were investing a record amount every year, given that the population rises every year and that inflation is a thing.

One of the truth-twisters’ favourite buzzwords in the early days of Brexit was “fastest-growing”. Never mind those tired old European countries; we’re going to concentrate on trading with countries that actually have a future!

Here again, crucial context is missing, and the context is that these wonderful new trading partners are growing so fast because they’re starting from a much lower base. As even one prominent Brexit advocate once admitted (about a year before it became their favourite go-to gotcha), the real meaning of “fastest-growing” is “tiny”.

“Of course, if you start from nothing, it’s not hard to become the ‘fastest-growing’ campaign” – Isabel Oakeshott, 20/11/2015

Look at the IMF’s predictions for 2024.

The top five performers on this metric are Guyana (GDP $15bn), Macao ($24bn), Palau ($233m), Niger ($15bn) and Senegal ($28bn).

The GDP of the EU (even without the UK that it desperately needed to survive) is $17.2 TRILLION. That’s more than 200 times the GDP of those five countries combined. Not to mention that they’re all a lot closer and they make a lot more things that British people actually want to buy. Who is it more important to have barrier-free trade with?

Reporters and politicians are still making this same blunder today (“Next PM likely to inherit improved economy after UK growth revised up”).

If this were a sustained trend, it might tell us something significant. But the period over which the data was measured is three months. This is more likely just a course correction after a rough patch for the UK economy than a sign of sunlit uplands. At the very least, we should wait a while before leaping to any conclusions.

Be vigilant with visuals

Graphical representations of information – data visualisations, or datavis – are useful ways of communicating a lot of information quickly. And because creating them requires a modicum of expertise, they are often deployed as gotchas: “Quiver, mortal, as I blow your puny argument out of the water with my BAR CHART!”

The trouble is, in the wrong hands, datavis is as susceptible to abuse as any other mode of expression.

Be sceptical of surveys

Polling firms are businesses. Businesses serve the needs of customers. And customers have political or commercial interests, which do not necessarily align with yours, or society’s. (Moreover, it seems an increasing number of polling firms have agendas of their own.)

Pollsters regularly use samples that are too small, fail to publish their methodology, and use daft or leading questions. Even broadly decent organisations like the WHO are not above such silliness.

One of the questions in one such WHO survey was “Have you ever tried alcohol?” 57% of 15-year-olds in the UK said they had. The WHO then quoted this answer, in the press release (which is all most time-strapped journalists ever read), under the heading “Alcohol use widespread”.

Suddenly, sipping a shandy once on a family visit to a pub garden is lumped in with downing a bottle of Jack Daniel’s a day. Furthermore, we have no way of knowing whether these answers were completely honest. How many British 15-year-olds would be embarrassed to admit they’d never tried booze?

Polls can be tools to shape opinion as much as reflect it, first because they can influence government policies, and second because waverers in the general populace can be won over to what they perceive to be the majority view.

I could caution you to be wary of surveys that aren’t upfront about their methodology, surveys with a small sample size, surveys conducted by firms with murky political connections, or surveys whose funding is not declared. But to keep things simple: ignore polls.

Are those figures really significant?

Something else that should set the alarm bells ringing, along with big numbers, is long strings of numbers, as seen in this article.

“The data released on Monday, from the Chinese ministry of public security, showed the number of new birth registrations in 2020 was 10.035 million, compared with 11.8 million in 2019.”

The second figure in this sentence is expressed with three significant figures: 1, 1, 8. So why is the first given to five significant figures? Did data collection methods become a thousand times more reliable in a year?
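To compare like with like, you’d round both figures to the same number of significant figures. A quick sketch in Python (the little helper function is my own, purely illustrative):

```python
from math import floor, log10

def sig_round(x, figs):
    """Round x to the given number of significant figures."""
    return round(x, -int(floor(log10(abs(x)))) + figs - 1)

# Both birth figures at the three significant figures the data can support
print(sig_round(10_035_000, 3))  # 10000000 -- i.e. "10.0 million"
print(sig_round(11_800_000, 3))  # 11800000 -- i.e. "11.8 million"
```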

Most sums bandied around in the public domain – especially those derived from polls, but also anything involving average values, like fuel prices, which are also estimated using samples – are only approximations to begin with. That is, the true value may deviate from the estimated value by 1% or more.

Say 78.5% of 1,000 people surveyed think Dominic Cummings is a giant Gollum-faced twat, and about a third of those want to punch him in his stupid Gollum face. A sizeable proportion of reporters these days would whip out their calculators and proudly conclude that 26.1666666% of all people want to assault Specsavers Boy. While that’s mathematically precise, it’s not accurate (it can’t be, unless there’s a fraction of a person out there somewhere who wants to lay Cummings out). To say anything beyond 26% is meaningless and misleading.
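The difference between mathematical precision and honest accuracy comes down to how far you round. A sketch in Python, using the made-up survey numbers above:

```python
surveyed = 1_000
share_gollum = 0.785            # 78.5% of 1,000 respondents
share_punch = share_gollum / 3  # "about a third of those"

# Spuriously precise: implies certainty a 1,000-person sample can't deliver
print(f"{share_punch:.7%}")  # 26.1666667%

# Honest: no more precise than the underlying data
print(f"{share_punch:.0%}")  # 26%
```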

Similarly, if you’re performing an operation on a quantity that’s already been rounded, then it’s senseless to use more significant figures for the result.

“A slew of commercial and critical hits, including The Super Mario Bros Movie, which made $1.36bn (£1.094bn) at the global box office, has led to market experts comparing them to Marvel adaptations.”

Long strings of numbers are invariably a sign of false precision. If a politician, journalist or broadcaster is being hyper-precise with their figures in this way, they’re not necessarily consciously lying to you. But they are conveying an important truth: while they may know how to type numbers on a keypad, and even use basic mathematical operations, they haven’t a clue how statistics works, and therefore can’t be trusted to properly understand, verify or convey the information they’ve been given.

On a related point, thanks to the uncertainty inherent in big data, running news stories about a “rise” or “fall” in something when the change is infinitesimal is just. Plain. Wrong.

In January 2018, the BBC published an article claiming that unemployment in the UK had fallen by 3,000 to 1.44 million.

That’s a whopping drop of 0.2%. But there’s no way there isn’t at least 0.5% room for error in these figures – so it may well be the case that unemployment has risen slightly. What you’re looking at here is not a news story; it’s a rubber-stamped government press release.
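Here’s the arithmetic, with an assumed margin of error (the ±0.5% is my illustrative floor, not an official ONS figure):

```python
unemployed = 1_440_000
reported_fall = 3_000

relative_change = reported_fall / unemployed
print(f"{relative_change:.1%}")  # 0.2% -- the headline "fall"

margin_of_error = 0.005  # assume at least +/-0.5% sampling uncertainty
# The reported change is well inside the noise
print(relative_change < margin_of_error)  # True
```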

Why aggregates don’t add up

A few years ago, a newspaper I worked for (rightly) banned the practice of adding together jail sentences in the headlines of articles on court cases with multiple defendants. You know the sort of thing: “Members of Rochdale paedophile ring sentenced to total of 440 years”. The reasoning was that it was a) sensationalist and b) meaningless.

Because, uh, how many people were involved? (Sure, you could work it out by reading the article, but that’s an extravagance that fewer and fewer people seem willing to stretch to.) Moreover, how do those numbers break down? If 48 people were involved, did four get put away for 55 years, and the other 44 for five? Or was the punishment more evenly spread, and they got just over nine years each?

Similar practices, however, still abound in other areas.

“UK homeowners face £19bn rise in mortgage costs as fixed-rate deals expire”

Wow, that’s going to put a dent in the holiday fund! Oh wait, they mean all UK mortgagors combined. But … context. How many people even have mortgages in the UK?

Recent figures suggest about 15.5 million homes in England and Wales are occupied by their owners, of which just under half are mortgaged. (There are separate figures for Scotland and Northern Ireland, but they’re relatively small and for our current purposes can be disregarded.) That means on average, mortgage payments would rise by about £2,600 per year per household, or £217 a month. Woop. That’s how much my rent just went up by.
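Here’s that back-of-envelope calculation as a few lines of Python (the 47% mortgaged share is my stand-in for “just under half”):

```python
total_rise = 19_000_000_000        # the headline £19bn, all mortgagors combined
owner_occupied = 15_500_000        # homes in England and Wales (rough figure)
mortgaged = owner_occupied * 0.47  # "just under half" -- illustrative assumption

yearly = total_rise / mortgaged
print(f"£{yearly:,.0f} a year, £{yearly / 12:,.0f} a month")
# £2,608 a year, £217 a month
```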

A deeper dive into the figures reveals that fewer than a million households were facing monthly rises of £500 or more by 2026. Not half as sexy as the £19bn figure (and certainly not deserving of the lead slot on the front page of a global news provider), but twice as informative.

Unhappy mediums

People toss the word “average” around a lot, but as you may dimly recall from your schooldays, in the mathematical sphere, there are three distinct types: the mean, the median, and the mode. While they often give similar results, there’s sometimes significant divergence, and one kind of average is often more useful than another.

Take wages. Using the mean on a given group of people (adding up all the salaries and dividing that figure by the number of subjects) isn’t always terribly informative, because if the variance in wages is high, extreme figures skew the picture. Let’s say you have 10 people: two earn £10,000 a year, seven earn £20,000 a year, and one earns £200,000 a year. Calculating the mean would give you ((2 x £10,000) + (7 x £20,000) + (1 x £200,000))/10 = £36,000, which is a million miles from what any of the participants actually earn. The median, however – the figure in the middle if you line them up from smallest to largest – gives you £20,000, which is a much better reflection of the situation. (The mode – the figure that occurs most frequently – in this case gives the same result.)
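Python’s standard library will do all three averages for you; a sketch with the salaries above:

```python
from statistics import mean, median, mode

salaries = [10_000] * 2 + [20_000] * 7 + [200_000]

print(mean(salaries))    # 36000 -- dragged up by the single high earner
print(median(salaries))  # 20000.0 -- the middle of the pack
print(mode(salaries))    # 20000 -- the most common figure
```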

So it’s vital to know, when someone is talking about averages, which kind they median.

Pushing your panic buttons

Barely a week goes by without the Daily Mail’s health pages shrieking about the latest thing that gives you cancer. They’re usually drawing on a “landmark report” – that is, a press release from a no-mark university – and they’re almost always lying with numbers.

The headline “Eating bacon increases your chances of getting cancer by 18%” is quite alarming, but remember, this is a relative risk, compared with the chances of someone who doesn’t eat bacon. It turns out that the absolute probability of succumbing to cancer among non-bacon eaters is pretty low – about six in 100 will get bowel cancer in their lifetimes – so an 18% increase on that doesn’t actually represent that big a jump. The unimaginable will strike only seven in 100 bacon eaters.
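Converting a relative risk into an absolute one is a single multiplication. A sketch with the bacon numbers:

```python
baseline = 0.06       # ~6 in 100 non-bacon eaters get bowel cancer
relative_rise = 0.18  # the scary "18%" from the headline

bacon_risk = baseline * (1 + relative_rise)
print(f"{baseline:.0%} -> {bacon_risk:.1%}")  # 6% -> 7.1%
```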

(There’s a fab and doubtless far from complete list of everything the Daily Mail says can give you cancer here, although the links are a bit screwy.)

Proportional misrepresentation

Some news organisations have improved their efforts in this department lately, but it’s a pit they still fall into depressingly often.

Before it was spotted and corrected, an article published in 2021 about the impact of Covid on education said: “While there was an across-the-board fall of a fifth in the proportion of children working at a level consistent with their age, those pupils in year 1 in 2019-20 appear to have suffered the most significant losses … 81% of year 1 pupils achieved age-related expectations in March 2020 … by the summer of 2020, this had dropped to 60%.”

The reporter is starting from the wrong baseline. The actual numbers are irrelevant, but for the sake of argument, let’s say there were 100 kids. If 81% of them (ie 81 kids) met the requirements in March and only 60% in June, that’s a fall of 21 percentage points, not 21 per cent. Comparing the new figure with the baseline, 81, gives a drop of a quarter rather than a fifth.
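Percentage points versus percentages, in two lines of Python, using the figures from the article:

```python
before, after = 81, 60  # % of pupils meeting expectations, March vs summer

point_fall = before - after          # what the reporter measured
relative_fall = point_fall / before  # what "a fifth" should refer to
print(f"{point_fall} percentage points, a {relative_fall:.0%} relative fall")
# 21 percentage points, a 26% relative fall -- a quarter, not a fifth
```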

If you lack confidence in your ability to check percentages, use an online percentage checker, like this one: https://percentagecalculator.net/

Unusual? Suspect

I’m singling out the Mirror here, but virtually all the major news outlets reported this story in the same uncritical fashion. “The Royal National Lifeboat Institution has raised more than £200,000 in a single day … Its donations had increased by 2,000% from Tuesday, when it raised just £100.”

The alpha numerics among you will notice that the Mirror – and most other news providers – got their basic maths wrong here: £200,000 is an increase of not 2,000%, but almost two hundred thousand per cent on £100. But that’s not my main gripe.

The Mirror reporters (or should I say, the writers of the RNLI’s press release) have compared the latest figure with the figure from the day before – which ordinarily would not be a problem. However, we’re dealing here with not one, but two highly unusual days. Later in the piece, we discover that the average daily donation to the RNLI is not £100 (a very low outlier for the lifeboat folk), but £7,000 – a much more instructive figure against which to stand today’s total.

The most useful way to present the information would be “£200,000, around 30 times the average daily donation the RNLI receives” – but once again, the drive for a sexy headline has trumped all considerations of sense.
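Both corrections take a line each in Python, using the figures as quoted:

```python
tuesday, wednesday = 100, 200_000  # donations on the two unusual days
average_day = 7_000                # the RNLI's typical daily take

pct_increase = (wednesday - tuesday) / tuesday
print(f"{pct_increase:,.0%}")             # 199,900% -- not 2,000%
print(f"{wednesday / average_day:.0f}x")  # 29x the average day
```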

Finktanks

It doesn’t matter if it’s a study, a survey, a graph or a sweetie. Show nothing but scorn to anything that comes from a self-declared “thinktank” that refuses to declare its funding. The list currently includes, but is by no means limited to, the TaxPayers’ Alliance, the Adam Smith Institute, Civitas, Policy Exchange, the Centre for Policy Studies and the Institute for Economic Affairs. All are front organisations set up to advance the cause of neoliberal economics by whatever means necessary, and all are proven experts in weasel words, sharp practice and low-quality “studies”.

Things that should make you go “Hmm”

If you’re baffled as to why I’ve spent so much time droning on about this tedious statistics malarkey, it’s because it’s really fucking important to know when people are lying to you with numbers.

An awful lot of what’s wrong with the UK today – high prices, low pay, crumbling services, the erosion of workers’ rights, medicine shortages, rivers full of shit – has come about at least in part because people have failed to robustly challenge the falsehoods of politicians, thinktanks and the media.

Some will shrug and say, “Meh, politicians have always lied, and things have always worked out OK.”

But disinformation is now being pumped out on a scale beyond anything we’ve ever seen. Whereas just a few years ago, politicians would do the honourable thing and resign if they were caught lying, now they’re happy to do so repeatedly, on TV, on social media, in parliament.

Campaign organisations and rogue nations are pouring unprecedented resources into their propaganda ops, much of it targeting people directly through social media and thus bypassing all scrutiny. Soon AI will be churning this stuff out faster than checkers can find it, never mind check it. All at a time when our traditional defences against disinformation are collapsing.

And because of our lack of confidence with numbers, it’s the statistical lies that are most likely to slip through.

If that sounds scary … well, good. You should be scared. But don’t panic. What I’ve been trying to communicate with these posts is that spotting this sort of deviousness isn’t as hard as you think. 

Just bear the above points in mind. Don’t assume that something’s true just because a source you personally approve of published or repeated it. Is the source reliable? Does this claim tally with what others say? Do these numbers support a particular political agenda rather too neatly?

Or to boil it down to one rule of thumb: if a number seems too good or too interesting to be true, it almost certainly is. 

The 3M test: how to upgrade your bullshit detector

Graphic: bullshit meter


Minds greater than mine have been grappling with the reasons for society’s gaping divisions for years. Convincing cases have been made for the role of shorter attention spans, echo chambers, smaller families and spoiled kids and “me” culture, inequality, consumerism, the rise of lowest-common-denominator infotainment at the expense of grown-up news.

But from my perspective – a language graduate who has spent 30 years working in media and communications – the main problem is bullshit.

As individual, ephemeral human beings, we can’t possibly find out all the information we need at first hand. We have to rely on input from other sources: parents, teachers, friends, newspapers, TV, social media. But a lot of that input is contradictory. Some sources are clearly more reliable than others. In the time of coronavirus, the ability to tell good info from bad is more vital than ever. So how do you sort the gold from the garbage?

NBC report: 5G mobile phone masts set on fire amid bogus coronavirus theories
Not content with waging economic warfare on innocent civilians, Putin’s goons have now upgraded to biological warfare.

In the SnapChattin’, TikTokin’, Lyftin’, Zoomin’, Zooskin’ 21st century, whenever we come across a piece of new information, we tend to respond in one of two ways: automatic belief (“Yeah, that sounds about right, retweet”) and automatic disbelief (“Bollocks, obviously biased/brainwashed/stupid, block”).

That’s your system 1 brain – your primeval, emotional, semi-automatic brain – barging to the front and bellowing, “Don’t panic, everyone, I’ve got this, piece of piss,” when in fact it hasn’t and it isn’t. New information is precisely what your system 1 brain sucks at.

If you want to navigate your way through the morass of conflicting input, you’ve got to cast off this binary good/bad mindset, and prod your system 2 brain into activating a process called scepticism.

Scepticism (from Greek skepsis, “inquiry, doubt”) involves suspending your belief and disbelief and looking at things neutrally. (That’s as distinct from cynicism, which is closer to the wholesale rejection of everything.) Scepticism means checking, comparing, investigating – essentially, asking questions. And the questions you need to be asking when you encounter new information fall into three categories: medium, message, and marketplace.

Medium (the source, or context)

Believe it or not, there was a time not so long ago when most media, and even most politicians, could broadly be trusted. They might screw up; they might have vague ideological leanings one way or another. But they’d rarely blatantly tell you, with a straight face, that black was white or up was down.

Then the cutthroat chase for advertising revenue and votes and clicks began, leading to a rapid erosion of standards. Formerly august news organs gave us the Hitler diaries, the Sun’s reporting of the Hillsborough disaster, phone hacking and the fake Abu Ghraib torture photos, and trust in the “mainstream media” withered away. At the same time, ever larger numbers of news organisations fell into the hands of unscrupulous, openly partisan kleptocrats, who whittled the concept of editorial independence to the bone.

Paradoxically, this paved the way for even more unreliable purveyors of “news” – thinly disguised state-sponsored propaganda outlets, contrarian tweeters and YouTube demagogues – who snapped the bone clean in two. Accountable to no watchdog, bound by no editorial code, subject to no scrutiny, untouchable by law, never compelled to publish corrections or give right of reply, they used the shield of “free speech” to publish what they goddamn pleased. The increasingly erratic, sometimes biased, but still mostly principled news organisations had been abandoned in favour of shamelessly partisan hucksters.

In theory, it’s wrong to dismiss information purely on the basis of its source. That’s the crux of the ad hominem fallacy: it’s logically unsound to state that someone’s character or history has any bearing on the value of what they say. Just because Tony Blair says two plus two equals four doesn’t mean the real answer is nine.

But in practice, we don’t have the means to verify every assertion. And some individuals and organisations have such abysmal track records with the truth, and such transparent agendas, that it is now not just permissible but a damn good idea to inspect the messenger as carefully as the message.

So the first thing you should do when you come across new information is check where that information came from. If it’s an article, find out who owns the newspaper or website. Are they widely trusted? Do they have a clear political agenda? Is all or most of their output devoted to a narrow range of subjects? (How can anyone who stumbles across one of those cesspit Twitter accounts that consist of nothing but retweets of negative stories, real and fabricated, about Muslims, really think they’re curated in good faith?)

If you’re looking at a post on a random social media account, check the author’s bio. Does it seem authentic? Does it mention where the story came from – the original source (the urtext)? If not, place it firmly in the holding category labelled “DODGY AF”. In the absence of verification, a news “story” is just that: a fable.

If you can find the ultimate source, ask the same questions you would of a news organ. How long has the platform been around? Is it approvingly cited by other respected media outlets?

Now do your due diligence on the writer, if one is credited. What else has this person written? Do they have any experience of or expertise in the field they are writing about? What are their credentials other than a glib turn of phrase and a cool byline pic?

Reminder: columnists are commentators. Radio shock jocks are commentators. Vox-popped pensioners in seaside towns who voted for Brexit are commentators. Representatives of thinktanks are commentators. Populist politicians, because they listen only to the advice they want to hear, from the lickspittles they surround themselves with, are no better than commentators. And commentators are not experts. They might have a way with words, but they have no such dominion over facts; they deal in opinions, and those opinions are often based solely on what sounds or feels good.

If we’re talking about an epidemic, I want to be hearing from epidemiologists. If we’re talking about international trade, I want to be hearing from economists. Not from failed fucking fashion students.

If you can’t quickly establish the identity, background and financing of a source, then suspect (but don’t assume) the worst. No reputable media organisation has any reason to withhold where their money comes from – if you’re acting on behalf of private interests, then you’re not acting in the public interest – and most journalists would happily take credit for a fart at a funeral.

Lastly, is your source Donald Trump? Well, if you’ve decided to give the slightest credence to that 50-faced, triple-chinned, flint-hearted, atom-brained, snake-tongued, gossamer-skinned, matchstick-spined, lily-livered, mushroom-cocked lardass, then the chances you’re reading this – or indeed anything – are infinitesimal; but in that vanishingly unlikely event, know this: Trump’s mis- and disinformation has already killed people, and may yet kill tens of thousands more.

Message (the story, or text)

The focus of your inquiry, of course, should be on the information itself. Putting the content aside for a moment, you can garner some clues from the presentation. Is this a polished, professional product, or does it feel … tossed off somehow?

Are the spelling and grammar of a high standard? (Again, it’s a mistake to write something off solely because of a stray “your” for “you’re”, but if someone is sloppy with something as simple as an apostrophe, it does raise a question mark over the accuracy of their statements.)

Is the tweet or article or passage of speech delivered clearly, accurately and succinctly, with specifics rather than generalisations? Are the words all used in their correct senses?

Is the use of language fresh and original, or cluttered with clichés and buzzwords? Is the meaning clear and unambiguous? Does the author or speaker illustrate their point with relevant examples? Does the piece contain any obvious inaccuracies, or things you know or suspect to be untrue? Is it internally consistent?

If the author uses statistics, are they sound? (I know it’s hard for those without the appropriate background to rigorously examine any particular numerical claim. And unfortunately, since even most trained journalists and interviewers don’t know their bell curves from their bell-ends, they’re not often a great help either. My next post will deal with a few of the most common abuses of statistics.)

Have any of the people mentioned been approached to give their side of events (this is regarded as good practice by traditional news outlets)? Have any dissenting voices been quoted? Has the background to the developments been fully expounded?

If there are any pictures or video accompanying the story, are they attributed to anyone? (Photographers and filmmakers, even amateur ones, are no shier about taking credit for their work than writers.) Has this picture or video been used elsewhere, and if so, are there any differences between the two versions? If not, has it independently been verified as authentic?

Yep, they tried to claim that Obama was a Black Panther.

Now look more closely at the language used. Is the piece relatively free of adjectives, adverbs and otherwise emotionally loaded words? It is a reporter’s job to tell readers what has happened, not what opinion to have on what has happened; they’re reporters, after all, not explainers or influencers. When someone is introduced as “terrorist sympathiser Jeremy Corbyn”, you can be fairly sure you’re not listening to a neutral voice.

Good news organisations take great care to draw a thick line between objective news reporting and subjective interpretations of the news. Opinion pieces are clearly badged as such, and published in a separate section of the paper or website.

But bad practice is proliferating, and more and more media outlets, particularly those under the control of moguls, are beginning to see their duty as being not to inform, but to influence. They, the openly partisan “news” operations funded by God knows who, and self-appointed champions of truth like Tommy Robinson and Paul Joseph Watson have abandoned all pretence of balance and neutrality.

Good news reporting is not fun or edgy or stylish or provocative; it is dry. Functional. Dull, even. The text should have no subtext. Scroll to the end for some recent examples.

If you don’t have time to go through this rigmarole every time you come across new information – and let’s face it, you don’t – one little short cut will often point you in the right direction. Read the story, and re-read the headline. Now do your best to consider this objectively: does the headline accurately reflect the content of the story?

Once upon a time, headlines had a single purpose: to pithily summarise the words beneath them. But as the media ecosystem became more competitive, headlines evolved. Accuracy was no longer enough; they had to be quirky, grabby, funky. The Sun enjoyed some success for a while by crowbarring in terrible puns (but trust me, guys, that era is long past). Meanwhile, the Mail (and all newspapers, to some extent) got round the problem by stretching, or sometimes breaking, the truth. Take this gem from last August.

If you read the article, the reasons for the billionaires’ departure are, in fact, purely the opinion of a single lawyer – and her exact words are, “Brexit uncertainty is driving out many of the wealthiest non-doms … The prospect of a Labour government is also very unappealing to high net worth people.” So Corbyn isn’t even mentioned, and fears about Labour (in the opinion of this solitary lawyer) are only a secondary factor in capital flight. The headline grossly misrepresents the article, to the benefit of the Mail’s anti-left agenda.

Much as I hate to be even glancingly fair to the chuntering ninnyhammer that is Daniel Hannan, his recent wankpiece for ConservativeHome, headlined “Alarmism, doom-mongering, panic – and the coronavirus. We are nowhere near a 1919-style catastrophe”, wasn’t quite as irresponsible as it first seemed. The text actually reads, “You’re unlikely to die of coronavirus,” which is quite true – if perhaps not the most useful message to be sending to society at this time.

But to return to being deservedly harsh on the chuntering ninnyhammer that is Daniel Hannan, he then chose to tweet a link to his own story, with a headline of his own devising that said something completely different, purely in the interest of harvesting more clicks. Instead he harvested widespread vilification, and deleted the tweet.

Just before the 2016 EU referendum, InFacts did a round-up of the most misleading stories on the issue published in the rightwing press. In most of the cases, the offence involved not an outright untruth, but a duplicitous headline.

But the last word in headline shenanigans goes to this Express story from 2016, to which I dedicated an entire post (and for which trouble I was threatened with legal action). Accurate headlines are more important today than they’ve ever been because much of the time, people simply don’t read any further – and even when they do, the headline is what they take away with them.

One more little thing to look out for: if what you’re reading is online, has the author provided any external links to something that might corroborate it? If someone believes their information is legit, they’ll be happy to share their source. (It should go without saying that links to opaque websites with clear political agendas don’t count.)

The marketplace (the metatext)

So, you’ve carried out a full background check on the potato salesman. You’ve examined his potatoes. Now you need to check to see what other consumers are saying about his potatoes, and how rival tradesmen’s potatoes compare.

First, look to your fellow spud seekers. What rating have people given the merchant on ChipAdvisor? If it’s a tweet, what are people saying in the replies? If it’s an online article, what are they saying in the comments underneath? If it’s an interview, did the interviewer challenge the remark, or ask any follow-up questions?

One-word responses can be safely ignored. “Bollocks”, “Nonsense”, “Twat”: that’s just the opposing side’s system 1 brigade reflexively rubbishing the point because it threatens their world-view. Pay no more heed to those trying to dismiss the article with reference to the platform or writer. “Typical Remoaner”, “You expect me to believe something published in the Guardian?!!”, etc.

The comments worth considering are the detailed, level-headed, rational ones: people pointing out factual errors, highlighting contradictory evidence or logical flaws, and providing relevant context. Pay special attention to those who can actually back up their points with evidence from a reputable third-party source. Do these responses, individually or collectively, cast any doubt on any of the claims in the original article or post?

Now consider the rival salesmen. If there’s any substance to a story, then the chances are, other individuals or news outlets will have picked up on it. So hunt down some other versions. (Word-for-word repetitions don’t count. What you’ve found there is not a separate source, but one source copying a second one, or two sources copying a third, which suggests an orchestrated propaganda campaign rather than an independently verified scoop.)

Now, how reliable is this source? Is its information usually of high quality? Once you’re satisfied that it has no connection with the first source and upholds basic journalistic standards, compare the two takes. Do any of the details in the new version contradict any of those in the first? Does it omit any details, provide additional context, or interpret the facts differently? Why might that be?

Let me stress: none of these red flags, in and of itself, is sufficient reason to dismiss any piece of information outright. But each one should push the needle on your bullshit-meter further to the right.

I know this seems like an awful lot of work to do just to establish some approximation of the truth; but the truth is under attack as never before, and it’s the only weapon we have short of actual weapons against the dark forces of illiberalism and authoritarianism. And while Finland has responded to fake news by launching a nationwide campaign to educate and protect its citizens, the British government has instead chosen to become one of its most prolific purveyors.

The task of saving democracy falls to you and you alone.

Starmer chameleon

Now let’s put those principles into practice and examine the different approaches of various media outlets to the same news item. On the day I went out to mass-buy the papers, April 4th, one of the main non-coronavirus stories was the news that Keir Starmer was poised to win the Labour leadership election.

Guardian: Keir Starmer poised to be announced new Labour leader

(900 words, page 27 of 35 news pages)
Thrust: Starmer likely to win, Corbyn supporters fear they will be purged
Introduced as: Keir Starmer
Referred to subsequently as: Former director of public prosecutions, shadow Brexit secretary
Background/context: Age (57), election defeat, antisemitism inquiry, forthcoming NEC elections, efforts to unify party wings, likely shadow ministerial appointments
Other people cited: Unnamed allies of Starmer, unnamed allies of Corbyn, one former Corbyn aide, Tulip Siddiq, associate of Rebecca Long-Bailey
Subjectivity: “Devastating 80-seat defeat to Boris Johnson”
Errors: “After … an ongoing inquiry”, incorrect dashes, “Starmer’s had successfully targeted”
Bullshit factor: 2/10

Daily Mail: Sir Keir and a question of cowardice

(2,700 words, p32/45; badged as “special investigation”)
Thrust: Starmer has not done enough to combat antisemitism in the Labour party, according to several conversations with unnamed party sources and a cursory analysis of 340 online articles
Introduced as: Party figure more moderate than Jeremy Corbyn
Referred to subsequently as: Shadow Brexit secretary, hot favourite to succeed Corbyn, Sir Keir, QC and former DPP
Background/context: Starmer’s Jewish family, leadership candidates’ records on condemning antisemitism, first elected to parliament in 2015
Other people cited: Unnamed sources in Jewish community and on far left of party, “a friend of a rabbi”, “a source”, “a source at the Jewish Chronicle”, “another Jewish former Labour politician”, “one former Labour MP”, “prominent members of the Jewish community”, “a friend of Luciana Berger”, “one of Starmer’s former colleagues”. In an article 2,700 words long, consisting mostly of quotations, not a single source is named
Subjectivity: “Cowardice”, “troubling issue”, “Sir Keir’s surprise promotion of his previously discreet Jewish ties”, “desperate for leadership votes”, “deeply disillusioned Jewish membership”, “cosy interviews”, “hardly gladiatorial tone”, “these mild critiques”, “sympathetic interview”, “Left-leaning New Statesman magazine”, “previously shrouded Jewish ties”, “Sir Keir replies, no doubt sadly”, “habitual fence-sitting”
Errors: Incorrect punctuation around speech; missing quotation mark; missing final full stop
Bullshit factor: 8/10

Sun: Labour’s Keir and present danger

(230 words, p24/36 news/celebrity gossip pages)
Thrust: Corbyn will cause trouble from back benches
Introduced as: Millionaire barrister Keir Starmer
Referred to subsequently as: Former chief prosecutor
Background/context: Age; a podium has been sent to Starmer’s house so that he can practise his acceptance speech
Other people cited: “A source”, Jeremy Corbyn’s Facebook page
Subjectivity: “Bitterly divided party”; “Marxist policies”
Errors: “While we exist on lockdown”, “bitterly-divided”, stray full stop
Bullshit factor: 7/10, plus a bonus 1 for that godawful must-pun-at-all-costs headline

Times: Labour’s women will rise again under Sir Keir

(p18/31, 400 words)
Thrust: Several MPs who were overlooked or declined to serve under Corbyn are likely to be called to the shadow cabinet
Introduced as: Sir Keir Starmer
Referred to subsequently as: Sir Keir, exclusively
Background/context: Shadow cabinet will not meet in person until social distancing rules relaxed; Corbyn allies will be discarded
Other people cited: Lord Wood of Anfield. Lots of speculation couched in terms of “X might/could/is expected to …”
Subjectivity: Article is basically all guesswork
Errors: None
Bullshit factor: A surprising 3/10

Telegraph: Corbyn plans ‘farewell tour’ as Starmer takes reins

(p16/20, 400 words)
Thrust: Corbyn may become Tony Benn-style thorn in Starmer’s side
Introduced as: Sir Keir Starmer
Referred to subsequently as: Sir Keir, former director of public prosecutions
Background/context: Starmer’s efforts to rebuild relations with marginalised elements of party; rebellious tendencies of Benn and Corbyn
Other people cited: Corbyn, “close ally of Angela Rayner”, “one insider”
Subjectivity: Purports to know Starmer’s vision for party; idea of “farewell tour” appears to be invention of reporter
Errors: Double “as” in opening sentence
Bullshit factor: 4/10

Tweet: Kier Starmer is a charmless posh sod

(31 words)
Thrust: Keir Starmer is a charmless posh sod
Introduced as: Sir Kier Starmer QC
Background/context: Former director of public prosecutions
Other people cited: None
Subjectivity: All of it
Errors: Can’t even spell the guy’s fucking name right
Bullshit factor: 10/10