Filtering the pseudoscience

We live in a world where science has an undeniably large impact on our lives. People rightly show interest. But often this interest is manipulated and misdirected by believable pseudoscience and badly reported science. Aside from wasting the reader’s time, at its worst this misinformation can lead to decisions that are damaging to health and society as a whole.

An unrealistic goal?

I recently taught on a public policy course with the goal of making the policy-makers of the future more able consumers of scientific data. One question I was often asked was: “how are we supposed to know what to pay attention to and what to discard when we have no scientific training?” It’s a good question. With a burgeoning amount of information out there and limited time, is it possible for the would-be consumer of science to know what to pay attention to and what to disregard without formal training?

The short answer, I believe, is yes. Too often science is assumed to be a bunch of facts that non-scientists don’t, or even can’t, have access to. Whilst it’s true that explicit knowledge is a crucial facet of science, how that knowledge is approached and applied is just as important. Scientific thinking doesn’t need to be an impossible goal. With a bit of direction, awareness and practice, it’s an achievable mindset. Read on for the basics of how to avoid being taken in by bad science.

Be curious

A no-brainer really, but possibly the most fundamental ingredient in the scientific mix, so worth underlining.

But whilst curiosity is essential and a great starting point, in its raw state it’s not enough. Unfocused curiosity is distracting. Moreover, if that curiosity is satisfied with a single answer, without any scepticism or critical assessment of the supporting evidence, then it won’t get you far down the road to the truth. Good science is always evidence-based, and curiosity can be put to good use by enquiring after the presence and quality of this evidence.

The Dunning-Kruger effect

Welcome to the interesting world of cognitive bias. We all have these biases, whether we like to admit it or not. They’re built-in blind spots in the way our brains process our experience of the world. The Dunning-Kruger effect is a manifestation of one such bias: the lower your expertise on a topic, the more likely you are to overestimate your competence in it. In a nutshell, you don’t know enough to know what you don’t know.

In terms of distinguishing science from pseudoscience, the Dunning-Kruger effect can be of help. When asked how I know whether to trust an article, one of the first indicators that springs to mind is the certainty of the language it uses. A non-expert will have little or no appreciation of the nuances and caveats of the topic they’re writing about, and so will communicate these poorly, making statements that are categorical and absolute.

If an article contains many categorically stated facts, don’t take them at face value. This is easier said than done, since the world primes us to value certainty over uncertainty. However, simple awareness of the Dunning-Kruger effect can counter this priming. Scientists are trained to know the limits of their knowledge. They support conclusions with evidence and give conflicting arguments the same scrutiny and airtime as supporting ones. These qualities should be visible in a good piece of scientific writing.

The more black and white a piece is, the less credible it is likely to be. Real life is invariably messy, and science embraces this!

In fact don’t accept anything at face value

Maintain a healthy scepticism alongside the above. Apply your curiosity not just to the questions that interest you but to the possible answers too.

How do we know that? What is the evidence that backs up that statement? Are there any other explanations which might account for that?

These are all excellent starting points for interrogating an article. If it’s been written with a good scientific approach, the answers to these questions should already be given. If not, alarm bells should start ringing. Citation of sources is a bonus too (although I’ve seen pseudoscience articles cite unconnected studies to boost credibility, presumably in the knowledge that most people won’t check). In a full scientific paper, you must cite sources to back up every statement you make, so that the evidence behind your argument can easily be found. In the pop-science domain this requirement is less stringent (although I’m not sure why it should be). The more citations the better: transparency is perhaps the most reliable foundation for trust.

A note on scepticism

It’s worth saying that scepticism is distinct from simply not believing. Scepticism means not taking things at face value, but instead seeking out both the supporting and the contradicting evidence. Once this is gathered, both sides should be weighed and a balanced conclusion drawn. This conclusion should never go further than the evidence allows; if the support and the contradiction are equal, then the only honest conclusion is an inconclusive one. If people wish to conclude more than the evidence supports, that is their prerogative, but the conclusion will not be scientific, whether it is on a science topic or not.

Correlation does not equal causation

Despite this being perhaps the most basic tenet of science, and indeed of logic, the media and the internet still get it wrong with remarkable regularity. Confusing correlation with causation is a major indicator that you’re dealing with pseudoscience or bad reporting rather than the real thing.

A hypothetical study finds that as the number of cats a person owns increases, so does their chance of getting cancer. This is a correlation. The headline reports that cats may cause cancer. This implies causation, which the data cannot support. Why? Because another factor that covaries (i.e. changes in step) with both cat ownership and cancer may be the true causative factor. Removing cats from the equation will then have no effect at all if the causative factor remains.

For instance, the number of cats you own might also correlate with a higher income. Income may in turn correlate with different working conditions and higher exposure to a known carcinogen. In this example the number of cats you own is simply a covariant – a marker of carcinogen exposure (which in itself can be useful) but nothing to do with the actual cancer-causing process. Alternatively, the correlation may occur completely by chance, with no hidden causative factor at all.
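To make the trap concrete, here’s a toy simulation in Python. Everything in it is invented for illustration – the variables, the numbers and the risk model are assumptions of mine, not real epidemiology – but it shows how a hidden covariant can manufacture a correlation out of nothing:

```python
# A toy world (all numbers invented): income drives both cat ownership
# and carcinogen exposure, and ONLY exposure drives cancer -- yet cats
# and cancer still come out correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

income = rng.normal(0, 1, n)                     # the hidden covariant
cats = rng.poisson(np.exp(0.5 * income))         # wealthier -> more cats
exposure = 0.8 * income + rng.normal(0, 1, n)    # wealthier -> more exposure
risk = 1 / (1 + np.exp(-(exposure - 2)))         # only exposure is causal
cancer = rng.random(n) < risk

# The raw correlation looks like "cats cause cancer"...
print("corr(cats, cancer):", round(np.corrcoef(cats, cancer)[0, 1], 3))

# ...but within a narrow income band it all but disappears.
band = abs(income) < 0.1
print("corr at fixed income:", round(np.corrcoef(cats[band], cancer[band])[0, 1], 3))
```

Holding income fixed makes the cats–cancer correlation evaporate, which is exactly what you’d expect if cats were only ever a marker for something else.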

Knock-on consequences

For those thinking this is a trivial distinction, imagine a scenario where a funding body decides to invest money on the say-so of that headline. Millions of pounds could be pumped into investigating how cats cause cancer. To say nothing of the distress caused as people get rid of their cats!

To prove causation, removal of the factor, or blockade of its action, must make the outcome it is supposed to cause disappear. Similarly, reinstatement of the causal factor should predictably reinstate the outcome. Without this evidence, do not assume correlation implies causation, no matter how convincing it sounds.
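Here’s what that intervention test looks like in the same invented simulation (again, the risk model is an assumption made up for this post): forcing cat ownership to zero leaves the cancer rate untouched, while blocking the exposure shifts it.

```python
# The intervention test on the same invented model: only exposure
# appears in the risk function, so "removing the cats" changes nothing,
# while blocking the exposure measurably lowers the cancer rate.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

income = rng.normal(0, 1, n)
cats = rng.poisson(np.exp(0.5 * income))
exposure = 0.8 * income + rng.normal(0, 1, n)
u = rng.random(n)  # one fixed set of draws, so the runs compare directly

def cancer_rate(cats, exposure):
    # Hypothetical risk model: the cats term has zero weight by construction.
    risk = 1 / (1 + np.exp(-(0.0 * cats + exposure - 2)))
    return (u < risk).mean()

print("baseline rate:   ", cancer_rate(cats, exposure))
print("cats removed:    ", cancer_rate(np.zeros(n), exposure))  # unchanged
print("exposure blocked:", cancer_rate(cats, np.zeros(n)))      # drops
```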

That said, correlation is a good starting point for investigating causation, so don’t dismiss it out of hand – just ensure that the conclusion drawn from it is appropriate.

Is it worth the effort?

I think so. We live in a time when social participation has the potential to influence decision-making and problem-solving like never before. Equipping citizens to distinguish good science from bad will maximise the number of minds and perspectives contributing meaningfully to this process. In turn, this gives us the best chance of meeting the challenges we face and finding solutions. For more on becoming a discerning consumer of science, watch this space: I’ll be moving on to more complex and in-depth analysis in later posts. In the meantime, check out Sense About Science and Ben Goldacre’s Bad Science, both excellent resources.
