Bad Science: Quacks, Hacks, and Big Pharma Flacks

"Bad Science" angered me more than any book I've read in the last few years. Dr. Goldacre lays bare the tactics used by people committing the worst crime I can think of - deliberate falsification and misrepresentation of scientific data. I get so worked up about this because:

  1. Fraudulent science directly undermines the central good idea of Western Civilization - that systematic, rational inquiry is the best path to discovering truth.

  2. The scale of the impact can be enormous. Not only does bogus science directly harm those who unknowingly rely on it (like the tens of thousands who died of Vioxx-induced heart attacks), but it also destabilizes science itself by adding rotten timbers to the edifice of scientific knowledge. False findings can send scores of other researchers down unproductive paths and derail the field's forward progress. The opportunity cost is hidden but enormous.

  3. Infuriatingly, these crimes are committed by people who absolutely know better. It's not surprising that individual scientists have the same foibles as the rest of us - their laziness, sloppiness, greed, and ambition occasionally tempt them to take shortcuts. But in a world that sanctifies science as the source of truth, society is absolutely justified in holding scientists to a higher standard. If you're a scientist who is falsifying or misrepresenting your work... you know better, you are a bad person, and you need to find a new job right now.

Somewhat disappointingly, Goldacre focuses almost exclusively on medical science. This isn't surprising, as he is a trained doctor and, as he points out, "half of all science stories in the media are medical." I had been hoping for a broader view, but Goldacre clearly has an excellent command of the full spectrum of medical fraud.

He spends a lot of the book talking about the fraud machine that is industry-funded pharmaceutical research, the media's role in the MMR vaccine scare, the political HIV/AIDS disinformation campaign in Africa, and the larger cultural/economic context in which bad science occurs. His writing is clear and he does a good job of communicating complex ideas without oversimplifying them. Overall, though, I found the book a bit repetitive, and it bounced around too much - I would have preferred more depth on some of his case studies.

My favorite bits were when Goldacre went into the nitty-gritty of tactics for manipulating data and deceptively communicating results in the media. This should be required reading for every high school student.

Goldacre also covers defense against the dark arts. He talks about some of the "big ideas" of modern thought such as systematic review and meta-analysis. He also provides some useful checks for evaluating whether research is potentially fraudulent. I wish someone would make a browser plugin that automatically scores scientific papers and news articles based on his checklists.
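In that spirit, here's a toy sketch of what the scoring core of such a plugin might look like. The check names and weights below are my own inventions for illustration, not Goldacre's actual checklist:

```python
# Hypothetical warning-sign checks and weights (my inventions, loosely
# inspired by the book's themes; not an official checklist).
CHECKS = {
    "reports_randomization_method": 2,  # trials that hide this tend to overestimate effects
    "reports_blinding": 2,
    "preregistered_protocol": 3,
    "independent_funding": 1,
    "published_full_methods": 2,
}

def score_paper(answers):
    """Return (points_earned, points_possible) for a dict of yes/no answers."""
    earned = sum(weight for check, weight in CHECKS.items() if answers.get(check))
    possible = sum(CHECKS.values())
    return earned, possible

earned, possible = score_paper({
    "reports_randomization_method": True,
    "reports_blinding": True,
    "preregistered_protocol": False,
    "independent_funding": False,
    "published_full_methods": True,
})
print(f"{earned}/{possible}")  # 6/10
```

The hard part, of course, would be extracting the yes/no answers from a paper automatically; the scoring itself is trivial.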

Probably the most interesting idea in the entire book was his proposal for an experimental registry:

What’s truly extraordinary is that almost all these problems — the suppression of negative results, data dredging, hiding unhelpful data, and more — could largely be solved with one very simple intervention that would cost almost nothing: a clinical trials register, public, open, and properly enforced. This is how it would work. You’re a drug company. Before you even start your study, you publish the protocol for it, the methods section of the paper, somewhere public. This means that everyone can see what you’re going to do in your trial, what you’re going to measure, how, in how many people, and so on, before you start.

This seems like a slam-dunk idea. Probably not a lot of money to be made running this, but I don't know why the American citizenry shouldn't demand that all federally-funded research be registered in a central place and automatically monitored. Maybe a good project for after I count all the trees...

This was the first book club book for my 2017 reading theme on the "Integrity of Western Science". Surprisingly, it wasn't a very good book club book. Besides frustration with bad science and some discussion of ways to manipulate data, there wasn't actually much to debate about. Goldacre does a great job of laying out his case and there's not a lot to disagree with him about. I did pick up some interesting ideas for future reading (mesothelioma, Vioxx, the Cochrane Collaboration, the Helsinki Declaration) and this book gave me a good initial perspective for thinking about scientific integrity. Goldacre's "Bad Pharma" book is on my list now too.

My highlights are below.


Preface

We are obsessed with health — half of all science stories in the media are medical — and are repeatedly bombarded with sciencey-sounding claims and stories. But as you will see, we get our information from the very people who have repeatedly demonstrated themselves to be incapable of reading, interpreting, and bearing reliable witness to the scientific evidence.

Nutritionists are alternative therapists but have somehow managed to brand themselves as men and women of science. Their errors are much more interesting than those of the homeopaths, because they have a grain of real science to them, and that makes them not only more interesting but also more dangerous, because the real threat from cranks is not that their customers might die — there is the odd case, although it seems crass to harp on about them — but that they systematically undermine the public’s understanding of the very nature of evidence.

I will show you evidence that a vanguard of startling wrongness is entering British universities, alongside genuine academic research into nutrition.

Next we will examine how the media promote the public misunderstanding of science, their single-minded passion for pointless nonstories, and their basic misunderstandings of statistics and evidence, which illustrate the very core of why we do science: to prevent ourselves from being misled by our own atomized experiences and prejudices.

1 - Matter

To focus on the methods is to miss the point of these apparent “experiments”: they aren’t about the methods; they’re about the positive result, the graph, and the appearance of science.

2 - Brain Gym

More than this, perhaps we all fall for reductionist explanations about the world. They just feel neat somehow.

But Brain Gym perfectly illustrates two more recurring themes from the industry of pseudoscience. The first is this: you can use hocus pocus — or what Plato euphemistically called a noble myth — to make people do something fairly sensible like drink some water and have an exercise break.

The second theme is perhaps more interesting: the proprietorialization of common sense. You can take a perfectly sensible intervention, like a glass of water and an exercise break, but add nonsense, make it sound more technical, and make yourself sound clever. This will enhance the placebo effect, but you might also wonder whether the primary goal is something much more cynical and lucrative: to make common sense copyrightable, unique, patented, and owned.

Most people know what constitutes a healthy diet already. If you want to make money out of it, you have to make a space for yourself in the market, and to do this, you must overcomplicate it, attach your own dubious stamp.

This process of professionalizing the obvious fosters a sense of mystery around science and health advice that is unnecessary and destructive. More than anything, more than the unnecessary ownership of the obvious, it is disempowering.

3 - The Progenium XY Complex

Just like the lottery, the cosmetics industry is playing on people’s dreams, and people are free to waste their money. I can very happily view fancy cosmetics — and other forms of quackery — as a special, self-administered, voluntary tax on people who don’t understand science properly.

More than that, these ads sell a dubious worldview. They sell the idea that science is not about the delicate relationship between evidence and theory. They suggest, instead, with all the might of their international advertising budgets, their Microcellular Complexes, their Neutrilium XY, their Tenseur Peptidique Végétal, and the rest, that science is about impenetrable nonsense involving equations, molecules, sciencey diagrams, sweeping didactic statements from authority figures in white coats, and that this sciencey-sounding stuff might just as well be made up, concocted, confabulated out of thin air, in order to make money.

4 - Homeopathy

So here we address one of the most important issues in science: How do we know if an intervention works?

Homeopathy makes the clearest teaching device for evidence-based medicine for one simple reason: homeopaths give out little sugar pills, and pills are the easiest thing in the world to study.

Conventional medicine in Hahnemann’s time was obsessed with theory and was hugely proud of basing its practice on a “rational” understanding of anatomy and the workings of the body. Medical doctors in the eighteenth century sneeringly accused homeopaths of “mere empiricism,” an overreliance on observations of people getting better. Now the tables are turned; today the medical profession is frequently happy to accept ignorance of the details of mechanism, as long as trial data shows that treatments are effective (we aim to abandon the ones that aren’t), whereas homeopaths rely exclusively on their exotic theories and ignore the gigantic swath of negative empirical evidence on their efficacy. It’s a small point, perhaps, but these subtle shifts in rhetoric and meaning can be revealing.

We do not know how general anesthetics work; but we know that they do work, and we use them despite our ignorance of the mechanism.

As Voltaire said, “The art of medicine consists in amusing the patient while nature cures the disease.”

Even if we had one genuine, unambiguous, and astonishing case of a person’s getting better from terminal cancer, we’d still be careful about using that one person’s experience, because sometimes, entirely by chance, miracles really do happen. Sometimes, but not very often.

As the researchers made clear in their own description, claims for miracle cures should be treated with caution, because “miracles” occur routinely, in 1 percent of cases by their definition, and without any specific intervention. The lesson of this paper is that we cannot reason from one individual’s experience or even that of a handful, selected out to make a point. So how do we move on? The answer is that we take lots of individuals, a sample of patients who represent the people we hope to treat, with all of their individual experiences, and count them all up. This is clinical academic medical research, in a nutshell, and there’s really nothing more to it than that: no mystery, no “different paradigm,” no smoke and mirrors. It’s an entirely transparent process, and this one idea has probably saved more lives, on a more spectacular scale, than any other idea you will come across this year.

If antiauthoritarian rhetoric is your thing, then bear this in mind: perpetrating a placebo-controlled trial of an accepted treatment — whether it’s an alternative therapy or any form of medicine — is an inherently subversive act. You undermine false certainty, and you deprive doctors, patients, and therapists of treatments that previously pleased them.

I go for lunch, entirely unaware that I am calmly and quietly polluting the data, destroying the study, producing inaccurate evidence, and therefore, ultimately, killing people (because our greatest mistake would be to forget that data is used for serious decisions in the very real world, and bad information causes suffering and death).

Some of the biggest figures in evidence-based medicine got together and did a review of blinding in all kinds of trials of medical drugs and found that trials with inadequate blinding exaggerated the benefits of the treatments being studied by 17 percent.

Does randomization matter? As with blinding, people have studied the effect of randomization in huge reviews of large numbers of trials and found that the ones with dodgy methods of randomization overestimate treatment effects by 41 percent. In reality, the biggest problem with poor-quality trials is not that they’ve used an inadequate method of randomization; it’s that they don’t tell you how they randomized the patients at all. This is a classic warning sign and often means the trial has been performed badly.

As it happens (I promise I’ll stop this soon), there have been two landmark studies on whether inadequate information in academic articles is associated with dodgy, overly flattering results, and yes, studies that don’t report their methods fully do overstate the benefits of the treatments, by around 25 percent.

Overall, doing research robustly and fairly does not necessarily require more money; it simply requires that you think before you start. The only people to blame for the flaws in these studies are the people who performed them.

This will be our last big idea for a while, and this is one that has saved the lives of more people than you will ever meet. A meta-analysis is a very simple thing to do, in some respects: you just collect all the results from all the trials on a given subject, bung them into one big spreadsheet, and do the math on that, instead of relying on your own gestalt intuition about all the results from each of your little trials. It’s particularly useful when there have been lots of trials, each too small to give a conclusive answer, but all looking at the same topic.
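(An aside from me, not Goldacre: the "do the math" step in the simplest fixed-effect meta-analysis is just an inverse-variance weighted average of the trials' effect estimates. A minimal sketch, with trial numbers invented for illustration:)

```python
import math

trials = [
    # (effect estimate, standard error) -- e.g. log odds ratios; made-up data
    (-0.30, 0.25),
    (-0.10, 0.40),
    (-0.45, 0.30),
]

def fixed_effect_pool(trials):
    """Inverse-variance weighted average of trial effects, and its standard error."""
    weights = [1.0 / se**2 for _, se in trials]  # precise trials count for more
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

pooled, se = fixed_effect_pool(trials)
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f}")
```

Each small, inconclusive trial contributes a little precision, and the pooled estimate ends up tighter than any single trial could be.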

As I said, information alone can be lifesaving, and one of the greatest institutional innovations of the past thirty years is undoubtedly the Cochrane Collaboration, an international not-for-profit organization of academics that produces systematic summaries of the research literature on health care research, including meta-analyses.

Clinicians, pundits, and researchers all like to say things like “There is a need for more research,” because it sounds forward-thinking and open-minded. In fact, that’s not always the case, and it’s a little-known fact that this very phrase has been effectively banned from the British Medical Journal for many years, on the ground that it adds nothing; you may say what research is missing, on whom, how, measuring what, and why you want to do it, but the hand-waving, superficially open-minded call for “more research” is meaningless and unhelpful.

5 - The Placebo Effect

In the real world of clinical practice, patients and doctors aren’t so interested in whether a new drug works better than nothing; they’re interested in whether it works better than the best treatment they already have.

Drug companies, more than most, know the benefits of good branding; they spend more on PR, after all, than they do on research and development.

People I know still insist on buying brand-name painkillers. As you can imagine, I’ve spent half my life trying to explain to them why this is a waste of money, but in fact, the paradox of Branthwaite and Cooper’s experimental data is that they were right all along. Whatever pharmacology theory tells you, that brand-named version is better, and there’s just no getting away from it. Part of that might be the cost; a recent study looking at pain caused by electric shocks showed that a pain relief treatment was stronger when subjects were told it cost $2.50 than when they were told it cost 10 cents. (And a paper currently in press shows that people are more likely to take advice when they have paid for it.)

We must remember, specifically, that the placebo effect — or the meaning effect — is culturally specific.

Once again, it’s not just that they have no evidence for their claims about how their treatments work: it’s that their claims are mechanistic, intellectually disappointing, and simply less interesting than the reality.

6 - The Nonsense Du Jour

These intellectual crimes are ferried to you by journalists, celebrities, and, of course, “nutritionists,” members of a newly invented profession who must create a commercial space to justify their own existence. In order to do this, they must mystify and overcomplicate diet and foster your dependence upon them.

Forty years ago a man called Austin Bradford Hill, the grandfather of modern medical research, who was key in discovering the link between smoking and lung cancer, wrote out a set of guidelines, a kind of tick list, for assessing causality and a relationship between an exposure and an outcome. These are the cornerstone of evidence-based medicine, and often worth having at the back of your mind:

  • it needs to be a strong association, which is consistent, and specific to the thing you are studying, where the putative cause comes before the supposed effect in time;
  • ideally there should be a biological gradient, such as a dose-response effect;
  • it should be consistent or at least not completely at odds with what is already known (because extraordinary claims require extraordinary evidence);
  • and it should be biologically plausible.

There have been an estimated fifteen million medical academic articles published so far, and five thousand journals are published every month.

That solution is a process called systematic review. Instead of just mooching around online and picking out your favorite papers to back up your prejudices and help you sell a product, in a systematic review you have an explicit search strategy for seeking out data (openly described in your paper, even including the search terms you used on databases of research papers), you tabulate the characteristics of each study you find, you measure — ideally blind to the results — the methodological quality of each one (to see how much of a “fair test” it is), you compare alternatives, and then finally you give a critical, weighted summary.

In the nineteenth century, as the public health doctor Muir Gray has said, we made great advances through the provision of clean, clear water; in the twenty-first century we will make the same advances through clean, clear information. Systematic reviews are one of the great ideas of modern thought. They should be celebrated.

But the early evidence in favor of antioxidants was genuinely promising and went beyond mere observational data on nutrition and health; there were also some very seductive blood results. In 1981 Richard Peto, one of the most famous epidemiologists in the world, who shares the credit for discovering that smoking causes 95 percent of lung cancer, published a major paper in Nature. He reviewed a number of studies that apparently showed a positive relationship between having a lot of beta-carotene on board (this is an antioxidant available in the diet) and a reduced risk of cancer.

But the editor of Nature was cautious. A footnote was put onto the Peto paper that read as follows: “Unwary readers (if such there are) should not take the accompanying article as a sign that the consumption of large quantities of carrots (or other dietary sources of beta-carotene) is necessarily protective against cancer.” It was a very prescient footnote indeed.

There’s also an important cultural context for this rush of activity that cannot be ignored: it was the tail end of the golden age of medicine. Before 1935 there weren’t too many effective treatments around: we had insulin, liver for iron-deficiency anemia, and morphine — a drug with superficial charm at least — but in many respects, doctors were fairly useless. Then suddenly, between about 1935 and 1975, science poured out a constant stream of miracles. Almost everything we associate with modern medicine happened in that time: treatments like antibiotics, dialysis, transplants, intensive care, heart surgery, almost every drug you’ve ever heard of, and more. As well as the miracle treatments, we really were finding those simple, direct, hidden killers that the media still pine for so desperately in their headlines. Smoking, to everybody’s genuine surprise — one single risk factor — turned out to cause almost all lung cancer. And asbestos, through some genuinely brave and subversive investigative work, was shown to cause mesothelioma.

The epidemiologists of the 1980s were on a roll, and they believed that they were going to find lifestyle causes for all the major diseases of humankind. A discipline that had got cracking when John Snow took the handle off the Broad Street pump in 1854, terminating that pocket of the Soho cholera epidemic by cutting off the supply of contaminated water.

It’s interesting to note, while we’re here, that carrots were the source of one of the great disinformation coups of World War II, when the Germans couldn’t understand how our pilots could see their planes coming from huge distances, even in the dark. To stop them from trying to work out if we’d invented anything clever like radar (as we had), the British instead started an elaborate and entirely made-up nutritionist rumor. Carotenes in carrots, they explained, are transported to the eye and converted to retinal, which is the molecule that detects light in the eye (this is basically true and is a plausible mechanism, like those we’ve already dealt with), so, went the story, doubtless with much chortling behind their excellent RAF mustaches, we have been feeding our chaps huge plates of carrots, to jolly good effect.

The people having the antioxidant tablets were 46 percent more likely to die from lung cancer, and 17 percent more likely to die of any cause, than the people taking placebo pills. This is not news, hot off the presses; it happened well over a decade ago.

The pediatrician Dr. Benjamin Spock wrote a record-breaking bestseller titled Baby and Child Care, first published in 1946, that was hugely influential and largely sensible. In it, he confidently recommended that babies should sleep on their tummies. Dr. Spock had little to go on; but we now know that this advice is wrong, and the apparently trivial suggestion contained in his book, which was so widely read and followed, has led to thousands, and perhaps even tens of thousands, of avoidable crib deaths.

But of course, there is a more mundane reason why people may not be aware of these findings on antioxidants, or at least may not take them seriously, and that is the phenomenal lobbying power of a large, sometimes rather dirty industry, which sells a lifestyle product that many people feel passionately about. The food supplement industry has engineered itself a beneficent public image, but this is not borne out by the facts. First, there is essentially no difference between the vitamin industry and the pharmaceutical and biotech industries (that is one message of this book, after all: the tricks of the trade are the same the world over). Key players include companies like Roche and Sanofi-Aventis; BioCare, the U.K. vitamin pill company, is part owned by Elder Pharmaceuticals, and so on. The vitamin industry is also — amusingly — legendary in the world of economics as the setting of the most outrageous price-fixing cartel ever documented. During the 1990s the main offenders were forced to pay the largest criminal fines ever levied in legal history — $1.5 billion in total — after entering guilty pleas with the U.S. Department of Justice and regulators in Canada, Australia, and the European Union. That’s quite some cozy cottage industry.

7 - Nutritionists

Graham crackers are a digestive biscuit invented in the nineteenth century by Sylvester Graham, the first great advocate of vegetarianism and nutritionism as we would know it, and proprietor of the world’s first health food shop.

Soon these food marketing techniques were picked up by more overtly puritanical religious zealots like John Harvey Kellogg, one of the men behind the cornflake. Kellogg was a natural healer and health food advocate, promoting his granola bars as the route to abstinence, temperance, and solid morals. He ran a sanatorium for private clients, using “holistic” techniques, including that modern favorite colonic irrigation. Kellogg was also a keen antimasturbation campaigner.

The most important take-home message with diet and health is that anyone who ever expresses anything with certainty is basically wrong, because the evidence for cause and effect in this area is almost always weak and circumstantial, and changing an individual person’s diet may not even be where the action is.

9 - Is Mainstream Medicine Evil?

One thing you could measure is how much medical practice is evidence based. This is not easy. From the state of current knowledge, around 13 percent of all treatments have good evidence, and a further 21 percent are likely to be beneficial. This sounds low, but it seems the more common treatments tend to have a better evidence base. Another way of measuring is to look at how much medical activity is evidence based, taking consecutive patients, in a hospital outpatients’ clinic, for example, looking at their diagnosis, what treatments they were given, and then looking at whether those treatment decisions were based on evidence. These real-world studies give a more meaningful figure: lots were done in the 1990s, and it turns out, depending on specialty, that between 50 and 80 percent of all medical activity is “evidence based.”

In the United States and New Zealand (but nowhere else in the developed world) drug companies are allowed to advertise their pills directly to the public

The U.S. pharmaceutical industry’s annual spend on promotion is more than three billion dollars, and it works, increasing prescriptions and doctor visits.

This means that we’ll first have to explain some background about how a drug comes to market. This is stuff that you will be taught at school when I become president of the one world government. Understanding this process is important for one very clear reason: it seems to me that a lot of the stranger ideas people have about medicine derive from an emotional struggle with the very notion of a pharmaceutical industry. Whatever our political leanings, we all feel nervous about profit taking any role in the caring professions, but that feeling has nowhere to go. Big pharma is evil; I would agree with that premise. But because people don’t understand exactly how big pharma is evil, their anger gets diverted away from valid criticisms — its role in distorting data, for example, or withholding lifesaving AIDS drugs from the developing world — and channeled into infantile fantasies.

In the United States, the pharmaceutical industry has been one of the most profitable industries over the last twenty-five years. It only lost its first-place standing in 2003, and is currently in third place after Internet and communications companies. The country spent $227.5 billion a year on pharmaceutical drugs in 2009, and much of that goes on patented drugs, medicines that were released in the last ten years. Globally, the industry is worth more than $800 billion.

But drug trials are expensive, so an astonishing 90 percent of clinical drug trials, and 70 percent of trials reported in major medical journals, are conducted or commissioned by the pharmaceutical industry. A key feature of science is that findings should be replicated, but if only one organization is doing the funding, then this feature is lost. It is tempting to blame the drug companies — although it seems to me that nations and civic organizations are equally at fault here for not coughing up — but wherever you draw your own moral line, the upshot is that drug companies have a huge influence over what gets researched, how it is researched, how the results are reported, how they are analyzed, and how they are interpreted.

[How to manipulate your trial:]

What can you do? Well, first, you could study it in winners. Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement. So study your drug only in the latter group.

Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug with placebo, because it proves nothing of clinical value. In the real world, nobody cares if your drug is better than a sugar pill; people care only if it is better than the best currently available treatment.

And yet various studies have shown that the reported prevalence of anorgasmia in patients taking SSRI drugs varies between 2 percent and 73 percent, depending primarily on how you ask: a casual, open-ended question about side effects, for example, or a careful and detailed inquiry. One three-thousand-subject review on SSRIs simply did not list any sexual side effects on its twenty-three–item side effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.

Well, if your trial has been good overall, but has thrown out a few negative results, you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph. Mention it briefly in the text, and ignore it when drawing your conclusions.

If your results are completely negative, don’t publish them at all, or publish them only after a long delay.

[More tricks, each with its own section:]

  • Ignore the Protocol Entirely
  • Play with the Baseline
  • Ignore Dropouts
  • Clean Up the Data
  • “The Best of Five…No…Seven…No…Nine!”
  • Torture the Data - “Torture the data, and it will confess to anything,” as they say at Guantánamo Bay.
  • Try Every Button on the Computer

Overall, studies funded by a pharmaceutical company were found to be four times more likely to give results that were favorable to the company than were independent studies.

Publication bias is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones.

Rightly or wrongly, finding out that something doesn’t work probably isn’t going to win you a Nobel Prize — there’s no justice in the world — so you might feel unmotivated about the project, or prioritize other projects ahead of writing up and submitting your negative finding to an academic journal, and so the data just sits, rotting, in your bottom drawer. Months pass. You get a new grant. The guilt niggles occasionally, but Monday’s your day in the clinic, so Tuesday’s the beginning of the week really, and there’s the departmental meeting on Wednesday, so Thursday’s the only day you can get any proper work done, because Friday’s your teaching day, and before you know it, a year has passed, your supervisor retires, the new guy doesn’t even know the experiment ever happened, and the negative trial data is forgotten forever, unpublished. If you are smiling in recognition at this paragraph, then you are a very bad person.

A review in 1998 looked at the entire canon of Chinese medical research and found that not one single negative trial had ever been published. Not one.

Generally the influence of publication bias is more subtle, and you can get a hint that publication bias exists in a field by doing something very clever called a funnel plot.
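(Another aside from me, not Goldacre: a funnel plot is simple enough to sketch in a few lines. You plot each trial's effect estimate against its precision; big precise trials cluster near the true effect, small noisy ones fan out below. If the small negative trials are mysteriously missing, suspect publication bias. The trial numbers here are invented purely to show the shape of the idea:)

```python
trials = [
    # (effect estimate, standard error) -- made-up data
    (0.10, 0.05),   # large, precise trial with a near-null effect
    (0.12, 0.08),
    (0.35, 0.25),   # the small trials are all "positive"...
    (0.40, 0.30),
    (0.55, 0.35),   # ...and no small negative trials in sight
]

# Funnel plot points: effect estimate vs. precision (1 / standard error).
points = [(est, 1.0 / se) for est, se in trials]

# Crude asymmetry hint: do the small (low-precision) trials report
# systematically larger effects than the big ones?
small = [est for est, prec in points if prec < 10]
large = [est for est, prec in points if prec >= 10]
gap = sum(small) / len(small) - sum(large) / len(large)
print(f"small trials exceed large trials by {gap:.2f} on average")
```

In practice you'd plot the points and use a formal asymmetry test, but even this crude mean comparison shows the lopsidedness that publication bias leaves behind.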

The most heinous recent case of publication bias has been in the area of SSRI antidepressant drugs, as has been shown in various papers. A group of academics published a paper in The New England Journal of Medicine at the beginning of 2008 that listed all the trials on SSRIs that had ever been formally registered with the FDA, and examined the same trials in the academic literature. Thirty-seven studies were assessed by the FDA as positive: with one exception, every single one of those positive trials was properly written up and published. Meanwhile, twenty-two studies that had negative or iffy results were simply not published at all, and eleven were written up and published in a way that described them as having a positive outcome. This is more than cheeky. Doctors need reliable information if they are to make helpful and safe decisions about prescribing drugs to their patients. Depriving them of this information, and deceiving them, are a major moral crime. If I weren’t writing a light and humorous book about science right now, I would descend into gales of rage.

Vioxx was taken off the market in 2004, but analysts from the FDA estimated that it had caused between 88,000 and 139,000 heart attacks, 30 to 40 percent of which were probably fatal, in its five years on the market.

What’s truly extraordinary is that almost all these problems — the suppression of negative results, data dredging, hiding unhelpful data, and more — could largely be solved with one very simple intervention that would cost almost nothing: a clinical trials register, public, open, and properly enforced. This is how it would work. You’re a drug company. Before you even start your study, you publish the protocol for it, the methods section of the paper, somewhere public. This means that everyone can see what you’re going to do in your trial, what you’re going to measure, how, in how many people, and so on, before you start.

There are trials registers at present, but they are a mess.

It’s worth noting that drug adverts aimed directly at the public are legally allowed only in the United States and New Zealand, as pretty much everywhere else in the developed world has banned them, for the simple reason that they work.

10 - Why Clever People Believe Stupid Things

To recap: We see patterns where there is only random noise. We see causal relationships where there are none. These are two very good reasons to measure things formally. It’s bad news for intuition already.

It seems we have an innate tendency to seek out and overvalue evidence that confirms a given hypothesis.

This tendency is dangerous, because if you ask only questions that confirm your hypothesis, you will be more likely to elicit information that confirms it, giving a spurious sense of confirmation. It also means — if we think more broadly — that the people who pose the questions already have a head start in popular discourse.

So we can add to our running list of cognitive illusions, biases, and failings of intuition:

  3. We overvalue confirmatory information for any given hypothesis.
  4. We seek out confirmatory information for any given hypothesis.

Put simply, the subjects’ faith in research data was not predicated on an objective appraisal of the research methodology, but on whether the results validated their preexisting views.

So we can add to our list of new insights about the flaws in intuition: 5. Our assessment of the quality of new evidence is biased by our previous beliefs.

It’s because of availability, and our vulnerability to drama, that people are more afraid of sharks at the beach, or of fairground rides on the pier, than they are of flying to Florida or driving to the coast. This phenomenon is even demonstrated in patterns of smoking cessation among doctors. You’d imagine, since they are rational actors, that all doctors would simultaneously have seen sense and stopped smoking once they’d read the studies showing the phenomenally compelling relationship between cigarettes and lung cancer. These are men of applied science, after all, who are able, every day, to translate cold statistics into meaningful information and beating human hearts. But in fact, from the start, doctors working in specialties like chest medicine and oncology, where they witnessed patients dying of lung cancer with their own eyes, were proportionately more likely to give up cigarettes than their colleagues in other specialties. Being shielded from the emotional immediacy and drama of consequences matters.

When people learn no tools of judgment and merely follow their hopes, the seeds of political manipulation are sown. —Stephen Jay Gould

11 - Bad Stats

But if anyone in a position of power is reading this, here is the information I would like from a newspaper, to help me make decisions about my health, when reporting on a risk:

  • I want to know whom you’re talking about (e.g., men in their fifties);
  • I want to know what the baseline risk is (e.g., four men out of a hundred will have a heart attack over ten years);
  • and I want to know what the increase in risk is, as a natural frequency (two extra men out of that hundred will have a heart attack over ten years).
  • I also want to know exactly what’s causing that increase in risk: an occasional headache pill, or a daily tubful of pain-relieving medication for arthritis.
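Those four items are all it takes to turn a scary relative-risk headline into plain counts. A small worked calculation, using the hypothetical figures from the list above (a "50% increased risk" headline over a baseline of four in a hundred):

```python
# Convert a relative risk into natural frequencies,
# using the hypothetical numbers from the list above.
population = 100         # men in their fifties
baseline_cases = 4       # heart attacks over ten years, without the exposure
relative_risk = 1.5      # what a "50% increase in risk" headline means

exposed_cases = baseline_cases * relative_risk   # cases among the exposed
extra_cases = exposed_cases - baseline_cases     # the number that matters

print(f"{baseline_cases} in {population} at baseline, "
      f"{exposed_cases:.0f} in {population} with the exposure: "
      f"{extra_cases:.0f} extra heart attacks per {population} men over ten years")
```

The same "50% increase" reads very differently as "two extra men out of a hundred, over ten years," which is precisely why natural frequencies are the honest way to report risk.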

You might not trust the press release, but if you don’t know about numbers, then you take a big chance when you delve under the hood of a study to find a story.

This breaks a cardinal rule of any research involving statistics: you cannot find your hypothesis in your results. Before you go to your data with your statistical tool, you have to have a specific hypothesis to test.
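The danger of going fishing in your own results can be shown with pure noise: run enough post-hoc subgroup comparisons and some will look "significant" by chance alone. A toy simulation, assuming a crude two-standard-error cutoff (roughly p < 0.05); all numbers are illustrative:

```python
import random

random.seed(1)

def looks_significant(group_a, group_b, threshold=2.0):
    """Crude z-style test: is the difference in means more than
    `threshold` standard errors from zero? Roughly p < 0.05."""
    n = len(group_a)
    diff = sum(group_a) / n - sum(group_b) / n
    se = (2 / n) ** 0.5          # both groups are unit-variance noise
    return abs(diff) / se > threshold

# Forty post-hoc subgroup comparisons on pure noise: no real effect anywhere.
hits = 0
for _ in range(40):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if looks_significant(a, b):
        hits += 1

# With a ~5% false-positive rate, a couple of these null comparisons
# will "succeed" purely by chance.
print(f"{hits} of 40 noise-only comparisons look significant")
```

This is why the hypothesis has to be written down before the data are examined: a "finding" dredged out of forty comparisons is exactly as convincing as the chance hits above.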

12 - The Media’s MMR Hoax

We have already seen, with the example of Dr. Spock’s advice to parents on how their babies should sleep, that when your advice is followed by a very large number of people, if you are wrong, even with the best of intentions, you can do a great deal of harm: because the effects of modest tweaks in risk are magnified by the size of the population changing its behavior. It’s for this reason that journalists have a special responsibility, and that’s also why we will devote the last chapter of this book to examining the processes behind one very illustrative scare story: the MMR vaccine.

Journalists frequently flatter themselves with the fantasy that they are unveiling vast conspiracies, that the entire medical establishment has joined hands to suppress an awful truth. In reality I would guess that the 150,000 doctors in the U.K. could barely agree on second-line management of hypertension.

An Australian obstetrician called William McBride first raised the alarm in a medical journal, publishing a letter in The Lancet in December 1961. He ran a large obstetric unit, seeing a great number of cases, and he was rightly regarded as a hero; but it’s sobering to think that he was in such a good position to spot the pattern only because he had prescribed so much of the drug, without knowing its risks, to his patients.

If you ever suspect that you’ve experienced an adverse drug reaction, I would regard it as your duty, as a member of the public, to report it (in the United States anyone, including patients, can report an adverse event at the FDA MedWatch site).

Even when Edward Jenner introduced the much safer vaccination for protecting people against smallpox at the turn of the nineteenth century, he was strongly opposed by the London cognoscenti.

After all, as any trendy MMR-dodging North London middle-class humanities graduate couple with children would agree, just because vaccination has almost eradicated polio—a debilitating disease that as recently as 1988 was endemic in 125 countries — doesn’t necessarily mean it’s a good thing.

If you ever wanted to see evidence against the existence of a sinister medical conspiracy, you need look no further than the shower of avoidant doctors and academics and their piecemeal engagement with the media during this time.

We still don’t know what causes autism.

If there is one thing that has adversely affected communication among scientists, journalists, and the public, it is the fact that science journalists simply do not cover major science news stories.

Despite all that, I remain extremely wary of GM for reasons that have nothing to do with the science, simply because it has created a dangerous power shift in agriculture, and “terminator seeds,” which die at the end of the season, are a way to increase farmers’ dependency, both nationally and in the developing world, while placing the global food supply in the hands of multinational corporations. If you really want to dig deeper, Monsanto is also very simply an unpleasant company (it made Agent Orange during the Vietnam War, for example).

But I would argue — perhaps sanctimoniously — that the media have a special responsibility in this case, because they themselves demanded “more research” and, moreover, because at the very same time that they were ignoring properly conducted and fully published negative findings, they were talking up scary findings from an unpublished study by Krigsman, a man with a track record of making scary claims that remain unpublished.

In fact, there have been systematic quantitative surveys of the accuracy of health coverage in Canada, Australia, and the United States — I’m trying to get one off the ground in the U.K. — and the results have been universally unimpressive. It seems to me that the state of health coverage in the U.K. could well be a serious public health issue.

If you wanted to do something constructive about this problem, instead of running a single-issue campaign about MMR, you might, perhaps, use your energies more usefully. You could start a campaign for constant automated vigilance of the entirety of the Food and Drug Administration data set for any adverse outcomes associated with any intervention, for example, and I’d be tempted to join you on the barricades.

And Another Thing

Without anybody’s noticing, bullshit has become an extremely important public health issue, and for reasons that go far beyond the obvious hysteria around immediate harms, the odd measles tragedy or a homeopath’s unnecessary malaria case.

But journalists and miracle cure merchants sabotage this process of shared decision making, diligently, brick by brick, making lengthy and bogus criticisms of the process of systematic review (because they don’t like the findings of just one), extrapolating from lab dish data, misrepresenting the sense and value of trials, carefully and collectively undermining people’s understanding of the very notion of what it means for there to be evidence for an activity. In this regard they are, to my mind, guilty of an unforgivable crime.

Editors will always — cynically — sideline those people and give stupid stories to generalists, for the simple reason that they want stupid stories. Science is beyond their intellectual horizon, so they assume you can just make it up anyway. In an era when mainstream media are in fear for their lives, their claims to act as effective gate-keepers to information are somewhat undermined by the content of pretty much every column or blog entry I’ve ever written.

You can also toe the line by not writing stupid press releases (there are extensive guidelines for communicating with the media online), by being clear about what’s speculation in your discussions, by presenting risk data as “natural frequencies,” and so on. If you feel your work — or even your field — has been misrepresented, then complain. Write to the editor, the journalist, the letters page, the readers’ editor, start a blog, put out a press release explaining why the story was stupid; get your press office to harass the paper or TV station, use your title (it’s embarrassing how easy they are to impress), and offer to write them something yourself.

The greatest problem of all is dumbing down. Everything in the media is robbed of any scientific meat, in a desperate bid to seduce an imaginary mass that isn’t interested. And why should they be? Meanwhile, the nerds, the people who studied biochemistry but now work in middle management, are neglected, unstimulated, abandoned. There are intelligent people out there who want to be pushed, to keep their knowledge and passion for science alive, and neglecting them comes at a serious cost to society. Institutions have failed in this regard.

I give you the CERN podcast, the Science in the City mp3 lecture series, blogs from profs, open-access academic journal articles from PLOS, online video archives of popular lectures, the free editions of the Royal Statistical Society’s magazine Significance, and many more, all out there, waiting for you to join them.