How science goes wrong

A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.

But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.

Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
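
A rough back-of-the-envelope calculation shows how the arithmetic of significance testing alone can produce a literature in which a large share of positive findings fail to replicate. The numbers below are illustrative assumptions, not figures taken from the article or from the studies it cites:

```python
# Illustrative sketch: why many "significant" findings may be false.
# All numbers here are assumptions chosen for round figures.
hypotheses = 1000          # hypotheses tested across a field
true_fraction = 0.10       # assume 1 in 10 tested hypotheses is actually true
alpha = 0.05               # conventional false-positive rate per test
power = 0.80               # assumed chance of detecting a real effect

true_effects = hypotheses * true_fraction        # 100 real effects
null_effects = hypotheses - true_effects         # 900 dead ends

true_positives = power * true_effects            # 80 real effects detected
false_positives = alpha * null_effects           # 45 spurious "discoveries"

positives = true_positives + false_positives
print(f"Positive findings: {positives:.0f}")
print(f"Share of positives that are false: {false_positives / positives:.0%}")
# Under these assumptions roughly a third of published positives are wrong,
# before any sloppy analysis, selective reporting or outright bias is added.
```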

Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.

Written By: The Economist
Continue to the source article at economist.com

13 COMMENTS

  1. The most prone to cheating would be experiments that “prove” a drug is effective and safe. One of the problems is the research is invariably funded by a company that has a vested interest in the outcome.

    Drug trials that don’t give positive results may be legally ignored. No wonder so many results cannot be reproduced.

    • In reply to #1 by Roedy:

      The most prone to cheating would be experiments that “prove” a drug is effective and safe. One of the problems is the research is invariably funded by a company that has a vested interest in the outcome.

      Drug trials that don’t give positive results may be legally ignored. No wonder so many results…

      It’s a common fallacy I see in mainstream science reporting all the time: treating problems with clinical trials run by for-profit drug companies as if they were generic problems with science and the scientific method. It’s often the same people who talk about “controversies” manufactured by those with a fossil-fuel agenda who spread FUD about climate science.

      My response to this article is the same as a previous one on the same topic: science is hard and scientists make mistakes. Scientists are human and have bias, agendas, and politics just like everyone else. None of that is news.

      • In reply to #2 by Red Dog:

        In reply to #1 by Roedy:

        The most prone to cheating would be experiments that “prove” a drug is effective and safe. One of the problems is the research is invariably funded by a company that has a vested interest in the outcome.

        Drug trials that don’t give positive results may be legally ignored….

        Isn’t this problem with the drug-trial business essentially what Ben Goldacre has been saying for years? Any company or institution can ape scientific methodology and claim to be legitimate, thanks to a combination of public ignorance and the fact that science is largely diffuse, with few if any central bodies of reference to turn to. For instance, no one person or institution holds a complete catalogue of medical trials, and there’s no public register that prevents “inconclusive” or negative trials from being silently left to slip through the cracks when they don’t agree with the experimenters’ hopes.

        The title is misleading. It should be “how scientists go wrong”, or “how people jumping on the science bandwagon go wrong”.

    • In reply to #3 by Smill:

      In reply to Red Dog, post 2. ‘science is hard’…really?

      Sorry, I guess it wasn’t clear. My point is that saying, or providing evidence, that “science is hard” is a rather banal thing to do. (I thought that would have been clear by the end of that paragraph, where I said “none of that is news”.) A good percentage of articles like this essentially come down to giving examples of how science can go wrong. And since science is hard, there are a lot of ways any specific experiment can go wrong: no matter how hard we try, we are all still human and can’t completely remove our bias, and there are so many variables in any non-trivial experiment that it is easy to overlook a factor that seems insignificant but may in fact invalidate or completely alter the results.

      So showing examples where people were biased, or where someone overlooked a factor that turned out to be critical, doesn’t convince me at all that there is some serious problem with the scientific method. If anything, to me they are evidence of the opposite: they show how well the scientific method works, because eventually the problems were discovered.

      Now of course I’m all for any initiative that makes science even more rigorous, less error prone, and more open. However, I have a cynical conspiratorial side and that side always looks at articles like this as an attempt to sow FUD (Fear, Uncertainty, Doubt) as a way to discredit science in general so, for example, the people in power who read the Economist can keep on ignoring the mountain of scientific evidence for things like climate change.

  2. Red Dog makes a good point; we shouldn’t act as if the ways an individual paper’s publication can be a product of a funder’s selection bias (which may be an argument for more funding of public-sector science to offset private-sector abuses, by the way) in any way reflect on conclusions most experts in a field embrace because those conclusions have proved reproducible. Indeed, I’d say more generally that, unless you can give an example of a reason reproducible conclusions can’t be trusted, we have nothing to talk about.

    (Well, maybe something; I’m open to their ideas, briefly tacked on at the end of both screeds, about how to get more findings reproduced. But that’s about getting more bang for our invested buck, not about whether people who named themselves after economics have provided a reason to disparage real science. Seriously: read the titles of these articles, the text under them, and… well, almost all the rest. It’s anti-science vitriol based on fallacious reasoning.)

    This is not the first recent article in The Economist focusing on reasons one study doesn’t prove jack. Every scientist knows one study doesn’t prove jack. I’ve never known any scientist to pretend otherwise to any audience, unless that scientist is also pushing some kind of nonsense about the science itself, e.g. a climate-change denier who technically is a scientist. More succinctly, reputable scientists don’t goof on this. The people who do are journalists. These authors for The Economist, being anonymous, could even be such journalists. Even if they’re not, however, the points in a previous article$ boiled down to “individual studies’ findings are often bad and often aren’t reproducible, and that’s somehow a problem for science’s reliability” (it isn’t, when science is defined by reproducibility). If journalists never publicised findings that hadn’t been reproduced, these articles would never have felt necessary to their authors; they may not really be necessary at all. [ $ It was posted on this website; my critical response was the first of many. http://www.richarddawkins.net/news_articles/2013/10/18/unreliable-research-trouble-at-the-lab ]

    Rather than repeat my arguments from round 1 here, I’ll refer people there and check whether this article gives reasons for reproduced findings – you know, quite literally textbook stuff – to be doubted, and… nope; nothing. Like I said, nothing to talk about here.

  3. I’ve heard the figure is more like 80%. But it might not really be an issue with all of science, just very specific fields.

    The worst areas are macro-economics, mainly in the banking, monetary and public-finance area (presumably why this article is featured in The Economist), and public health, mainly in the nutrition and exercise-physiology area. These are areas where statistical correlations are the main, possibly the only, tool employed, and where very significant political and commercial vested interests are involved in research funding.

    Coincidentally, these are also the areas where the public have become fed up, with the result that all areas of science lose credibility. This can have high-level political impact, e.g. Australia recently dismantling its science ministry and climate commission. Other fields of science might not suffer these problems, though Kahneman thinks there are problems in psychology too.

    An example might be the concept of carbohydrate loading for athletes, for which there are thousands of reputable research projects published in peer reviewed journals. Nevertheless the phenomenon may be trivial and of no real consequence for athletes. Yet the work is highly reproducible because it’s such a popular and easy topic for junior researchers (usually athletes themselves) to try their skills at. But what may be being reproduced are the same errors on each occasion.

    Claims of secret breakthroughs in the subtleties of carbohydrate loading have been cited by incredibly successful top athletes in recent years as their ‘explanation’ for suspiciously outstanding performances. But future research may well indicate that the performance-enhancing effects of dietary practices diminish dramatically once the data are corrected for other, more blatantly effective factors associated with ‘performance enhancement’. It may be an example of diversionary science, which plays a role in covering up what’s really going on. There are areas where serious research occurs but remains proprietary, and none of it is ever published.

    Other examples in the nutrition area may have originated with the need for tobacco companies (many now morphed into processed-food giants) to establish viable diversionary alternatives to explain away cardiovascular disease and cancers. This may even be the underlying cause of the prevailing saturated-fat and cholesterol issue. The same now applies to the processed-food and sugar content linked to heart disease etc. (The same major players are behind the scenes again.)

    Pharmaceutical companies (and the medical physicians who are dragged along for the ride) may not be the drivers here. They’re just profiting from the opportunities generated by the bad science and its public credibility. It’s difficult to blame them for acting on their fiduciary duty to exploit a gullible public and maximise profits when they have no obligation to reveal detrimental information such as all the clinical trials that show nil or negative effect. This was revealed last week on the Catalyst TV show: side effects are minimised in clinical trials by effectively running sequential trials. Subjects who demonstrate side effects in the initial randomised trial are eliminated from the pool, and an entirely ‘new’ re-randomised trial is held. This double-randomisation approach is apparently scientifically valid according to the scientists involved. (A rough simulation of the effect appears after the comments.)

    It may be no coincidence that the most profitable pharmaceuticals of all time may turn out to have been the anti-cholesterol, anti-ulcer and anti-depressant drugs – all based on what may amount to semi-unintentional scientific fraud.

    I’ve been quite interested in this area since childhood, when I found out that Karl Popper once told a family friend who worked with him that most scientists still don’t comprehend the scientific method. (This was in the late 1960s or early 70s, long after his books were published.) Possibly because they think they know it already.

    This was significant because at the time Popper (and Kuhn) were regarded as the key figures in exploring the nature of science. The question came up because Popper had the same opinion about democratic politics, claiming that few political scientists understood the nature of democracy, being focused on electoral processes, fairness of representation and so on, instead of the nature of the entire system and the value of the outcome.

    It’s not just a simple case of statistical significance; the issue is that the quality of science will inevitably be compromised in future. The problem is that the nature of science is not taught, and that prominent Nobel-prize-winning scientists like Richard Feynman have often sneered at philosophy. Complete ignorance of philosophy, as of economics, is almost a matter of pride among some scientists.
    So it may not just be a matter of commercial, career, or other basic biases. They play a role, but the real problem may be insufficiently widespread awareness of the nature of science and of how significant these biases can be.

    The good news is that bad science may be confined to particular areas. So as long as you don’t need to use money, have a government, eat food, or face any disease risk or medical treatment, there’s no need to be concerned.

  4. “Trust but verify” does not sound like the scientific method to me. Better would be “posit and falsify” as a scientific method for establishing what may be accepted as most likely to be true. There is more to the scientific method than just reproducing experimental results. In any case, failing to get a positive result is as significant as getting one. Skepticism is the scientist’s greatest virtue. The aim is to find out what stands up to all possible tests. Then, if a hypothesis enables one to predict an outcome, its scientific status is sure to rise.

    Those who hold views that have not been subjected to such stringent testing do not like to think of science as being so effective at discrediting unsubstantiated views and beliefs, so they do what has been done in this Economist article – they water down the concept of science itself, to make it more compatible with all the unsubstantiated beliefs held in fashionable society.

  5. “”Trust but verify” does not sound like the scientific method to me. Better would be “posit and falsify” as a scientific method for establishing what may be accepted as most likely to be true. There is more to the scientific method than just reproducing experimental results. In any case, failing to get a positive result is as significant as getting one. Skepticism is the scientist’s greatest virtue. The aim is to find out what stands up to all possible tests. Then, if a hypothesis enables one to predict an outcome, its scientific status is sure to rise.”

    Quite right. The “find out what stands up to all possible tests” is SO important, unless a “scientist” has a pre-decided outcome and a very good reason to resist questions…

    Unfortunately, it does happen.
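
The “double randomisation” procedure described in comment 3 is easy to simulate. The sketch below is a toy model built entirely on assumed numbers (the pool size, susceptibility rate and drop rule are illustrative, not figures from the Catalyst programme): screen everyone in a first trial, quietly drop anyone who showed the side effect, re-randomise the remainder, and the side-effect rate reported from the second trial understates the rate in the population the drug will actually be prescribed to.

```python
# Toy model of the sequential-trial procedure described in comment 3.
# All numbers are illustrative assumptions, not figures from the Catalyst programme.
import random

random.seed(1)

POPULATION = 10_000
P_SUSCEPTIBLE = 0.15            # assumed share of people prone to the side effect
P_EFFECT_IF_SUSCEPTIBLE = 0.8   # chance a susceptible subject shows it in any one trial

def shows_side_effect(susceptible: bool) -> bool:
    return susceptible and random.random() < P_EFFECT_IF_SUSCEPTIBLE

subjects = [random.random() < P_SUSCEPTIBLE for _ in range(POPULATION)]

# Trial 1: run on the whole pool and note who showed the side effect.
showed_in_trial_1 = [shows_side_effect(s) for s in subjects]
rate_trial_1 = sum(showed_in_trial_1) / POPULATION

# Trial 2: drop anyone who reacted in trial 1, then "re-randomise" the rest.
remaining = [s for s, showed in zip(subjects, showed_in_trial_1) if not showed]
rate_trial_2 = sum(shows_side_effect(s) for s in remaining) / len(remaining)

print(f"Side-effect rate, trial 1 (whole pool):        {rate_trial_1:.1%}")
print(f"Side-effect rate, trial 2 (pre-screened pool): {rate_trial_2:.1%}")
# The second, reported figure comes out far lower than the first even though
# neither the drug nor the population it will be prescribed to has changed.
```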
