A study has shown...

There’s a reason why the BMJ have banned the sentence “more research is needed” at the end of academic papers: it means nothing and adds nothing to the topic in question. If you have identified gaps in the knowledge, then your paper is the place to suggest how we should fill them!


The same could be said for the sentence that begins many news articles: ‘A study has shown...’. The problem with that sentence is that it skips the most important part of any study: was it set up and run properly to investigate what you are about to say it does?




Goldacre highlights another problem when bold statements are made in newspapers or online and not in academic journals, where they would go through a process of checks known as peer review. That's not to say that peer review is without its own problems, but at the very least the claims being made have to be set out and it allows others some insight into how the conclusions were reached.


Any study will do

If you search PubMed you can find a ‘study’ that shows just about anything you want, but that doesn't mean it proves or disproves the point you are making. In fact, if you're starting out with the result and then going looking for research to back it up, then something has gone seriously wrong and there should be alarm bells ringing in the confirmation bias box. That might sound a bit sensational, but it’s very common. Have a look at the writing jobs listed online and you will find hundreds that follow this format:


‘Writer with scientific background needed to write 10 articles showing how X (insert food or nutrient of choice) is beneficial in Y (insert condition of your choice).’

I think there is a more subtle pseudoscience pervading the popular media, and the examples above demonstrate one form of it. There have been enough ludicrous stories in the media for most people to be conscious of blatant nonsense such as ‘coffee cures cancer’. Most people realise that this sort of thing will be a large exaggeration or misinterpretation, or at the very least done under such unreal circumstances as to be meaningless. Think of the official MPG figure for your car, measured under test conditions with little relevance to real-world driving, and you can see how the same thinking turns a possibly good bit of research into an irresponsible headline.


So, for example, someone does a study of a particular cancer cell, in a dish, in a lab, and treats it with some chemical that is found in coffee. They may show that the cancer cells are affected by that chemical. The headline then reads “Coffee Cures Cancer”. That is not only incorrect but irresponsible, and shows a complete lack of understanding of what has happened. We are mostly aware of this and I think we mainly ignore it, particularly since the next week the headline is likely to say “Coffee Causes Cancer”.


Pseudoscience lurking

But I think there is a more dangerous pseudoscience lurking in the shadows, one that arguably pulls the wool much more gently over our eyes. We have probably all been victims of it, whether we know it or not. Many articles start out with a study, then go on to discuss some of the science behind it and explain some of its limitations. That's all fine: it gives a bit more depth than the headline and creates the impression that some critical appraisal is underway. Then what often happens is a subtle shift from discussing the real science to making assumptions, backed up by less meaningful information, assumptions that have often not been tested in a proper way, if at all.


There are a number of ways that the science shapeshifts into pseudoscience.

Animal studies


Click the references on many web or news articles spouting big claims and you will see that they do indeed link to PubMed articles. However, read further and you will see that a lot are animal studies. Now that's not to say that they are useless, but if you are going to claim that eating a handful of some food or nutrient each day is going to halve your risk of cancer, then you have to bring some really good evidence, as well as the food, to the table. And it's only right that the responsibility to back up such claims lies with those making them. At the very least you need to be bringing studies in humans with long-term follow-up and a great methods section; otherwise what you are saying is this: ‘Some study done in a rat or mouse (or part of one) in a lab has shown ‘some effect’ that may suggest it affects cancer cells (which ones? all of them?), therefore the same will be true in humans, therefore it can help to reduce cancer rates’. Bollocks.


Even when studies are done in humans, it is very difficult to study nutrition at a population level. Ioannidis highlights this in his paper: “In contrast to major nutritional deficiencies and extreme cases, the effects of modest differences in nutrient intake have been difficult to study reliably in the population level”. He gives an excellent example of these problems: an analysis showing that two thirds of respondents to the National Health and Nutrition Examination Survey reported energy intakes that were incompatible with life.


Post hoc ergo propter hoc


Which means ‘after this, therefore because of this’. We all know this reasoning and use it a lot: ‘Who had the TV remote last? It was working before, so you must have broken it.’ The problem with this reasoning is that it assumes the preceding event can be the only cause of the outcome and ignores any other factors that could play a part. The more complex the system, the more fallible this logic becomes, because it ignores the inherent complexity. Our bodies and the biological processes that occur within them are some of the most complex systems we have ever tried to understand. We don't know how seemingly simple parts of our bodies work, so this kind of logic is dangerous when discussing something as complex as nutrition.



Observational studies


These are studies that differ from randomised controlled trials in a major way. Observational studies do not start with a particular intervention, say giving a specific drug versus not giving the drug. They work by looking at populations or groups of people and ‘observing’ for differences between them. The key difference is that the person doing the observing is not in control of any interventions.


Again, that's not to say these studies are of no value, but it is crucial that they are interpreted correctly and that we are very careful about what conclusions are drawn from them. A major problem is confounding, which occurs when we assign causal relationships to events based on observations, without considering that the observed effects may be influenced by other factors we have not been observing. For example, observational studies may show that high alcohol consumption is associated with higher rates of cardiovascular disease. However, both higher alcohol consumption and cardiovascular disease are associated with smoking, which we know to be a cause of cardiovascular disease, so it would be wrong to say that the alcohol consumption was the causal factor in the cardiovascular disease, as this may be better explained by the smoking.
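To make confounding concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration (hypothetical smoking, drinking and disease rates, not taken from any real dataset): smoking drives both heavy drinking and cardiovascular disease, while drinking is given no effect at all.

```python
import random

random.seed(1)

def simulate_person():
    # Smoking is the confounder: it raises both the chance of heavy
    # drinking and the chance of cardiovascular disease (CVD).
    smoker = random.random() < 0.3                      # hypothetical: 30% smoke
    heavy_drinker = random.random() < (0.6 if smoker else 0.2)
    cvd = random.random() < (0.30 if smoker else 0.05)  # drinking plays no part here
    return heavy_drinker, cvd

people = [simulate_person() for _ in range(100_000)]

def cvd_rate(group):
    return sum(cvd for _, cvd in group) / len(group)

drinkers = [p for p in people if p[0]]
non_drinkers = [p for p in people if not p[0]]

print(f"CVD rate in heavy drinkers:     {cvd_rate(drinkers):.3f}")
print(f"CVD rate in non-heavy drinkers: {cvd_rate(non_drinkers):.3f}")
# Heavy drinkers come out with roughly double the CVD rate, purely because
# more of them smoke; the simulation gave alcohol no causal effect at all.
```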


If you’ve read this far and want to read more, the Bradford Hill criteria are a set of considerations that have been used in public health for many years to evaluate causality.


Small effects need big research

When it comes to nutrition we are often looking for tiny differences, or ‘effects’. So you won't see a headline such as “Eating X each day cures 99% of cancers”; something with such a powerful effect would be easy to test. Normally you see headlines such as “Regular consumption of this can help reduce your chance of X disease”. There are lots of ways that kind of headline is misleading, but first you need to know how big the claimed reduction actually is. If my chance of getting disease X is very low anyway, is it really impressive or interesting that a study may show an intervention reduces it by a small amount? Similarly, if my chance of getting disease Y is very high and the intervention shows only a small reduction in my risk, is that really useful or relevant enough to suggest that people should change their behaviour without more solid evidence?
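As a rough illustration of why the baseline matters, here is a short sketch with made-up numbers (a hypothetical 20% relative risk reduction applied to a rare and a common disease): the same headline figure translates into very different absolute changes.

```python
def absolute_change(baseline_risk, relative_reduction):
    """Return the absolute risk reduction and the number of people who would
    need to change behaviour to prevent one case (1 / absolute reduction)."""
    arr = baseline_risk * relative_reduction
    return arr, 1 / arr

# Hypothetical numbers: the same "20% lower risk" headline applied to a
# rare disease (0.1% baseline) and a common one (20% baseline).
for baseline in (0.001, 0.20):
    arr, nnt = absolute_change(baseline, 0.20)
    print(f"baseline {baseline:.1%}: absolute reduction {arr:.2%}, "
          f"about {nnt:.0f} people must change behaviour to prevent one case")
```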


To look for small effects you need to do a massive study involving lots of people and follow them up for years for your study to have enough power. Power here is the statistical term: in essence, if you go looking for small differences in anything, you need lots of people in your study so that you can say with enough confidence that your observed difference isn't simply due to chance.
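For anyone curious, here is a back-of-the-envelope version of a standard sample-size formula for comparing two proportions. The disease rates below are invented for illustration, but they show how quickly the required numbers grow as the effect being hunted shrinks.

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per group to tell p1 from p2
    (standard normal-approximation formula for two proportions)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical rates: a big effect (10% vs 5%) needs a few hundred per group...
print(round(n_per_group(0.10, 0.05)))   # roughly 430 per group
# ...a small effect (10% vs 9%) needs well over ten thousand per group.
print(round(n_per_group(0.10, 0.09)))   # roughly 13,500 per group
```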


It’s very hard to do a proper randomised controlled trial on diet. How do you ensure that people stick to it rigidly? I know I like to raid the fridge every now and again; don't we all? Would you be the perfect study participant and confess to all the times you didn't follow the set plan?



So what then?


In summary, we probably don't need another study like those discussed above, as they really don't add anything to what we already know. For the reasons above, studies that make big, bold claims that seem implausible probably are exactly that.


If you torture the data enough, it will confess to anything

Ben Goldacre