The “File Drawer Problem”

17 02 2012

   When research is conducted, the results and conclusions are often submitted to journals for publication. If the research is interesting and shows a significant effect, there is a fair chance it will be published. But what if your research showed no effect?

   The “file drawer problem” occurs when research with null results (where the null hypothesis appears to be true), or with results that contradict previously published papers, is rejected by journals and ends up in the researcher’s file drawer (Rosenthal, 1979). Research with significant results has been found to be three times more likely to be published than research with null results (Dickersin et al., 1987).

   Significant results have been known to be published even when several earlier papers found no significant difference in the same area. This is thought to be because journal editors assume readers will not be interested in research that shows no effect; indeed, the most common reason papers go unpublished is that the investigators themselves do not think anyone would be interested in null results (Easterbrook et al., 1991).
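
   A quick way to see the damage is to simulate it. Here is a minimal sketch in Python (all numbers are invented for illustration, not taken from Dickersin et al.): if journals only accept significant results, the published literature systematically overstates the true effect.

```python
# Illustrative simulation of the file drawer problem (made-up numbers):
# many small studies of a modest true effect are run, but only those
# reaching p < .05 get "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # true standardized mean difference (Cohen's d)
n_per_group = 25         # a typical small study
n_studies = 2000

published = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1, n_per_group)
    control = rng.normal(0, 1, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:                      # journal accepts "significant" only
        published.append(treatment.mean() - control.mean())

print(f"true effect:           {true_effect}")
print(f"mean published effect: {np.mean(published):.2f}")  # far above 0.2
```

   The studies that happen to overestimate the effect are exactly the ones that clear the significance bar, so the published average is inflated roughly threefold here.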

   John Ioannidis (2005) identified six conditions under which the findings in a research field are less likely to be true:

  1. The studies conducted in the field are small
  2. The effect sizes are small
  3. There is a greater number, and lesser preselection, of tested relationships
  4. There is greater flexibility in designs, definitions, outcomes and analytical modes
  5. There is greater financial and other interest and prejudice
  6. More teams are involved in the field, chasing statistical significance

   Ioannidis also proposed “remedies” for the problem:

  • Better-powered studies: low-bias meta-analyses and large studies testing major concepts (a rough power calculation after this list shows why this matters)
  • Enhanced research standards
  • Considering beforehand what the chances are that a tested effect is actually true
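
   On the first remedy, a rough power calculation makes the point concrete. This is a minimal sketch using the usual normal approximation for a two-sample t-test; the effect size and sample sizes are invented for illustration.

```python
# Approximate power of a two-sample t-test (normal approximation),
# showing why small studies are badly underpowered. Illustrative numbers.
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power to detect standardized effect d with n per group."""
    z_crit = norm.ppf(1 - alpha / 2)              # two-tailed critical value
    noncentrality = d * (n_per_group / 2) ** 0.5  # expected z under the effect
    return 1 - norm.cdf(z_crit - noncentrality)

for n in (20, 50, 200, 800):
    print(f"n = {n:3d} per group -> power = {approx_power(0.2, n):.2f}")
# A "small" effect (d = 0.2) needs hundreds of participants per group
# before power is respectable.
```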

   I believe that the file drawer problem is a serious matter, and that no research is pointless, even if it shows no effect. We may have believed something for a very long time, and a single piece of research could disprove it!

   Publishing false positive results can be dangerous too; consider the thalidomide tragedy of the early 1960s. Thalidomide was thought to alleviate morning sickness in pregnant women and was sold over the counter. Many women took the drug, and many gave birth to babies with terrible birth defects, such as badly deformed eyes, ears, noses or hearts, and even phocomelia (all or part of the limbs missing). The drug had been tested on pregnant rats, appeared to do them no harm, and so was deemed safe.

   I think there is no harm in publishing a null result; it may simply be the case that there is no effect, which does not make the research flawed, or even boring. Anyone planning a study in that area can see that previous research found a null result and try to show otherwise.


10 responses

22 02 2012
psycho4stats

I completely agree with the notion of your blog here. Being able to disprove a theory is, in my eyes, just as important as having a significant result suggesting you are correct. Not only would publishing null results save lots of money, it could also bring research in related areas closer to a genuinely significant result.
I also think your closing example shows that publishing Type I error papers can indeed be extremely dangerous and detrimental. Especially in the field of medicine, it is something researchers have to be extra careful about when interpreting results. Maybe it is the thought of having to accept a null hypothesis that some researchers cannot bear?!

(this paper looks at cracking down on Type I errors within fMRI research) http://scan.oxfordjournals.org/content/4/4/423.full
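
To put a number on the Type I error worry: a minimal sketch, assuming a typical whole-brain analysis of roughly 50,000 voxels (an order-of-magnitude assumption, not a figure from the linked paper), of how many false positives uncorrected testing produces.

```python
# Expected false positives when running many independent tests at alpha = .05,
# as in an uncorrected voxelwise fMRI analysis. Voxel count is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 50_000
alpha = 0.05

# Null data: no real activation anywhere, so every "hit" is a Type I error.
p_values = rng.uniform(0, 1, n_voxels)
print("uncorrected 'activations': ", (p_values < alpha).sum())            # ~2500
print("Bonferroni 'activations':  ", (p_values < alpha / n_voxels).sum()) # ~0
```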

22 02 2012
psucfa

It seems like the file drawer problem poses a serious threat to scientific discovery and advancement. Psucc0 commented: “If I remember correctly our very own Guillaume Thierry came up against this problem. He was aiming to publish some contrary results about the fusiform face area and was just being ignored because it was a result that would shatter that particular paradigm!” I vaguely remembered Guillaume talking about this during a lecture after reading this comment, and remember how disastrous I thought this problem was/is!

It’s hard to believe that science, something based on factual truth and the advancement of humankind (and other animals, for that matter!), can be so ignorant and blind towards new data, regardless of whether it has a limited/non-existent effect size or contradicts paradigms that already exist! After all, that’s what science is about: research and more research! It’s widely known that you CANNOT prove a hypothesis/theory, only provide evidence to back it up.

Robert Rosenthal published an article in 1979 detailing this problem, describing the extreme case in which journals are filled with the 5% of studies that show Type I errors while the file drawers back in the lab hold the 95% of studies with nonsignificant results. This was in 1979!
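
The same 1979 paper also proposed a “fail-safe N”: how many unpublished null studies would have to be sitting in file drawers to drag a set of significant published results back to non-significance. A minimal sketch of the calculation (the p-values are invented for illustration):

```python
# Rosenthal's (1979) fail-safe N: the number of unpublished studies averaging
# z = 0 needed to bring a combined (Stouffer) result down to p = .05 one-tailed.
# Example p-values are invented for illustration.
from scipy.stats import norm

p_values = [0.04, 0.02, 0.03, 0.01, 0.049]   # hypothetical published studies
z_scores = [norm.isf(p) for p in p_values]   # one-tailed p -> z
k = len(z_scores)

fail_safe_n = sum(z_scores) ** 2 / norm.isf(0.05) ** 2 - k
print(f"fail-safe N = {fail_safe_n:.0f}")    # ~30 hidden nulls would undo these
```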

If this is true, it is hard to believe that science can advance when “unwanted” results just get shelved so they don’t contradict standard practice/knowledge or tarnish a journal’s name by being nonsignificant. This is truly a problem the scientific community needs to overcome if we want to carry scientific progress onwards.

Resource:

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641.

22 02 2012
thewonderfulworldofstats

I’ve often wondered why I never come across anything with a non-significant result, when it could actually give us a lot of information. Especially since, as you mention, if something has been disproved it should be shared. If it turns out there really isn’t an effect, is it fair to leave people believing something that is not true? (This is just a hypothetical situation.) It’s a bit like when people were convinced the world was flat, and then it was found to be round! That did take some convincing, so I guess researchers face the same problem.
I think non-significant results should be reported whenever they can be of benefit or give new information.

22 02 2012
prpij

Really informative blog. I have never understood why journals don’t publish papers that show no effect. It seems science has become a search for differences instead of also looking at similarities, or at what has no effect. I think findings that support the null hypothesis should be published more often; it would make the research more worthwhile! And why should the significant results get all the glory?! http://topsciencenews.blogspot.com/2011/02/importance-of-publishing-negative_15.html


21 02 2012
Psucc0

If I remember correctly our very own Guillaume Thierry came up against this problem. He was aiming to publish some contrary results about the fusiform face area and was just being ignored because it was a result that would shatter that particular paradigm!
Null results can actually be quite interesting; here is a blog I found on the subject:
http://evolvingmind.info/blog/2010/03/two-noteworthy-null-results-in-psychology-and-gender-differences/
Null results can also be used to conduct a balanced meta-analysis, which stands a good chance of actually finding the correct answer! So you are completely correct in concluding that this is a big problem, and people should make more of an effort to publish null results.
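
On the meta-analysis point, a fixed-effect (inverse-variance) pooling makes it easy to see how leaving null studies in the drawer inflates the pooled estimate. A minimal sketch with invented study results:

```python
# Fixed-effect (inverse-variance) meta-analysis, pooled with and without the
# null studies sitting in the file drawer. All study results are invented.
import numpy as np

# (effect estimate, standard error) for each study
published = [(0.45, 0.15), (0.60, 0.20), (0.38, 0.12)]   # significant results
drawer    = [(0.05, 0.18), (-0.02, 0.16), (0.10, 0.22)]  # unpublished nulls

def pooled(studies):
    effects, ses = np.array(studies).T
    weights = 1 / ses ** 2                 # inverse-variance weights
    return (weights * effects).sum() / weights.sum()

print(f"published only: {pooled(published):.2f}")          # inflated estimate
print(f"all studies:    {pooled(published + drawer):.2f}") # closer to the truth
```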

18 02 2012
psud77

I really enjoyed this! It is a subject I had pondered on occasion myself, but I never really considered the full implications of this effect. I completely agree that more research showing null and even negative effects should be published. I feel the biggest advantage of being able to look through such research lies in the research methods. Being able to establish what measures and methods were used in a study to produce a particular result can help inform research in a similar, or even a completely different, field. For example, we can use the research to ascertain the effectiveness of psychometric measures. Take the IPQ (Illness Perception Questionnaire) as an example: it has been validated as a research measure, but it may be the case that in years to follow it becomes irrelevant, fails to ask the right questions, and is no longer a suitable measure. Published null results may also identify elements of theory that have ‘passed their sell-by date’ and are no longer relevant. This is particularly important in applied psychology, where only the most current and relevant theory should be used to ensure the effectiveness of an intervention.

Furthermore, it can highlight simpler needs such as sample sizes or research type: it may be that a study just needs a larger sample, or that a longitudinal design would elicit more stable results. It will also indicate which elements of a theory do not hold in the general population, which will in turn make future research easier; we can streamline the research process by not pairing variables together when that pairing has already been falsified in past research. It could be argued that before engaging in research we should compile a comprehensive list of past research, but if some research is never published then it cannot be included, and it simply fails to impact the scientific scene the way it should.

In an earlier blog I wrote about how a negative result can tell us much more than simply ‘accepting’ an experimental hypothesis. I still stand by this claim, as I feel we will never be able to answer every question science proposes, but with fortified methods the answers we can provide will be as strong and robust as possible, ensuring we produce as few unrepresentative conclusions as possible. This is particularly relevant when it comes to applying research findings to specialist settings such as clinical or ABA settings. We have a responsibility to ensure that work is fairly represented, and this could be something to discuss with the BPS or other journal bodies, to identify their policies and discover the extent of this effect on the publication of psychological research.

Reference:

Weinman, J., Petrie, K. J., Moss-Morris, R., & Horne, R. (1996). The Illness Perception Questionnaire: A new method for assessing the cognitive representation of illness. Psychology & Health, 11, 431-445.
