The scientific community has been aware of publication bias since the 1950s. What is publication bias, and how does it affect science? This form of bias arises when the outcome of an experiment determines whether the results are published. On the surface, this doesn’t sound terrible. Shouldn’t exciting, unexpected results take precedence over a study demonstrating that treatment A doesn’t work? Unfortunately, this is exactly the sort of thinking that has created a stigma around negative data.
As students, we are taught that the scientific method is king. If your experiment is well designed and controlled, your data will either support or refute your hypothesis. If your experiment is well designed and controlled, your data should be worthy of publication, regardless of the outcome. Unfortunately, this is not the case. Studies have shown that negative results (those that fail to support the researcher’s hypothesis) are three times less likely to be published (yes, we recognize the potential irony here). Why is this a problem?
Let’s examine two scenarios. In the first scenario, Dr. Z notices that a few of her cancer patients who received Drug A for an unrelated medical condition responded extremely well to chemotherapy. Excited, Dr. Z publishes a well-received paper based on this anecdotal evidence and plans a large, randomized clinical trial to test the hypothesis that Drug A improves the efficacy of chemotherapy. Several years and hundreds of thousands of dollars later, statistical analysis demonstrates that Drug A has no effect on patients’ response to chemotherapy. Discouraged, Dr. Z halts the trial and does not publish the results. A decade later, Dr. Y comes across Dr. Z’s original paper and initiates another clinical trial to test the same hypothesis. If Dr. Z had published her negative results, she could have saved Dr. Y a great deal of time and money.
In the second scenario, a prominent scientist publishes a ground-breaking paper linking a specific gene to the development of Alzheimer’s disease. A young scientist decides to use the techniques described in the paper to take the research a step further, beginning by attempting to replicate the results. After several failed attempts, the young scientist realizes there must be an error in the paper, but he does not wish to jeopardize his career and instead abandons the project without publishing his results. Meanwhile, several pharmaceutical companies are furiously working to develop a drug that targets the gene identified in the paper. Several years and millions of R&D dollars later, the prominent scientist recognizes his error and the paper is retracted.
Both of these scenarios demonstrate the dangers of failing to publish negative data. If the science is sound, there is much to be learned from what doesn’t work. Unfortunately, the publishing industry prioritizes new, ground-breaking work over inconclusive or negative results, and researchers are hesitant to spend time and money on a manuscript that is unlikely to be accepted by a high-impact journal. So where does this data end up? Historically, it was relegated to the filing cabinet. However, the advent of preprint servers may be changing this pattern.
Preprint servers are online repositories for research manuscripts (often in draft form). While these manuscripts are not peer reviewed in the traditional sense, preprint servers give the research community as a whole an opportunity to comment and provide feedback, so the authors’ work is validated or challenged in a way that gives everyone a voice. The popularity of preprint servers has grown dramatically in the last five years, and many journals now accept manuscripts that have already been made public on a preprint server. Today, preprints receive a DOI (digital object identifier) so that they can be cited just like a published paper, giving credit to the original authors. Because posting on a preprint server is not subject to editorial oversight, negative data that would otherwise never see the light of day can quickly be made available in a free, open-access format.
As part of the Focused Ultrasound Foundation’s commitment to open science, we have created a focused ultrasound preprint server, hosted on the Center for Open Science’s Open Science Framework. Any research funded by the Foundation must be uploaded to the FocUS Archive, either as a final report or a manuscript draft. We hope this will encourage researchers in the focused ultrasound community to publish negative data, ending the cycle of publication bias and the stigma surrounding negative results.
Kelsie Timbie, PhD, is the Scientific Programs Manager and the Veterinary Program Director at the Focused Ultrasound Foundation.