Tim Parker could tell that something wasn’t quite right with the research about European blue tits.
It was the early 2000s, and Parker was analyzing large data sets he collected with colleagues as a post-doctoral researcher at Oxford University in the United Kingdom. But something about research on the small birds wasn’t adding up.
Despite having high statistical power, many variables and large samples, Parker was unable to replicate results found by other researchers regarding coloring in the heavily studied birds. As a behavioral ecologist, Parker wondered: why didn't his results align with those found in many previously published studies? Scientists across Europe have studied the birds, publishing about 50 studies and around 1,000 statistical relationships. In fact, the European blue tit is often described as a model for understanding plumage color evolution.
An associate professor in ecology and evolutionary biology, Parker joined the faculty at Whitman College in 2006 and shortly after began a meta-analysis of blue tit literature. In 2013, after five years of study, Parker published the results of his meta-analysis. His findings weren’t encouraging about the shape of his field.
“People had spent literally millions of dollars on blue tits. And the only thing we could say with any confidence was that males were bluer than females, and we knew that before we started,” Parker said.
Bad Science or Bad System?
As part of his work, Parker discovered that many studies showed evidence of bias in how the information was presented. As he dug deeper, he began to see how the academic demands to publish new, meaningful research were having a negative effect on the quality of the science.
The underlying problem isn’t researchers doing bad science, Parker said, but rather a system that incentivizes novelty. For example, a journal may be more likely to publish an article showing a new result — such as that female blue tits respond to a certain color pattern in a male blue tit — rather than an article that effectively reproduces an existing finding. And publishing an article can increase odds of receiving promotion, tenure, grants or recognition from your field.
“Researchers are responding to what journals want, what funding agencies want, and they’re making decisions for the betterment of their career, but those decisions aren’t always best for science,” Parker said.
The idea that the academic science community is facing a “replication crisis” is getting global attention. While those in the ecology and evolutionary biology fields like Parker are only recently turning to it, researchers in the social sciences have been looking at the issue for a while.
Assistant Professor Tom Armstrong in the Department of Psychology was first exposed to the notion when he was working as a social psychologist studying the concept of social priming, an idea made popular in Malcolm Gladwell’s book “Blink.”
“It was an open secret that many of the findings did not replicate,” Armstrong said. “I switched into clinical psychology, and a few years into my graduate program the replication crisis officially started. A coalition of labs attempted to replicate famous psychology findings, and nearly two-thirds of the findings did not replicate, with a higher rate of failure in social psychology, particularly the social priming studies.”
The findings prompted a reform movement that has been slowly creeping to other scientific disciplines.
Parker used his study as a launching pad into the world of open science. Through networking with other researchers concerned about issues of bias and reproducibility, Parker received funding from the National Science Foundation and the Laura and John Arnold Foundation to bring together editors from top ecology and evolutionary biology journals from around the world. Parker and Shinichi Nakagawa, a Japanese biologist then living in New Zealand, and Jessica Gurevitch, an ecology professor at Stony Brook University in New York, worked with the Center for Open Science to host the meeting in Virginia in 2015, the same year the Transparency and Openness Promotion (TOP) guidelines were published in the journal Science.
Parker and a group of 10 authors also conducted a synthesis of evidence from diverse sources to show how big the problem is in the ecology and evolutionary biology fields. The article was published in the journal Trends in Ecology and Evolution.
“The journal Conservation Biology almost immediately adopted a really rigorous editorial policy. They are still a leader in transparency,” Parker said. “The science publisher Nature developed a checklist for all papers in ecology, evolution and the environment across its publications.”
Today, more than 850 organizations have signed on to the TOP guidelines.
Parker’s work to promote transparency and reduce bias hasn’t slowed down. In May 2018, he published a paper in Nature Ecology and Evolution providing peer reviewers with a checklist to help improve transparency in journal articles. Over the summer he published an article surveying ecologists and evolutionary biologists about the statistical decisions they make in their research.
Bringing Transparency to the Classroom
Although Parker and Armstrong both work on transparency and bias in science at Whitman College, they found out about their shared interest over Twitter.
“He was re-tweeted by Brian Nosek, who is a social psychologist and the most prominent science reformer. Around that time a colleague put us in touch because of our shared interest in open science,” Armstrong said. “I was surprised, because I didn’t know that ecology was plagued by the same questionable research practices as psychology. It was thrilling for me, because he had actually traveled to the Center for Open Science and worked alongside many of the reform leaders that I followed on Twitter. I had no idea that a leader of the open science movement was here on Whitman’s campus.”
This fall, the two are bringing their passion for good science to the classroom with a new course. The interdisciplinary class will help students understand the replication crisis and its origins, and explore strategies to improve reproducibility and limit bias, Armstrong said. Part of the course will involve helping students think critically about research data and analyze it for bias.
Error can creep in because of the number of variables involved, the number of controls, and the way statistical models are run, Parker said.
“The more models we run, the more likely we will find a pattern just by chance,” he said. “There are a lot of different ways that people can measure results and analyze data. If they’re only reporting a portion of the outcomes, they can mislead the reader about how reliable their results are. It’s done not with a sense of thinking, ‘Oh, I’m engaged in fraud.’ They’re trying to find out what the data are telling them.”
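Parker's point can be illustrated with a small simulation (a sketch for illustration, not something from his research): run many "studies" on pure noise and count how many cross the conventional p < 0.05 threshold by chance alone, using a standard-library-only permutation test.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def permutation_p(a, b, iters=1000):
    """Two-sided permutation test for a difference in group means.

    Returns the fraction of random relabelings whose mean difference
    is at least as large as the observed one (an approximate p-value).
    """
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b  # new list, so shuffling won't touch the inputs
    hits = 0
    for _ in range(iters):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / iters

# Run 100 "studies" comparing two groups of pure Gaussian noise.
# No real effect exists in any of them, yet with a 0.05 threshold
# roughly 5 of 100 will look "significant" on average.
false_positives = 0
for _ in range(100):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if permutation_p(a, b) < 0.05:
        false_positives += 1

print(f"'Significant' results from pure noise: {false_positives} of 100")
```

This is the core of Parker's warning: if a researcher ran all 100 comparisons but reported only the handful that came out "significant," a reader would have no way of knowing those results are exactly what chance predicts.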
Parker also hopes to help the students begin thinking about the roles they can play in reshaping the scientific community, particularly how incentives are used in institutional settings.
“Avoiding questionable research practices to do good science isn’t intellectually hard. It’s not a fundamentally difficult thing to do. We just have to keep forever vigilant,” Parker said.