Science. Communication. Community.
Science writers determine what research results are shared with the public. A New England Journal of Medicine study suggests we're doing well, but we may still be relying on flawed indicators of importance.
Not much science makes it out of the lab and into the hearts and minds of the general public. In this somewhat anti-science environment (discussed on this blog last week), science writers have a lot of power. When we decide what stories to cover, we are not only discussing an interesting research project; we’re also implicitly telling the reader, “This is the science you need to know about.”
Sometimes I feel overwhelmed by this sense of responsibility, and I find myself lying in bed fretting about my next story. I may think the promise of synthetic biology is fascinating, but is it more important than new climate change research, or a clinical trial for a heart disease treatment? Debates like this can occupy professional editors at top scientific journals for weeks, and the scientific community can take years to realize the groundbreaking nature of certain work, but I have to decide what to cover on a deadline.
Luckily, I just got some news that may help me sleep better: editors from the New England Journal of Medicine found that reporters actually do a pretty good job covering the most important papers, at least when choosing among those published in their journal. The research was presented at the Seventh International Congress on Peer Review and Biomedical Publication, which brought journal editors and academic researchers together to discuss research about peer review and publication practices. (At some points it felt a bit like “Being John Malkovich”; there was one talk analyzing research presented at previous Peer Review Congresses—that is, research about research about publishing research—but that’s a whole separate post.)
Amid talks about publication ethics, retractions, and sharing data, Sushrut Jangi from NEJM presented data showing that media outlets effectively highlight “blockbuster” papers: those that are both highly viewed shortly after publication, indicating immediate interest, and highly cited over a few years, suggesting lasting impact within the scientific community. The researchers found that, in the first few weeks after publication, these articles were mentioned by the media on average about 50 times, as compared to just 20 times for the rest of the articles. In other words, journalists were more likely to cover the stories that, by the journal’s metrics at least, were the most important in both the short- and long-term.
This is not to say that we can rest on our laurels, patting ourselves on the back for a job well done. NEJM is one of the most highly regarded general medical journals in the world, so reporters are more likely to cover research published there than that in, say, The Journal of Bone and Joint Surgery (no offense intended to orthopedists). Such biases toward “top” journals, however, may no longer be warranted. Many argue that it is time to call into question the prestige afforded to publishing in these few journals, which are highly regarded in large part because of their calculated impact factors. The impact factor is a journal-level metric based on the average number of citations a journal’s recently published articles receive, and a high impact factor has come to be equated with “important” science. Some researchers, though, feel that it is mathematically and scientifically inappropriate to use the journal-level impact factor to assess the importance of the individual papers published in that journal. In addition, a 2011 editorial showed that higher impact factors were correlated with higher retraction rates, calling into question assumptions about the quality of the work published in these journals. (There’s much more to be said about the impact factor; you can find some additional links at the bottom of the post if your interest is piqued.)
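For readers who haven't seen it spelled out: under the standard two-year definition used by Journal Citation Reports (the general convention, not a calculation from the NEJM study itself), a journal's impact factor for a given year is roughly:

```latex
\text{IF}_{2023} \;=\; \frac{\text{citations in 2023 to articles published in 2021--2022}}{\text{number of citable items published in 2021--2022}}
```

So a journal whose 2021–2022 articles drew 5,000 citations in 2023 from 1,000 citable items would have an impact factor of 5.0. Note that it is an average across the whole journal: a handful of blockbuster papers can carry a long tail of rarely cited ones, which is exactly why applying the journal-level number to any individual paper is so contested.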
Despite the problems with these measures, and the multitude of good research published in a huge variety of venues, in my experience many science journalists favor these “high-impact” journals. It’s not that we’re lazy; it’s that, in the face of today’s growing research output, we’re struggling with discoverability and filtering, the same issues affecting the research community. Some interesting tools under development may help, like Altmetric and ImpactStory for tracking usage, and Mendeley and Academia.edu for finding research related to a topic of interest. Still, we need to come up with new ways to filter the ever-growing literature and find the work that is truly newsworthy. It’s our responsibility, and it brings the reward of knowing that we really are finding the best science out there.