Figure One

Science. Communication. Community.

The Power of the Pen

Science writers determine what research results are shared with the public. A New England Journal of Medicine study says we’re doing well, but we may still rely on inappropriate indicators.

by Rachel Bernstein

Science writers must shoulder the responsibility of wielding the pen (Image credit: wesleyt/Flickr)

Not much science makes it out of the lab and into the hearts and minds of the general public. In this somewhat anti-science environment (discussed on this blog last week), science writers have a lot of power. When we decide what stories to cover, we are not only discussing an interesting research project; we’re also implicitly telling the reader, “This is the science you need to know about.”

Sometimes I feel overwhelmed by this sense of responsibility, and I find myself lying in bed fretting about my next story. I may think the promise of synthetic biology is fascinating, but is it more important than new climate change research, or a clinical trial for a heart disease treatment? Debates like these can occupy professional editors at top scientific journals for weeks, and the scientific community can take years to recognize how groundbreaking a piece of work is, but I have to decide what to cover on a deadline.

Luckily, I just got some news that may help me sleep better: editors from the New England Journal of Medicine found that reporters actually do a pretty good job covering the most important papers, at least when choosing among those published in their journal. The research was presented at the Seventh International Congress on Peer Review and Biomedical Publication, which brought journal editors and academic researchers together to discuss research about peer review and publication practices. (At some points it felt a bit like “Being John Malkovich”; there was one talk analyzing research presented at previous Peer Review Congresses—that is, research about research about publishing research—but that’s a whole separate post.)

Amid talks about publication ethics, retractions, and sharing data, Sushrut Jangi from NEJM presented data showing that media outlets effectively highlight “blockbuster” papers: those that are both highly viewed shortly after publication, indicating immediate interest, and highly cited over a few years, suggesting lasting impact within the scientific community. The researchers found that, in the first few weeks after publication, these articles were mentioned by the media on average about 50 times, as compared to just 20 times for the rest of the articles. In other words, journalists were more likely to cover the stories that, by the journal’s metrics at least, were the most important in both the short- and long-term.

This is not to say that we can rest on our laurels, patting ourselves on the back for a job well done. NEJM is one of the most highly regarded general medical journals in the world, so reporters are more likely to cover research published there than work appearing in, say, The Journal of Bone and Joint Surgery (no offense intended to orthopedists). Such bias toward “top” journals, however, may no longer be warranted. Many argue that it is time to question the prestige afforded to publishing in these few journals, which are highly regarded in large part because of their impact factors. The impact factor is a journal-level metric, essentially the average number of citations the journal’s recent articles receive, and a high impact factor has come to be equated with “important” science. Some researchers, though, consider it mathematically and scientifically inappropriate to use a journal-level average to judge the importance of individual papers published in that journal, not least because a journal’s citations tend to be dominated by a small fraction of its articles. In addition, a 2011 editorial showed that higher impact factors are correlated with higher retraction rates, calling into question the assumed quality of the work published in these journals. (There’s much more to be said about the impact factor; you can find some additional links at the bottom of the post if your interest is piqued.)
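
For readers curious how that number is actually computed: the standard two-year impact factor is simply an average, the citations a journal’s articles from the previous two years receive this year, divided by the number of citable items it published in those two years. Here is a minimal sketch in Python, with invented figures purely for illustration (they are not real data for any journal):

    # Minimal sketch of the standard two-year journal impact factor.
    # The numbers used below are invented for illustration only.

    def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
        """Citations received this year to articles published in the previous
        two years, divided by the number of citable items from those two years."""
        return citations_this_year / citable_items_prev_two_years

    # Hypothetical journal: 350 citable articles over the previous two years,
    # cited 24,500 times this year, for an impact factor of 70.
    print(two_year_impact_factor(24_500, 350))  # 70.0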

Despite the problems with these measures, and the multitude of good research published in a huge variety of venues, in my experience many science journalists still favor these “high-impact” journals. It’s not that we’re lazy; it’s that, in the face of today’s growing research output, we’re struggling with discoverability and filtering, the same issues affecting the research community. Some promising tools are under development, such as Altmetric and Impact Story for tracking usage, and Mendeley and Academia.edu for finding research related to a topic of interest, and they may help. But we still need new ways to filter the ever-growing literature for the work that is truly newsworthy. It’s our responsibility, and it brings the reward of knowing that we really are finding the best science out there.

Further reading:

Lost in publication: how measurement harms science

Why the impact factor of journals should not be used for evaluating research [PDF]

Sick of Impact Factors

About Rachel Bernstein

Rachel was a 2010 AAAS Mass Media Fellow at the Los Angeles Times, and is now a freelance science writer living in San Francisco. Among other things, she is interested in scientific publishing, education, and backpacking, especially in Yosemite.

One comment on “The Power of the Pen”

  1. Jenna Bilbrey
    September 14, 2013

    I think it helps to know some of the publishing requirements. For example, Nature and Science only really take “new” discoveries; they don’t publish follow-up papers or papers expanding on an already established idea, so that’s good for groundbreaking science. PNAS is typically good, but sometimes you have to watch out: members of the National Academy of Sciences get two “free” papers a year, which means they can publish any of their work even if it’s derivative and not groundbreaking.

    Impact factors, while an alright indicator of readership, aren’t really that important in terms of finding groundbreaking discoveries. Some really specialized journals have ridiculous impact factors because most of the work in that field is published there, but the work isn’t that groundbreaking. Also, review journals have gigantic impact factors.

    Personally, I like to pick interesting science. If I’m excited while reading it, other people will be too!
