January 11, 2009

Science's true tragedy.

Every Friday, HippieHusband and I go over to Mr.Enthusiasm's house, where we play deep strategy board games. I am not a big fan of Cranium and other trivia-focused games like Hoopla, largely because at one of our lab retreats up at GeneralSolutionGuru's cabin, in TheLandOfTrees, the lab played Hoopla. Each time that retreat is mentioned, so is that one particular game. I mean, who wouldn't know what's bigger than a swimming pool and smaller than the ocean? A cruise ship, stupid. Personally, I think it's obvious, but apparently no one else did. Hmmf...

This Friday, Mr.Enthusiasm suggested I read an editorial by Hochberg et al. (2009) called "The tragedy of the reviewer commons." It connects the rise in manuscript submissions, and the increased demand placed on reviewers, to the tragedy of the commons.

The tragedy of the commons is the idea that many individuals acting in their own self-interest can destroy a shared resource, even though in the long term this benefits no one. The idea was first outlined by a mathematical amateur, William Forster Lloyd (1794-1852), and later explored by Garrett Hardin in his 1968 essay. Imagine, he said, a pasture used by several herdsmen. Each herdsman will try to put as many cattle as possible on this shared pasture. Eventually, as the population grows, a time comes when there are too many herdsmen and too many cattle. Each herdsman must make a decision: does he add another animal to the pasture? The benefit to the herdsman is that his cow gets more food and grows bigger, which means more money. But the cost of adding another cow is that, in the long run, there is less food for everyone because the pasture becomes overgrazed. As Hardin states,
Freedom in a commons brings ruin to all.
Examples of commons might include things like the oceans (overfishing), the air (pollution), forests (clearcutting), etc.
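The pasture logic above can be made concrete with a toy calculation (my own illustration, with made-up numbers; it is not from Hardin's essay): the herdsman who adds a cow keeps the whole benefit of that cow, while the cost of overgrazing is split among everyone.

```python
# Toy numbers illustrating Hardin's argument (an illustration only,
# not taken from the essay): the herdsman who adds a cow keeps the
# whole benefit, while the overgrazing cost is shared by all.

def payoff_of_adding_a_cow(n_herdsmen: int) -> float:
    private_gain = 1.0                  # the whole cow's value goes to its owner
    shared_cost = -1.0 / n_herdsmen     # overgrazing cost is borne by everyone
    return private_gain + shared_cost

# With 10 herdsmen, adding a cow nets its owner roughly +0.9, so every
# individual is tempted to add one, even though the pasture as a whole
# loses a full unit of grazing capacity each time.
print(payoff_of_adding_a_cow(10))
```

The more herdsmen share the pasture, the smaller each one's share of the cost, and the stronger the individual incentive to overgraze.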

The shared resource that Hochberg et al. (2009) refer to in their paper is the reviewer commons. They suggest that the refereeing burden is not evenly distributed across the scientific community: those who are most active in publishing are the least responsive to review requests, which keeps the reviewer pool small. As Hochberg et al. (2009) suggest, this could easily be addressed by extending the reviewer pool to include senior PhD students and post-docs.

(In my opinion, post-docs and PhD students are who editors should consider first. We are more likely to put time and effort into improving the science, and less likely to be territorial about it.)

In the Hochberg et al. (2009) scenario, the herdsmen are all of us who send off our "cows" (manuscripts) to the various journals, attempting to publish the smallest acceptable unit.

The authors suggest that, given the current competitive atmosphere fostered by various research and granting agencies, most of us herdsmen want our cows to have the best pastures. So although a manuscript may not be appropriate for the journal (think N/S/C/E), we send it off anyway in the hope that "our cow" may find a little grass. And when it is rejected from the top pastures, "our cow" gets sent to pastures of lower quality until, eventually, it lands in a pasture where it finds a small patch of grass to eat. Hochberg et al. (2009) suggest that sending manuscripts to inappropriate journals puts undue strain on an already stressed editorial process. As a result, we all suffer. First, we suffer because the rejection rate at these top international journals is so high (>60%). Second, we suffer because we are taxing the reviewer base with this self-serving process. Hochberg et al. (2009) estimate that the mean number of reviewers required for a manuscript to get published is between 5 and 10.

According to Hochberg et al. (2009), the tragedy is to the quality of science. Reviewers will often see repeated submissions of the same manuscript, because the same reviewer may be used by multiple journals. If a reviewer sees that an author ignored their previous comments, they might be less inclined to invest more time in improving the science. Even if the reviewer is not the same, by ignoring the comments the author does a disservice to the commons, because another reviewer may simply repeat what a previous reviewer said. Lastly, reviewer comments can only improve the science and ignoring them, diminishes the quality of the science.

Hochberg et al. (2009) make several suggestions on how we might preserve our pastures. First, pre-review your manuscript before submission by sending it to colleagues. Second, if the manuscript was rejected by a different journal, the journal could require a statement confirming that the authors have considered and implemented previous comments. Lastly, they suggest authors should carefully revise previously rejected manuscripts based on reviewer comments.

While I agree with all of these suggestions, it seems to me that Hochberg et al. (2009) are simply trying to weakly police the commons, or worse, rely on an honour system, rather than solve the tragedy. Let me put it this way: why should I care what happens to the quality of science? For me, the benefit of getting more publications is that it will ultimately lead to that postdoc, the grants, the tenure-track position, and finally the coveted tenure. This is just being savvy about playing the game.

In the words of my gentle and loving HippieHusband, "The system is fucked, and given that the system is fucked, what can we do to prevent cheap pieces of shit from benefiting at the expense of us all?"

One alternative, proposed by Hauser and Fehr (2007) in this paper, is to implement an incentive program for reviewers. Say you have reviewed for a journal, but you sent the review in late and it was poorly written. Then, when you submit a manuscript of your own, your punishment is essentially to sit in publication limbo; the amount of time you spend there depends on how late and how crappy the review was. Similar ideas have been used to indicate the reputation of sellers on eBay. Hauser and Fehr (2007) also try to address some concerns that might be raised: only the first author of a multi-authored publication would be punished, and scientists who try to cheat the system by refusing to review at all would get an additional week added to their publication limbo.
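The penalty described above could be scored something like the sketch below. To be clear, the function, the scale, and all the numbers are my own invention; Hauser and Fehr (2007) only propose that the delay should scale with how late and how poor the past review was, with an extra week for refusing to review.

```python
# A hedged sketch of the "publication limbo" penalty. The scoring rule
# and every number here are illustrative assumptions, not from the paper.

def limbo_days(days_late: int, quality: int, refused: bool = False) -> int:
    """Days a reviewer's own next submission waits before entering review.

    quality is the past review's quality on a 1 (crappy) to 5 (excellent) scale.
    """
    lateness_penalty = max(0, days_late)   # one limbo day per day the review was late
    quality_penalty = (5 - quality) * 7    # a week per quality point below excellent
    refusal_penalty = 7 if refused else 0  # extra week for refusing to review at all
    return lateness_penalty + quality_penalty + refusal_penalty

# A review 10 days late and of middling quality (3/5): 10 + 14 = 24 limbo days.
print(limbo_days(10, 3))
```

A timely, excellent review incurs no limbo at all, so the penalty only bites the reviewers who shirk.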

This is still only weakly policing the system, and I don't think it goes far enough. To really solve the tragedy of the commons, I think we should look to evolutionary theory to guide us. In an elegant experiment using bacteriophage, Kerr et al. (2006) showed that local migration and spatial structure could promote competitive restraint.

So here's what I propose; it's a modification of what Hauser and Fehr (2007) suggest. In order to have a paper published in a particular journal, you must earn positive reviewer credits. For example, if I wanted to publish a paper in N/S/C/E, I would have to offer my name to the reviewer database for that particular journal and provide quality reviews in a timely fashion. For every positive credit earned at a journal, you would have the opportunity to publish work at that journal. A poor-quality review wouldn't earn any positive credits. This means that if I want "my cows" to have access to the best pastures, I have to work hard at maintaining those local pastures. This would fragment the community of biological journals into subpopulations and restrict the down-the-ladder approach to manuscript submission that many are encouraged to pursue. By fostering a more considered approach, we would create spatially structured communities of prudent scientists.
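The bookkeeping for this proposal is simple enough to sketch. Everything below, the class name, the earn/spend rules, the example journal, is my own illustration of the idea: a timely, quality review earns one credit at that journal, and each submission there spends one.

```python
# A minimal sketch of the per-journal reviewer-credit ledger proposed
# above. The names and rules are illustrative assumptions only.

from collections import defaultdict

class CreditLedger:
    def __init__(self):
        # (reviewer, journal) -> number of unspent credits
        self.credits = defaultdict(int)

    def record_review(self, reviewer, journal, on_time, good_quality):
        # Only timely, quality reviews earn a credit; late or poor ones earn nothing.
        if on_time and good_quality:
            self.credits[(reviewer, journal)] += 1

    def can_submit(self, author, journal):
        return self.credits[(author, journal)] > 0

    def spend_credit(self, author, journal):
        if not self.can_submit(author, journal):
            raise ValueError("no reviewer credits at this journal")
        self.credits[(author, journal)] -= 1

ledger = CreditLedger()
ledger.record_review("me", "Nature", on_time=True, good_quality=True)
ledger.record_review("me", "Nature", on_time=False, good_quality=True)  # late: no credit
print(ledger.can_submit("me", "Nature"))   # one credit earned, so submission allowed
ledger.spend_credit("me", "Nature")
print(ledger.can_submit("me", "Nature"))   # credit spent, back to reviewing
```

Because credits are keyed to a (reviewer, journal) pair, credit earned at one journal buys nothing at another, which is exactly the spatial structuring the proposal is after.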

I know my suggestion is not without limitations. One obvious concern is that having your name in the reviewer database at a particular journal does not mean an editor would select you to review a manuscript. This would be especially true for graduate students and unknown post-docs. Second, as Hauser and Fehr (2007) state, "For the proposed system to work, the journals must fully commit to this policing policy."

This brings us back to the original question posited by Hardin. Do I, as a young herdsman, in the current climate, where stochastic forces rule the review process, keep adding cows to the already taxed pastures?

Despite my idealism (yes, I really am one), I think that the current scientific Ponzi scheme is about to do a major face plant, much like the US economy. With the collapse of the economy, competition will intensify for both jobs and grants.

So is it in my best interests to adopt a prudent strategy? Well, I think the answer comes in Hardin's own words,
If you do behave as we ask, we will secretly condemn you for a simpleton who can be shamed into standing aside while the rest of us exploit the commons.
This is science's true tragedy. In our freedom we have brought ruin to our commons, knowledge.

8 comments:

Candid Engineer said...

Interesting post.

I have to think, though, that since a manuscript submission requires on average 3 reviews at a particular journal, you would have to submit 3 quality reviews before you can submit something.

Except, what if I've done some fabulous Nature-type science before I get my three reviews in? Should the exceptional science be punished?

Anonymous said...

Dear GirlPostdoc,

I found your blog on our Editorial Opinion very insightful. It is possible that the merit system you propose based on reviews submitted could work, but it would leave individual journals in the uncomfortable position of losing potentially excellent manuscripts to competing journals that do not enforce the same policy.

We propose a couple of author-based solutions, but alone, this will not eliminate the problem. A robust solution will probably require that journals associate, much like states cooperate to form a nation. I think that this has chances to happen in ecology, but it will take considerable effort for journals to negotiate the conditions and then manage the commons.

Ciao,

Michael Hochberg

unknown said...

Candid Engineer,

I'm not sure that 3 quality reviews should be required before you submit a manuscript. That might result in more of what you've suggested.

Yes, it's true that the merit based system will leave an author who has done excellent science out in the cold. But one solution might be to use credits earned by the collaborators.


Michael,

I'm flattered that you read my post. Thanks for stopping by.

Yes, I had thought about the journals losing potentially excellent manuscripts. But, if journals associate as you suggest then a credit earned at one state could be used for the entire nation. That might also serve Candid Engineer's concern as well.

Jenn, PhD said...

Interesting post, GirlPostdoc. But I think you've missed something important: it is in the journal's own best interest to accept papers that will get cited more often than others, in order to increase their impact factor, desirability, advertising revenue, etc., no matter how much the authors do or do not contribute to the scientific commons. Any journal that plays by these rules (unless they all did) risks lowering their impact factor by cheesing-off the authors of the highly-cited papers, who might just take their business elsewhere instead... How would scientific publishing have to change in order for your system to work?

Jenn, PhD said...

oops, I should read the comments before posting as I see that Michael Hochberg mentioned something similar. Still, touching on the business and money side of publishing, I think it would be difficult to convince journals to "associate" without some sort of payment system...

Anonymous said...

Hey GPD,

This is a great post. Cooperation between all the journals in ecology is not really so unrealistic, because most are owned by four organisations (Wiley-Blackwell, PLoS, NPG and BMC). Cooperation between these on a system might encourage everyone else to follow suit.

I think it would be important to offer everyone one free submission to each journal, because they would then get onto the journal's database (and the editors' radar). Getting reviewer credit might be harder for young scientists, but your proposed system would do more to encourage supervisors to pass review requests on to their students.

Another solution to reviewer fatigue might be to require authors to submit previous reviews with their manuscript, so that Editors can return papers without review if they are unlikely to survive the review process in that journal.

Tim Vines

Anonymous said...

I strongly disagree with the following statement in the middle of your piece...

"Lastly, reviewer comments can only improve the science and ignoring them, diminishes the quality of the science."

Have you ever had a paper reviewed? Really?

Here's just one example from a paper I submitted... we did some experiments characterizing responses to heart attack in a knockout mouse. As is customary with all knockout mice, we also showed data from wild-type (WT) controls. The reviewer's comment was that we already know how WT mice respond, so those data should be removed! Yes, the reviewer suggested REMOVING THE CONTROLS! I defy anyone to rationalize this suggestion, it is completely stupid, and was put in the review for one purpose ONLY - sabotage. Reviewers are not the nice people you think they are.

Until something is done regarding the ability of a senior scientist with a "dogma" to kill a paper that suggests something opposite, the commons is completely bereft of morals.

BTW, I publish about 6 papers a year. I review about 80 and sit on 3 editorial boards. How's that for a balanced workload.

unknown said...

Anonymous,

I have had several papers reviewed and yes, sometimes the comments are stupid. For the most part, I'd like to think (maybe hope) that a reviewer takes that responsibility seriously (I know I do). This means not letting dogma stand in the way of brilliant science.

But, I am also not naive, and I know that certain reviewers will kill a paper because it goes against their work, or just because they can't stand the PI in the lab. I would hope that some kind of checks could be put in place to avoid this kind of conflict of interest. But when the pool of reviewers is small, that is difficult.

You are absolutely right that cheaters exist in all systems.
