Reviewing the “peers”
June 30, 2013
A while back, thanks to some not-too-complex detective work on the part of USA Today, we got our hands on the reviewer comments from the infamous #arseniclife paper. Looking at the reviews, I agreed with the experts, including Leonid Kruglyak, who “reviewed the reviews”, that they generally looked normal… yes, they were quite positive at times, but let’s face it, some scientists are nice and some aren’t, and by chance alone you might get a group of three that use encouraging words even when they have criticisms. But since then, I’ve seen opinions written out there that are not too kind to these reviewers… good thing they’re anonymous, or we’d be reaching for our pitchforks by now! Even Ash Jogalekar over at “The Curious Wavefunction“, whose posts I often find refreshing, called the reviewers out on their subpar job.
But what do “I” think about this whole fiasco? I think hindsight is 20/20… It all comes down to taking the authors at their word that there was no phosphorus in the media they used… everything else follows from that. As it turns out, though, truly phosphorus-free media are very difficult to achieve. But I didn’t know that, and if I had been a reviewer, I wouldn’t have brought it up either. I probably would’ve asked for specific experiments, but they would have been based on my background (and could be claimed to be “outside the scope”). And that is what it all comes down to, isn’t it? Our backgrounds… and the reviewers are in fact chosen from different areas. For a paper like this, you would probably get a bacteriologist, maybe someone working on Archaea, and a chemist might not even be on the short list (let alone one with the relevant knowledge). At the end of the day, I don’t think that, as researchers, we want reviewers to be too ambitious… we have all gotten bad reviews that we think are nonsensical. I actually think the reviewers should just make sure that, given the facts and their knowledge, the results are not seriously flawed. And we all know that a published paper is not exactly the word of God; it can easily be refuted/corrected/expanded upon.
So, if we all know these things, then what is the source of the outrage? I think there are a number of points:
- This is Science we are talking about here… we like to think that a journal’s impact factor correlates strongly with the strength of its underlying science. We all toil away for years hoping to get a paper into Science, and it breaks our hearts to see a flawed paper like this appear in its pages. I agree that this paper, in retrospect, is outrageous, but it is not the first time a bad paper has been published in Science or Nature or any other journal.
- This process shows everything that’s wrong with research nowadays… holding press conferences like a true salesman, relying on seemingly random reviews to accept a paper, and the disproportionate value assigned to papers in big-name journals.
Are there solutions to these problems? Of course there are… pretty old ones, actually. Switching to an arXiv model instead of the journal model makes a lot of sense… in that model, studies are presented on equal footing and gain traction on their own merit. Can we ever do that? Not in the near future… biology is expensive, and researchers need decent publications for every grant cycle. We simply don’t have the luxury of waiting for our papers to climb the citation ladder (and that’s assuming citation counts are even a good measure of merit). The current journal structure lets us “assign” a “value” to a paper based on a couple of samplings, even before the paper is published. Is it a flawed process? Of course. Is it at times unfair? Absolutely. But let’s recognize what the real problem is: just too many people, not enough money. Let’s not heap all of that on the shoulders of three reviewers…