Wednesday, May 20, 2009

Peer Review: Over-valued?


I've come across some disturbing articles lately regarding peer review. The first reviewed the reviews of clinical neuroscience papers submitted to two journals and found that, although editors based publication decisions on the reviews, reviewers' recommendations for the same paper barely predicted one another: agreement was no better than chance for one journal, and only slightly better for the second (Rothwell & Martyn, 2000). Similar findings were reported for submitted conference abstracts.
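Studies like this one typically measure agreement with Cohen's kappa, which corrects raw agreement for what two raters would achieve by chance alone. Here is a minimal sketch of the statistic; the accept/reject votes are made up, not taken from the paper:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical ratings.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected if each rater voted at random
    according to their own marginal rates.
    """
    n = len(ratings_a)
    # Observed agreement: fraction of items the raters agree on.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical votes from two reviewers on the same ten papers:
r1 = ["accept", "accept", "reject", "accept", "reject",
      "reject", "accept", "reject", "accept", "reject"]
r2 = ["accept", "reject", "reject", "accept", "accept",
      "reject", "reject", "reject", "accept", "accept"]
print(cohens_kappa(r1, r2))  # 0.2: they agree on 6 of 10, but 5 would be chance
```

A kappa near zero, as in Rothwell and Martyn's journal data, means the reviewers might as well have been flipping coins.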

I also found a paper reviewing the granting practices of NSERC, the main body of governmental science funding here in Canada. It found that the cost of peer review is so high that it would be cheaper to simply give every scientist with the basic qualifications $30,000 per year than to vet applications, at roughly $40,000 per application, in the hope of finding the good ones (Gordon & Poulin, 2009). As you might imagine, if people didn't have to impress their peers with their research, you'd see a lot more innovation and fewer bread-and-butter studies. As Gordon and Poulin put it, "...control by peer review makes no sense for the allocation of scarce resources in any environment conducive of innovation..."
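The arithmetic behind that claim is simple enough to sketch. The two per-unit figures are the ones cited above from Gordon and Poulin; the size of the applicant pool is a hypothetical number, just to make the comparison concrete:

```python
# Back-of-the-envelope comparison using the figures cited from
# Gordon & Poulin (2009). The applicant count is hypothetical.
REVIEW_COST_PER_APPLICATION = 40_000  # cost of vetting one grant application
BASELINE_GRANT = 30_000               # flat grant per qualified researcher

def cost_of_peer_review(n_applicants):
    """Total spent on the review process alone, before any money is awarded."""
    return n_applicants * REVIEW_COST_PER_APPLICATION

def cost_of_baseline_grants(n_researchers):
    """Total spent simply funding every qualified researcher directly."""
    return n_researchers * BASELINE_GRANT

n = 10_000  # hypothetical pool of qualified applicants
print(cost_of_peer_review(n))     # 400000000: review overhead alone
print(cost_of_baseline_grants(n)) # 300000000: funding everyone outright
```

On these figures, the machinery for deciding who gets money costs more than the money everyone would otherwise get.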

I am a fan of peer review, but these sobering studies have qualified my admiration for it. Peer review is good for making sure studies are methodologically acceptable, but probably has no business determining what is important.

I welcome argument to the contrary.

(If you post a comment, sign your name or I won't know who you are.)

References:

Gordon, R. & Poulin, B. J. (2009). Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant. Accountability in Research, 16(1), 13-40.


@Article{GordonPoulin2009,
author = "Gordon, Richard and Poulin, Bryan J.",
title = "Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant",
journal = "Accountability in Research",
year = "2009",
volume = "16",
number = "1",
pages = "13--40",
month = "January",
}


Rothwell, P. M. & Martyn, C. N. (2000). Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone? Brain, 123(9), 1964-1969.


@Article{RothwellMartyn2000,
author = "Rothwell, Peter M. and Martyn, Christopher N.",
title = "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?",
journal = "Brain",
year = "2000",
volume = "123",
number = "9",
pages = "1964--1969",
month = "September"
}

4 comments:

Anonymous said...

Okay... so... let me understand...
Peer review is used for more than just the accuracy of published articles? I did not realise that peer review was used as a tool for hiring, grants, and choosing conference works (don't I sound silly?? Call me miss-wannabe-academic-but-so-not).

What is the process? I always imagined it as: a) you submit your paper; b) a panel of individuals (who either boasted expertise in your field or enough general knowledge with some complementary expertise) would sit down and 1) review the methodology, bias, size/appropriateness of the sample group, etc., to make sure any data can be considered statistically significant, 2) map out the logical structure of the argument for soundness and evaluate the premises to see if they are valid, and 3) have a discussion with the author regarding conclusions and to engage the subject, evaluate the passion, the ability of the individual to take criticism and rebut... and to ask questions, see what future research this study could encourage, and find the interdisciplinary hooks that could make it exciting for scientists, philosophers and lay people alike...

What is it actually like? I (sadly) fear that it can be influenced by politics, funding, and pride. What are your personal experiences? You've published a lot, haven't you?

- Adrienne

Neal said...

I have to agree with your statement that it's good for ensuring a study is methodologically valid and little else -- in fact, personally, I've always (largely) believed that was the point.

It's far too easy for a reviewer to tank a paper simply because they do not agree with the theories being proposed (or concluded) rather than on the paper's own merits. In fact, I've personally received reviews like that, where the text of the review was all positive, save one comment about not really liking the theoretical approach, and taa-daa, low scores across the board (not just for the approach).

As a result, as you yourself said, people would be somewhat freer to go against conventional thinking if they didn't have to worry about being evaluated by people with that very frame of mind.

Jim Davies said...

Adrienne:

"a) you submit your paper;"

correct!

" b) a panel of individuals (who either boasted expertise in your field or enough general knowledge with some complementary expertise) would sit down and 1) review the methodology, bias, size/appropriateness of the sample group, etc., to make sure any data can be considered statistically significant, "

The reviewers don't usually talk to each other, and I think this is good because it avoids groupthink. They review the papers by themselves and submit their reviews to the editor. They consider all the things you mentioned, plus how big a deal the research is. They need to do this because even a perfectly conducted study might not be important enough for a huge journal like Science or Nature.

"2) map out the logical structure of the argument for soundness and evaluate the premises to see if they are valid,"

yeah, that too.

" and 3) have a discussion with the author regarding conclusions and to engage the subject, evaluate the passion, the ability of the individual to take criticism and rebut... and to ask questions, see what future research this study could encourage, and find the interdisciplinary hooks that could make it exciting for scientists, philosophers and lay people alike..."

The author gets to read the review, but it's not a back and forth, and the reviewers are often anonymous. They rarely comment on future work, and almost never look for interdisciplinary hooks unless the venue is specifically targeted to an interdisciplinary argument.

Peer review of journal and conference articles is not affected by funding, but it can be affected by politics and pride. Blind review seems like a good idea because if the reviewers cannot figure out who the authors are, they can't hold a grudge. However, I recall reading somewhere that blind review does not really change the nature of the reviews.

Dustin said...

For most Human-Computer Interaction conferences, there is a formal rebuttal back to reviewers, but I'm not sure if that changes much.

Here's an analysis of CHI (the largest HCI conference):

http://www.bartneck.de/publications/2009/scientometricAnalysisOfTheCHI/index.html

One of the findings is that "Best Paper" awards are no better predictors of number of citations than a random sample of papers.