
An experiment in post-proposal peer review


I’m a huge fan of post-publication peer review (PPPR). It’s the future of scientific publishing, and it’ll be de rigueur – rather than a novelty – for the next generation of scientists. If that doesn’t happen, science and society will continue to suffer from gaping holes in the quality-control mechanism that is traditional peer review.

I’m about to describe an experiment which takes the online/public peer review process back a couple of steps from the point of publication. But before I do that, it might help if I explain just why I’m such an enthusiastic advocate of PPPR.

Over the past couple of years, and along with colleagues at Nottingham, NIST, and Liverpool, I’ve been embroiled in a rather heated debate about the validity of a substantial body of research focused on the structure of coated (aka ‘stripy’) nanoparticles. I blogged about this for physicsfocus around about this time last year, and was delighted when our paper critiquing the nanoparticle research in question was finally published in PLOS ONE a couple of months ago.

Long before the paper appeared in PLOS ONE, however, we had made it available (via the arXiv) at the PubPeer PPPR site, for what is perhaps best described as pre-publication peer review. This led to a large volume of very helpful comments (and, it must be admitted, the occasional less-than-helpful post) from our peers. The PubPeer contributions of one of those peers, Brian Pauw, were so insightful and important that he ended up being added as a co-author to the paper.

In addition to highlighting the benefits of open and public next-generation peer review, the striped nanoparticle controversy made me intensely aware of a number of shocking deficiencies in the traditional peer review system. Chief among them is its demonstrated inability to reliably filter out junk. I don’t want to harp on about the deficiencies in the striped nanoparticle work (which is faulty, rather than fraudulent), so let’s turn to a truly shocking example of the failure of traditional peer review: the nano chopsticks farce, which Brady Haran and I discuss in this Sixty Symbols video.

Social media, in particular the Chemistry Blog and ChemBark sites (and their associated Twitter feeds), exposed the chopstick ‘breakthrough’ as a staggeringly poor Photoshop job within days of publication. The paper was retracted just two months later.

A decade before this chopsticks debacle, the nanoscience community endured the rather less cack-handed, arguably quite clever, and remarkably systematic fraud of Hendrik Schön. I firmly believe that if post-publication peer review had existed in the early 2000s, Schön’s fraud would have been identified much, much sooner than it was. (Note how quickly the PubPeer community identified problems in the then-acclaimed, but now-retracted, STAP results published at the start of last year.)

PPPR isn’t, however, all about laying bare fraudulent work. At its best it’s exactly how the scientific method should work: authors should be willing to have their work discussed, debated, and dissected by their peers both before and after – particularly after – its publication. Compare and contrast with the following response from a well-respected, influential, and – for those who care about simplistic and flawed metrics – very high impact-factor journal, after I asked whether they’d be interested in publishing our critique (which eventually became the PLOS ONE paper described above):

 

[Screenshot: the journal’s response (Moriarty_screenshot)]

 

Or, in other words, our journal is not interested in following the scientific method.

From PPPR to PPrPR

The deficiencies in peer review of course extend to the assessment of grant proposals. As I was writing this post, a link appeared in my Twitter timeline (thanks, @NKrasnogor) to an article published in Nature a couple of days ago, highlighting that external referees’ ratings of Medical Research Council proposals correlate poorly with the probability of a grant application being funded. This, of course, will not come as a great surprise to many researchers.

Some time ago I suggested to the Engineering and Physical Sciences Research Council (EPSRC) that they carry out an experiment where they send the same set of proposals to entirely independent prioritisation panels (and referees), and subsequently check for correlations between the rankings of the various panels. This is particularly important given that EPSRC blacklists researchers on the basis of where their grant proposal falls on the ranked list returned by the prioritisation panel.

EPSRC hasn’t run this experiment.
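To make the suggested analysis concrete, here is a minimal sketch, in Python, of the correlation check such an experiment would involve. The panel rankings below are invented purely for illustration, and Spearman’s rank correlation is just one reasonable choice of agreement measure (Kendall’s tau would serve equally well):

    # A minimal sketch of the cross-panel correlation check suggested above.
    # The rankings below are invented; a real run would use the ranked lists
    # returned by two independent EPSRC prioritisation panels.
    from scipy.stats import spearmanr

    # Hypothetical ranks assigned to the same ten proposals by two panels
    # (1 = top of the ranked list).
    panel_a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    panel_b = [2, 1, 5, 3, 4, 9, 6, 10, 7, 8]

    rho, p_value = spearmanr(panel_a, panel_b)
    print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")

    # rho close to 1: the panels broadly agree. rho near 0: the ranking,
    # and hence any blacklisting based on it, depends heavily on which
    # panel happened to assess the proposal.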

I’m trying a rather different peer review experiment of my own. Late last year I discussed the possibility of open peer review of a grant proposal, rather than a publication, with PubPeer and, subsequently, The Winnower. While PubPeer facilitates open review of any publication with a DOI, The Winnower, founded by Joshua Nicholson, combines open access publication with PPPR. The Winnower kindly agreed to publish our EPSRC proposal, Mechanochemistry At The Single Bond Limit, which, for the reasons discussed in this article in Physics World, is my first submission to EPSRC in quite some time. With the DOI provided by The Winnower, we then set up a PubPeer thread for the proposal.

As the ‘Pathways to Impact’ section of the proposal lays out, the entire impact case is based on public engagement (rather than, for example, commercial exploitation). A key component of that public engagement programme, should the grant application be successful, is that my colleague Brigitte Nerlich will be an ‘embedded’ sociologist within the research team. Brigitte will observe, and blog/tweet about, just how the scientific method plays out in the course of the project. It therefore makes a great deal of sense to extend the public engagement aspects of the proposed research to the grant application process itself, i.e. to incorporate post-proposal peer review (PPrPR).

Coincidentally, and fortuitously, a week or so after the discussions with PubPeer and The Winnower, Dorothy Bishop tweeted a link to an important and very relevant paper by Daniel Mietchen in PLOS Biology (not one of the journals I usually read).

The closing sentence of the abstract of this far-sighted paper is worth quoting in full:

“The article … explores the option of opening to the public key components of the [grant application review] process, makes the case for pilot projects in this area, and sketches out the potential that such measures might have to transform the research landscape in those areas in which they are implemented.”

The motivation for making our EPSRC proposal available for comment and criticism via The Winnower and PubPeer is exactly as that abstract describes – it’s a question of opening up the grant application/review process to public scrutiny. My aim over the coming months is – EPSRC and reviewers permitting – to make available, here at physicsfocus, the referees’ reports and, ultimately, the outcome of the panel ranking process.

It’s an experiment that may return a null result, of course, in that there could well be a deafening silence in response to making the proposal (and, hopefully, the subsequent reviews) publicly available. After all, I don’t believe that there are too many academics fretting about finding more reviewing to do. But then, a null result is still very often an important finding that can provide key insights.

Let’s just run the experiment and see…

Image credit: Shutterstock/EDHAR


