Monday, December 07, 2009

Peer review, science, and climate-gate

Michael Jennings has some comments on peer review and how it can be corrupted: "Peer review and open science" at Samizdata.net.

Traditional journal based scientific peer review works as follows. A researcher does his research and writes his paper. He then submits the paper to the editor of a journal. The editor of the journal then sends the paper to a number (usually two or three) of other researchers in the same field. These researchers then write short reports on the paper outlining what is good or bad about it and usually suggesting improvements, along with a recommendation as to whether the paper should be accepted by the journal. The reports are then forwarded to the author of the paper, who responds to suggested changes and then sends a revised version of the paper to the journal. After possibly several repetitions of this, an accepted paper will eventually be published in the journal.
....
There are various ways in which this process can be corrupted, but (certainly in the field I worked in) this generally did not happen. Publishers of journals made a point of appointing people of integrity as editors. It was in their self-interest to do this, because the long-term consequence of not doing so would be a loss of credibility for the journal. The danger, always, is that authors, editors, and referees all end up coming from the same clique, within which the process can be subverted.

Another danger is that fields become isolated from each other, and workers in one field do not properly absorb knowledge and techniques from other fields. Many scientists (and non-scientists, for that matter) use a great deal of statistics and do a great deal of computer programming in their work. Often, they are experts in neither statistics nor computer programming. Sometimes they will nonetheless do good work from a statistical perspective and write good computer code. On the other hand, if their work is to be published in peer-reviewed journals, and the referees selected by those journals are not experts in statistics or computer science and use similarly sloppy methods themselves, then poorer-quality work can at times get through. (Similarly, you should beware of anyone in business or finance who tells you that his "proprietary black box model" says this or that, and that he cannot show it to you because it is "proprietary". Sloppy code and sloppy statistics are endemic there, too.)
....
Peer review matters professionally. If you are submitting a Ph.D. thesis and the work in it has already been published in reputable, peer-reviewed journals, then your examiners have little work to do. If you are applying for an academic job, or for promotion or tenure, then your publication record in peer-reviewed journals is central to the process. However, the peer-reviewed journals are essentially a way of keeping score. Amongst physicists at least, they are not where the work is done or how it is communicated.

We have in recent weeks heard calls from various people for science to adopt a model more resembling open source software - one aspect of which is opening the evolution of the work to more people than a small number of officially appointed referees. The "many eyes make all bugs shallow" philosophy surely has wider application than software alone, and when a good portion of the work is itself software, it is probably even more relevant.

However, what has been less reported is that in many fields, particularly the most quantitative ones, this model already exists. The physicists got there first, partly because they got the internet a decade before most other fields, and many others have followed. For the fields where it does not yet exist, the question should be, "If not, why not?"
