I blogged recently about the Publishing Process and when publishers get it right. That was my blessing of their work. This posting has a slightly different flavor…it’s about my concerns regarding the editorial and peer-review process. It is not meant to apply to all journals or all editors; it is simply a review of recent experiences. It’s NOT about ChemSpider per se, just a general rant about my experiences of publishing and reviewing science. We will write about our experiences publishing ChemSpider work in the future.

In my line of work I get to work with some excellent scientists, and a number of the people I have developed respect for now make up the ChemSpider Advisory Group. As a result of scientific collaborations I author or co-author about eight peer-reviewed articles a year, and I review about six articles a year for various journals. Of late, the processes involved have me concerned.

Some of my recent experiences with the review of articles I have been involved in are discussed below.

1) One article received multiple “Publish as-is” recommendations with only minor grammatical corrections. One reviewer clearly didn’t understand the science and got stuck on “it’s too long”. After a long discussion with the editor, and despite the multiple “Publish as-is” recommendations, we were left with no choice…reduce the length of the article or withdraw it. We reduced the length. I suspect that page limits are being imposed on editors, to the disservice of good science.

2) A recent submission to one of the most popular journals in the world was sat on for many months. The work was done with a pharmaceutical company, so the submission came jointly from a chemistry software company and a pharmaceutical company. The manner in which it was treated suggests an unfair bias against non-academic submissions, as if the fact that the work was included in a commercial product was problematic. As a result of the treatment we received, we have committed not to return to that journal to publish.

3) I was asked to review a publication that included a comparison of the performance of one of our prediction algorithms. A table compared numerous algorithms, ours included, and the article highlighted that the authors’ algorithm outperformed all the others. A close examination showed that, for our algorithm, four numbers had been mixed up in the table; arranged properly, they showed that our algorithm actually had the best performance. This feedback was given to the editors. The paper was then published with MORE errors than originally identified, and our algorithm was way down the list. A request to the editor for an erratum, or for publication of a rebuttal by us, was turned down…specifically because they did not feel it was appropriate to get in the middle of such a comparison with a “commercial organization.” It took two years, but the original author was kind enough to publish a retraction with us. By that time, though, the damage was done.

My recent experiences with articles I have reviewed include two in particular (no names or topics mentioned):

1) I was asked to review an article that was categorically “bad science”. I strongly suggested that the article be rejected…not rewritten. The science behind it was simply wrong; there was nothing to rescue. The article was withdrawn but published very shortly afterwards, with 70% of the content intact, in an Open Access journal (no title mentioned). It remains bad, but published, science.

2) I was asked by an ACS journal editor to review an article. This was another one comparing the performance of a series of algorithms, one of ours included. The work made our algorithms look terrible; it did the same for many others. Some of the data used to build the correlation had been extracted from other publications, and some of the data were not even measurements of the property being correlated. A close examination showed that when the bad data were removed and the remaining data were treated appropriately, our algorithms gave excellent performance. I provided detailed feedback to the editor and recommended rejection. The paper was eventually rejected, but published later in an Open Access journal with the SAME conclusions and comparisons. This was all the more upsetting since we had been in discussions with the author(s) about reworking their data in collaboration.

I have two primary concerns and requests:

1) There appears to be a flavor in the air that “commercial chemistry software” is a bad thing. This is certainly true on many blogs discussing open source solutions, and based on some of my experiences (NOT all, I should emphasize!) it appears to permeate the publishing process. This is my chosen career…I’ve done the PhD, a postdoc at a government laboratory, worked at a university and even at a Fortune 500 company, and eventually ended up at a chemistry software company. This is how I pay my bills, clothe my kids, live. I work with 140 other people producing chemistry software. It is a respectable career. We do science for a living, and we get paid for our science by selling our products. No grants, no large company above us to carry us…our efforts are all we have to make our living. We do EXCELLENT science, and that should be the standard by which we are judged, not the fact that our science is eventually commercialized.

2) There IS good science in Open Access journals, of that I have no doubt. I hope that part of the Open Access publishing process is to check whether a submitted work has previously been rejected by other journals before acceptance; then they only have the ethics of the submitters to deal with. Peer review is still necessary in Open Access journals. My concern is the reviewing burden on scientists, who are generally overwhelmed. I get too many publications to review as it is, and I have colleagues who get one a month. These are people who are already overwhelmed with work. I have no solution to the problem…just an acknowledgment of it.

These are just observations, and I welcome feedback. If and when we publish about ChemSpider, where will we publish? We have already had invitations to publish in an Open Access journal…a decision is yet to be made (and a manuscript written).


5 Responses to “My Worries About the Peer-Reviewed Publishing Process and Are Open Access Journals Becoming the Fallback?”

  1. Jean-Claude Bradley says:

    There is bad science in Open Access and Restricted Access journals, in peer reviewed and non-peer reviewed publications, in blogs, on wikis, databases, at conferences, in emails and conversations between scientists.

    There is also good science in all those places.

    I think that if a researcher looking for information finds it and is able to evaluate and actually USE it, then the presence of other information is irrelevant and the main purpose of publication has been met. Even if the interpretation in a paper is flawed “bad science”, the raw experimental data may be useful to others (if made available, of course).

    So in that sense I think you have already published ChemSpider in the blogs that you maintain. I would guess that most of the people in cheminformatics who really should know about ChemSpider already do.

    But, as we all know all too well, there are other reasons for publishing besides communication – for that, maybe Chemistry Central is a good pick.

  2. Peter Suber says:

    Antony: Just to balance the picture a little, I’ve twice had the reverse experience of one of yours. I’ve twice reviewed bad science submitted to open access journals and recommended that it be rejected. In both cases the articles were rejected, but in both cases I later saw them published in non-OA journals with all their original errors and defects uncorrected. Nothing really surprising here: there are strong and weak OA journals and there are strong and weak non-OA journals.

  3. David Bradley says:

    Just as a point of interest, a few rhetorical questions re the commercial-academic balance.

    How many grant applications and funding bodies mention “wealth creation” as one of their priorities?

    How many universities have technology-transfer infrastructure in place?

    How many spin-off companies emerge from publicly-funded research grants?

    Lots.

    db

  4. Bryan Vickery says:

    Antony, your blog posting raises many important questions.

    You might want to read an editorial which recently appeared in The Scientist (http://www.the-scientist.com/2007/5/1/13/1/) regarding access to closed files, such as correspondence with authors, confidential reviewer comments and ratings, and internal editorial exchanges. Such a move would make public your objections (as a reviewer) to certain manuscripts that actually get published, and would, I would guess, make Editors less likely to allow such articles through without significant revision.

    While there are many differences between open access journals and traditional subscription journals (free and unhindered access to the research and associated data, the right to reuse and redistribute it, no page-length restrictions, and authors retaining ownership of their work), they share many of the same motivations. All journals aim to build a reputation that makes authors want to publish in them. This is achieved by recruiting respected Editorial Boards and imposing high editorial standards, including stringent peer review.

    Our open access journals allow anyone to download and use the associated data files, making it easier to evaluate the research. We allow comments to be posted on and about any article, and are working on adding blog trackbacks to our articles so that those intending to read an article can see what the community is saying about it. These are all valuable forms of post-review.

  5. Gary Martin says:

    PUBLISH, PERISH, PREJUDICE?

    I share many of the views and sentiments already expressed on this blog, and almost undoubtedly feel even more strongly about several points that have already been made. In that vein, I’m going to make some comments on the numbered points already made by Tony.

    1.) Editors and scientific content – When a paper has multiple “publish as-is” recommendations and one outlying comment from a single referee that says the paper is too long, and the editor then REQUIRES that the paper be shortened DESPITE the “publish as-is” majority opinion, you have to ask WHY.

    A.) Is the opinion of that referee so highly regarded as to outweigh the other researchers who have weighed in on the merits of the paper?

    B.) Or is the Editor perhaps under page constraints imposed by the publishing “house” in question, which allow only some arbitrarily fixed number of pages per year for a given journal?

    I do know that the latter is, in fact, the case with some publishing organizations, the ACS in particular. Frankly, bluntly, and candidly, if editors are under pressure from the publishing organization to live within a page quota for a journal, the publisher ought to be called on the carpet for it… they’re arbitrarily tampering with the quality of science in the name of whatever the quota was based on. That sort of myopic viewpoint has no place in science! At the same time, the editor ought to be ashamed of seizing on a minority opinion and requiring authors to shorten papers when the majority recommendation is “publish as-is”, if it is nothing more than a convenient excuse to adhere to the annual page allotment from the publisher. This one irks me mightily!

    2.) As for the paper that Tony refers to that was “sat on” for many months, the total happened to be 9 weeks, and I was the senior author of that paper. It was submitted to a prestigious journal with the self-professed aim of having communications handled in an expeditious fashion and a decision rendered within 2 weeks. Two weeks vs. nine weeks? The former is expeditious – the latter handling is worse than deplorable, and not as good as what I routinely get from other journals for regular, full-paper submissions, much less for communications. Moreover, the editor in question at said prestigious journal had two reviews in hand when the communication was rejected – one “publish as-is” and the other “publish with minor revision.” Yet the paper was rejected by the editor because there weren’t comments from the five-plus reviewers to whom the communication was sent? Give me a break… can’t an editor, with the opinions of two competent referees in hand, decide whether cutting-edge science is worthy of publication and accept the paper? A topical editor not rendering an opinion is hiding behind the editorial process… How many opinions does it take to validate a piece of work? Obviously two competent scientists in agreement isn’t enough for some journals, when the editor could be the third opinion! But, then again, perhaps the rejection of the communication in question had something to do with the “pedigree” of the authors, the senior author being a high-level scientist within the pharmaceutical industry and the co-authors colleagues at a scientific software company? Are we not up to the caliber of the academics who have published communications ON THE SAME TOPIC IN THE SAME JOURNAL BOTH BEFORE AND SINCE WE SUBMITTED? One does have to wonder! Is this perhaps a case of editorial/journal prejudice?
A recent cursory scan of the contents of a random sampling of issues of the journal in question did not uncover a single paper where the senior author was an industrial chemist! Perhaps it’s just a quirk of the issues that I happened to peruse… who knows? Personally, I won’t bother to send anything more to said prestigious journal – they’re not worth my time or trouble.

    3.) Bad Science Is Bad Science – I know a number of journal editors personally. From talking with them, rejection rates at these journals frequently exceed 50% on the basis of scientific quality, the decision being made either by the editor from his own read prior to sending a paper out for review, or with the advice of selected referees. When a paper is rejected, it is incumbent upon the author(s) to have the scientific integrity to revise their work to render it suitable for publication, even if it is destined to be published somewhere else. Regardless of where something is going to be published, the comments and suggestions of referees should make the manuscript better. Has it gotten to the point that there is so much pressure on authors to publish that they’re willing to send the same crap, without any form of revision, to a journal with lower standards, whether a print or on-line journal, just so they can get something published without any more work? Is this about the numbers game or about doing quality science? There are good on-line journals out there, don’t get me wrong. The good ones have high standards and do a quality job of policing what appears in their on-line pages. On the other hand, seeing something you reviewed and recommended for rejection subsequently appear verbatim, completely unchanged from the version you rejected, is a clear indication that the commitment and scientific integrity of the author in question is less than stellar, and that the journal in which the substandard work appeared has not done any form of creditable review. It is very unlikely that one group of reviewers would reject a paper on the basis of poor science only to have some other set of reviewers unanimously recommend “as-is” publication, which is what this sequence of events would have required!
In general, the examples of this type that I’ve seen have involved papers rejected by an “in-print” journal that then appeared unchanged in an “on-line” journal. Are there other variants of this scenario out there? Almost assuredly, but I haven’t personally encountered them yet.
