June 1, 2017 / jpschimel

Citation Stacking: editorial misconduct & my response

Yesterday, I received a copy of an open letter from a group of junior European scientists expressing their concerns over “citation stacking”–the practice of reviewers and journal editors pressuring authors to cite papers that they themselves wrote or that were published in “their” journal. That letter is available at:

https://docs.google.com/document/d/1Xx08GNdeK97uLRQzRLStweTKIXC971mm4H4JorX0Iho/pub

One of the editors who was forced to step down for such activity denies that is what he was doing. Perhaps. You can see the analyses and arguments in the material cited in the letter. To me, the real idiocy of such behavior (assuming it occurred) is that it involves great risk for essentially no reward!

I do get paid as a Chief Editor of Soil Biology & Biochemistry, but the modest stipend (for processing 180 manuscripts a year) isn’t anywhere near enough to make it worth risking my reputation and career. And there’s no bonus for increasing the impact factor. When the “crime” is so petty and the rewards so minimal, why would you risk it?

Fundamentally, we all thrive in a healthy “ecosystem”–a publication system that works to provide high-quality and effective review. Such a system helped me to polish my early manuscripts into something worth reading, and so helped develop my career. As an editor, I am a steward whose job it is to maintain the health of that ecosystem and to pass it along to my successors in at least the condition in which I received it. That “ecosystem health” is not measured by the impact factor of the journal, but by the integrity of the editorial and review process.

In any event, I wrote a response and thought it would be worth putting it here as well as in an e-mail.

————————————–

Dear Colleagues who drafted this letter:

I have been a Chief Editor of Soil Biology & Biochemistry for 10 years. I can say with absolute certainty that I have never engaged in, and never will engage in, the kind of citation stacking discussed in your open letter. And I can add that I am confident that none of my colleagues who serve as Chief Editors of SBB do either. I can equally assure you that I have never received any hint of pressure from the publisher to manipulate citations. We all want to see the journal do well—but as long as we maintain the best editorial system we can, and publish the best papers, the journal will remain respected and highly cited. I would resign before I would ever cave to pressure to manipulate the process. I don’t get paid anywhere near enough to take the time, and risk, to cheat!

As a Professor, it is my responsibility to produce both science and scientists; I therefore have a vested interest in furthering the careers of my junior colleagues. Students and postdocs are typically the lead authors on the most important and novel work and I want to publish those papers, read them myself, and see them recognized. I try to carry my mentoring and career development responsibility into my editorial responsibilities.

Now, it has long been common that when papers are sent to reviewers (and editors) who are senior scholars in a field, they (we) may note several older papers that they feel are relevant to the current paper and suggest that the authors consider citing them. As an author, I have regularly received such suggestions. Sometimes those reviewers and editors are even correct about the relevance of those older papers! When that is the case, it serves effectively as one of the educational functions of peer review; as an author, I appreciate such comments. If the suggestion is off-base (and I have received, and probably written, such recommendations), authors should ignore it and explain why in their cover letter, or simply note the suggestions they have incorporated. But this is different from the citation stacking you describe. In any case, such “old-fart” recommendations don’t displace citations of the newer papers lead-authored by junior scholars.

Remember also that an introduction is not a “literature review”—its purpose is not to describe the state of knowledge in a field or to cite every paper ever written on the subject. Rather it is to define a specific gap in that knowledge. Thus, the papers you should cite are those that frame the boundaries of that gap. There may well be classic papers in the general field which are not necessary (or useful) in defining the edges of knowledge, so don’t cite them. A reviewer who wrote those papers may note them in their review—but if they don’t help frame the knowledge gap, you don’t need to cite them.

It is equally possible that some relevant recent work may not be cited because there is such a glut of publications that it is difficult to keep up with everything published. I have long held what I call “Schimel’s Law of the Literature,” which states: “You can either keep up with the literature, or you can contribute to it.” And I fully recognize the inherent contradiction in that statement—in fact, that is the point. I try to ensure we have cited the most relevant recent work when we submit papers from my group, but it is inevitable that we will miss some. To slightly misquote a classic saying: “To err is human—to forgive, canine.”

I have heard anecdotal accusations of authors being pressured to cite more papers from the journal they are submitting to, which you appropriately condemn, and I have seen some of the reports documenting such behavior you cited. But I have never experienced such behavior personally, and absolutely will not carry out such practices as an editor. Papers are cited in SBB ultimately because the authors felt they deserved to be cited.

As it has come to matter less where we publish (I think I have searched for a topic, downloaded a paper, read it, and cited it without ever noting where it was published!), publishers (and, apparently, some editors) have grown more sensitive to metrics and standing. “Journal quality” actually mattered more in the hardcopy era, when we went to the library to check the tables of contents of a limited number of top-tier journals. This reflects the fallacy of journal “impact factor” battles—journals aren’t “good” because they have high IFs, but because they maintain strong editorial systems, which filter out the papers that aren’t appropriate and work to perfect those that are. Real “quality” lies with the individual paper, and that remains in the hands of the authors.

Scientific publication involves discussion and mutual education among authors, reviewers and editors. That process should work (and I believe mostly does work) to polish manuscripts into strong papers. In my experience, such misbehavior as you call out remains exceedingly rare. As both reviewer and editor I may suggest many things to authors—to think about the problem differently, analyze or interpret the data differently, and even consider (and cite) some literature they may not have been aware of. Those are all things reviewers are supposed to do. But I do not and never will pressure authors to shift their citations just to increase either my own citation numbers or the citation levels of my journal.

