January 30, 2022 / jpschimel

“Multifunctionality”: Why do we use a term that has problems both scientifically and linguistically?

“Multifunctionality” was first used as a term in ecology, apparently, by Hector and Bagchi (2007) to recognize that “most ecosystems are managed or valued for several ecosystem services or processes.” Since it was introduced, the term has gained traction, with a rapidly growing number of papers using the term and developing indices to measure it (Garland et al. 2021). 

Multifunctionality is, in some ways, a great concept—it is important to recognize that we rely on ecosystems to provide multiple services, and that different organisms, or groups of organisms, may be responsible for different vital functions. This means that preserving biodiversity isn’t just about ethics, but about preserving Earth’s life support system. But equally, “multifunctionality” is a challenging concept, because any term that encompasses everything, equally, can mean nothing. Its power is in its overarching breadth, but its challenge is in its lack of specificity and concreteness. Multifunctionality indices capture everything—from soil nutrients and biochemical variables to plant and animal communities—and are calculated using a wide array of mathematical normalization approaches to account for data types that have wildly different units.

A “multifunctionality” index incorporates whatever you throw into it, like a pot of soup. There might be onions, carrots and beef, or there might be potatoes, leeks and cream. They’re both soup, but they’re different. If I go into a restaurant and ask a waiter for “soup,” they’ll ask what kind of soup I’d like. If I say I’d like the “soupiest” one, they’ll probably call the police to report a crazy person.

For “soup” to be useful as a concrete entity, rather than as an abstract concept, I need to pull it apart—I need to ask for either the beef soup or the potato leek soup. It is the same with “multifunctionality”—it becomes concretely useful only when we pull apart the index to explore what it means. In that way, it is a terrible concept, or at least a terrible term—I can’t just say that a site has a MF index of 42. The index only gains meaning when I identify that it is what it is because N cycling processes are active, but P turnover is slow and plant diversity is high. In contrast, another site with the same overall index is P-rich, but without an N-fixer, its N dynamics are constrained and its plant diversity is low. Part of the problem with the term “multifunctionality” is therefore scientific and semantic—its definition is vague and amorphous, and the metrics are diverse. Hence, it is difficult to compare or relate one site or study to another. Multifunctionality indices are not intercomparable.
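To make the comparability problem concrete: one widely used approach builds the index by z-score-normalizing each function across sites and averaging the results. A minimal sketch—with made-up function names and values, not data from any real study—shows how two sites with opposite ecologies can collapse onto the identical index:

```python
import statistics

def multifunctionality_index(functions):
    """Average of z-score-normalized function values, one common recipe.

    `functions` maps function names to lists of raw values across sites;
    returns one index value per site.
    """
    # z-score each function across sites so the wildly different units cancel
    normalized = {}
    for name, values in functions.items():
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        normalized[name] = [(v - mean) / sd for v in values]
    n_sites = len(next(iter(functions.values())))
    # the index is the mean of the normalized functions at each site
    return [statistics.mean(normalized[name][i] for name in functions)
            for i in range(n_sites)]

# Hypothetical sites A, B, C with mirror-image nutrient dynamics:
funcs = {
    "N_cycling":  [10.0, 2.0, 6.0],   # site A fast, site B slow
    "P_turnover": [1.0, 9.0, 5.0],    # site A slow, site B fast
}
idx = multifunctionality_index(funcs)  # -> [0.0, 0.0, 0.0]
```

Sites A and B have opposite profiles—fast N cycling with slow P turnover, and the reverse—yet both (and the intermediate site C) score exactly 0. The single number erases precisely the information that makes the sites ecologically different.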

The other part of the problem, however, is linguistic, or even political. We use loaded terms like “multifunctionality” to carry concepts we view as important to our professional peers but also to speak to Society more broadly. This is a pattern we have seen before—we’ve developed terms that capture broad scientific concepts and that we might hope would breach the wall of social consciousness. Two notable such terms are ecologists’ favorite “biodiversity” and soil scientists’ favorite “soil health.” Such terms have traction in public discourse, but are widely recognized as being problematic within the scientific community. 

When we talk about “biodiversity” to non-scientist friends they may think they understand what we’re talking about. It’s our scientist friends who won’t! They’ll ask: “What do you mean by ‘biodiversity’? Do you mean alpha- or beta-diversity? Species or functional?” We know there are many ways to characterize “biodiversity” and that we must define our metrics. Unless we do so, “biodiversity” has little scientific value, even while it retains political and social power. 

“Soil Health,” on the other hand, is even worse. It’s a great term for interacting with the public and with policy makers—everyone can readily get behind the idea that we should have healthy soils. It has gained substantial public visibility as a win-win ideal that enables farmers and environmentalists to ally. But what is a healthy soil? The USDA doesn’t even deign to offer a definition on its web page (USDA Soil Health).

But before “soil health” became a dominant term, soil scientists discussed “soil quality,” for which the USDA offers the following definition: “Soil quality is how well soil does what we want it to do” (USDA Soil Quality); a definition so encompassing and variable as to be effectively meaningless, but it at least purports to be a definition. Earlier discussions generally focused on “soil fertility,” which the U.N. Food and Agriculture Organization defines as “the ability of a soil to sustain plant growth by providing essential plant nutrients and favorable chemical, physical, and biological characteristics as a habitat for plant growth” (UNFAO). That provides a clear vision for how to measure it. But “fertility” is less useful politically, because it sounds like an issue for farmers rather than city dwellers—if your soil is infertile, add fertilizer!

Why do we migrate away from concrete, definable, terms to vaguer ones? As scientists, we adopt more encompassing terms because we know we can’t examine each piece of a system in isolation; our concepts have become more holistic. But also because, as we engage with a non-scientist public, we must use terms that resonate in “common language.” When we speak in our technical scientific terms, we speak in “jargon”—meaningless twittering—and run the risk of being ignored. 

That is why “soil health” has been so powerful as an idea. It is built from words that are deeply rooted in common language. “Soil” entered English with the Norman conquest, while “health” predates the conquest—it’s Old English (OED). The roots of “biodiversity” are also deeply grounded in English—“diversity” goes back to Middle English, while “bio-” as a combining form is well established (OED).   

But “multifunctionality” has all of the scientific problems—it’s amorphous and undefinable—and none of the public communication virtues of “biodiversity” or “soil health.” It’s a word with no existence in common English! It first appeared in 1953, in a chemistry paper referring to an enzyme’s active site (OED). Even the word “functionality” only appeared in English in 1836.

“Multifunctionality” is a term that was coined to be jargon. It has limited specific value within the ecological community and less with the public. Why, then, do we use the term and keep devising ways to define and measure it? The resulting indices will never be comparable, and they will never be understandable.


UNFAO: Food and Agriculture Organization of the United Nations

Garland G, Banerjee S, Edlinger A, Miranda Oliveira E, Herzog C, et al. 2021. A closer look at the functions behind ecosystem multifunctionality: A review. J. Ecol. 109: 600–613

Hector A, Bagchi R. 2007. Biodiversity and ecosystem multifunctionality. Nature. 448: 188–90

OED: Oxford English Dictionary

USDA Soil Health

USDA Soil Quality

October 6, 2021 / jpschimel

Writing Science Getting Started Group Exercise

Getting Started Exercise on Storytelling & Story Structure
Introduction to the key elements of story structure: The OCAR Model

Identify the key points that frame your story: this should be the core of either a proposal or a paper you are working on, but which one is not important—this is an exercise in communication training. How do you tell the story? How do you emphasize the essential points?

In every story (whether fiction, journalism, or science) there are four critical elements: Opening, Challenge, Action, and Resolution. For each of these elements, write a separate short paragraph—no more than 2-3 sentences. 

The OCAR Elements

1. Opening: This should identify the larger problem you are contributing to, give readers a sense of the direction your paper is going, and make it clear why it is important. It should engage the widest audience practical. The problem may be applied or purely conceptual and intellectual—this is the reason you’re doing the work. 

2. Challenge: What is your specific question or hypothesis? You might have a few, but there is often one overarching question, which others flesh out.

3. Action: What are the key results of your work? Identify no more than 2-3 points. 

4. Resolution: What is your central conclusion and take home message? What have you learned about nature? If readers remember only one thing from your work, this should be it. The resolution should show how the results (Action) answer the question in the Challenge, and how doing so helps solve the problem you identified in the Opening.

In a full paper or proposal, the text linking these elements is important—for example, the body of the introduction shows how you get from the large problem in the Opening to the specific questions in the Challenge. Worry about that another time: if the landmarks of the route are clear, we don’t worry about each twist in the road—Mapquest only bothers telling you when to turn.

For the target audience, aim for scientists who are not just your most immediate disciplinary peers; aim a little wider. Who might benefit from your work? How do you communicate to them? For example, if you study the genetics and physiology of plant secondary metabolite production and how it influences fungal leaf pathogens, might this also be useful to people who study herbivory or litter decomposition? If so, could you broaden the opening to define the problem to bring in those ideas? 

In groups of 4 or 5 people, discuss your pieces. Are they clear? Do they frame a story you’d want to read? Does the language engage a wide community or is it narrow jargon? How might you improve them? Everything is fair game. 

In offering feedback, remember that you are trying to help your friends and colleagues develop and learn. You need to be critical, but supportive and constructive. Highlight what is good but don’t shy from pointing out what doesn’t work or could be stronger. Identify why you think what you do and how you might improve things. It’s better to say “this doesn’t work for me” than “this doesn’t work;” the former describes your personal reaction, which is always valid; the latter criticizes the piece, which might not be universally valid. What doesn’t work for you may work for others; figuring out how different readers respond helps authors learn. The analysis part of this exercise is as important as the writing part: part of becoming a good writer is becoming a good reader and critic. 

July 15, 2019 / jpschimel

Why does it cost $2000 to publish a paper? Or: the fiction of an “Article Processing Charge”

In the debates in academic publishing about subscriptions, pay-walls, and corporate profits, I often get puzzled responses from my colleagues and friends in the humanities and social sciences: “We run a journal out of our Department and it’s essentially free; editors and reviewers don’t get paid, so why is it so expensive for you?” I had a recent question from a colleague who was perplexed at my comment that it costs almost as much to reject a paper as it does to accept and publish one.

I think some of these views come from some deep-seated legacy from the printing days where running the presses was an obvious cost: copy-editing, printing, and mailing out hardcopy issues of journals. But that is no longer true—we don’t print journals and send them to libraries; we merely post them to the web. So why is it expensive?

What are the costs in modern academic publishing? The academic staff (reviewers and editors) generally do not get paid—we manage most editorial jobs as professional service[1]. Actual human copy-editing is mostly a thing of the past. Converting a raw manuscript into a formatted article is automated. So it seems like there are almost no specific per-article costs. So, shouldn’t publishing be almost free? Why do articles cost several thousand dollars to produce and publish?

The simple reason is that the main costs aren’t per-unit costs, but maintaining the technology operation to support processing articles. To produce an article, an author has to submit a manuscript—so the journal must maintain software to take in those manuscripts and allow editors to manage the review process. Managing editors have to assign submissions to handling editors, who have to select and invite reviewers. So there has to be a reviewer database, which will maintain contact information and reviewing history[2]. That database must keep track of invited reviewers and their comments through multiple cycles of revision and rereview. That must then interface with a system to take accepted manuscripts and do layout to convert those files into actual “articles” that are ready to post as published: formatting text into columns, getting figures and tables into appropriate locations, etc. And then you need to maintain the journal website to make available information about the journal (and how to submit papers) and to host the published articles. All that calls for complex applications that are effectively linked with each other—a lot of sophisticated IT that has to be developed and maintained by knowledgeable and skilled software engineers.

Then that software requires hardware support—not only to host the review systems, but also to host the journal and make the final products available, as well as to maintain the archives. That used to mean library bookshelves to archive published work—now it means server farms. Maintaining that hardware requires another set of engineers.

Those people need to get paid, as do the electricity bills for the server farms, and we need to take in money from subscriptions or author payments, so we need a financial services team—accounting, auditing etc. Then, let’s not forget the lawyers—we are dealing with business so we need legal services, not only to support management, but to help with academic conflicts: post-hoc battles over authorship (complaints that someone was left off), accusations of plagiarism, etc. Such cases are rare, but they do occur, and the bigger the operation, the bigger the needs.

To coordinate these different groups, we need professional management, led by a skilled business administrator. And of course, all these elements must be physically housed somewhere so there are costs associated with the real estate—offices and server farms.

Once you have the infrastructure in place, then you can consider the actual academic team that handles the manuscripts and decides whether to publish them. As I noted, the actual editorial costs are frequently minimal, because we rely mostly on volunteers to serve as reviewers and even as the associate editors who handle a paper or two a month.

Thus, when my colleagues say that they run a journal out of their department for “free” or on just a shoe-string budget, relying on sweat-equity and volunteerism, they are deceiving themselves [3].

Or perhaps, more realistically, they are only considering the academic work associated with the journal. In fact, in such cases, most of the operational and staff costs are covered by their department or their university, which provides the hardware infrastructure to host the journal, as well as maintaining the financial and administrative systems that allow the journal to exist. The direct costs remain small because they are handling a limited number of manuscripts, and most of the indirect costs are invisible, absorbed by the institution, including the salaries of the involved faculty.

But the department-supported model fails when submission numbers increase to a level at which academic departments can no longer subsidize the operation. As the scale of the operation increases, the scope of the IT enterprise grows, and with it, the administrative overhead. There are thresholds of size and complexity at which managing the operation requires new professional people and new layers of organization.

As you move into the international, natural science universe, submission numbers and expectations for processing speed also rise. The journal on which I have been a Chief Editor (Soil Biology & Biochemistry) receives well over a thousand submissions a year, as do the journals of the Ecological Society of America. Yet, these are modest operations compared to a monster like the Journal of the American Chemical Society (JACS), which publishes almost 20,000 pages of material each year[4].

We are now seeing a shift in financial models. In the classical model, libraries paid for subscriptions that covered production costs, making publishing free for authors[5]. The new model makes articles free to read (i.e. open-access) but production costs are covered by those authors.

The current versions of open access publishing charge authors of accepted articles an “article processing charge” (APC). But this terminology can be deceptive, if not downright dishonest. An APC is in no way the cost to an author to process their paper. Rather it is merely their fractional share of the total costs to manage the journal. Many of those costs (notably managing submission and peer review), however, are imposed by rejected papers.

Consider a scenario where a journal receives 2,000 submissions a year, but accepts only 30% of them. In an open-access model, the authors of the accepted 600 papers cover all the costs of supporting the entire operation. Existing APCs often run around $2,000[6]. Those numbers suggest the real cost of running our hypothetical journal would be $1.2 million. Of that, only a modest portion would be associated with either doing layout for the accepted papers or hosting them to make the journal publicly available (i.e. article-specific costs); the bulk of the total is associated with maintaining infrastructure and processing manuscripts, most of which are ultimately declined.

If you calculated an article processing charge that applied to every paper handled by the journal, rather than just those ultimately published, that charge would be only $600[7]. But could you do that—charge an APC to submit a manuscript, rather than to publish an article? That would make submitting a paper an expensive lottery ticket—pay $600 to have an article reviewed and considered for publication, but with no refunds if it were rejected. Would anyone agree to that? I’m sure many would scream about the ethics of it: “You charged me $600 and didn’t even accept my paper?” Would anyone ever believe that journals weren’t just increasing rejection rates to make more money?
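The arithmetic behind these two charging models is simple enough to sketch (the numbers below are the hypothetical ones from the text, not any real journal’s):

```python
def apc_models(submissions, acceptance_rate, apc_per_accepted):
    """Compare who pays under two ways of allocating a journal's costs.

    Assumes, as in the text, that the APC on accepted papers must cover
    the entire operation; all figures are illustrative.
    """
    accepted = round(submissions * acceptance_rate)
    total_cost = accepted * apc_per_accepted   # e.g. 600 x $2,000 = $1.2M
    per_accepted = total_cost / accepted       # pay-to-publish model
    per_submission = total_cost / submissions  # pay-to-submit model
    return total_cost, per_accepted, per_submission

total, per_acc, per_sub = apc_models(2000, 0.30, 2000)
# total = 1,200,000; per accepted paper = 2,000.0; per submission = 600.0
```

The per-submission figure is lower only because it spreads the same $1.2 million across all 2,000 manuscripts; the journal’s total cost is identical either way.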

The alternative, when authors pay only if their articles are accepted, creates different dilemmas. One is the expansion of predatory journals, which will publish pretty much anything as long as an author pays. They become a form of “vanity” publishing, where authors pay to gain a measure of credibility for papers that deserve none. I’ve heard from colleagues who provided cogent arguments for rejecting a paper, only to see it in print later, effectively unchanged. There is also an ethical question: why should I pay to process someone else’s manuscript? Why should I pay $2,000 to publish my article when most of that money really goes to subsidize processing papers that are rejected? Shouldn’t they pay their share of the costs? Or for a really substantial ethical question: who should cover the APCs for authors who can’t afford to pay? As I noted in a recent post, many papers are produced after grants expire or by authors whose institutions or nations lack the wherewithal or inclination to cover those costs. Do those authors get shut out of publishing at all? It isn’t “open access” if authors can’t publish.

We are in the midst of a revolution in how to pay for academic publishing, but there still remain deep questions of how to spread the costs among readers, published authors, and authors of rejected papers. I see no truly fair way to allocate costs among these groups in an open-access publishing model. But the costs are real—$2,000 per published paper is a realistic value, even when the publisher is non-profit and when most editorial and review work is done as unpaid professional service. Just as important, this shift concentrates costs onto authors: in the subscription model, institutions that did little research still subscribed to a suite of journals (e.g. the California State University System or my alma mater Middlebury College) and so supported the costs of producing journals. They have no responsibility to support journals in an APC-based funding model.

Academic departments and institutions may choose to absorb those costs and so to subsidize publications, but as journals grow from individual small operations that a single academic department might host to the larger journal families produced by professional societies or by commercial publishers, those costs can’t be subsidized. The Ecological Society of America (ESA) publishes five journals and doing so was bankrupting the Society until it partnered with Wiley to streamline costs. The American Chemical Society, in contrast, publishes 61 journals and is one of the larger academic publishers in the world!

Producing journals, the core of academic societies and professions, requires knowledgeable, professional IT, financial, and management people, people who have to get paid, even when the Editorial team works as volunteers. That money must come from somewhere, and ultimately there are only two possibilities: the producers or the consumers of academic articles; authors or readers. Historically, we’ve relied on readers—libraries subscribed to journals. Academe and Governments are increasingly requiring that we shift to relying on authors who pay APCs.

Yet, most of the cost to publish papers is not actually associated with producing any specific article, but with maintaining the infrastructure—the systems that take in and review the manuscripts. In an APC model, the more selective a journal becomes, the higher the costs to those authors whose papers are ultimately accepted.

This creates new patterns of stress and obligation that will migrate into academic careers in new and complex ways, ways I am not confident our institutions have envisaged. If I choose to publish in cheaper journals, which might have higher acceptance rates and hence be less prestigious, will my colleagues hold that against me at tenure or promotion? If I have a paper that deserves to go to a high-profile, high-impact, expensive journal, will my University cover those fees? Will the Institutions that are pushing the drive to author-pays, open-access publishing models ensure I will always have the resources to publish in the best journals for my work? Personally, I lack confidence. Libraries have had centuries to develop, in their DNA, a vision that their job is to make material available to readers. That they will now have to transition that vision to recognizing an equally fundamental, financial, responsibility to support their institutions’ authors, will not, I fear, come easily.


[1] Typically the Editor-in-Chief or the team of Chief Editors who run the journal do get paid, but this is effectively a stipend rather than a salary—it probably rarely exceeds 1-2% of the total costs of running a journal.

[2] It’s important to know whether you asked that reviewer to review another paper last week! I can’t keep track of even just who I invite, but the journal Ecology has over 100 associate editors who are all looking for reviewers; before inviting a reviewer, you need to make sure that they aren’t already handling manuscripts for other editors.

[3] Another colleague who runs a humanities journal relies on donation and grant funding to cover costs—it comes to $1500 per published paper.

[4] And JACS is only one of 61 journals published by the ACS.

[5] Although many journals produced by professional Societies have used a mixed funding model, with page-charges in concert with subscription fees.

[6] When the Ecological Society started Ecosphere the APC was $1,250, although it was later increased to $1,500. It was only ever as low as it was because it was partially subsidized by ESA’s existing management and financial structure, which was already in place to support the Society’s “traditional” subscription-based journals. If Ecosphere were a free-standing operation, the APC would have been higher.

[7] You could separate the costs associated solely with accepted papers—e.g. the layout and journal hosting systems—so that there might be a $500 submission charge for all manuscripts, with an additional $333 publication charge for accepted papers only. But while that would reduce the cost to publish, it would still be a high price to submit.


May 2, 2019 / jpschimel

Student Career Paths

Yesterday I was sent a questionnaire about the career paths that students take after completing their Ph.D.: Research faculty, Teaching faculty, or “non-Academic.” It explored my attitudes, my thoughts on what I think their attitudes are, and even what I think students think my attitudes are.

As I looked at the questions and my answers, they probably looked pretty incoherent. My personal preference for my students is the research track, but I struggled to get across my rationale. I’d guess most people would assume I prefer my students take the research track out of selfishness—an ego-driven motivation that expanding my “clan” grows my reputation and standing. Well, my motivation is definitely selfish. But it’s not about my ego; it’s actually more purely narrow and personal than that—I like my students and I want to maintain and develop that relationship.

By the time my students have finished, we have worked closely together for years and I have come to like every one of them. My first students, at the University of Alaska Fairbanks, were also the people I hung out with and they were true friends. As I’ve gotten older, my finishing students increasingly are what I might call “proto-friends;” the supervisory relationship forces some personal gap and the age difference magnifies that. But once a student adds Ph.D. to their name, the supervisory relationship is over and done. That allows a real friendship to develop as we’re now peers. But for that to happen, we must stay in touch.

When students go into research careers, we attend the same conferences: Ecological Society of America, AGU, Soil Ecology, etc. I see them regularly, which allows friendship to develop; several former students have become among my closer friends. I feel the same about my Ph.D. advisor who I consider a true friend as well as a valued colleague and mentor.

When students go into research careers, I benefit, directly and personally. When students go into other paths, the chances are good I may never see them again. I’m Facebook friends with Mitch Wagener, who I co-advised at UAF, and I still feel very connected with him, but he took a teaching professor position at Western Connecticut State University; I haven’t seen him in person in 25 years. I only get to experience his quirky humor and watch his beautiful daughters grow up on my computer screen. That doesn’t make me happy. Another former student, Sophie, works for the Nature Conservancy in California—I hear of her through professional connections, but I haven’t seen her in years either. So, yes, my preference for research careers is deeply personal and completely selfish.

But equally, that selfishness has nothing to do with my professional valuation of those career paths. Nothing. Truly. My job as an advisor and mentor is to help my people get to the place that is right for them. What that means for me personally is irrelevant.

I want my students to be successful and to do well. But success is defined from their perspective: it is about being happy, productive, and feeling like you are doing good in your life; not necessarily by getting rich or by reflecting glory on your advisor. Whatever career path makes my students happy and satisfied is great and is what I want for them. Both Mitch and Sophie have built rich lives that make them happy—and that makes me very happy. My job is to help my people achieve their goals as best I can. I try to live up to that responsibility. But, yes, I like it when that means that, as the years go by, my former students stay more than just a line on my C.V.

December 9, 2018 / jpschimel

“Open Access” or “Pay to Play”

This is an e-mail from the lead author on a paper we have just had accepted in Ecology & Evolution—an Open Access Journal:

“Hi everyone, I still haven’t gotten a request for payment on our Arctic paper, but the charge should be $1950; so this is when I send around the collection hat! Right now the only funds I have are for a separate desert-focused project. Unless some of you have the money to cover it, it seems like my other option would be to scrounge around my University to find a way to pay it. Do any of you have experience with similar situations in the past—where the paper comes out after the project money is gone?”1

Papers often are accepted after a grant has expired and so this situation is not uncommon: co-authors trying to figure out who can bootleg the funds to cover the publication cost.

Globally, the rate of scientific publication has been increasing at ca. 3% per year2, and given my experience as a journal editor, I suspect that means that the rate of submissions has been increasing even faster3. Hence the total cost of producing literature is increasing. The ultimate questions are: Who carries that cost? and How do we pay?

The “pass the hat” model illustrated above, I would argue, is probably the worst—what would we do if the hat came up empty? We need to balance our concern with “open access for readers” with “open access for authors.” They are equally important. In fact, the latter may be more important: if I can’t afford to publish a paper, then no one will ever be able to read it.

Several years ago, the Ecological Society of America partnered with Wiley to produce the Society journals—doing it ourselves was bankrupting the Society. By partnering with Wiley, we were able to hold the overall price of the journal steady, but Wiley’s size gave them an economy of scale that allowed them to produce the journals at lower cost. Every major professional society I know has partnered with a large publisher for the same reasons: we needed their services as major IT companies to be able to manage the submission, review, and production process at an affordable cost.

We may object to the profits that Elsevier brings in, but given the shift in ESA’s cost structure upon partnering with Wiley, I suspect that if we were to re-fragment academic publishing to avoid dealing with corporate publishers, the loss of efficiency might well outweigh their loss in profits; thus, the total cost of producing scientific literature could well increase.4

Remember that journals provide not one, but two, essential services:  publication and peer review. Readers rely on “brand names” to give them some confidence that what they are going to invest their time in reading has met some appropriate thresholds of quality5. As authors, we rely on peer review for our own quality control. I’m both a “highly cited researcher” and the author of the book Writing Science; I write good papers. Yet, I’ve never had a paper accepted as submitted; in fact, I’ve never heard of it happening. My papers have all been improved, and some transformed, by thoughtful and critical review. No matter how good we are, review makes our work better. Peer review is essential and someone has to maintain that process: publishing isn’t just about posting documents to the web; it’s about editing.

Authors also increasingly want more services associated with publishing (e.g. data archival) and they want it faster and better. Maintaining the tech systems for managing manuscripts and review is becoming more streamlined and efficient so the per-unit cost may drop, but the number of units keeps growing! Managing a large technology enterprise, which is necessary to handle the manuscript flow, is expensive. In the humanities, I understand there remain journals that are managed by a single university academic department, but such “let’s put on a play in our backyard” kind of production cannot fly in the natural sciences, where journals get thousands of submissions each year and authors expect decisions in weeks.

I don’t know what Elsevier’s cost structure looks like; I do know they are behaving like every other large tech company in growing and buying up useful subsidiaries. In that, they are no different than Google, Apple, or Microsoft.

Those profits offend many of my colleagues—but why? We don’t have a problem buying chemicals from Fisher, computers from Apple, or cloud data storage from Microsoft. All these corporations make large profits—Thermo Fisher made a profit of over $550 million in just one quarter this year!6 Why is publication different?

We see commercial publishers differently because we “give” the publishers our work (as authors and reviewers); they then own the copyright and charge us to access our work. That seems wrong when we can post documents to the web for free. But frankly, I don’t give a damn! The copyright has little value to me. I rely on journals for peer review and for giving my work visibility. Open access might increase the public’s ability to read my papers—but mostly I publish specialized soil ecology!7 The people who want to read my papers can—at the least, they’ll see the titles on Google Scholar and even without a subscription, they can email me for a pdf.  And you could pay me to review, but then you’d have to add that to the overall production cost8.

My concern remains the core bottom line for scholars: how do we most effectively ensure that we can get our research reviewed, validated, and published efficiently, quickly, affordably, and well? Having to pass the hat to pay publication charges doesn’t achieve that. Paying $2,000 to publish a paper may not seem a big deal if you have a $1 million grant running, but when the grant runs out, $2,000 is a lot of money.

Unfortunately, as philosophically appealing as “Open Access for Readers” is, it can translate into “Pay to Play for Authors.” And that is a real problem.


1 I’ve slightly condensed this, and added a few words (e.g. Arctic, and the actual listed publication cost) to clarify. The core message is unaltered.

2 The STM Report: An overview of scientific and scholarly journal publishing. International Association of Scientific, Technical and Medical Publishers.

3 I am a Chief Editor at Soil Biology & Biochemistry (which is owned by Elsevier). Our submissions have been growing steadily, with much of that increase coming from China. Yet, for several reasons, the acceptance rate of papers from China remains below that of papers from the U.S. and Western Europe. Hence submissions have grown faster than published papers—and from the manuscript management and review side, it is submissions that take work and cost money. But there is no article submission fee. Hence the “hidden” costs have grown with the growth in Chinese science.

4 Or maybe not—according to Michael Hiltzik, author of “In UC’s battle with the world’s largest scientific publisher, the future of information is at stake” (LA Times, Dec. 9), “Elsevier makes profit of about 30% on all its business, and operating profit of 38.9% on its STM journals.”

5 Remember what happened to the music industry when streaming and Napster came along. While it created “open access” for anyone who could produce a track or a video, it was time consuming to sort out the good from the bad.


7 For fast-breaking, publicly relevant science, immediate public access is important. But for that kind of work, we do submit to journals like Nature, Science, or PNAS that journalists look at. Most of what we ever publish, however, is targeted at our academic peers in specialist journals and will never get read by “the public.” If someone wants a copy of a paper, it’s both easy and legal to send them a pdf.

8 I’d guess that a “reasonable” honorarium for reviewing a paper would be $100. That might be nice, but it isn’t enough to change my life or even my decision as to whether to review a paper. But every paper needs at least two reviewers, sometimes papers need several rounds of review, and sometimes the original reviewers are unavailable. I’ve handled papers that ultimately were reviewed by 5 people. Thus real production costs would increase by at least $200, and possibly by as much as $500 or more (would you get $100 for each round of re-review?). On top of a base cost of $2,000, that is a lot. $100 may be “small beer” for a reviewer—but it translates into a big hit for the authors. How is that a “victory” for anyone?



August 23, 2018 / jpschimel

“Othering” by Implication

I initially included this post as a postscript to my piece about Environmental Research Letters’ recent announcement that they now publish “evidence-based reviews.” I commented that I was shocked and even offended, because it had never occurred to me that there could be any other type of review in a science journal: “non-evidence-based reviews”? Those, of course, do not exist.

When you create a new category (of people or things), you never create only one—you create two: the in-group and the out-group. This is a form of “othering,” which the Oxford English Dictionary defines as “to conceptualize (a people, a group, etc.) as excluded and intrinsically different from oneself.”

When you create a new group identity, it is at best naïve to ignore what that suggests about the people or things that remain outside. If your group name implies “better,” the out-group, now the “worse” group, will inevitably, and justifiably, feel offended.

Not every case of othering by title, however, implies better. Sometimes the problematic implications are less obviously prejudicial.

We had such a case recently at the University of California, where we have a category of faculty titled “Lecturer with Security of Employment” (LSOE). For those who know how lecturers are often treated in universities, that may sound like indentured servitude, but in fact LSOEs are full faculty with no requirement to do research. Their primary focus is teaching, and their job is thus much like that of a professor at a liberal arts college. LSOEs are members of the Academic Senate and are on pay and benefit scales that parallel those of Professors. SOE is effectively tenure; before that, the title is lecturer with potential security of employment. We value LSOEs, and we wanted a title that better expressed that.

The obvious title was “Teaching Professor” but here is where we ran into the “evidence-based” conundrum in defining new categories: if some people are “Teaching Professors,” what are the rest of us professors? Would we be, by implication, “non-teaching professors”?

That, of course, isn’t true—teaching is an organic responsibility of being a UC Professor. We worried that implying that regular professors don’t teach could feed the public’s worst misconceptions about the University! Creating the formal title of “Teaching Professor,” we feared, could backfire and damage UC. We settled on a compromise: LSOEs can unofficially call themselves “Teaching Professors,” but the official title remains LSOE.

We do have “Research Professors” who have no teaching obligation, which is partly why “Teaching Professor” seemed an obvious title, but research professors are typically soft-money positions, supported off research grants. And there, the flip side does no public damage: if you’re not a research professor, the implication is simply that you teach.

Language is tricky—it casts light on things, but in so doing, creates shadows. We interpret both. When you create terms that cast light on some people, you necessarily “other” others. So be sensitive to the language and the likelihood of offense. Consider not just the light you cast, but everyone else who will suddenly feel themselves in shadow.

August 23, 2018 / jpschimel

“Evidence-based review”?

I got an e-mail this morning from Environmental Research Letters (ERL) proudly announcing that they now publish “evidence-based reviews.”


I was initially stunned, then horrified by their choice of language. If their reviews are “evidence-based” what are everyone else’s? I always understood that for something to be science, it had to be based on evidence! The alternative to an “evidence-based review” is a review not based in evidence? But by definition, that would not be science—it would be science fiction.

It seems that what ERL may be emphasizing is more along the lines of meta-analysis, in which the review is a formal quantitative analysis of specific data-sets. If so, yes, that is different than qualitative or conceptual analysis of existing knowledge and understanding. If you want to know how much the Earth’s temperature has increased over the last 50 years, there are many datasets to synthesize, and a conclusion must use a formal analytical structure that provides clear rules for what is included or excluded. But that is no more “evidence-based” than a “traditional” review that synthesizes existing understanding of a topic. I’ve written a number of such reviews and I maintain that they are deeply “evidence-based;” I’m sure that the reviewers and editors who handled those papers would agree.

So why did the ERL editors choose the term “evidence-based review”? A term so loaded that I’ve been stewing over it for hours, and that motivated me to write a blog post?

I can postulate three, not mutually exclusive, hypotheses. First, but I suspect least likely, is that they did intend to disparage the more traditional conceptual approach to synthesizing knowledge and literature. Perhaps the editors feel that this approach is too subject to individual interpretation. But all datasets are subject to interpretation and that is what peer review is for: to ensure that contributions are robust, sound, and accurately reflect the evidence.

More likely would be that they simply fell into a “Curse of Knowledge” trap—they knew what they meant by “evidence-based,” and did not see that it might be viewed differently by others. Such problems plague communication and are hard to avoid because it is hard to know what others know and think.

I have more sympathy for this explanation, but only a little because this should have been easy to foresee and avoid. If you create a new category of “evidence-based” review, you obviously and explicitly suggest the existence of “non-evidence-based” reviews—something I never dreamed could exist until I got ERL’s e-mail. This is a form of “othering” that I find very problematic. I can only hope that the Editors of ERL were looking for a simple, positive term to define a new category of reviews, and didn’t adequately consider the implications of their language choice.

My third hypothesis recognizes that ERL’s Editor-in-Chief is Dr. Daniel Kammen. Dr. Kammen is an eminent scientist who works extensively at the interface of science and environmental policy. In the U.S., there is increasing focus in policy decisions on distinguishing inputs that are based on real evidence from those based on pure opinion. ERL is a journal that aims to publish science that will be relevant to environmental policy decisions. Hence, perhaps there is a need to more effectively identify science as being evidence-based. So voilà: “evidence-based reviews”! In the Journal of Public Policy, I wouldn’t object to this, because in policy the distinction between data-based and expert-opinion-based input is important.

But if that hypothesis is correct, the appropriate response for ERL, a pure science journal, should not be to flag some publications as being “evidence-based,” and so to suggest that there is an alternative (are they going to have evidence-based research papers?), but to more effectively highlight that “If it isn’t evidence-based, it isn’t science” and that ERL only publishes science.

I can believe that the decision to use the term “evidence-based” might reflect Dr. Kammen’s experience at the science-policy interface in the era of “Fake News.” If this is true, though, I am still deeply disappointed in the journal’s choice of terminology. I very much hope that ERL will find a better, more suitable term to describe what they are looking for.


May 1, 2018 / jpschimel

How to write an effective proposal review

In a recent post, I discussed how to do a good manuscript review. I analogized that to battlefield medicine, where the first step is triage: determine whether the patient can be saved. But the truly critical step is the second one: treatment. If the “patient”—the paper—has potential, then your job as reviewer is to help make it as strong as possible. Submitted manuscripts always need revision and editing to reach their potential. Peer review provides a service to journals in their decision making, but the greater service is the one we provide each other.

Proposal review is different. It is almost entirely evaluative, with essentially no “treatment.” We don’t review proposals for their authors, but for the funding agency. That ultimately serves the community, because we want agencies to make good decisions, and so we help them with that. But our job is to tell the agency whether they should fund the project, not to tell the Principal Investigators1 (PIs) how to make the work better. The PIs will see your review, but they are not its audience—the review panel and program officers are.

In making recommendations, remember that research proposals are works of science fiction: the PIs are not going to do exactly what they wrote. A proposal isn’t a promise, but a plan, and the military maxim “no plan survives contact with the enemy” applies. The PIs may have great ideas, but nature won’t cooperate, or they’ll recruit a student or postdoc who takes the work in different directions. That’s the nature of science. In a research project, you must aim to achieve the project’s core goals, but it will mutate. If you knew enough to describe exactly what you would do over three years, you would know enough not to need to do it! We do the research because we don’t know all the answers. We rely on PIs to use their judgment to sort out glitches that arise.

For you to recommend funding a proposal, therefore, it should be pretty awesome; awesome enough that you have confidence that a) it is worth doing, b) enough of it will likely work, and c) the PIs will be able to work around the parts that don’t and still achieve their goals. If major elements are likely to fail, or you lack confidence that the investigators will be able to solve the problems that arise, you should say so and recommend rejection. When you are reviewing a proposal, you must therefore answer two questions:
1) Is the proposal exciting and novel enough to be worth investing limited resources?
2) Is the proposal technically sound enough to be doable?

PIs show the novelty of the questions by demonstrating the knowledge gap. This calls for clearly defining the boundaries of existing knowledge (not just saying “little is known about this”) and for framing clear, falsifiable hypotheses (not just fluff like “increasing temperatures will alter the structure of forest communities,” but how they think it will alter them). PIs demonstrate that the work will likely succeed by clearly explaining the experimental design (the logic is often more important than the gory details, though), discussing methods in appropriate detail, describing how they will address risks and alternative strategies in case things don’t work, etc. The better the PIs have thought through the plan, the better positioned they are to cope when things go off track.

One challenge in reviewing is that since only the best proposals will be funded, reviewing is inherently relative: how does this one stack up against the competition? Since you aren’t reading those, you have to assume a baseline to compare against. That is why the first proposal I ever reviewed took several days; now it sometimes only takes an hour. I had to develop a reference standard for what a good proposal looks like—the job gets easier the more you review2.

Also, keep in mind that success rates have often sunk below 10%, which means that many strong proposals fail. This is a shift from when I started, when success rates were 20-30%. That sounded bad until I served on my first panels and realized that only about 40-50% of the proposals were worth funding, creating a “functional” funding rate closer to 50%. With two panels a year, that meant if a good proposal didn’t get funded this time, it had a strong shot next time. That’s no longer true. Now, many seriously good proposals are not going to succeed, not this time, likely not next time, and quite possibly not ever. Ouch. As reviewers, though, just keep pushing—if you read a proposal that you really think deserves funding, say so. Force the panels and program officers to make the hard calls: which great proposals to reject—that’s the job they signed on for. It also helps them argue for increased support to say “We were only able to fund a third of the ‘high priority’ proposals.”

I know how NSF defines rating scores3, but in my experience, NSF’s definitions don’t quite match reality, and their connection to reality has weakened as funding rates have dropped. Over the years, I’ve developed my own definitions that I believe more closely match how the scores work in practice.

Excellent: This is a very good proposal that deserves funding. Exciting questions and no major flaws. If I’m on the panel, I am going to fight to see that this one gets funded.
Very Good: This is a good proposal. The questions are interesting, but don’t blow me away, and there are likely some minor gaps. I’m not going to fight to see this funded, but it wouldn’t bother me if it were. Functionally, this is a neutral score, not really arguing strongly either way.
Good: This is a fair proposal; the ideas are valid but not exciting and/or the approaches are weak (but not fatally so). The proposal might produce some OK science, but I don’t think it should be funded and will say so, if not vociferously.
Fair: This is a poor proposal. It should absolutely not be funded, but I don’t want to be insulting about it. There are major gaps in the conceptual framing, weaknesses in the methods, and/or it seriously lacks novelty.
Poor: This score is not really for the program officer, but for the PI. For me, giving a “poor” is a deliberate act of meanness, giving a twist of the knife to an already lethal review. It says: I want you to hurt as much as I did for wasting my time reading this piece of crap! I would never assign “poor” to a junior investigator who just doesn’t know how to write a proposal. Nope, “poor” is reserved for people who should know better and for some bizarre reason submitted this “proposal” anyhow.

In just about every panel I’ve served on, there are only a few proposals that are so terrific that there is essentially unanimous agreement that they are Must Fund. Those would probably have rated so regardless of who was serving on the panel and are the true Excellent proposals. Most of us probably never write one. Then there are the proposals that define Very Good: these comprise a larger pool of strong proposals that deserve funding—but there isn’t likely to be enough money available to fund all of them. Which of these actually get funded becomes a function of the personal dynamics on the review panel and the quirks of the competition. Did someone become a strong advocate for the proposal? Were there three strong proposals about desert soil biological crusts? It’s not likely an NSF program would fund all three if there were also strong proposals about tropical forests or arctic tundra. Were any one of the three alone in the panel, it would likely have been funded, but with all three competing, two might well fail. When resources are limited, agencies make careful choices about how to optimize across areas of science, investigators, etc. I support that approach.

Broader Impacts
One required element of NSF proposals is Broader Impacts. These can include societal benefits, education, outreach, and a variety of other activities. Including this was an inspired move by NSF to encourage researchers to integrate their research more effectively with other missions of the NSF and of universities. When NSF says that broader impacts are co-equal with intellectual merit as a review criterion, however, sorry, they’re lying. We read proposals from the beginning but broader impacts are at the end. We start evaluating with the first words we read, and if at any point, we conclude a proposal is uncompetitive, nothing afterwards matters. If the questions are dull or flawed, the proposal is dead and nothing can save it—not a clever experiment and not education and outreach efforts! Because broader impacts activities are described after the actual research, they are inherently less important in how we assess a project.

Broader impacts may be seen as an equal criterion because a proposal will only get funded if all of its elements are excellent. A proposal is successful when you grab reviewers with exciting questions, and then don’t screw it up! The approaches must address the questions and the education and outreach activities must be well thought out, specific and effective. Great broader impacts won’t save bad science, but weak broader impacts will sink strong science. The relative strengths of broader impacts activities may also decide which scientifically awesome project makes it to the funding line; but they won’t prop up weak science.

To wrap up: to write a useful proposal review, remember you are making a recommendation (fund vs. don’t fund) to the funding agency, and then providing justification for that recommendation. If you think a proposal is super, why? What is novel? Why are the experiments so clever? Why is the inclusiveness part more than just “we’ll recruit underrepresented students from our local community college”? How have the PIs shown that this effort is woven into the research? As an ad hoc reviewer, bring your expertise to the table to argue to the panel what they should recommend. As a panelist, give the program officer the information and rationale they need to help them decide. Do those things well, and your reviews will be useful and appreciated.


1Please do not call them “Principle Investigators”—that is one common error of language that drives me nuts: a “principle investigator” investigates “principles”: i.e. a philosopher, not a scientist! A “principal investigator” is the lead investigator on a project. When I see people careless with that language, I wonder: are they equally careless with their samples and data? Do you really want me asking that when I’m reviewing your proposal?

2When I was a Ph.D. student, my advisor, Mary Firestone, came to the lab group and said she’d just been invited to serve on the Ecosystem Program review panel (two panels a year for three years) and asked what we thought. We all said, “no, don’t do it—we already don’t see enough of you!” She responded with “You haven’t come up with anything I haven’t already thought of, so I’m going to do it.” We all wondered why she asked us if she was going to ignore our input. We were clueless and wrong; Mary was considerate to even check. By serving on review panels you learn how to write good proposals—as I learned when I started serving on panels! It’s a key part of developing a career. Mary understood that; we didn’t. Sorry for the ignorant ill thoughts, Mary.

3NSF Definitions of Review Scores
Excellent: Outstanding proposal in all respects; deserves highest priority for support.
Very Good: High quality proposal in nearly all respects; should be supported if at all possible.
Good: A quality proposal, worthy of support.
Fair: Proposal lacking in one or more critical aspects; key issues need to be addressed.
Poor: Proposal has serious deficiencies.


April 29, 2018 / jpschimel

Protect “verbidiversity” or why I hate “impact” redux

In biology, we value biodiversity; each species brings something slightly different to the table, and so we worry about homogenizing the biosphere. The same risk is present with language—when we take words that are in the same “genus” (e.g. impact, influence, effect) but are different “species” with some genetic and functional differentiation, and essentially hybridize them, we eliminate distinctions between them and destroy the diversity of the vocabulary. Just as eliminating biodiversity weakens an ecosystem, eliminating “verbidiversity”— the nuances of meaning among similar words—weakens the language, and our ability to communicate powerfully.

In this vein, I’ve been reading a bunch of manuscripts and proposals recently, and I am so sick of seeing “impact” used every time an author wants to discuss how one variable influences another. One sentence really struck me, though, because it didn’t just feel like the author was over-using “impact,” but was really mis-using it:

“The amount and duration of soil moisture impacts the time that soil microorganisms can be active and grow.”

This is modified from a line in the real document, which is, of course, confidential. The use of “impact” in this context just reads wrong to me. “Impact” derives from the Latin “impactus,” which in turn derives from “impingere,” according to the OED and other sources. Definitions include: to thrust, to strike or dash against; the act of impinging; the striking of one body against another; collision.

Thus, “impact” carries a sense of an event—something short and sharp. Boom! A physical blow. An “impact crater” occurs when an asteroid hits a planet. “Impact” is a weird word when what you really mean is a long-term influence.

“Impact” does also have a definition that doesn’t include a physical blow, but rather a metaphorical one. The implication is still, however, that the effect is dramatic:
1965    Listener 26 Aug. 297/1   However much you give them, you are not going to make a significant impact on growth, though you may make an impact in the charitable sense. [From the Oxford English Dictionary].

Even in the metaphorical sense, however, most, or at least many, good uses of “impact” still have a flavor of the event being short, even if the effect is long-lasting:
1969    Ld. Mountbatten in Times 13 Oct. (India Suppl.) p. i/1   He [sc. Gandhi] made such an impact on me that his memory will forever remain fresh in my mind. [OED]

Or consider:
1966    Economist 10 Dec. 1144/3   What has had an impact on food distributors, apparently, is the opening of an investigation by the Federal Trade Commission into supermarket games and stamps. [OED]

In that sentence, it was the opening of the investigation that had the impact, and that opening was a single event. Let’s go back, now, to the example that drew my attention:
“The amount and duration of soil moisture impacts the time that soil microorganisms can be active and grow.”

Or consider another sentence modified from another document:
“Mineralization and plant uptake directly impact soil N cycling.”

In these sentences “impact” is nothing but a synonym for “influences” or “affects.” It doesn’t even imply a dramatic or an abrupt effect; it’s just expressing a relationship. So to me, using “impact” this way is a poor choice. Using a word that implies an abrupt or dramatic influence to just say that there is some relationship steals power and nuance from the word “impact.” It damages “verbidiversity” and our ability to express sophisticated thoughts and ideas.

I know I’ve got a bug up my butt about the over-use of “impact” to express every possible relationship, but good writing involves being thoughtful about which words you choose and how you use them. English has an enormous vocabulary, the greatest verbidiversity of any language on Earth, having taken words from Anglo-Saxon, Norman-French, Latin, and others. But even when we have adopted a word and somewhat altered its meaning from its native language, a ghost of the word’s original definition commonly lingers. Be sensitive to those lingering implications, and use your words thoughtfully. Note that “impact” isn’t the only word that suffers from overuse, misuse, or just plain confusing use—it’s just one that I’m allergic to enough to motivate a blog post.

If nothing else, using language thoughtfully means it may be more likely that a reviewer is paying rapt attention to the cool science you are trying to sell, instead of writing a blog post about how your language annoyed him (even if he still thinks the science is cool). That could mean the difference between a $1 million grant and a polite declination.

April 13, 2018 / jpschimel

How to write a useful manuscript review

A “good” peer review is an analysis that is useful and constructive for both the editor and the authors. It helps the editor decide whether a paper should be published, and which changes they should request or require. It helps the author by offering guidance on how to improve their work so that it is clearer and more compelling for a reader. But keep in mind:

Peer review isn’t just criticism—it’s triage.

“Triage” comes originally from military medicine. When wounded soldiers are brought into a medical unit, busy doctors must separate who is likely to die regardless of what surgeons might do from those who can be saved by appropriate medical care.

All manuscripts come into journal offices as “wounded soldiers.” I’ve authored 175 papers, written hundreds of reviews, and handled about 2,000 manuscripts as an editor. Across all those, not a single paper has ever been accepted outright—not one. Some only needed a light bandage, others required major surgery, but they all needed some editorial care.

When a paper is submitted, the editor and reviewers must therefore do triage: does this paper stand a chance of becoming “healthy” and publishable? Or is it so badly “wounded”—damaged by a poor study design, inadequate data analysis, or a weak story—that it should be allowed to die in peace (i.e. be rejected)? An editor at a top-tier journal such as Nature is like a surgeon on a bloody battlefield, getting a flood of patients that overloads any ability to treat them all, and so a higher proportion must be rejected and allowed to die. At a specialist journal, the flood is smaller, and so we can “treat,” and eventually publish, a greater proportion of the papers.

Typically, an editor makes a first triage cut—if the paper is so badly off that it obviously has no chance of surviving, he or she will usually reject the paper without getting external reviews. At Soil Biology & Biochemistry we call that “desk reject;” at Ecology, it’s “reject following editorial review” (ReFER), to emphasize that the paper was reviewed by at least one highly experienced scientist in the field.

But triage doesn’t end with the editor. When you are asked to review a manuscript, the first question you must address is the triage question: is this paper salvageable? Can it reach a level of “health” that it would be appropriate to publish in the journal following a reasonable investment of time and energy on the part of the editorial and review team? A paper may have a dataset that is fundamentally publishable but an analysis or story in such poor shape that it would be best to decline the paper and invest limited editorial resources elsewhere.

Thus, when you are writing a review, the first paragraph(s) should target the triage decision and frame your argument for whether the paper should be rejected or should move forward in the editorial process. Is the core science sound, interesting, and important enough for this journal? Is the manuscript itself well enough written and argued that with a reasonable level of revision it will likely become publishable? If the answer to either of those questions is “no” then you should recommend that the editor reject the paper. You need to explain your reasoning and analysis clearly and objectively enough that the editors and authors can understand your recommendation.

If you answer “yes” to both central questions—the science is sound and the paper well enough constructed to be worth fixing—you move beyond the diagnosis phase to the treatment stage: everything from there on should be focused on helping the authors make their paper better. That doesn’t mean avoiding criticism, but any criticism should be linked to discussing how to fix the problem.

This section of the review should focus on identifying places where you think the authors are unclear or wrong in their presentation and interpretations, and on offering suggestions on how to solve the problems. The tone should be constructive and fundamentally supportive. You've decided to recommend that the "patient" be saved, so now you're identifying the "wounds" that should be patched. It doesn't help to keep beating a paper with its limitations and flaws unless you are going to suggest how to fix them! If the problems are so severe that you can't see a solution, why haven't you argued to reject the paper?

In this section, you are free to identify as many issues as you wish—but you need to be specific and concrete. If you say "This paragraph is unclear, rewrite it," that won't help an author—if they could tell why the paragraph was unclear, they probably would have written it differently in the first place! Instead say "This is unclear—do you mean X or do you mean Y?" If you disagree with the logic of an argument, lay out where you see the failing, why you think it fails, and ideally, what you think a stronger argument would look like.

It is easy to fall into the “Curse of Knowledge”: you know what you know, so it’s obvious to you what you are trying to say. But readers don’t know what you know! It may not be obvious to them what you mean—you must explain your thinking and educate them. That is as true for the review’s author as for the paper’s author. It’s easy to get caught up in a cycle where an author is unclear, but then a reviewer is unclear about what is unclear, leaving the author flailing trying to figure out how to fix it! A good review needs to be clear and concrete.

Remember, however, that it is not a reviewer’s job to rewrite the paper—it’s still the authors’ paper. If you don’t like how the authors phrased something, you can suggest changes, but you are trying to help, not replace, the authors. If the disagreement comes down to a matter of preference, rather than of correctness or clarity, it’s the author’s call.

When I do a review, I usually make side notes and comments as I read the paper. Then I collect my specific comments, synthesize my critical points about the intellectual framing of the paper, and write the guts of the review—the overall assessment. I target that discussion toward the editor, since my primary responsibility is to help her with triage. She will ultimately tell the authors what changes they should make for the paper to become publishable. Then, I include my line-by-line specific comments. Those are aimed at the authors, as they tend to be more specific comments about the details of the paper. The specific comments typically run from half a page to a few pages of text.

Sometimes reviews get longer—I have written 6-page reviews: reviews where I wanted to say that I thought the paper was fundamentally interesting and important, but that I disagreed with some important parts of it and wanted to argue with the authors about those pieces. I typically sign those reviews because a) I figure it will likely be obvious who wrote it, and b) I am willing to open the discussion with the authors: this isn't an issue of right-or-wrong, but of opinion, and one where I think the science might be best advanced by having the debate.

How to offer a specific recommendation?

Accept: The paper is ready to publish. You should almost never use this on a first review.
Accept following minor revision: The paper needs some polishing, but doesn’t need a “follow-up visit”—i.e. you don’t think it will need re-review.
Reconsider following revision: The paper is wounded, but savable. The problems go beyond clarity or minor edits; the paper requires some rethinking. It will therefore likely need re-review. If you recommend “reconsider,” I hope you will also agree to do that re-review.
Reject: The paper should be allowed to die. Either it is fatally flawed in its scientific core, or the scientific story is so poorly framed or written that it is not worth the editorial team’s investment in working to try to make it publishable.

Keep in mind that as a reviewer, you are typically anonymous. The editor is not. If there really are deep flaws in a paper, give me cover by recommending “reject”! If I choose not to take that advice, it makes me the good guy and helps me push the authors to fix the problems: “Reviewer #1 suggested declining the paper, but I think you might be able to solve the problem, so I’ll give you a chance to try.” That of course implies: “but if you don’t, I will reject it.” If you try to be nice and recommend “reconsider” and I decide instead to reject, then it’s all on me and I’m the bad guy. I signed on to do that job, but I do appreciate your help. Give your most honest and accurate assessment but remember that the editor must make the decision and must attach their name to that decision.

Reviewing Revisions

How does this advice change if you are getting a revised manuscript back for re-review? I've seen reviewers get annoyed that authors didn't do exactly what they had recommended. Don't. First, remember that the editor likely received two or three external reviews that might have varied in their assessments and recommendations—editors need to synthesize all that input before making a decision and offering guidance to the authors. Second, authors might have different ideas about how to solve the problems and address reviewers' concerns. In my experience, reviewers are usually right when they identify problems, but are less reliably so in their suggestions for how to fix them. Authors may often come up with different solutions, and it's their paper! As long as the authors' solution works, it works. When doing a re-review, your job is to determine whether the paper has crossed the threshold of acceptability, not whether the authors have done everything that you had suggested, and particularly not whether they did everything in the way you might have suggested. In the triage model, the question is not whether the patient is 100% healed, but whether they are healthy enough to release.

The more difficult call is when a paper has improved, but not enough. I expect a paper that starts at "reconsider" to step up to "minor revisions" en route to "accept." But what if you would rate the paper as still needing major revisions before it closes on acceptability? The paper might have gotten better, but not enough, and the trajectory is looking relatively flat. In such a case, you should probably recommend rejecting the paper. It's not that the paper can't become publishable, but having given the authors the advice to improve the paper, they either chose not to take it or couldn't see how to. Well, too bad for them. You can't write the paper for them and you can't force the issue; we all have finite time and energy to invest in a patient that isn't getting better. At some point, we just have to make the hard call, move them out of the hospital ward, say "I'm sorry," and let them go.

To wrap up, remember that reviewing is a professional obligation—it’s what we do for each other to advance our science. We help our colleagues by identifying areas where the work is unclear or the arguments weak. Review can be a painful process, but writing science is hard; no one ever gets it completely right on the first shot. No one. Ever*. We all rely on peer review, so embrace the process when you’re being reviewed, and do the best job you can when you are the reviewer.

* At least never in my 30 years of experience.