August 23, 2018 / jpschimel

“Othering” by Implication

I initially included this post as a postscript to my piece about Environmental Research Letters’ recent announcement that they now publish “evidence-based reviews.” I commented that I was shocked and even offended, because it had never occurred to me that there could be any other type of review in a science journal: “non-evidence-based reviews”? Those, of course, do not exist.

When you create a new category (of people or things), you never create only one—you create two: the in-group and the out-group. This is a form of “othering,” which the Oxford English Dictionary defines as “to conceptualize (a people, a group, etc.) as excluded and intrinsically different from oneself.”

When you create a new group identity, it is at best naïve to ignore what that suggests about the people or things that remain outside. If your group name implies “better,” the out-group, now the “worse group,” will inevitably, and justifiably, feel offended.

Not every case of othering by title, however, implies better. Sometimes the problematic implications are less obviously prejudicial.

We had such a case recently at the University of California, where we have a category of faculty titled “Lecturer with Security of Employment” (LSOE). For those who know how lecturers are often treated in universities, that may sound like indentured servitude, but in fact LSOEs are full faculty, just with no requirement to do research. Their primary focus is teaching, and their job is thus much like that of a professor at a liberal arts college. LSOEs are members of the Academic Senate and are on pay and benefit scales that parallel those of Professors. SOE is effectively tenure; before that, the title is Lecturer with Potential Security of Employment. We value LSOEs, and we wanted a title that better expressed that.

The obvious title was “Teaching Professor,” but here is where we ran into the “evidence-based” conundrum in defining new categories: if some people are “Teaching Professors,” what are the rest of us professors? Would we be, by implication, “non-teaching professors”?

That, of course, isn’t true—teaching is an organic responsibility of being a UC Professor. We worried that implying that regular professors don’t teach could feed the public’s worst misconceptions about the University! Creating the formal title of “Teaching Professor,” we feared, could backfire and damage UC. We settled on a compromise: LSOEs can unofficially call themselves “Teaching Professors,” but the official title remains LSOE.

We do have “Research Professors” who have no teaching obligation, which is partly why “Teaching Professor” seemed an obvious title; but research professors are typically soft-money positions, supported off research grants. And there, the flipped implication does no public damage: if you’re not a research professor, it merely suggests that you teach.

Language is tricky—it casts light on things, but in so doing, creates shadows. We interpret both. When you create terms that cast light on some people, you necessarily “other” others. So be sensitive to the language and the likelihood of offense. Consider not just the light you cast, but everyone else who will suddenly feel themselves in shadow.

August 23, 2018 / jpschimel

“Evidence-based review”?

I got an e-mail this morning from Environmental Research Letters (ERL) proudly announcing that they now publish “evidence-based reviews.”

[Screenshot of ERL’s announcement e-mail, 23 August 2018]

I was initially stunned, then horrified by their choice of language. If their reviews are “evidence-based,” what are everyone else’s? I always understood that for something to be science, it had to be based on evidence! The alternative to an “evidence-based review” is a review not based on evidence? But by definition, that would not be science—it would be science fiction.

It seems that what ERL may be emphasizing is more along the lines of meta-analysis, in which the review is a formal quantitative analysis of specific data-sets. If so, yes, that is different from a qualitative or conceptual analysis of existing knowledge and understanding. If you want to know how much the Earth’s temperature has increased over the last 50 years, there are many datasets to synthesize, and a conclusion must use a formal analytical structure that provides clear rules for what is included or excluded. But that is no more “evidence-based” than a “traditional” review that synthesizes existing understanding of a topic. I’ve written a number of such reviews, and I maintain that they are deeply “evidence-based”; I’m sure that the reviewers and editors who handled those papers would agree.

So why did the ERL Editors choose the term “evidence-based review”? A term so loaded that I’ve been stewing over it for hours, and that motivated me to write a blog post?

I can postulate three, not mutually exclusive, hypotheses. First, but I suspect least likely, is that they did intend to disparage the more traditional conceptual approach to synthesizing knowledge and literature. Perhaps the editors feel that this approach is too subject to individual interpretation. But all datasets are subject to interpretation and that is what peer review is for: to ensure that contributions are robust, sound, and accurately reflect the evidence.

More likely would be that they simply fell into a “Curse of Knowledge” trap—they knew what they meant by “evidence-based,” and did not see that it might be viewed differently by others. Such problems plague communication and are hard to avoid because it is hard to know what others know and think.

I have more sympathy for this explanation, but only a little, because this should have been easy to foresee and avoid. If you create a new category of “evidence-based” review, you obviously and explicitly suggest the existence of “non-evidence-based” reviews—something I never dreamed could exist until I got ERL’s e-mail. This is a form of “othering” that I find very problematic. I can only hope that the Editors of ERL were looking for a simple, positive term to define a new category of reviews, and didn’t adequately consider the implications of their language choice.

My third hypothesis recognizes that ERL’s Editor-in-Chief is Dr. Daniel Kammen. Dr. Kammen is an eminent scientist who works extensively at the interface of science and environmental policy. In the U.S., there is increasing focus in policy decisions on distinguishing inputs that are based on real evidence from those based on pure opinion. ERL is a journal that aims to publish science that will be relevant to environmental policy decisions. Hence, perhaps there is a need to more effectively identify science as being evidence-based. So voilà: “evidence-based reviews”! In the Journal of Public Policy, I wouldn’t object to this, because in policy the distinction between data-based and expert-opinion-based input is important.

But if that hypothesis is correct, the appropriate response for ERL, a pure science journal, should not be to flag some publications as being “evidence-based,” and so to suggest that there is an alternative (are they going to have evidence-based research papers?), but to more effectively highlight that “If it isn’t evidence-based, it isn’t science” and that ERL only publishes science.

I can believe that the decision to use the term “evidence-based” might reflect Dr. Kammen’s experience at the science-policy interface in the era of “Fake News.” If this is true, though, I am still deeply disappointed in the journal’s choice of terminology. I very much hope that ERL will find a better, more suitable term to describe what they are looking for.



May 1, 2018 / jpschimel

How to write an effective proposal review

In a recent post, I discussed how to do a good manuscript review. I analogized that to battlefield medicine, where the first step is triage: determine whether the patient can be saved. But the truly critical step is the second one: treatment. If the “patient”—the paper—has potential, then your job as reviewer is to help make it as strong as possible. Submitted manuscripts always need revision and editing to reach their potential. Peer review provides a service to journals in their decision making, but the greater service is the one we provide each other.

Proposal review is different. It is almost entirely evaluative, with essentially no “treatment.” We don’t review proposals for their authors, but for the funding agency. That ultimately serves the community, because we want agencies to make good decisions, and so we help them with that. But our job is to tell the agency whether they should fund the project, not to tell the Principal Investigators1 (PIs) how to make the work better. The PIs will see your review, but they are not its audience—the review panel and program officers are.

In making recommendations, remember that research proposals are works of science fiction: the PIs are not going to do exactly what they wrote. A proposal isn’t a promise, but a plan, and the military maxim “no plan survives contact with the enemy” applies. The PIs may have great ideas, but nature won’t cooperate, or they’ll recruit a student or postdoc who takes the work in different directions. That’s the nature of science. In a research project, you must aim to achieve the project’s core goals, but it will mutate. If you know enough to describe exactly what you will do over three years, you know enough not to need to do it! We do the research because we don’t know all the answers. We rely on PIs to use their judgment to sort out glitches that arise.

To recommend funding a proposal, therefore, you should find it pretty awesome; awesome enough that you have confidence that a) it is worth doing, b) enough of it will likely work, and c) the PIs will be able to work around the parts that don’t and still achieve their goals. If major elements are likely to fail, or if you lack confidence that the investigators will be able to solve the problems that arise, you should say so and recommend rejection. When you are reviewing a proposal, then, you must answer two questions:
1) Is the proposal exciting and novel enough to be worth investing limited resources?
2) Is the proposal technically sound enough to be doable?

PIs show the novelty of their questions by demonstrating the knowledge gap. This calls for clearly defining the boundaries of existing knowledge (not just saying “little is known about this”) and for framing clear, falsifiable hypotheses (not just fluff like “increasing temperatures will alter the structure of forest communities,” but how they think it will alter them). PIs demonstrate that the work will likely succeed by clearly explaining the experimental design (the logic is often more important than the gory details, though), discussing methods in appropriate detail, describing how they will address risks and alternative strategies in case things don’t work, etc. The better the PIs have thought through the plan, the better positioned they are to cope when things go off track.

One challenge in reviewing is that since only the best proposals will be funded, reviewing is inherently relative: how does this one stack up against the competition? Since you aren’t reading those, you have to assume a baseline to compare against. That is why the first proposal I ever reviewed took several days; now it sometimes only takes an hour. I had to develop a reference standard for what a good proposal looks like—the job gets easier the more you review2.

Also, keep in mind that success rates have often sunk below 10%, which means that many strong proposals fail. This is a shift from when I started, when success rates were 20-30%. That sounded bad until I served on my first panels and realized that only about 40-50% of the proposals were worth funding, creating a “functional” funding rate closer to 50%. With two panels a year, that meant that if a good proposal didn’t get funded this time, it had a strong shot next time. That’s no longer true. Now, many seriously good proposals are not going to succeed, not this time, likely not next time, and quite possibly not ever. Ouch. As reviewers, though, just keep pushing—if you read a proposal that you really think deserves funding, say so. Force the panels and program officers to make the hard calls about which great proposals to reject—that’s the job they signed on for. It also helps them argue for increased support to say, “We were only able to fund a third of the ‘high priority’ proposals.”

Scores
I know how NSF defines rating scores3, but in my experience, NSF’s definitions don’t quite match reality, and their connection to reality has weakened as funding rates have dropped. Over the years, I’ve developed my own definitions that I believe more closely match how the scores work in practice.

Excellent: This is a very good proposal that deserves funding. Exciting questions and no major flaws. If I’m on the panel, I am going to fight to see that this one gets funded.
Very Good: This is a good proposal. The questions are interesting, but don’t blow me away, and there are likely some minor gaps. I’m not going to fight to see this funded, but it wouldn’t bother me if it were. Functionally, this is a neutral score, not really arguing strongly either way.
Good: This is a fair proposal; the ideas are valid but not exciting and/or the approaches are weak (but not fatally so). The proposal might produce some OK science, but I don’t think it should be funded and will say so, if not vociferously.
Fair: This is a poor proposal. It should absolutely not be funded, but I don’t want to be insulting about it. There are major gaps in the conceptual framing, weaknesses in the methods, and/or it seriously lacks novelty.
Poor: This score is not really for the program officer, but for the PI. For me, giving a “poor” is a deliberate act of meanness, giving a twist of the knife to an already lethal review. It says: I want you to hurt as much as I did for wasting my time reading this piece of crap! I would never assign “poor” to a junior investigator who just doesn’t know how to write a proposal. Nope, “poor” is reserved for people who should know better and for some bizarre reason submitted this “proposal” anyhow.

In just about every panel I’ve served on, there are only a few proposals that are so terrific that there is essentially unanimous agreement that they are Must Fund. Those would probably have rated so regardless of who was serving on the panel; they are the true Excellent proposals. Most of us probably never write one. Then there are the proposals that define Very Good: these comprise a larger pool of strong proposals that deserve funding—but there isn’t likely to be enough money available to fund all of them. Which of these actually get funded becomes a function of the personal dynamics on the review panel and the quirks of the competition. Did someone become a strong advocate for the proposal? Were there three strong proposals about desert soil biological crusts? It’s not likely an NSF program would fund all three if there were also strong proposals about tropical forests or arctic tundra. Had any one of them been alone in the panel, it would likely have been funded, but with all three, two might well fail. When resources are limited, agencies make careful choices about how to optimize across areas of science, investigators, etc. I support that approach.

Broader Impacts
One required element of NSF proposals is Broader Impacts. These can include societal benefits, education, outreach, and a variety of other activities. Including this was an inspired move by NSF to encourage researchers to integrate their research more effectively with other missions of the NSF and of universities. When NSF says that broader impacts are co-equal with intellectual merit as a review criterion, however, sorry, they’re lying. We read proposals from the beginning, but broader impacts come at the end. We start evaluating with the first words we read, and if at any point we conclude a proposal is uncompetitive, nothing afterwards matters. If the questions are dull or flawed, the proposal is dead and nothing can save it—not a clever experiment, and not education and outreach efforts! Because broader impacts activities are described after the actual research, they are inherently less important in how we assess a project.

Broader impacts may be seen as an equal criterion in the sense that a proposal will only get funded if all of its elements are excellent. A proposal is successful when you grab reviewers with exciting questions, and then don’t screw it up! The approaches must address the questions, and the education and outreach activities must be well thought out, specific, and effective. Great broader impacts won’t save bad science, but weak broader impacts will sink strong science. The relative strengths of broader impacts activities may also decide which scientifically awesome project makes it to the funding line, but they won’t prop up weak science.

To wrap up: to write a useful proposal review, remember you are making a recommendation (fund vs. don’t fund) to the funding agency, and then providing justification for that recommendation. If you think a proposal is super, why? What is novel? Why are the experiments so clever? Why is the inclusiveness part more than just “we’ll recruit underrepresented students from our local community college”? How have the PIs shown that this effort is woven into the research? As an ad hoc reviewer, bring your expertise to the table to argue to the panel what they should recommend. As a panelist, give the program officer the information and rationale they need to help them decide. Do those things well, and your reviews will be useful and appreciated.

—————————————————–

1Please do not call them “Principle Investigators”—that is one common error of language that drives me nuts: a “principle investigator” investigates “principles”: i.e. a philosopher, not a scientist! A “principal investigator” is the lead investigator on a project. When I see people careless with that language, I wonder: are they equally careless with their samples and data? Do you really want me asking that when I’m reviewing your proposal?

2When I was a Ph.D. student, my advisor, Mary Firestone, came to the lab group and said she’d just been invited to serve on the Ecosystem Program review panel (two panels a year for three years) and asked what we thought. We all said, “no, don’t do it—we already don’t see enough of you!” She responded with “You haven’t come up with anything I haven’t already thought of, so I’m going to do it.” We all wondered why she asked us if she was going to ignore our input. We were clueless and wrong; Mary was considerate to even check. By serving on review panels you learn how to write good proposals—as I learned when I started serving on panels! It’s a key part of developing a career. Mary understood that; we didn’t. Sorry for the ignorant ill thoughts, Mary.

3NSF Definitions of Review Scores
Excellent: Outstanding proposal in all respects; deserves highest priority for support.
Very Good: High quality proposal in nearly all respects; should be supported if at all possible.
Good: A quality proposal, worthy of support.
Fair: Proposal lacking in one or more critical aspects; key issues need to be addressed.
Poor: Proposal has serious deficiencies.


April 29, 2018 / jpschimel

Protect “verbidiversity” or why I hate “impact” redux

In biology, we value biodiversity; each species brings something slightly different to the table, and so we worry about homogenizing the biosphere. The same risk is present with language—when we take words that are in the same “genus” (e.g. impact, influence, effect) but are different “species” with some genetic and functional differentiation, and essentially hybridize them, we eliminate distinctions between them and destroy the diversity of the vocabulary. Just as eliminating biodiversity weakens an ecosystem, eliminating “verbidiversity”— the nuances of meaning among similar words—weakens the language, and our ability to communicate powerfully.

In this vein, I’ve been reading a bunch of manuscripts and proposals recently, and I am so sick of seeing “impact” used every time an author wants to discuss how one variable influences another. One sentence really struck me, though, because it didn’t just feel like the author was over-using “impact”; it felt like the author was really mis-using it:

“The amount and duration of soil moisture impacts the time that soil microorganisms can be active and grow.”

This is modified from a line in the real document, which is, of course, confidential. The use of “impact” in this context just reads wrong to me. “Impact” derives from the Latin “impactus,” which in turn derives from “impingere,” according to the OED and other sources. Definitions include: to thrust, to strike or dash against; the act of impinging; the striking of one body against another; collision.

Thus, “impact” carries a sense of an event—something short and sharp. Boom! A physical blow. An “impact crater” occurs when an asteroid hits a planet. “Impact” is a weird word when what you really mean is a long-term influence.

“Impact” does also have a definition that doesn’t include a physical blow, but rather a metaphorical one. The implication is still, however, that the effect is dramatic:
1965    Listener 26 Aug. 297/1   However much you give them, you are not going to make a significant impact on growth, though you may make an impact in the charitable sense. [From the Oxford English Dictionary].

Even in the metaphorical sense, however, most, or at least many, good uses of “impact” still have a flavor of the event being short, even if the effect is long-lasting:
1969    Ld. Mountbatten in Times 13 Oct. (India Suppl.) p. i/1   He [sc. Gandhi] made such an impact on me that his memory will forever remain fresh in my mind. [OED]

Or consider:
1966    Economist 10 Dec. 1144/3   What has had an impact on food distributors, apparently, is the opening of an investigation by the Federal Trade Commission into supermarket games and stamps. [OED]

In that sentence, it was the opening of the investigation that had the impact, and that opening was a single event. Let’s go back, now, to the example that drew my attention:
“The amount and duration of soil moisture impacts the time that soil microorganisms can be active and grow.”

Or consider another sentence modified from another document:
“Mineralization and plant uptake directly impact soil N cycling.”

In these sentences, “impact” is nothing but a synonym for “influences” or “affects.” It doesn’t even imply a dramatic or an abrupt effect; it’s just expressing a relationship. So to me, using “impact” this way is a poor choice. Using a word that implies an abrupt or dramatic influence just to say that there is some relationship steals power and nuance from the word “impact.” It damages “verbidiversity” and our ability to express sophisticated thoughts and ideas.

I know I’ve got a bug up my butt about the over-use of “impact” to express every possible relationship, but good writing involves being thoughtful about which words you choose and how you use them. English has an enormous vocabulary, the greatest verbidiversity of any language on Earth, having taken words from Anglo-Saxon, Norman-French, Latin, and others. But even when we have adapted a word and somewhat altered its meaning from its native language, a ghost of the word’s original definition commonly lingers. Be sensitive to those lingering implications, and use your words thoughtfully. Note that “impact” isn’t the only word that suffers from overuse, misuse, or just plain confusing use—it’s just one that I’m allergic enough to that it motivated a blog post.

If nothing else, using language thoughtfully makes it more likely that a reviewer is paying rapt attention to the cool science you are trying to sell, instead of writing a blog post about how your language annoyed him (even if he still thinks the science is cool). That could mean the difference between a $1 million grant and a polite declination.

April 13, 2018 / jpschimel

How to write a useful manuscript review

A “good” peer review is an analysis that is useful and constructive for both the editor and the authors. It helps the editor decide whether a paper should be published, and which changes they should request or require. It helps the author by offering guidance on how to improve their work so that it is clearer and more compelling for a reader. But keep in mind:

Peer review isn’t just criticism—it’s triage.

“Triage” comes originally from military medicine. When wounded soldiers are brought into a medical unit, busy doctors must separate those who are likely to die regardless of what surgeons might do from those who can be saved by appropriate medical care.

All manuscripts come into journal offices as “wounded soldiers.” I’ve authored 175 papers, written hundreds of reviews, and handled about 2,000 manuscripts as an editor. Across all those, not a single paper has ever been accepted outright—not one. Some only needed a light bandage, others required major surgery, but they all needed some editorial care.

When a paper is submitted, the editor and reviewers must therefore do triage: does this paper stand a chance of becoming “healthy” and publishable? Or is it so badly “wounded”—damaged by a poor study design, inadequate data analysis, or a weak story—that it should be allowed to die in peace (i.e. be rejected)? An editor at a top-tier journal such as Nature is like a surgeon on a bloody battlefield, getting a flood of patients that overloads any ability to treat them all, and so a higher proportion must be rejected and allowed to die. At a specialist journal, the flood is smaller, and so we can “treat,” and eventually publish, a greater proportion of the papers.

Typically, an editor makes a first triage cut—if the paper is so badly off that it obviously has no chance of surviving, he or she will usually reject the paper without getting external reviews. At Soil Biology & Biochemistry we call that a “desk reject”; at Ecology, it’s “reject following editorial review” (ReFER), to emphasize that the paper was reviewed by at least one highly experienced scientist in the field.

But triage doesn’t end with the editor. When you are asked to review a manuscript, the first question you must address is the triage question: is this paper salvageable? Can it reach a level of “health” that it would be appropriate to publish in the journal following a reasonable investment of time and energy on the part of the editorial and review team? A paper may have a dataset that is fundamentally publishable but an analysis or story in such poor shape that it would be best to decline the paper and invest limited editorial resources elsewhere.

Thus, when you are writing a review, the first paragraph(s) should target the triage decision and frame your argument for whether the paper should be rejected or should move forward in the editorial process. Is the core science sound, interesting, and important enough for this journal? Is the manuscript itself well enough written and argued that with a reasonable level of revision it will likely become publishable? If the answer to either of those questions is “no” then you should recommend that the editor reject the paper. You need to explain your reasoning and analysis clearly and objectively enough that the editors and authors can understand your recommendation.

If you answer “yes” to both central questions—the science is sound and the paper well enough constructed to be worth fixing—you move beyond the diagnosis phase to the treatment stage: everything from there on should be focused on helping the authors make their paper better. That doesn’t mean avoiding criticism, but any criticism should be linked to discussing how to fix the problem.

This section of the review should focus on identifying places where you think the authors are unclear or wrong in their presentations and interpretations, and on offering suggestions on how to solve the problems. The tone should be constructive and fundamentally supportive. You’ve decided to recommend that the “patient” be saved, so now you’re identifying the “wounds” that should be patched. It doesn’t help to keep beating a paper with its limitations and flaws unless you are going to suggest how to fix them! If the problems are so severe that you can’t see a solution, why haven’t you argued to reject the paper?

In this section, you are free to identify as many issues as you wish—but you need to be specific and concrete. If you say “This paragraph is unclear, rewrite it,” that won’t help an author—if they could tell why you thought the paragraph was unclear, they probably would have written it differently in the beginning! Instead say “This is unclear—do you mean X or do you mean Y?” If you disagree with the logic of an argument, lay out where you see the failing, why you think it fails, and ideally, what you think a stronger argument would look like.

It is easy to fall into the “Curse of Knowledge”: you know what you know, so it’s obvious to you what you are trying to say. But readers don’t know what you know! It may not be obvious to them what you mean—you must explain your thinking and educate them. That is as true for the review’s author as for the paper’s author. It’s easy to get caught up in a cycle where an author is unclear, but then a reviewer is unclear about what is unclear, leaving the author flailing trying to figure out how to fix it! A good review needs to be clear and concrete.

Remember, however, that it is not a reviewer’s job to rewrite the paper—it’s still the authors’ paper. If you don’t like how the authors phrased something, you can suggest changes, but you are trying to help, not replace, the authors. If the disagreement comes down to a matter of preference, rather than of correctness or clarity, it’s the author’s call.

When I do a review, I usually make side notes and comments as I read the paper. Then I collect my specific comments, synthesize my critical points about the intellectual framing of the paper, and write the guts of the review—the overall assessment. I target that discussion toward the editor, since my primary responsibility is to help her with triage. She will ultimately tell the authors what changes they should make for the paper to become publishable. Then, I include my line-by-line specific comments. Those are aimed at the authors, as they tend to be more specific comments about the details of the paper. The specific comments typically run from half a page to a few pages of text.

Sometimes reviews get longer—I have written 6-page reviews: reviews where I wanted to say that I thought the paper was fundamentally interesting and important, but that I disagreed with some important parts of it and wanted to argue with the authors about those pieces. I typically sign those reviews because a) I figure it will likely be obvious who wrote them, and b) I am willing to open the discussion with the authors: this isn’t an issue of right-or-wrong, but of opinion, and one where I think the science might be best advanced by having the debate.

How to offer a specific recommendation?

Accept: The paper is ready to publish. You should almost never use this on a first review.
Accept following minor revision: The paper needs some polishing, but doesn’t need a “follow-up visit”—i.e. you don’t think it will need re-review.
Reconsider following revision: The paper is wounded, but savable. The problems go beyond clarity or minor edits; the paper requires some rethinking. It will therefore likely need re-review. If you recommend “reconsider,” I hope you will also agree to do that re-review.
Reject: The paper should be allowed to die. Either it is fatally flawed in its scientific core, or the scientific story is so poorly framed or written that it is not worth the editorial team’s investment in working to try to make it publishable.

Keep in mind that as a reviewer, you are typically anonymous. The editor is not. If there really are deep flaws in a paper, give me cover by recommending “reject”! If I choose not to take that advice, it makes me the good guy and helps me push the authors to fix the problems: “Reviewer #1 suggested declining the paper, but I think you might be able to solve the problem, so I’ll give you a chance to try.” That of course implies: “but if you don’t, I will reject it.” If you try to be nice and recommend “reconsider” and I decide instead to reject, then it’s all on me and I’m the bad guy. I signed on to do that job, but I do appreciate your help. Give your most honest and accurate assessment but remember that the editor must make the decision and must attach their name to that decision.

Reviewing Revisions

How does this advice change if you are getting a revised manuscript back for re-review? I’ve seen reviewers get annoyed that authors didn’t do exactly what they had recommended. Don’t. First, remember that the editor likely received two or three external reviews that might have varied in their assessments and recommendations—editors need to synthesize all that input before making a decision and offering guidance to the authors. Then, authors might have different ideas about how to solve the problems and address reviewers’ concerns. In my experience, reviewers are usually right when they identify problems, but are less reliably so in their suggestions for how to fix them. Authors may often come up with different solutions, and it’s their paper! As long as the authors’ solution works, it works. When doing a re-review, your job is to determine whether the paper has crossed the threshold of acceptability, not whether the authors have done everything that you suggested, and particularly not whether they did everything in the way you might have suggested. In the triage model, the question is not whether the patient is 100% healed, but whether they are healthy enough to release.

The more difficult call is when a paper has improved, but not enough. I expect a paper that starts at “reconsider” to step up to “minor revisions” en route to “accept.” But what if you would rate the paper as still needing major revisions before it closes on acceptability? The paper might have gotten better, but not enough, and the trajectory looks relatively flat. In such a case, you should probably recommend rejecting the paper. It’s not that the paper can’t become publishable, but having given the authors the advice to improve the paper, they either chose not to take it or couldn’t see how to. Well, too bad for them. You can’t write the paper for them and you can’t force the issue; we all have finite time and energy to invest in a patient that isn’t getting better. At some point, we just have to make the hard call, move them out of the hospital ward, say “I’m sorry,” and let them go.

To wrap up, remember that reviewing is a professional obligation—it’s what we do for each other to advance our science. We help our colleagues by identifying areas where the work is unclear or the arguments weak. Review can be a painful process, but writing science is hard; no one ever gets it completely right on the first shot. No one. Ever*. We all rely on peer review, so embrace the process when you’re being reviewed, and do the best job you can when you are the reviewer.

* At least never in my 30 years of experience.

December 16, 2017 / jpschimel

Surprising long-term consequences

Sometimes the consequences of decisions made decades ago pay off in unanticipated ways. The first class I ever taught (at the University of Alaska Fairbanks) was general microbiology with a lab. I screwed up. When I designed the class and set up the curriculum, I assigned too much of the overall grade to the exams. UAF had many older returning students, notably a number of single mothers. Those people were often among the best students—dedicated, hard-working and committed—but equally, often not the best at taking tests. There was one woman who was one of those wonderful students who make teaching a joy—but I think she only got a B-, and it pained me because it was mostly my fault. She was strong in lab and wrote excellent lab reports, but her exams were imperfect. I didn’t feel I could change the rules mid-stride and as a result she didn’t get the grade I knew she deserved. In response to that I restructured how I graded that class, and have organized every class I have ever taught since to more strongly emphasize take-home work: lab reports, problem sets, etc. If students are willing to take the time to do the work, I’ll reward that.

So now, many years later, Santa Barbara is under fire siege. I was just evacuated this morning as the fire exploded across Montecito. But as the fire grew last weekend, our Chancellor canceled exam week and rescheduled it to the first week of what would have been Winter quarter. I told students that they had a choice: they could skip the final, and I would assign grades based on the work done so far—after all, the final was only supposed to be worth 15% of the total! For most students, the final is unlikely to change their course grade, and so nearly all are taking the easy out.

I’ve been deeply thankful for the incredible work of the firefighters, but also to that one woman who, more than anyone else, led me to organize my classes so that skipping the final wouldn’t make a big difference in students’ final grades, and so I could ease all of our lives now that we are in crisis.

Postscript: They have cancelled all evacuation orders for Santa Barbara County and it looks like our house should be intact. Whew. And again, thanks to the fire crews who did amazing work.

Post-postscript: Sycamore Canyon didn’t burn. As a result, my creek barely got more than a few feet deep during the brief, intense storm last week—the one that sent horrific floods tearing down Montecito watersheds that had burned. Those floods killed 20 people and trashed hundreds of homes. I feel deeply for the members of my community, but remain grateful that my house wasn’t among them.

November 30, 2017 / jpschimel

Do Species Matter? Responding to an op-ed by R.A. Pyron in the Washington Post as a piece of writing

R. Alexander Pyron just published an op-ed in the Washington Post arguing that we don’t need to protect species from extinction.

https://www.washingtonpost.com/outlook/we-dont-need-to-save-endangered-species-extinction-is-part-of-evolution/2017/11/21/57fc5658-cdb4-11e7-a1a3-0d1e45a6de3d_story.html?utm_term=.ff7c665c6c14

Many, unsurprisingly, are criticizing this piece on grounds that span from ethics to practicality. I want to evaluate it differently: as communication. Writing and rhetoric. The writing is lively and engaging; Dr. Pyron uses words well. But the core of a piece of writing is its structure and argument.

Dr. Pyron’s argument is predicated on the ethical/philosophical belief that “The only creatures we should go out of our way to protect are Homo sapiens.” One can disagree with this belief, and one can be appalled by it, but one cannot challenge it on scientific grounds—it’s a belief.

Instead, consider the logic of the argument that Dr. Pyron develops from that predicate. When I consider issues of writing, story structure, and even the ethics of scientific communication, I see failures in all areas.

In Writing Science, I place the ethics of science at the center of developing a “good” story. On page 9, I pose the question “Is seeing science writing as storytelling professional or not?” I develop this by noting that “To tell a good story in science, you must assess your data and evaluate the possible explanations.”

On page 139, I state this more explicitly:

“Also remember, you are a scientist — it is not your job to be right. It is your job to be thoughtful, careful, and analytical; it is your job to challenge your ideas and to try to falsify your hypotheses; it is your job to be open and honest about the uncertainties in your data and conclusions.”

So, let’s evaluate Dr. Pyron’s argument as a piece of writing and assess whether he did this—did he live up to his responsibility to be “thoughtful, careful, and analytical”? Certainly there are a number of fundamental truths in what he wrote:

Extinction happens—some species were already doomed to extinction in the near future.
Planet Earth will continue and will “recover” from the current mass extinction: over millions of years, biodiversity will recover and “The Tree of Life will continue branching, even if we prune it back.”
Not all species have a vital role in the functioning of ecosystems. Some biodiversity is functionally redundant.

But, Dr. Pyron makes an important statement when he says:

“All those future people deserve a happy, safe life on an ecologically robust planet, regardless of the state of the natural world compared with its pre-human condition.”

I accept that ethical predicate. Yet, it creates a crack in his argument, and that crack is based in both science and rhetoric. It focuses on the question: How do we provide those future people “an ecologically robust planet”?

Our understanding of how complex natural systems work is still developing. It wasn’t many years ago that we thought the human gut microbiome was purely commensal—microbes lived within us but did nothing fundamentally beneficial for us; animals could live healthily with a sterile gut. How that understanding has changed! How, then, do we ensure “an ecologically robust planet” when our understanding of what that entails remains imperfect? We know that Homo sapiens depends on natural ecosystems to provide us with essential functions: food, fiber, water purification, and others; Dr. Pyron acknowledges this. But, we don’t actually know what we need to maintain those functions over the long term—centuries to millennia.

How to provide an ecologically robust planet to support Homo sapiens is the essential question at the heart of Dr. Pyron’s piece. Yet, he offers neither an answer nor even a direction as to what that means!

Without a hint of an answer, the op-ed comes across as someone trying to assemble a complex device and ending up with some spare parts left over that they don’t understand, yet then stating confidently: “I’m sure these are unnecessary.”

“We should do this to create a stable, equitable future for the coming billions of people, not for the vanishing northern river shark.” Even if I accept that, it still raises the question: can we maintain an “equitable future for the coming billions” if we don’t maintain healthy ecosystems and ecosystem services? Can we offer the people of New Guinea an equitable future if we pollute the river in which the shark lives? This is an important question that lies at the interface of ecology and sustainability, and it requires an answer to support Dr. Pyron’s argument.

The logical and literary failures in Dr. Pyron’s piece circle back to this issue: he embraces the idea that functional ecosystems are important for serving humans. But then he disparages and ignores existing ecological knowledge about how to achieve that, and he fails to acknowledge or accept the limits of existing knowledge.

To make a scientific argument without acknowledging existing knowledge or your own ignorance (Dr. Pyron’s expertise is “theoretical and applied methods in statistical phylogenetics,” not ecology) is, to me, a fundamental failure in science communication. I could go further and argue that it is a failure of science-communication ethics and professional norms.

While I consider this op-ed a failure as science writing, as a piece of political writing it will, unfortunately, unquestionably be a success: it was published in a high-impact outlet, Dr. Pyron has solid academic credentials at a major university, and he makes a well-phrased and passionate argument that will resonate with many.

In Writing Science, I focus on professional skills: framing story, developing flow, and using language powerfully. But professional skills should be balanced against professional responsibilities. When you write under your professional byline, and so speak as a scientist, remember:

“It is your job to be thoughtful, careful, and analytical; it is your job to challenge your ideas and to try to falsify your hypotheses; it is your job to be open and honest about the uncertainties in your data and conclusions.”

My complaint with Dr. Pyron’s piece (speaking as a writer on writing) is that he failed to live up to this basic responsibility as a scientist writer.

———————————–

NOTE: There is a postscript to this story. A colleague noted that Dr. Pyron has posted a long commentary on his Facebook page that included the following:

“In the brief space of 1,900 words, I failed to make my views sufficiently clear and coherent, and succumbed to a temptation to sensationalize parts of my argument. Furthermore, I made the mistake of not showing the piece to my colleagues at GWU first; their dismay mirrored that of many in the broader community. As I’ve explained to their satisfaction, and now I wish to explain to the field at large, my views and opinions were not accurately captured by the piece, and I hope the record can now be corrected. In particular, the headlines inserted for the piece for publication said “We don’t need to save endangered species,” and that “we should only worry about preserving biodiversity when it helps us.” I did not write these words, I do not believe these things, and I do not support them.”
(Taken from Dr. Pyron’s FB page.) 

Dr. Pyron says things that are scientifically reasonable in his long, thoughtful Facebook post. Unfortunately, few will ever read the FB post; millions may read the op-ed. The public gets the wrong message, and his scientific peers are dismayed. Not a good outcome. Remember, always, that communication isn’t what you think you are giving, but what the reader gets. In this case, most readers got a message that was antithetical to Dr. Pyron’s true beliefs. Ouch. That is a true failure in writing. Worse, it is a failure that Dr. Pyron is going to have to live with, because published is forever. You can’t unpublish something.

Equally, this is a powerful lesson in the value of peer review: Dr. Pyron did not run the piece past his peers, and so he never got feedback indicating that readers were taking away a different message than the one he intended.