November 21, 2016 / jpschimel

Titles: statements, conclusions, or questions?

A title’s purpose is to inform you quickly and effectively enough about a paper to let you decide whether to invest more time in reading it—at least to check the abstract for confirmation. Most titles are statements that describe the work and/or results; for example, consider titles like:

  • Separating cellular metabolism from exoenzyme activity in soil organic matter decomposition
  • Sensitivity of coral recruitment to subtle shifts in early community succession
  • Direct benefits and indirect costs of warm temperatures for high-elevation populations of a solitary bee

The challenge for a good title is to give you enough information about what the paper covers without overwhelming you with a mass of technical terms. I periodically get asked, however, what I think about titles that either state the conclusion or ask a question. These ideas have been discussed extensively over the years, but I am not going to make any pronouncement of “thou shalt (or shalt not)…” First, I’ve used both approaches in my career and I prefer to limit how frequently I’m a hypocrite. Second, and more importantly, just because I wrote a book on writing doesn’t give me license to offer proclamations of personal opinion as “rules” that everyone must follow.

Fundamentally, all questions of whether you should or should not do something in writing come down to one question: does it enhance communication? Does it improve the reader’s understanding, or their ease in gaining information or insight? If it does, clearly, you should do it. If, rather, it hinders communication, equally clearly, you should not. If it neither hinders nor enhances, but just makes a point in a different way, then it is a matter of personal taste and style—do what you like.

Conclusions as Titles

As an author, it’s your job to be self-critical and to challenge your own conclusions. As reviewers and readers, our job is the same. By offering a conclusion in the title, you may prejudice readers or imply that you are trying to sell them a conclusion (rather than asking a question and allowing the conclusion to develop for them). Readers may then suspect your motives and objectivity—that undermines communication. Thus, there is some risk in conclusion-as-title papers.

But that doesn’t mean they are all bad and should never be used. The flip side of the argument is that a title should tell us what the paper is about—and what does that better than simply stating the conclusion? So when there is a clear, interesting, and inarguable conclusion, why not just state it? Might readers then not bother reading the paper? After all, they got the story just by skimming the title. I wouldn’t worry about that—if readers care so little about the topic that they won’t go further than the title, they probably wouldn’t read the paper anyhow, but maybe you’ve still slid a useful tidbit into their brain. That’s a victory. I looked at the recent web pages for Ecology and Ecology Letters, two leading journals in the field, to pull some recent examples. Let’s see where they work and where they don’t.

“Large, connected floodplain forests prone to flooding best sustain plant diversity.”
   Johnson et al. (2016; DOI: 10.1002/ecy.1556)

The problem I have with this title is what is implied but unstated: these floodplain forests “best” sustain diversity. But better than what? Every other ecosystem type on the entire planet? After all, “best” is the superlative adjective. Thus, the problem I have with this title isn’t that it’s a statement, but that it is unclear.

“Predators suppress herbivore outbreaks and enhance plant recovery following hurricanes.”
   Spiller et al. (2016; DOI: 10.1002/ecy.1523)

Here, my problem is that “herbivore” can cover anything from aphids to elephants. An “elephant outbreak”? Well maybe not. In fact, this paper discusses a moth outbreak and a lizard predator. Would I have liked to know at least part of that? Yes. Would this have been better as “Predators suppress outbreaks of insect herbivores and enhance plant recovery following hurricanes”? I think so. If I know the herbivores are insects, I may not care what the predators are—but I can guess they are not lions and tigers! Maybe the authors didn’t want to generalize moths to “insects,” but they could have gotten technical and called them “lepidopteran herbivores.” They were clearly trying to stay short and pithy. That, however, is the challenge in writing a good title: give us enough information to understand what the paper is about without bogging us down in detail. What is the right balance?

“Naive tadpoles do not recognize recent invasive predatory fishes as dangerous.”
   Hettyey et al. (2016; DOI: 10.1002/ecy.1532).

This, I think, is better. It frames the story and makes clear what the message is—though I found the opening word “naive” a little odd. The image of a naive tadpole conjures something from a Far Side cartoon—but with this, the authors were assuming that anyone looking at the title knows what “naive” means in this context, which is harder when it’s the first word. Thus, I would argue that this one suffers from a minor case of the “curse of knowledge,” in which the author assumes readers know too much.

“Mycorrhizal fungi and roots are complementary in foraging within nutrient patches”
   Cheng et al. (2016; DOI: 10.1002/ecy.1514)

Here is one that I think manages to be both short and pithy while still offering enough information that I get a pretty complete story. OK, maybe that’s because I’m a soil biologist. But any ecologist should know (or be able to figure out) what mycorrhizae, roots, and nutrient patches are.

In going through “statement as title” papers, I think authors taking this approach were usually aiming for short and sharp, and may therefore be prone to leaving out information. That forces a reader to go to the abstract to figure out what the title means. Ouch. I want to read the title to figure out whether to look at the abstract—I shouldn’t have to read the abstract to figure out whether to look at the title! If you go this route, therefore, be careful to ensure the message is as complete and clear as you can make it.

So then I went to my own C.V. to check how frequently I’ve been a co-author on papers whose titles are statements: 14 out of more than 150. Here are a few:

  • Long-term warming restructures Arctic tundra without changing net soil carbon storage.
  • Invasive Grasses Increase N Availability in California Grassland Soils
  • Drying/rewetting cycles mobilize old C from deep soils from a California annual grassland

Each of these offered a simple and clean story, one that could be captured in a simple declarative statement. So why not do so routinely? Well, in the 130+ other papers I’ve co-authored, the stories were more complex, or we just came up with something different and never questioned whether we should try a conclusion title.

Questions as Titles

How about questions as titles? Some people dislike them, but I think the question titles readers mostly dislike are “Yes/No” questions. Readers are likely to assume that you know the answer is “yes,” so why bother to pose it as a question? This can feel “precious.” Consider:

“Can we predict ectotherm responses to climate change using thermal performance curves and body temperatures?”
   Sinclair et al. (DOI: 10.1111/ele.12686)

I don’t like that title. As a reader, I’d assume that the answer is “yes,” but I don’t learn much from the title itself. I’m not even sure the answer is “yes.” And if it’s “no,” then I am sure there must be a deeper story that the title isn’t letting on. Let’s look at another “Yes/No” question title:

“Does habitat unpredictability promote the evolution of a colonizer syndrome in amphibian metapopulations?”
   Cayuela et al. (2016; DOI: 10.1002/ecy.1489)

Again, I can guess that the answer is “yes,” but I don’t know that—after all, the authors could have entitled this “Habitat unpredictability promotes the evolution of a colonizer syndrome in amphibian metapopulations,” and then I would know the story; here, I’m unsure. To me, that isn’t effective communication, and that’s my criterion. But what about question titles that aren’t simple Yes/No questions? Consider:

“When can the cause of a population decline be determined?”
   Hefley et al. (2016; DOI: 10.1111/ele.12671)

That’s a question I want to know the answer to. This title piques my curiosity while offering a strong sense of what the story is about. It is a question whose answer is likely to be both interesting and important for ecologists. I think that title is effective communication. Here’s another that does something similar:

“Why do trees die? Characterizing the drivers of background tree mortality”
   Das et al. (2016; DOI: 10.1002/ecy.1497)

Again, this poses a question that is very likely to draw an ecologist’s interest. Most trees can live a long time and survive harsh conditions. So why do they eventually kick the bucket? The question engages my curiosity, and then the second part gives me some sense of where the paper is going. Nice.

So, from my deep and scholarly analysis of titles (OK, my 20 minutes of skimming journal web pages), my first-cut conclusion is that it’s probably best to avoid Yes/No questions for titles—readers are likely to be sure that you know the answer but are holding out on them. Thus, such titles are not likely to grab a reader’s curiosity, and they aren’t likely to offer a clearly informative story. If the question has a simple “yes” answer, the title can probably be better written as a statement.

Having drawn that conclusion, let me test my hypothesis against several papers I’ve co-authored where we used a question-based title:

  • Cold-season production of CO2 in Arctic soils: can laboratory and field estimates be reconciled through a simple modeling approach?
  • Different NH4+–Inhibition patterns of soil CH4 consumption: a result of distinct CH4 oxidizer populations across sites?
  • Does adding microbial mechanisms of decomposition improve soil organic matter models? A comparison of four models using data from a pulsed rewetting experiment.

Each of these does ask a Yes/No question—so am I making a liar out of myself? I don’t think so. The first two pose the question after framing the issue, rather than as the entire title. The last one asks the question but then also makes it clear how it would be answered and what the main story of the paper really is.

The key is whether the question enhances reader understanding of what the work is about and their engagement with it. I’ll stay with my conclusion that simple Yes/No questions are probably best avoided.

Clever Titles: plays on language

A final issue here is “clever” titles—those that play with language. My first message is to keep in mind that some readers, particularly those who are not native English speakers, may not catch the wordplay. Then your cleverness likely undermines communication. And that is bad.

For example, consider the title of a paper I handled recently for Soil Biology & Biochemistry. The original title was:

 “When protection leads to degradation: impacts of protected colonial birds on soil microbial communities.”

This was a nice paper, but consider the initial clause: “When protection leads to degradation…” When you first read that, it means nothing. I suggested turning it around: “Impacts of protected colonial birds on soil microbial communities: when protection leads to degradation.” Now, with the straight information as the main title, the meaning of the subtitle becomes obvious—protecting the birds leads to soil degradation. It still has the interesting “flip” idea that doing something good is actually bad, which generates curiosity and so, to my mind, enhances communication. If the authors had deleted that clause entirely, there would be less sense of the story to engage a potential reader—the paper would just be about the effects of colonial birds on soil communities. But how many of us would care about that? Here’s another from Ecology:

“Hunting on a hot day: effects of temperature on interactions between African wild dogs and their prey.”
   Creel et al. (2016; DOI: 10.1002/ecy.156)

I think the opening clause here doesn’t add much, but it does add enough to be useful. It’s short enough that a reader gets past it very quickly and the terms “hunting” and “hot day” resonate closely with the heavier and more technical terms in the main clause “effects of temperature” and “interactions between African wild dogs and their prey.” The short preamble clause adds a little more sense of what the story is about and does so with lighter language.

“Elephants in the understory: opposing direct and indirect effects of consumption and ecosystem engineering by megaherbivores.”
   Coverdale et al. (2016; DOI: 10.1002/ecy.1557)

In this one I like the sound and image of elephants in the understory. It flows prettily. And it provides a gentle start to what then becomes a heavy and technical main clause. It helps a reader see the issue and it does it with nice language. I think that is good communication.

To wrap up: do not be clever just to be clever, or because you came up with an expression that you like. It’s all about the reader—whether you help or hinder their understanding and appreciation of what you offer. A title should offer enough guidance on what the paper is about for readers to make a reasonable choice about whether to go further and read the abstract or the whole paper. As with all other issues of communication, the question is whether it improves the reader’s experience and makes their job easy. Remember my first Principle (can I call that the Principal Principle?): as the author, it is your job to make the reader’s job easy. That applies just as much to titles as to any other part of the text.








October 25, 2016 / jpschimel

“Writing Science” in one page: A guest blog by Amy Burgin

I saw Amy’s three-page condensation of “Writing Science” and thought it so wonderful I asked if she would make it available and write a guest blog. Here is the link. Thanks Amy!


The Making of Schimel in a Sheet: How and why I use “Writing Science” to teach Scientific Communication

Amy Burgin, Associate Professor, University of Kansas and Kansas Biological Survey

As a lifelong nerd, I relish the freshness of each fall semester with rituals such as preparing for the first day of class, outlining my writing projects for the new academic year, and my now annual refresher on scientific writing principles. For the last four years, I’ve taught Writing Science or Science Communication (#SciComm) using Josh’s book, Writing Science: How to write papers that get cited and proposals that get funded. Communication skills, like those described in the book or developed through Science Communication classes, are what set young scientists apart from the pack vying for limited grant money, publication slots and tenure track positions.

I first encountered the power of the book while working with my first Master’s student, Valerie Schoepfer. Valerie defended her M.S. in April 2013 and turned in a pretty typical thesis: it was too long, lacked a cohesive story, and generally read like a lab report. Our task was to transform the thesis into publishable manuscripts. I ordered Writing Science, gave it to Valerie, and asked her to do the writing exercises. Afterwards, she sent me a new draft that was almost unrecognizable from the original thesis (now a paper in JGR Biogeosciences). The editor added this note in his acceptance letter:

“Personally I read the introduction and liked it a lot. It was tutorial enough to make this work accessible to a wide audience. As a neophyte biogeochemist and wetland ecologist, I found it very clear. I am also looking for our papers to clearly state interesting hypotheses and you do. Scientifically, the seasonal aspect of this work makes the context of the finding and their implication better. Figures are bold and easy to read. Good job. This is what I expect and hope for.”

I’ve had few prouder moments as an advisor than reading this email. It showed a student how the hard work of seemingly endless drafting, revising, and editing can lead to positive peer-review comments and a publishable paper. Given that great experience, I started offering a seminar class on Writing Science. For three years, I taught it with a focus on analyzing published papers. (After the first year, I had to institute a “no papers written by your advisor or committee members” rule—analyzing your advisor’s writing leads to awkward class conversations.) This approach made for better readers, but didn’t translate into long-term writing improvement. Consequently, I’ve modified the class to incorporate the writing exercises—the exact ones Valerie used to improve her initial drafts. Students write ~15 drafts of an 800-word article using the book’s prompts. They then complete a reflective self-analysis to internalize where they need to improve. Not surprisingly, applying the principles to your own work yields the greatest improvement.

At the end of the class, I deliver a “parting gift” I call the Schimel in a Sheet. It functions as an easy reference to the book’s core messages to hang beside your desk—you can see my Schimel Sheet in action in the picture of my work station, below. The Schimel Sheet highlights the major messages in each chapter. The quotes reflect marked passages in my heavily annotated copy of Writing Science. That is, these are the core messages for young scientists and early-career writers to internalize. There is a good amount of shorthand; thus, the Schimel Sheet may be hard to understand if you’re not familiar with the book. If you’ve studied the book, it serves as a good reminder of the first principles of clean, clear, and concise science writing.


At the end of each class, I also ask my students to summarize the book as simply as possible. My current class constructed the best summary yet, which fit into this tweet:

The “Schimel Sheet in a Tweet” is (necessarily) short, but is the most succinct summary I’ve seen of Writing Science. “Get to the point” refers to the material in the first four chapters, which focus on establishing a story. The entire book encourages the writer to put themselves in the reader’s shoes, but this theme is particularly apparent in Chapters 5-10, on engaging curiosity by framing a knowledge gap, stating a question, and creating an overall logical progression. Students learn to “sweat the small stuff” in Chapters 11-16, which provide crucial tools for fostering clearer writing. All in all, I think this is a pretty good summary of the book. [Josh’s note: me too!]

September 29, 2016 / jpschimel

Mentoring: the power of kindness


Emily Bernhardt (Professor at Duke and President of the Society for Freshwater Science) wrote a lovely column, “Being Kind,” about the importance of being kind in science.

In this she emphasizes the importance and power of being kind. Importantly she distinguishes being kind from being nice:

“In using the word kind I very explicitly do not intend the sometimes synonym nice. As intellectuals struggling to understand the world around us it is vital that we argue, that we hone our understanding through challenging our own views and the views of others. We cannot, and should not, always be nice while intellectually sparring. Yet we can spar while still being kind. We can disagree with a point while respecting the person making it.”

This is analogous to my quote “Friends don’t let friends publish bullshit.” You don’t serve your friends by giving them a pass, but by helping them make their work as good as it can be.

In the column, Emily notes several incidents that for her were notable acts of kindness, when more senior colleagues took the time to help her work through challenges. Two of the people she mentioned were Nancy Grimm and myself. Neither Nancy nor I had more than vague memories of the experience—I’m sure Nancy’s experience was the same as mine: having a fun discussion, puzzling out an interesting problem with a colleague and new friend. I’m sure I had no perception at the time that I was being kind.

But that’s the point: what we experience as the motivation for our actions and what the recipient experiences as their outcome can be very different. And that’s why it’s important to remember that even what feels like a casual conversation with a junior colleague can have an outsized influence.

My former student Jay Gulledge once highlighted a similar thing, noting that one of the most significant events of his graduate career was an 8-hour “argument” we had about an experimental design on a drive from Fairbanks up to our field site at the Toolik Field Station in arctic Alaska. In the end, I gave in and agreed: “do it your way.” I remember the trip, but didn’t realize that for Jay it was better than passing his qualifying exams. He knew that I hadn’t caved out of exhaustion—I am as stubborn as Jay, if not more so—but because I finally accepted that although I still had issues with the design, no experiment is perfect, and his approach was as likely to succeed as mine (and it did). For me, that trip was just a trip to Toolik; for Jay, it was a rite of passage.

In contrast, I well remember an interaction with a senior colleague who was one of the bigwigs in soil biology. I met him at the first conference I ever attended. I was an insecure newbie Ph.D. student, standing and talking with Mary Firestone, my Ph.D. advisor. This person came up to her and said, “Mary, I see you’re presenting a paper on heterotrophic nitrification.” She introduced me and pointed out that I was the one who did the work and was presenting the talk. He gave me a sideways glance, then turned back to Mary and asked, “Do you believe your data?” I got the distinct impression that I was beneath his notice—and I have never forgotten, nor forgiven, him.

It can be hard to remember, as you move up the career ladder, just how influential your words and acts can be on junior colleagues who look up to you. We carry those little interactions with our seniors for the rest of our lives. They can be deeply enriching, or they can be scarring. So remember to be kind, even when you may be dissecting someone’s work. The world is more fun when people are friendly and supportive of each other.

I’ll end by passing along another quote that Emily used, from Anne Galloway: “Everyone here is smart, distinguish yourself by being kind.”

August 15, 2016 / jpschimel

Single vs. Double-Blind Review: Is it really bad to let reviewers know who you are?


A long-standing concern with peer review is prejudice: prejudice born of reviewers knowing who the authors are, but not vice versa. This raises a clear potential for abuse. Shit happens, and I think all experienced researchers have had some experience with inappropriate or personally charged reviews. More recently, the concern has shifted to covert prejudice—quite possibly unconscious—against women, minorities, other nationalities, or even junior colleagues. A paper authored by John Smith, or even J. Smith, might review more favorably than one by Jill Smith, Juan Herrera, or Shujin Zhu. Prejudice, whether overt or covert, degrades peer review and scientific publication.

To avoid this, some disciplines and journals are moving to double-blind review in which the names and affiliations of the authors are removed from the paper. In some areas, double-blind is considered a necessary and fundamental requirement of a fair peer review system.

However, in other areas, the counter-argument has been that double-blind is pointless, because reviewers can figure out who the authors are. For example, in environmental field sciences, the combination of topic, approach, and research site can limit the possible research group to such a degree that the reviewer is able to “peek past the blindfold.” If someone is doing work on summertime soil biogeochemistry of California grasslands, working at the Sedgwick Reserve, it wouldn’t be much of a stretch to guess that the work came from my lab. If the paper noted that isotope samples were analyzed in the University of California Santa Barbara Marine Science Institute analytical lab, you’d have it nailed.

Thus, even with double-blind systems, reviewers are often sure they know who the authors are. But research suggests they are regularly wrong; I can vouch for that—I once reviewed a paper and noted to the editor: “This is a really nice paper out of so-and-so’s group.” The paper was covered with “fingerprints” such as personal communication and unpublished data references, but the editor wrote back to tell me I was wrong. My response was that it was clear that there was some relationship between the actual authors and the group I tagged, but that more importantly, it didn’t matter that I was wrong! If I had prejudices, they would still have tainted my review. The counter is that if reviewers are at all uncertain about the authors, it could at least diminish the effects of any prejudice they hold; but in my case, I wasn’t uncertain—I was just wrong. Oops.

In any event, all the discussions I have ever seen have always focused exclusively on eliminating potential bias in the assessment of the manuscript itself, trying to ensure that decisions on the fate of a paper are not a function of who wrote it, but solely on what they wrote.

But submitting a paper is also a form of professional networking. As I mentioned in a previous blog post, “the Editors and reviewers who run the journals are your professors and your colleagues—people you want to be your friends (and maybe your postdoc advisor).” Early career scientists have an interest in becoming known to their senior colleagues. Yet, the papers I read most carefully and pay closest attention to are those I review; I’m likely to register who wrote a paper when I review it. When I get a double-blind manuscript, I may be able to guess where the paper came from, but I can’t know which student or postdoc actually wrote it. Having me know that Dr. Loreau’s group just produced a nice piece of new work may benefit Dr. Loreau, but it does nothing for Ms. Sylvain who actually wrote the paper.

Sometimes, useful relationships even develop from the review process—I started working with Stefano Manzoni, now one of my most valued collaborators, as a result of a review I wrote (and signed) of one of his first papers. He took some ideas I’d included and developed a new model that elaborated on them; he then invited me to be a co-author and we’ve worked together since. Such Cinderella stories may be rare, but they do occur.

If that had been a double-blind review, I couldn’t have known that the paper came from a group newly moving into soil biology, and I might well not have invested so much in the review. Would I have signed it? I suspect not—anonymity breeds anonymity. And I know I said things that I wouldn’t have said in a completely open review system. Signing that review has benefited both of our careers. Letting the reviewers know who the authors are can help find the glass slipper.

The networking and advertising benefits of classical single-blind review may be modest and occasional, but they are real, and double-blind review eliminates them. The debates over single vs. double-blind that I’ve seen consider only the balance of risks from prejudice in single-blind against the hassles or inefficiencies of double-blind. They don’t consider any potential benefit to the authors of classical single-blind. They should.

In some fields the cost-benefit balance of review systems will clearly come down on the side of double-blind. In others (particularly I suspect field-based sciences such as ecology) the balance might well shift to single-blind.

Importantly though, the discussions should consider that the review process is more than a simple evaluation of a manuscript. It also builds relationships among people.

July 24, 2016 / jpschimel

Language and language change: what are the “data”?

Language changes. How else do you explain George Bernard Shaw’s famous quote “England and America are two countries divided by a common language.”

Language changes with time and distance. Words are created, lost, and alter meaning. Particularly when words are adopted from another language, they often shift meaning and usage. In English, a common battle is whether the rules from that other language still necessarily apply to the word as used in English. If we adopt a Latin word, must we still use Latin rules?

“Data” is probably the most fought-over word in science. Many (though a decreasing number) feel that those who would ever use “data” as a singular noun are ignoramuses debasing the language.

“Data” (in English) is derived from a Latin word; in Latin, it is the plural of “datum.” In Latin, therefore, to use “data” as a singular would be a complete and gross error. But is it an equal error in English? According to the Oxford English Dictionary (the OED), the Latin word means “given, that which is given, neuter past participle of ‘dare’ to give.”

That isn’t the meaning we apply to the word in English, and particularly not in science.

So is the English “data” the same word as the Latin word “data”? No, it isn’t. The OED gives our definition as “In pl. Facts, esp. numerical facts, collected together for reference or information.” So, should the same rules apply?

Some argue yes—that since “data” is originally a Latin word, Latin rules should always apply. But standard English usage often treats “data” as a mass, or collective, noun—it is the collection of facts.

In English usage, collective nouns are treated as grammatically singular. “The population is…” is correct English usage; “The population are…” would be incorrect.

So in dealing with the word “data,” we are left with two issues. The first is whether it is ever correct to use “data” as a singular, collective noun. The second is whether you should.

Based on the OED, the Chicago Manual of Style (the CMS), and other sources of grammatical wisdom, you can correctly use “data” as either a singular mass noun or a plural, depending on your meaning:

The plural form: “The data indicate…” implies that it is through evaluating each datum and then synthesizing that information that you establish what is indicated.

The singular form: “The data indicates…” implies that after aggregating the data into a single mass, the whole data set acting as a single entity indicates something.

Don’t forget though, that if you have a single fact, it remains a datum (or a data point). You can’t have “a data.” Don’t use “data” as a true singular.

But then there is the issue, not of what is grammatically correct, but of what people think is grammatically correct. There remain those who reject the mass-noun use, and they tend to be senior colleagues—people you might want to impress. Although the OED and CMS acknowledge and accept the mass-noun usage, the OED notes, “However, in general and scientific contexts it is still sometimes regarded as objectionable,” and the CMS says, “In formal writing (and always in the sciences), use data as a plural.” For me, the ability to use “data” as a mass noun is a tool too useful to ignore, but it is one that you should use thoughtfully and deliberately, and some conservatism is wise.


Footnote: The OED does note the use of “data” as a count noun with a 2010 citation stating “These datas were likely not missing at random.” But please don’t do that. Not only does it sound horrible and wrong, but almost every reader will be sure that it is. 

June 19, 2016 / jpschimel

How I learned to hate statisticians

OK, I don’t hate statisticians. But have you ever gotten so sick from eating something once that you haven’t been able to look at that dish for years afterward? So how would you feel if an experimental design was foisted upon you on the basis of “statistical perfection” that wasted >$1 million and an entire year’s effort by many, many people on a nationally important study? That was my experience on the Exxon Valdez coastal habitat damage assessment study.

I started as an Assistant Professor at the University of Alaska Fairbanks in January 1989. It was quite the welcome to Alaska—that winter I saw the thermometer read -60 F and my mother was sure that I was going to freeze to death. The ice fog in Fairbanks was so thick that I was stranded on campus for weeks, and with my impressive skills at driving on ice, I was taking many people’s lives into my hands anytime I tried to drive to the supermarket.

But then on March 24, the Exxon Valdez ran aground on Bligh Reef. Everyone with any scientific expertise, it seemed, got caught up in the effort of trying to figure out how to assess the damage to the magical environment of Prince William Sound. How do you assess such damage? The animal people had it “easy”—everyone agreed that you could set a cash value on a dead sea otter; $10,000 per animal? But how do you assess the damage to the habitat that supports those sea otters? How much is a dead barnacle worth? How much are a few fronds of dead Fucus worth? The obvious answer would be that on their own, it would be awfully close to zero. But these are the base of the food chains that support the otters, the murrelets, and the herring. Clearly the value of the ecosystem is far, far from zero—rather it’s mammoth!

So we put together a damage assessment strategy that focused on foodweb concepts, targeting the quantity, quality, and composition of key trophic levels: The Coastal Habitat Damage Assessment. A large group of us developed the core approach over several meetings in Juneau and Anchorage, with a plan to get research teams into the field by August. We called it the “Coastal Habitat” study to emphasize that we were studying basic ecosystem members not for their own sake necessarily, but because they created the habitat for the more charismatic members.

We developed a sampling strategy that would compare heavily oiled sites to lightly or unoiled sites of different habitat types (e.g. exposed rocky shores, sheltered rocky shores, sandy beaches, estuaries), and would have three separate teams spread across the coast of Alaska: one in Prince William Sound itself, one in Kenai, and the third in the Shelikof Strait area of Kodiak and Katmai.

The biologists on the study wanted to make it a paired design, in which we would use a GIS to classify the degree of oiling and the habitat type along all the shorelines of Prince William Sound and of the other sampling areas. We would randomly select heavily oiled sites in each habitat type, and then pick the nearest available lightly or unoiled site of the same habitat type to use as a paired control. We felt this would balance the need for random sampling with ensuring meaningful biological reality.
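The paired design we proposed is simple enough to sketch in code. Below is a minimal, hypothetical Python illustration—the segment list, field names, and counts are all invented stand-ins for the real GIS classification, not anything from the actual study:

```python
import random

random.seed(42)  # make the toy example reproducible

# Toy stand-in for a GIS shoreline classification: each 1-km segment
# gets a habitat type, an oiling level, and a position along the coast.
segments = [
    {"id": i,
     "habitat": random.choice(["exposed rocky", "sheltered rocky",
                               "sandy beach", "estuary"]),
     "oiling": random.choice(["heavy", "light", "none"]),
     "position_km": float(i)}
    for i in range(200)
]

def paired_design(segments, n_per_habitat=5):
    """Randomly select heavily oiled segments in each habitat type,
    then pair each with the nearest lightly/unoiled segment of the
    same habitat type to serve as its control."""
    pairs = []
    for hab in {s["habitat"] for s in segments}:
        oiled = [s for s in segments
                 if s["habitat"] == hab and s["oiling"] == "heavy"]
        controls = [s for s in segments
                    if s["habitat"] == hab and s["oiling"] != "heavy"]
        if not oiled or not controls:
            continue
        # Random selection of oiled sites...
        for site in random.sample(oiled, min(n_per_habitat, len(oiled))):
            # ...but deterministic choice of the nearest matched control.
            ctrl = min(controls,
                       key=lambda c: abs(c["position_km"] - site["position_km"]))
            pairs.append((site["id"], ctrl["id"]))
    return pairs

pairs = paired_design(segments)
```

The contested point was the second step: here the control for each oiled site is chosen deterministically (nearest same-habitat segment) rather than drawn independently at random, which is exactly what the biometrician objected to.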

But this was a huge effort, coordinated by State and Federal Agencies, and the Management Team had contracted a biometrician who I understood was well known and respected for work on wildlife, but I’ll leave his name and affiliation anonymous. He insisted that such a paired design was imperfect since it meant selecting control sites non-randomly. He insisted that we select the oiled and control sites independently, and randomly, to create a stronger statistical design. We argued extensively about the alternative designs: paired vs. random. He won that battle.

As a result of his winning that battle, we all lost the “war”—it destroyed the first year of the study. The efforts of about 10 research staff working out of two 50-foot charter vessels for over a month continuously in the field (I think the boats together cost $5,000 per day), plus the people back in Fairbanks analyzing samples and data, were all for naught. Wasted.

It was wasted in part because of one other decision that seemed trivial. That was how to define the sampling “universe” from which sites were selected. The site selection and marking group was based out of the Alaska Dept. of Fish and Game (if I remember right). Their job was to a) do the GIS work to map out coastal habitat types and overlay that with the level of oiling, b) randomly select sections of coast—five oiled and five unoiled in each habitat type (no longer than 1 km per section)—and then c) send a team to the Sound to mark the sites for the research team that would be heading out a few weeks later. The “trivial” decision was to include any map quad with oiled sites in it as part of the sampling universe.

As it turned out, the map quad that included the northwest section of Prince William Sound had some oiled sites in it. As a result, the entire section became part of the study. But because there wasn’t much oiled coastline in that area, a disproportionate number of control sites ended up in the northwest, some on the mainland, even though most of the oiled sites were on the islands in the more central areas of the Sound. The oil was concentrated on those central islands because the currents that carried it from Bligh Reef through the Sound run right up against and around them.

Well, it rains a lot in Prince William Sound, averaging 60 inches of rain a year, according to NOAA, but near Whittier (in the NW) it can be closer to 200 inches a year! And all that freshwater falling from the sky has only one place to go: Prince William Sound. As a result, up in the coastal areas in the northwest Sound, it isn’t really a marine environment. A freshwater lens sitting on top of the seawater can be more than a foot deep, as we learned once we were out there sampling—you could drink the “seawater.” And you know, marine shoreline organisms like Fucus and barnacles really don’t like freshwater. As a result of having too many of our control sites in that brackish- or even fresh-water dominated area, our first year’s sample collection made it look like a massive oil spill was “good” for the populations of marine coastal organisms. Oops. The entire year’s effort was a complete bust. A waste.

All because of one major conceptual decision forced upon us by the biometrician, coupled with a few seemingly minor decisions made by different groups who were all under enormous pressure to get moving. There wasn’t time to consult widely on how to define the “sampling universe.” There may well be people who could have told us to stay away from those areas because they are not comparable, but when you have to move a complex operation quickly under crisis conditions, it’s not surprising that those experts weren’t where we needed them when we needed them. For example, I have no clue who even decided which map quads to include in the GIS that was used to select sites.

So are you surprised that I developed a healthy skepticism for statisticians and for the “perfect” design? The research teams didn’t know how different the regions of the Sound were, but our intuition was to ensure that control and oiled sites were well matched, even if that gave us a less perfect statistical design. That gave us a strong biological design—the one that was used in later years of the study, and that showed, as expected, that crude oil was rough on marine coastal species. Just not as rough, perhaps, as trying to live in freshwater. A contaminated habitat is still, after all, “habitat.”

My relatively short experience with the Exxon Valdez spill taught me valuable lessons—about the challenge of working across agencies and cultures under crisis conditions, about the joys of working off boats in stormy conditions, and, most importantly, about never letting some idealized version of the “perfect,” or even of just the “better,” design trump the common-sense practicality of a good and workable one. I also learned the importance of thinking carefully about how you define the sampling universe and how you scale from that limited area to larger scales of whole systems.

I’ll happily consult with a statistician on how to deal with the data I collect, but never again will I allow one to determine how I set up a study, at least not if their advice goes against my biological intuition.

June 8, 2016 / jpschimel

What do you wish you had known before submitting your first article?

I was just part of a workshop that our Graduate Division organized for Ph.D. students and postdocs to discuss the publication process. A number of students offered questions, and although they spanned a lot of territory, I realized that most of the answers were obvious if you considered that a journal isn’t a faceless corporate entity, but us. A journal may be owned by Elsevier or Wiley (which are indeed large and faceless corporations), but the editors and reviewers who run the journals are your professors and your colleagues—people you want to be your friends (and maybe your postdoc advisor). The editors who run journals, and the reviewers who work with them, are people who are active in your field. They do these jobs as professional service and to support their academic habitat, rather than as employment. Ergo, they are people whose good opinion you should value and whose time you should be sensitive to wasting.

So just remember you are dealing with busy, overworked, colleagues and friends (even when they are anonymous) and remember also the Golden Rule—treat them as you would wish to be treated. And, voila, almost all the answers to questions students asked become clear:

Is it OK to submit to multiple journals simultaneously? Well, that creates unnecessary work for multiple colleagues—so no, and it’s against the rules.

Should I suggest the names of potential reviewers for my paper? Well, will that help the editor do their job? Of course it will, so of course you should do it. But see my blog post on how to do this well.

When is it OK to contact the editor with questions about dealing with reviews? Will it reduce her total workload to address your question off-line, rather than in a resubmitted manuscript? If so, yup, send the e-mail. It may take some time to address your inquiry, but if she is going to have to deal with a resubmitted manuscript, a quick inquiry will likely smooth the evaluation process and might save a round of revision—that would certainly involve more work than answering your e-mail.

Is it OK to submit a rough version of a manuscript to get external input before polishing a paper and resubmitting it? Will submitting a “rough” version create extra work for the editors and reviewers? Of course! So no, it’s not OK. You should submit the best version of the work you possibly can. That should involve getting friendly review before you submit officially, but the people you are asking for review should know that you are asking for pre-submission collegial review. You should make your paper as close to perfect as you can, recognizing that reviewers will still have criticisms and input. Some fields, such as physics, use “preprint servers” where you can post pre-submission versions of papers and invite comment—but that is equivalent to friendly review. Someone can choose to respond or not as they wish.

If a paper is declined, is it OK to just submit the manuscript to a new journal (without revising it to deal with the criticisms)? Well, imagine if the same reviewer got the new submission—will they be annoyed at having to offer the same comments? Duh… Do you really think you won’t get the same reviewers? I know of one paper I reviewed three times for three journals before it eventually got good enough to publish! Reviewer comments are always worth considering. In my experience, when reviewers identify a “problem,” they are almost always right. They may be “wrong” about the solution they propose, but you don’t have to take their solution, as long as you have a good alternative. You can argue with reviewers, but don’t ever blow them off—after all, they are us. I know of another paper that I reviewed once and identified a deeply fatal flaw in the methods (based on information in a paper they cited); it was rejected and should have been thrown away—the results and conclusions were most likely pure artifact. Instead, the authors submitted it elsewhere without paying any attention to the issues I identified (though I only found that new paper a number of years later). Have I forgotten who the authors are—or that I think they were dishonest to sweep the problem under the rug and publish a paper that they had reason to know was almost certainly wrong? Nope. That is an extreme case, but reviewers are us, we have long memories, and we are likely to be asked to review your work in the future—or to write tenure/promotion letters for you! Treat the anonymous peer review community with respect. You may disagree with them, and they may be wrong, but it is still likely that they were trying to follow my reviewer’s motto—“Friends don’t let friends publish bullshit”—and so trying to be constructive.

What do you do when you think one of your reviews was completely off target and the reviewer inept? Certainly, it is possible that the reviewer (or the editor) just completely blew it. We’ve all seen those reviews, and I suspect we’ve all written them. To err is human (to forgive, canine). If you think a rejection was based on a seriously misguided review, call us on it. As a Chief Editor for Soil Biology & Biochemistry, I get one or two appeals a year, and I have appealed several decisions in my life. Once, when we contacted the editor of Nature about his rejection of Jeff Chambers’s paper on how old trees in the Amazon rainforest can be (they don’t make annual rings, so no one knew), the immediate response we got back included the phrase “I don’t know what I was thinking”; the paper was sent out for review and ultimately accepted. I am still in awe that the editor (whose name I’ve lost track of) was so forthright and honest about having had a brain fart and fixing the mistake—a “gold star” moment in journal editing.

When you get a bad review, remember that the brainless idiot of a reviewer was chosen by the editor. So first, let it sit for three days to cool your jets. Then consider whether the problem may not have been with the reviewer, but with what he was reviewing—your paper. Did he misunderstand because he’s an idiot, or because you were unclear? It’s unlikely that it was 100% the former. If you choose to appeal, be as considerate and respectful as you can, and get an outside reader to double-check how your e-mail will come across to the editor. Acknowledge that there may have been problems with the paper that led the reviewer astray, and note how you could fix them. Remember, the editor is us, so try to reduce unnecessary workload and hassle. Dealing with appeals is fully within my responsibility—if I screwed up, I’d rather have a chance to fix it. But dealing with an author who is irate, huffy, and obnoxious causes a lot of extra work and headaches I don’t deserve for just trying to do the best job I can. First I have to convince myself not to react with my initial inclination—to just say “F-off!” Then I have to sort out what may be valid argument from what is just peeve. Being nasty is also a lot more likely to motivate an editor to focus on justifying their decision, rather than reconsidering it. The editor may be human, but the role forces them to make decisions and act as god. You’re asking them the favor of reconsidering that decision.

There were several other questions that arose at the workshop: about impact factor, how to motivate yourself to deal with major revisions, and other important issues. But most questions could be sorted out by remembering that the editors and reviewers are your colleagues. If you have a question about the process, start by putting yourself in their shoes and consider how you would want to be treated. Do that, and you’ll be able to answer 90% of your own questions.