June 19, 2016 / jpschimel

How I learned to hate statisticians

OK, I don’t hate statisticians. But have you ever gotten so sick from eating something once that you haven’t been able to look at that dish for years afterward? So how would you feel if an experimental design was foisted upon you on the basis of “statistical perfection” that wasted >$1 million and an entire year’s effort by many, many people on a nationally important study? That was my experience on the Exxon Valdez coastal habitat damage assessment study.

I started as an Assistant Professor at the University of Alaska Fairbanks in January 1989. It was quite the welcome to Alaska—that winter I saw the thermometer read -60 F and my mother was sure that I was going to freeze to death. The ice fog in Fairbanks was so thick that I was stranded on campus for weeks, and given my impressive skills at driving on ice, I was taking many people’s lives into my hands anytime I tried to drive to the supermarket.

But then on March 24, the Exxon Valdez ran aground on Bligh Reef. Everyone with any scientific expertise, it seemed, got caught up in the effort of trying to figure out how to assess the damage to the magical environment of Prince William Sound. How do you assess such damage? The animal people had it “easy”—everyone agreed that you could set a cash value on a dead sea otter; $10,000 per animal? But how do you assess the damage to the habitat that supports those sea otters? How much is a dead barnacle worth? How much are a few fronds of dead Fucus worth? The obvious answer would be that on their own, it would be awfully close to zero. But these are the base of the food chains that support the otters, the murrelets, and the herring. Clearly the value of the ecosystem is far, far from zero—rather it’s mammoth!

So we put together a damage assessment strategy that focused on foodweb concepts, targeting the quantity, quality, and composition of key trophic levels: The Coastal Habitat Damage Assessment. A large group of us developed the core approach over several meetings in Juneau and Anchorage, with a plan to get research teams into the field by August. We called it the “Coastal Habitat” study to emphasize that we were studying basic ecosystem members not for their own sake necessarily, but because they created the habitat for the more charismatic members.

We developed a sampling strategy that would compare heavily oiled sites to lightly or unoiled sites of different habitat types (e.g. exposed rocky shores, sheltered rocky shores, sandy beaches, estuaries), and would have three separate teams spread across the coast of Alaska: one in Prince William Sound itself, one in Kenai, and the third in the Shelikof Strait area of Kodiak and Katmai.

The biologists on the study wanted to make it a paired design where we would use a GIS system to classify the degree of oiling and of habitat type along all the shorelines of Prince William Sound and of the other sampling areas. We would randomly select heavily oiled sites in each habitat type. Then we wanted to pick the nearest available lightly or unoiled site of the same habitat type to use as a paired control. We felt this would balance the need for random sampling with ensuring meaningful biological reality.

But this was a huge effort, coordinated by state and federal agencies, and the Management Team had contracted a biometrician who I understood was well known and respected for work on wildlife, though I’ll leave his name and affiliation anonymous. He insisted that such a paired design was imperfect, since it meant selecting control sites non-randomly, and that we instead select the oiled and control sites independently, and randomly, to create a stronger statistical design. We argued extensively about the alternative designs, paired vs. random. He won that battle.
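The difference between the two designs we argued over can be sketched in a few lines of code. The site records, habitat labels, and distances below are invented purely for illustration; they are not from the actual study.

```python
import random

# Hypothetical site records: (site_id, habitat, oiled?, position in km along shore).
sites = [
    ("A", "rocky_exposed", True,  3.0), ("B", "rocky_exposed", False, 4.5),
    ("C", "rocky_exposed", True, 12.0), ("D", "rocky_exposed", False, 40.0),
    ("E", "rocky_exposed", False, 13.5), ("F", "rocky_exposed", True, 41.0),
]

def paired_design(sites):
    """Biologists' proposal: for each oiled site, take the nearest
    unoiled site of the same habitat type as its control."""
    oiled = [s for s in sites if s[2]]
    pairs = []
    for o in oiled:
        candidates = [s for s in sites if not s[2] and s[1] == o[1]]
        control = min(candidates, key=lambda s: abs(s[3] - o[3]))
        pairs.append((o, control))
    return pairs

def independent_design(sites, n, seed=0):
    """Biometrician's proposal: draw oiled and control sites
    independently at random (within a habitat type)."""
    rng = random.Random(seed)
    oiled = rng.sample([s for s in sites if s[2]], n)
    controls = rng.sample([s for s in sites if not s[2]], n)
    return list(zip(oiled, controls))

# Pairing keeps each control physically close to its oiled partner:
for o, c in paired_design(sites):
    print(o[0], "->", c[0], f"({abs(o[3] - c[3]):.1f} km apart)")
```

The paired version sacrifices fully random control selection, but it guarantees that every control shares its partner’s physical setting; the independent version is statistically cleaner on paper, but nothing stops the controls from landing in a very different environment.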

As a result of his winning that battle, we all lost the “war”—it destroyed the first year of the study: the efforts of about 10 research staff working out of two 50-foot charter vessels continuously in the field for over a month (I think the boats together cost $5,000 per day), plus the people back in Fairbanks analyzing samples and data. All for naught. Wasted.

It was wasted in part because of one other decision that seemed trivial: how to define the sampling “universe” from which sites were selected. The site selection and marking group was based out of the Alaska Dept. of Fish and Game (if I remember right). Their job was to a) do the GIS work to map out coastal habitat types and overlay that with level of oiling, b) randomly select sections of coast—five oiled and five unoiled in each habitat type (no longer than 1 km per section), and then c) send a team to the Sound to mark the sites for the research team that would be heading out a few weeks later. The “trivial” decision was to include any map quad with oiled sites in it as part of the sampling universe.

As it turned out, the map quad that included the northwest section of Prince William Sound had some oiled sites in it. As a result, the entire section became part of the study. But because there wasn’t much oiled coastline in that area, a disproportionate number of control sites ended up in the northwest, some on the mainland, even though most of the oiled sites were on the islands in the more central areas of the Sound. The oil was concentrated on those central islands because the currents that carried it from Bligh Reef through the Sound run right up against and around them.
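The effect of that quad rule is easy to see in a toy calculation. All the coastline numbers below are invented for illustration; only the logic—a quad enters the universe if it contains any oiled coastline, and then all of its coastline becomes eligible for random selection—comes from the story above.

```python
# Each quad: (km of oiled coast, km of unoiled coast) -- hypothetical values.
quads = {
    "central_islands": (50, 50),
    "northwest":       (2, 148),   # barely oiled, but lots of unoiled shoreline
    "southeast":       (0, 100),   # no oil at all -> excluded entirely
}

# The rule: any quad with oiled sites joins the sampling universe wholesale.
universe = {q: coast for q, coast in quads.items() if coast[0] > 0}

# Where do the candidate CONTROL (unoiled) sections come from?
total_unoiled = sum(unoiled for _, unoiled in universe.values())
for q, (oiled_km, unoiled_km) in universe.items():
    share = unoiled_km / total_unoiled
    print(f"{q}: {share:.0%} of candidate control coastline")
```

With these made-up numbers, the barely-oiled northwest quad contributes about three-quarters of the candidate control coastline, even though it holds almost none of the oil—exactly the kind of mismatch that put so many of our controls in the wrong environment.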

Well, it rains a lot in Prince William Sound, averaging 60 inches a year according to NOAA, but near Whittier (in the northwest) it can be closer to 200 inches a year! And all that freshwater falling from the sky has only one place to go: Prince William Sound. As a result, up in the coastal areas of the northwest Sound, it isn’t really a marine environment. The freshwater lens sitting on top of the seawater can be more than a foot deep, as we learned once we were out there sampling—you could drink the “seawater.” And you know, marine shoreline organisms like Fucus and barnacles really don’t like freshwater. Because too many of our control sites were in that brackish- or even freshwater-dominated area, our first year’s sample collection made it look like a massive oil spill was “good” for the populations of marine coastal organisms. Oops. The entire year’s effort was a complete bust. A waste.

All because of one major conceptual decision forced upon us by the biometrician, coupled with a few seemingly minor decisions made by different groups who were all under enormous pressure to get moving. There wasn’t time to consult widely on how to define the “sampling universe.” There may well be people who could have told us to stay away from those areas because they are not comparable, but when you have to move a complex operation quickly under crisis conditions, it’s not surprising that those experts weren’t where we needed them when we needed them. For example, I have no clue who even decided which map quads to include in the GIS that was used to select sites.

So are you surprised that I developed a healthy skepticism of statisticians and of the “perfect” design? The research teams didn’t know how different the regions of the Sound were, but our intuition was to ensure that control and oiled sites were well matched, even if that gave us a less perfect statistical design. That gave us a strong biological design—the one that was used in later years of the study, and that showed, as expected, that crude oil was rough on marine coastal species. Just not as rough, perhaps, as trying to live in freshwater. A contaminated habitat is still, after all, “habitat.”

My relatively short experience with the Exxon Valdez spill taught me valuable lessons—about the challenge of working across agencies and cultures under crisis conditions, about the joys of working off boats in stormy weather, and, importantly, about never letting some idealized version of the “perfect,” or even of just the “better,” design trump the common-sense practicality of a good and workable one. I also learned the importance of thinking carefully about how you define the sampling universe and how you scale from that limited area to larger scales of whole systems.

I’ll happily consult with a statistician on how to deal with the data I collect, but never again will I allow one to determine how I set up a study, at least not if their advice goes against my biological intuition.

June 8, 2016 / jpschimel

What do you wish you had known before submitting your first article?

I was just part of a workshop that our Graduate Division organized for Ph.D. students and postdocs to discuss the publication process. A number of students offered questions, and although they spanned a lot of territory I realized that most of the answers were obvious if you would consider that a journal isn’t a faceless corporate entity, but us. A journal may be owned by Elsevier or Wiley (which are indeed large and faceless corporations), but the Editors and reviewers who run the journals are your professors and your colleagues—people you want to be your friends (and maybe your postdoc advisor). The editors who run journals, and the reviewers who work with them, are people who are active in your field. They do these jobs as professional service and to support their academic habitat, rather than as employment. Ergo, they are people whose good opinion you should value and whose time you should be sensitive to wasting.

So just remember you are dealing with busy, overworked, colleagues and friends (even when they are anonymous) and remember also the Golden Rule—treat them as you would wish to be treated. And, voila, almost all the answers to questions students asked become clear:

Is it OK to submit to multiple journals simultaneously? Well, that creates unnecessary work for multiple colleagues—so no, and it’s against the rules.

Should I suggest the names of potential reviewers for my paper? Well, will that help the editor do their job? Of course it will, so of course you should do it. But see my blog post on how to do this well.

When is it OK to contact the editor with questions about dealing with reviews? Will it reduce her total workload to address your question off-line, rather than in a resubmitted manuscript? If so, yup, send the e-mail. It may take some time to address your inquiry, but if she is going to have to deal with a resubmitted manuscript, a quick inquiry will likely smooth the evaluation process and might save a round of revision—that would certainly involve more work than answering your e-mail.

Is it OK to submit a rough version of a manuscript to get external input before polishing a paper and resubmitting it? Will submitting a “rough” version create extra work for the editors and reviewers? Of course! So no, it’s not OK. You should submit the best version of the work you possibly can. That should involve getting friendly review before you submit officially, but the people you are asking for review should know that you are asking for pre-submission collegial review. You should make your paper as close to perfect as you can, recognizing that reviewers will still have criticisms and input. Some fields, such as physics, use “preprint servers” where you can post pre-submission versions of papers and invite comment—but that is equivalent to friendly review. Someone can choose to respond or not as they wish.

If a paper is declined, is it OK to just submit the manuscript to a new journal (without revising it to deal with the criticisms)? Well, imagine the same reviewer getting the new submission. Will they be annoyed at having to offer the same comments? Duh… Do you really think you won’t get the same reviewers? I know of one paper I reviewed three times for three journals before it eventually got good enough to publish!

Reviewer comments are always worth considering. In my experience, when reviewers identify a “problem,” they are almost always right. They may be “wrong” in the solution they propose, but you don’t have to take their solution, as long as you have a good alternative. You can argue with reviewers, but don’t ever blow them off—after all, they are us.

I know of another paper that I reviewed once and identified a deeply fatal flaw in the methods (based on information in a paper the authors themselves cited). It was rejected and should have been thrown away—the results and conclusions were most likely pure artifact. Instead the authors submitted it elsewhere without paying any attention to the issues I identified (though I only found that new paper years later). Have I forgotten who the authors are—or that I think they were dishonest to sweep the problem under the rug and publish a paper they had reason to know was almost certainly wrong? Nope.

That is an extreme case, but reviewers are us, we have long memories, and we are likely to be asked to review your work in the future—or to write tenure/promotion letters for you! Treat the anonymous peer review community with respect. You may disagree with them, and they may be wrong, but it is still likely that they were trying to follow my reviewer’s motto, “Friends don’t let friends publish bullshit,” and so trying to be constructive.

What do you do when you think one of your reviews was completely off target and the reviewer inept? Certainly, it is possible that the reviewer (or the editor) just completely blew it. We’ve all seen those reviews, and I suspect we’ve all written them. To err is human (to forgive, canine). If you think a rejection was based on a seriously misguided review, call us on it. As a Chief Editor for Soil Biology & Biochemistry, I get one or two appeals a year, and I have appealed several decisions in my life. Once, when we contacted the editor of Nature about his rejection of Jeff Chambers’s paper on how old trees in the Amazon rainforest can be (they don’t make annual rings, so no one knew), the immediate response we got back included the phrase “I don’t know what I was thinking”; the paper was sent out for review and ultimately accepted. I am still in awe that the editor (whose name I’ve lost track of) was so forthright and honest about having had a brain fart and fixing the mistake—a “gold star” moment in journal editing.

When you get a bad review, remember that the brainless idiot of a reviewer was chosen by the editor. So first let the review sit for three days to cool your jets. Then consider whether the problem may not have been with the reviewer, but with what he was reviewing—your paper. Did he misunderstand because he’s an idiot, or because you were unclear? It’s unlikely that it was 100% the former.

If you choose to appeal, be as considerate and respectful as you can, and get an outside reader to double-check how your e-mail will come across to the editor. Acknowledge that there may have been problems with the paper that led the reviewer astray, and note how you could fix them. Remember, the editor is us, so try to reduce unnecessary workload and hassle. Dealing with appeals is fully within my responsibility—if I screwed up, I’d rather have a chance to fix it. But dealing with an author who is irate, huffy, and obnoxious causes a lot of extra work and headaches I don’t deserve for just trying to do the best job I can. First I have to convince myself not to react with my initial inclination—to just say “F-off!” Then I have to sort out what may be valid argument from what is just peeve. Being nasty is also a lot more likely to motivate an editor to focus on justifying their decision rather than reconsidering it. The editor may be human, but the role forces them to make decisions and act as god. You’re asking them the favor of reconsidering that decision.

There were several other questions that arose at the workshop: about impact factor, how to motivate yourself to deal with major revisions, and other important issues. But most questions could be sorted out by remembering that the editors and reviewers are your colleagues. If you have a question about the process, start by putting yourself in their shoes and consider how you would want to be treated. Do that, and you’ll be able to answer 90% of your own questions.

March 19, 2016 / jpschimel

Take care of the people who take care of you

In Writing Science and in past posts, I’ve discussed survival and success strategies associated with writing. Yet there is another suite of success strategies that underlie our ability to produce those papers. Managing personal relationships with our support team is high on that list. I think we all know how important collaborators are in advancing our research careers. If you are a student or postdoc, I’m sure you’ve learned the hard way about “major professor management;” if you are a professor, you’ve learned about mentoring lab members.

The group of people we often don’t discuss very much is the staff. As faculty and researchers, we all recognize that we carry the mission of the University—we are the people leading the research, teaching the classes, and doing the service. Yet, though we may carry the mission of the University, we wouldn’t carry it very far without a large team behind us. I don’t prepare my budget forms. I don’t pay the vendors. I don’t maintain the files on graduate applications. I don’t fix the plumbing in my lab when it breaks. Bottom line—without the staff, I’d be sitting in an empty field pontificating like Plato.

Together faculty and staff form a single team, each with different responsibilities, but with shared contributions. Many of the staff functions we rely on require knowledgeable and skilled professionals—and as systems become more complex, that need only increases. Yet many academics take the contributions of the staff for granted. For example, once, when I was serving on a search committee for a grants coordinator, a faculty member criticized one of the candidates: “Yes, they are willing to stay late and put in the extra time when a proposal crisis comes along, but they let you know they’re doing you a favor.” That comment struck me. We were talking about a staff member with a 40-hour-a-week job and no overtime pay; when she stays late to get your proposal out the door, she is doing you a favor. It seems just common human decency to recognize that!

Absolutely, I want staff who are open to doing that favor when needed, but they don’t have to, and do so because they are committed to their job and to us. Keeping the staff on our side is essential to our success as academics. Don’t assume, ever, that staff functions just happen by magic!

No one rates a University on the quality of its staff, but I’ve worked in several universities, and I know my ability to perform as a faculty member is enhanced by the great people I have supporting me. Having a terrific team allows me to focus, as I should, on teaching, research, and service.

The staff works in an environment where all too often the motto might be “Perfection is invisible; anything less, complainable.” No one appreciates that. If you want your staff members to be there for you when you need them, be there for them. At the very least, say thank you and make sure they understand that you do appreciate them. They are our team members, our colleagues, and hopefully, our friends.

If you don’t recognize and appreciate your support staff and their contributions to your work and career, you are making a foolish and dangerous mistake. If you don’t show them that you do, you are making an almost equally large mistake, one that may hobble your academic success.

My version of the Golden Rule is “Take care of the people who take care of you.” That applies to all the people who support us in our jobs—colleagues, students, and importantly, the support staff.

February 5, 2016 / jpschimel

Jargon in the Natural Sciences vs. the Social Sciences and Humanities: A Hypothesis.

For decades, natural scientists have been castigated for talking in jargon. But all fields have their own technical terms that to others are “jargon.” So why do scientists catch so much flak over it? Several of my graduate students were recently taking an interdisciplinary seminar class and were criticized by social science and humanities students for using jargon—but those criticisms were leveled in language my students didn’t understand. Those other students were using language just as deeply “jargonish,” yet they had a hard time perceiving it that way. Why, I wondered?

Certainly, this was partially a classic “Curse of Knowledge” problem: we know what we know, and it’s hard to realize that others don’t. That is what launched the discussion—my students used language that was so intuitive for them, they didn’t realize their words would be jargon to others. But I think the disciplinary jargon divide might also have roots in how our disciplines create technical terms. My distinction between “technical term” and “jargon” (from Writing Science, pg. 147) is:

Jargon:

  • A term that refers to a schema the reader does not hold.
  • A term for which there is an adequate plain language equivalent.

Technical term:

  • A term that refers to a schema the reader does hold.
  • A term for which either there is no plain language equivalent, or using one would be confusing.

In the natural sciences, we often create new terms, but when we do, we have historically reached for Latin and Greek to provide the roots for those terms.
When we use such terms, no one is ever going to mistake those for “plain language.” When a doctor told me “You have a defect in the cartilage of your lateral patellar facet” there was no mistaking that this was doctor-speak. Scientists deliberately developed the tradition of relying on Latin and Greek so that our terminology would be the same across languages and so when the orthopedist explained why my knee hurt it would be the same whether we were discussing “my knee,” “mon genou,” or “mi rodilla.”

But my impression is that scholars in the Social Sciences and the Humanities more commonly develop technical concepts, not by borrowing from Latin or Greek, but from English. Thus, it may be less likely that even the person using the word will recognize that it may be “jargon.” Consider a statement like:

“Parentheticals increase the spatial and textile volume of your prose, opening the breathing space for the reader and enlarging the referential sphere of your engagement with the material”1

Here every single term draws from common English, but many have a somewhat different meaning in plain English than in “Humanities.” What is the “textile volume of prose” or the “referential sphere”?

Natural scientists certainly face this same problem—and we have some doozies. Think about the misunderstanding of the word “theory.” Many non-scientists use the word as anywhere from a reasonable suggestion to a complete wild-assed guess. None of the common uses are anywhere remotely near how it is used within the deep conservatism of Philosophy of Science: an idea that has stood up over decades to every sling and arrow of outrageous testing. Thus, a biologist accepts the “Theory of Evolution” as established fact that, only for lack of a time machine to go back and specifically observe past history, is not called a “Law” up there with “Newton’s Laws.” Yet a creationist in Kansas can look at the “Theory of Evolution” and argue, “It’s only a theory!”

When fields develop technical terms by borrowing words from common language, it may make those terms more immediately tractable and easier to engage with—“invasive species” is fairly intuitive, though its familiar meaning may mislead a new reader about the term’s ecological nuances. But it may also make it more difficult for someone to separate the common word from the field-specific technical term.

So back to the seminar my students were talking about. Because social scientists and humanists often borrow their discipline-speak terms from English, they may have a harder time recognizing that they are actually being just as jargon-laden as when the doctor told me about my patellar facet.


1 Hayot, E. The Elements of Academic Style: writing for the humanities. Columbia University Press. Pg. 180.
Please note—I really admire Hayot’s book. Some sections are inspired, notably Chapter 3 “Eight Strategies for Getting Writing Done,” which is the best I have ever read on the subject and by itself justifies buying the book. In fact, his discussion in that chapter of “virtuous procrastination” was probably worth the $20. But Hayot is a humanist writing for humanists—his language reflects that.

November 10, 2015 / jpschimel

Greenhouse Gas Production by Faculty Seminars

Mitch Wagener and I wrote this years ago when I was at the University of Alaska and he was a Ph.D. student. We originally published it online as part of the “First International Virtual Symposium on Mad Science.” That always bugged me because there was no “mad science” involved—we actually collected the data and it’s all real. I still list this as a “technical report” on my C.V. I hadn’t been able to find a copy of this for a long time, but one recently appeared and I thought this blog would be a good place to give it new life. I hope you find it entertaining.

The Production of Greenhouse Gases in Faculty Seminars

Stephen M. Wagener and Joshua P. Schimel

University of Alaska Fairbanks

Recently Hungadunga and McCormick (1991) observed that, in the natural course of their duties, academic biologists produce only a little less CO2 and CH4 than do feedlot cattle. As an interesting aside, they also found that only politicians ranked higher than cattle in the production of these gases. Unfortunately, university administrators were not tested.

It had long been observed at our institute that during faculty seminars the room—lovingly referred to as the Autoclave—gets stuffy and people often fall asleep. We named this phenomenon Seminar Narcolepsy Syndrome (SNS, pronounced snooze). It is also evident that some seminars cause a much higher level of SNS than do others. We hypothesized that the SNS might be caused by excessive CO2 in the seminar room. This raised several issues that we addressed in this experiment: Is SNS actually caused by CO2? Is this phenomenon related to the seminar topic? How might faculty seminars contribute to the production of the greenhouse gases CO2 and CH4?

During the fall and winter of 1991-1992 we took air samples at four locations in the seminar room at 15-minute intervals during faculty seminars. The sample takers were various graduate students and faculty. We also measured beginning and ending temperatures and counted the attendees. We analyzed the samples by gas chromatography. We wished to also measure relative humidity and atmospheric mercaptans, but our GC was not set up to analyze mercaptans, and we were persuaded that using a sling psychrometer during the seminars would be disruptive.

The ornithology and mammalogy seminars generated the most carbon dioxide per capita. This is not surprising considering the excitement these subjects generate at our biological institute. Since people respire less while asleep, dips in CO2 during the plant ecology, stream ecology and invertebrate zoology seminars probably indicate people nodding off. The microbial ecology seminar produced by far the most methane, although there was a sharp drop 15 minutes before the end. This corresponds to certain people (probably mammalogists) leaving early. Methane concentrations actually went down during the plant ecology seminar.


Our initial hypothesis was that high CO2 induced SNS and caused the audience to fall asleep. In fact we found the opposite: mammalogy and ornithology seminars caused some serious heavy breathing and greatly accelerated CO2 production. Other subjects had both higher incidences of SNS and reduced CO2 build up. It is possible that at a microbiology institute, micrographs of bacterial conjugation would stimulate heavy breathing, while moose conjugation might induce SNS. We suspect that a seminar on human conjugation would induce heavy breathing in all academic institutes.

Spectators at plant ecology seminars appear to be facultative methanotrophs. The only other animals known to consume methane are several species of marine clam; this relationship may therefore require a major reevaluation of the place of plant ecologists in the phylogenetic tree. We also found that carbon dioxide accumulates faster in the east half of the seminar room, while methane increases faster in the west side. This could be due to the fact that certain people consistently sit in the same spot. The effect is that the seminar room can generate its own weather patterns.

We have the following recommendations for making faculty seminars more eco-friendly:

  • Only boring seminars should be allowed. Not only would this reduce the production of greenhouse gases, but it would have the added benefit of reducing the overall stress level of attendees.
  • Each attendee should be encouraged to bring a potted plant. Tomato plants would be ideal if one has a strong opinion about the seminar presenter. However, attendees should probably wait until after the seminar to get themselves potted.
  • Room lights should remain on at all times to encourage photosynthesis. However, this could also have the negative and unrealistic effect of encouraging people to stay awake.
  • No one should be allowed in the seminar room who had the burrito special for lunch.


Hungadunga, M.L. and F.G. McCormick. 1991. Human sources of greenhouse gases. North Dakota Journal of Natural Gas Production 107:23-45.

Table 1. Seminar topic, attendance, temperature, and gas concentrations

Seminar Topic           Attendees   Temperature (°C)   Ending CO2 (%)   Ending CH4 (ppm)
Ornithology                    70                 27             0.15               2.19
Mammalogy                      60                 26             0.19                  –
Plant Ecology                  75                 27             0.14               1.88
Plant Taxonomy                 50                 26             0.10               2.37
Invertebrate Zoology           75                 26             0.15                  –
Stream Ecology                 75                 25             0.11               2.31
Biochemistry                   30                 26             0.04                  –
Microbial Ecology              50                 26             0.09               4.42
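The per-capita claim in the text can be checked with quick arithmetic on the Table 1 numbers. This is just a sketch of that calculation; the ending CO2 concentration divided by attendance is only a rough index of production per attendee, not the original analysis.

```python
# Attendance and ending CO2 (%) for each seminar, copied from Table 1.
seminars = {
    "Ornithology":          (70, 0.15),
    "Mammalogy":            (60, 0.19),
    "Plant Ecology":        (75, 0.14),
    "Plant Taxonomy":       (50, 0.10),
    "Invertebrate Zoology": (75, 0.15),
    "Stream Ecology":       (75, 0.11),
    "Biochemistry":         (30, 0.04),
    "Microbial Ecology":    (50, 0.09),
}

# Rough per-capita index: ending CO2 divided by number of attendees.
per_capita = {topic: co2 / n for topic, (n, co2) in seminars.items()}

for topic in sorted(per_capita, key=per_capita.get, reverse=True):
    print(f"{topic:22s} {per_capita[topic]:.5f} % CO2 per attendee")
```

Mammalogy and ornithology come out on top, as claimed, with biochemistry (unsurprisingly) bringing up the rear.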

October 2, 2015 / jpschimel

How to Recommend Reviewers when you Submit a Paper?

I’m one of the Chief Editors of a major international journal and I need to find reviewers for ~150 papers a year. A substantial fraction of those papers are in sub-fields I’m not deeply expert in. So we ask authors to suggest reviewers. I never rely entirely on that list, but I always start by looking at it for ideas and inspiration. While some reviewer lists are really helpful, others are useless. Why? What makes a good list of recommended reviewers?

Keep in mind several principles:

  1. I’m looking for people who I think I can trust to give objective, thoughtful, and critical reviews.
  2. I’m a senior researcher in my field and I know the big names.
  3. The most senior people in your field are very busy.

So how do these principles translate into a good list of recommended reviewers?

First, if you suggest the three biggest names in the field, I already know them! And I already know they are likely to say no. So by suggesting those gods, you’ve done me no help at all.

Alternatively, if you recommend people I suspect might be too close to you, I worry that they won’t give a critical review. For many U.S. and European scientists, I know the relationship structures, but if I’m unsure, I look for reviewers from different countries, states, or at least different universities. For example, I might ask someone from York to review a paper from Exeter (in the U.K.), or someone from Umeå to review a paper from Lund (in Sweden). But for an American paper, I’ll still usually try to avoid two U.S. reviewers; Soil Biology & Biochemistry is an international journal and we take that seriously. If I have reason to suspect the quality of your recommendations, I’m unlikely to use any of them.

So, if you are a scientist in a small country with a small research community, don’t recommend three reviewers from your country! If, for example, a Czech scientist recommends three people with .cz email addresses, I’m likely to disregard all of them, especially if they are names I don’t know well. Honestly, I’d even be suspicious of two .cz addresses.

On this matter, China remains the biggest question mark for me. China may be as large, and have as dispersed a research community, as the U.S., but the Chinese research community is still emerging on the international stage (at least in my field) and I don't have as good a sense of where all the different units are or what the relationships are, as I do for the U.S. and Europe. Chinese naming practices (few, but common, family names) also make it harder for outsiders to assess who's who. There are many Lis, Zhus, etc. It's like Wales, where half the population is named Jones, except China's population is several hundred times larger. In contrast, I know only one other Schimel in ecology, and he's my brother. So if you're Chinese, I'd recommend that you not suggest three Chinese reviewers (and maybe not even two).

O.K., so I’ve told you what not to do. What should you do? How do you put together a reviewer list that I, as editor, will find useful in helping me find good reviewers quickly?

Give a list of people who aren't the obvious "usual suspects" in the broad field. In terms of seniority, mid-career is often ideal (e.g. Associate Professor level on the U.S. rank scale); junior faculty or even postdocs can also be great if they've done interesting and insightful work in your area. Often younger researchers do the best reviews: the ideal reviewer is someone who has had enough experience to develop vision and perspective, but who still has the time to commit to doing a thoughtful review. The perfect name is one to which my response will be "Ah ha, of course! I hadn't thought of her, but she'd be great."

Give me three of those, and I will be grateful and impressed. Never a bad way for the editor to feel when he’s beginning the process of determining your paper’s fate.

P.S. Oh, and don’t recommend as a reviewer someone who is already one of the Chief Editors for the journal. It really doesn’t help me if I’m handling a paper as editor and someone listed me as a suggested reviewer.



February 10, 2015 / jpschimel

Data vs. Knowledge: a sorry letter from 1963

Thanks to Andy Hoffman for a column that brought to my attention this letter ("Chaos in the Brickyard"), published in Science in 1963. Already then, the author, Dr. Bernard Forscher, was concerned that science was increasingly focused on producing "bricks"–little pieces of information–rather than "edifices"–real bodies of useful knowledge. That was in 1963, people!

[Image: Forscher's 1963 Science letter, "Bricks vs. Edifices"]

Dr. Forscher wrote this as a parable, but the idea is directly analogous to my distinction between data, information, knowledge, and understanding. Ultimately science is about edifices: knowledge and understanding. Yet many scientists never do more than make bricks–individual nuggets of information.

I think Dr. Forscher was visionary: our reward systems increasingly skew toward rewarding bricks. The more we focus those systems on simple quantitative metrics, such as the Impact Factor of the journal a paper was published in, the more Dr. Forscher's vision comes true and the more we end up focused on "bricks for bricks' sake" rather than on the ultimate goal of building structures.

Reward systems should target direct measures of a scientist's contribution to building real structures of knowledge, rather than just their ability to produce bricks. That is something that peers can usually do easily, and sophisticated network-mapping software can do with difficulty (e.g. Thomson Reuters Citation Laureates), but the simple metrics are mostly incapable of–they only measure bricks.

The concerns Dr. Forscher raised in 1963 have only become more extreme, as has the importance of his reminder that to have real impact as a scientist, you need to force yourself and your trainees to aim for edifices. The ultimate goal of science is to advance human understanding. We build our great edifices of understanding out of individual pieces of information (bricks), but a brick's value is only as great as its contribution to an edifice. A brick by itself, isolated, remains a block of clay and straw. The papers that ultimately matter are those that offer more than just information–they offer real knowledge and insight.

Andy Hoffman’s column is available at: (