A concern with peer review has always been prejudice; prejudice born of reviewers knowing who the authors are but not vice versa. This raises a clear potential for abuse. Shit happens, and I think all experienced researchers have had some experience with inappropriate or personally charged reviews. More recently, the concern has shifted to covert prejudice—quite possibly unconscious—against women, minorities, other nationalities, or even junior colleagues. A paper authored by John Smith or even J. Smith might review more favorably than one by Jill Smith, Juan Herrera, or Shujin Zhu. Prejudice, whether overt or covert, degrades peer review and scientific publication.
To avoid this, some disciplines and journals are moving to double-blind review in which the names and affiliations of the authors are removed from the paper. In some areas, double-blind is considered a necessary and fundamental requirement of a fair peer review system.
However, in other areas, the counter-argument has been that double-blind is pointless, because reviewers can figure out who the authors are. For example, in environmental field sciences, the combination of topic, approach, and research site can limit the possible research group to such a degree that the reviewer is able to “peek past the blindfold.” If someone is doing work on summertime soil biogeochemistry of California grasslands, working at the Sedgwick Reserve, it wouldn’t be much of a stretch to guess that the work came from my lab. If the paper noted that isotope samples were analyzed in the University of California Santa Barbara Marine Science Institute analytical lab, you’d have it nailed.
Thus, even with double-blind systems, reviewers are often sure they know who the authors are. But research suggests they are regularly wrong; I can vouch for that—I once reviewed a paper and noted to the editor: “This is a really nice paper out of so-and-so’s group.” The paper was covered with “fingerprints” such as personal communication and unpublished data references, but the editor wrote back to tell me I was wrong. My response was that it was clear that there was some relationship between the actual authors and the group I tagged, but that more importantly, it didn’t matter that I was wrong! If I had prejudices, they would still have tainted my review. The counter is that if reviewers are at all uncertain about the authors, it could at least diminish the effects of any prejudice they hold; but in my case, I wasn’t uncertain—I was just wrong. Oops.
In any event, all the discussions I have ever seen have focused exclusively on eliminating potential bias in the assessment of the manuscript itself, trying to ensure that decisions on the fate of a paper are not a function of who wrote it, but solely of what they wrote.
But submitting a paper is also a form of professional networking. As I mentioned in a previous blog post, “the Editors and reviewers who run the journals are your professors and your colleagues—people you want to be your friends (and maybe your postdoc advisor).” Early career scientists have an interest in becoming known to their senior colleagues. Yet, the papers I read most carefully and pay closest attention to are those I review; I’m likely to register who wrote a paper when I review it. When I get a double-blind manuscript, I may be able to guess where the paper came from, but I can’t know which student or postdoc actually wrote it. Having me know that Dr. Loreau’s group just produced a nice piece of new work may benefit Dr. Loreau, but it does nothing for Ms. Sylvain who actually wrote the paper.
Sometimes, useful relationships even develop from the review process—I started working with Stefano Manzoni, now one of my most valued collaborators, as a result of a review I wrote (and signed) of one of his first papers. He took some ideas I’d included and developed a new model that elaborated on them; he then invited me to be a co-author and we’ve worked together since. Such Cinderella stories may be rare, but they do occur.
If that had been a double-blind review, I couldn’t have told that it was from a group that was newly moving into soil biology and might well not have invested so much in the review. Would I have signed it? I suspect not—anonymity breeds anonymity. And I know I said things that I wouldn’t have said in a completely open review system. Signing that review has benefited both of our careers. Letting the reviewers know who the authors are can help find the glass slipper.
The networking and advertising benefits of classical single-blind review may be modest and occasional, but they are real, and double-blind review eliminates them. The debates over single- vs. double-blind I’ve seen consider only the balance of risks from prejudice in single-blind vs. the hassles or inefficiencies of double-blind. They don’t consider any potential benefit to the authors of classical single-blind. They should.
In some fields, the cost-benefit balance of review systems will clearly come down on the side of double-blind. In others (particularly, I suspect, field-based sciences such as ecology) the balance might well shift to single-blind.
Importantly though, the discussions should consider that the review process is more than a simple evaluation of a manuscript. It also builds relationships among people.
Language changes. How else do you explain George Bernard Shaw’s famous quip that “England and America are two countries divided by a common language”?
Language changes with time and distance. Words are created, lost, and alter meaning. Particularly when words are adopted from another language, they often shift meaning and usage. In English, a common battle is whether the rules from that other language still necessarily apply to the word as used in English. If we adopt a Latin word, must we still use Latin rules?
“Data” is the word that has probably been fought over the most in science. Many (though a decreasing number) feel that those who would ever use the word “data” as a singular noun are ignoramuses who are debasing the language.
“Data” (in English) is derived from a Latin word, and in Latin it is the plural of “datum.” In Latin, therefore, to use “data” as a singular would be a complete and gross error. But is it an equal error in English? According to the Oxford English Dictionary (the OED), the Latin word means “given, that which is given, neuter past participle of ‘dare’ to give.”
That isn’t the meaning we apply to the word in English, and particularly not in science.
So is the English “data” the same word as the Latin word “data”? No, it isn’t. The OED gives our definition as “In pl. Facts, esp. numerical facts, collected together for reference or information.” So, should the same rules apply?
Some argue yes—that since data is originally a Latin word, then Latin rules should always apply. But standard English usage often treats “data” as a mass, or collective, noun—it is the collection of facts.
In American English usage, collective nouns are treated as singular. “The population is…” is correct usage; to say “The population are…” would be incorrect. (British usage often treats collective nouns as plural.)
So in dealing with the word “data,” we are left with two issues. The first is whether it is ever correct to use “data” as a singular, collective noun. The second, however, is whether you should.
Based on the OED, the Chicago Manual of Style (the CMS), and other sources of grammatical wisdom, you can correctly use “data” as either a singular mass noun or a plural, depending on your meaning:
The plural form: “The data indicate…” implies that it is through evaluating each datum and then synthesizing that information that you establish what is indicated.
The singular form: “The data indicates…” implies that after aggregating the data into a single mass, the whole data set acting as a single entity indicates something.
Don’t forget though, that if you have a single fact, it remains a datum (or a data point). You can’t have “a data.” Don’t use “data” as a true singular.
But, then there is the issue, not of what is grammatically correct, but of what people think is grammatically correct. There remain those who reject the mass-noun use, and they tend to be senior colleagues—people you might want to impress. Although the OED and CMS acknowledge and accept the mass-noun usage, the OED notes, “However, in general and scientific contexts it is still sometimes regarded as objectionable,” and the CMS says, “In formal writing (and always in the sciences), use data as a plural.” For me, the ability to use “data” as a mass noun is a tool too useful to ignore, but it is one you should use thoughtfully and deliberately, and some conservatism is wise.
Footnote: The OED does note the use of “data” as a count noun with a 2010 citation stating “These datas were likely not missing at random.” But please don’t do that. Not only does it sound horrible and wrong, but almost every reader will be sure that it is.
OK, I don’t hate statisticians. But have you ever gotten so sick from eating something once that you haven’t been able to look at that dish for years afterward? So how would you feel if an experimental design, foisted upon you in the name of “statistical perfection,” wasted >$1 million and an entire year’s effort by many, many people on a nationally important study? That was my experience on the Exxon Valdez coastal habitat damage assessment study.
I started as an Assistant Professor at the University of Alaska Fairbanks in January 1989. It was quite the welcome to Alaska—that winter I saw the thermometer read -60 F, and my mother was sure that I was going to freeze to death. The ice fog in Fairbanks was so thick that I was stranded on campus for weeks, and with my impressive skills at driving on ice, I was taking many people’s lives into my hands anytime I tried to drive to the supermarket.
But then on March 24, the Exxon Valdez ran aground on Bligh Reef. Everyone with any scientific expertise, it seemed, got caught up in the effort of trying to figure out how to assess the damage to the magical environment of Prince William Sound. How do you assess such damage? The animal people had it “easy”—everyone agreed that you could set a cash value on a dead sea otter; $10,000 per animal? But how do you assess the damage to the habitat that supports those sea otters? How much is a dead barnacle worth? How much are a few fronds of dead Fucus worth? The obvious answer would be that on their own, it would be awfully close to zero. But these are the base of the food chains that support the otters, the murrelets, and the herring. Clearly the value of the ecosystem is far, far from zero—rather it’s mammoth!
So we put together a damage assessment strategy that focused on foodweb concepts, targeting the quantity, quality, and composition of key trophic levels: The Coastal Habitat Damage Assessment. A large group of us developed the core approach over several meetings in Juneau and Anchorage, with a plan to get research teams into the field by August. We called it the “Coastal Habitat” study to emphasize that we were studying basic ecosystem members not for their own sake necessarily, but because they created the habitat for the more charismatic members.
We developed a sampling strategy that would compare heavily oiled sites to lightly or unoiled sites of different habitat types (e.g. exposed rocky shores, sheltered rocky shores, sandy beaches, estuaries), and would have three separate teams spread across the coast of Alaska: one in Prince William Sound itself, one in Kenai, and the third in the Shelikof Strait area of Kodiak and Katmai.
The biologists on the study wanted to make it a paired design where we would use a GIS system to classify the degree of oiling and of habitat type along all the shorelines of Prince William Sound and of the other sampling areas. We would randomly select heavily oiled sites in each habitat type. Then we wanted to pick the nearest available lightly or unoiled site of the same habitat type to use as a paired control. We felt this would balance the need for random sampling with ensuring meaningful biological reality.
But this was a huge effort, coordinated by State and Federal agencies, and the Management Team had contracted a biometrician who I understood was well known and respected for work on wildlife, but I’ll leave his name and affiliation anonymous. He insisted that such a paired design was imperfect, since it meant selecting control sites non-randomly. He insisted that we select the oiled and control sites independently, and randomly, to create a stronger statistical design. We argued extensively about the alternative designs: paired vs. random. He won that battle.
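For readers who think in code, the two competing designs can be sketched in a few lines. This is a minimal illustration only; the site data, numbers, and function names are invented for the example, not taken from the actual study:

```python
import random

random.seed(42)  # make this illustrative example deterministic

# Hypothetical site records: habitat type, oiling status, and position
# along the coast in km. All values here are invented for illustration.
sites = [
    {"id": i,
     "habitat": random.choice(["rocky", "sandy", "estuary"]),
     "oiled": random.random() < 0.3,
     "km": random.uniform(0, 500)}
    for i in range(200)
]

def paired_design(sites, habitat, n=5):
    """The biologists' proposal: randomly pick oiled sites, then pair each
    with the nearest unoiled site of the same habitat type."""
    oiled = [s for s in sites if s["oiled"] and s["habitat"] == habitat]
    controls = [s for s in sites if not s["oiled"] and s["habitat"] == habitat]
    pairs = []
    for site in random.sample(oiled, min(n, len(oiled))):
        # Control is constrained to be the closest same-habitat unoiled site.
        nearest = min(controls, key=lambda c: abs(c["km"] - site["km"]))
        pairs.append((site, nearest))
    return pairs

def independent_design(sites, habitat, n=5):
    """The biometrician's proposal: draw oiled and control sites
    independently at random; nothing ties a control to any oiled site."""
    oiled = [s for s in sites if s["oiled"] and s["habitat"] == habitat]
    controls = [s for s in sites if not s["oiled"] and s["habitat"] == habitat]
    return (random.sample(oiled, min(n, len(oiled))),
            random.sample(controls, min(n, len(controls))))
```

The difference that mattered in practice: the paired design guarantees that every control shares a habitat type with, and sits near, its oiled partner, while the independent design does not, so nothing prevents the controls from clustering in a region that differs from the oiled areas in ways that have nothing to do with oil.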
As a result of his winning that battle, we all lost the “war”—it destroyed the first year of the study. It wasted the efforts of about 10 research staff working out of two 50-foot charter vessels for over a month continuously in the field (I think the boats together cost $5,000 per day), plus those of the people back in Fairbanks analyzing samples and data. All for naught. Wasted.
It was wasted in part because of one other decision that seemed trivial: how to define the sampling “universe” from which sites were selected. The site selection and marking group was based out of the Alaska Dept. of Fish and Game (if I remember right). Their job was to a) do the GIS work to map out coastal habitat types and overlay that with level of oiling, b) randomly select sections of coast—5 each oiled and unoiled in each habitat type (no longer than 1 km per section), and then c) send a team to the Sound to mark the sites for the research team that would head out a few weeks later. The “trivial” decision was to include any map quad containing oiled sites as part of the sampling universe.
As it turned out, the map quad that included the northwest section of Prince William Sound had some oiled sites in it. As a result, the entire section became part of the study. But because there wasn’t much oiled coastline in that area, a disproportionate number of control sites ended up in the northwest, some on the mainland, even though most of the oiled sites were on the islands in the more central areas of the Sound. The oil was concentrated on those central islands because the currents that carried it from Bligh Reef through the Sound run right up against and around them.
Well, it rains a lot in Prince William Sound, averaging 60 inches of rain a year, according to NOAA, but near Whittier (in the NW) it can be closer to 200 inches a year! And all that freshwater falling from the sky has only one place to go: Prince William Sound. As a result, up in the coastal areas of the northwest Sound, it isn’t really a marine environment. A freshwater lens sitting on top of the seawater can be more than a foot deep, as we learned once we were out there sampling—you could drink the “seawater.” And you know, marine shoreline organisms like Fucus and barnacles really don’t like freshwater. As a result of having too many of our control sites in that brackish- or even freshwater-dominated area, our first year’s sample collection made it look like a massive oil spill was “good” for the populations of marine coastal organisms. Oops. The entire year’s effort was a complete bust. A waste.
All because of one major conceptual decision forced upon us by the biometrician, coupled with a few seemingly minor decisions made by different groups who were all under enormous pressure to get moving. There wasn’t time to consult widely on how to define the “sampling universe.” There may well be people who could have told us to stay away from those areas because they are not comparable, but when you have to move a complex operation quickly under crisis conditions, it’s not surprising that those experts weren’t where we needed them when we needed them. For example, I have no clue who even decided which map quads to include in the GIS that was used to select sites.
So are you surprised that I developed a healthy skepticism of statisticians and of the “perfect” design? The research teams didn’t know how different the regions of the Sound were, but our intuition was to ensure that control and oiled sites were well matched, even if that gave us a less perfect statistical design. It gave us a strong biological design—the one that was used in later years of the study, and that showed, as expected, that crude oil was rough on marine coastal species. Just not as rough, perhaps, as trying to live in freshwater. A contaminated habitat is still, after all, “habitat.”
My relatively short experience with the Exxon Valdez spill taught me valuable lessons—about the challenges of working across agencies and cultures under crisis conditions, about the “joys” of working off boats in stormy weather, and, importantly, about never letting some idealized version of the “perfect,” or even just the “better,” design trump the common-sense practicality of a good and workable one. I also learned the importance of thinking carefully about how you define the sampling universe and how you scale from that limited area to larger scales of whole systems.
I’ll happily consult with a statistician on how to deal with the data I collect, but never again will I allow one to determine how I set up a study, at least not if their advice goes against my biological intuition.
I was just part of a workshop that our Graduate Division organized for Ph.D. students and postdocs to discuss the publication process. A number of students offered questions, and although they spanned a lot of territory, I realized that most of the answers were obvious if you consider that a journal isn’t a faceless corporate entity, but us. A journal may be owned by Elsevier or Wiley (which are indeed large and faceless corporations), but the Editors and reviewers who run the journals are your professors and your colleagues—people you want to be your friends (and maybe your postdoc advisor). The editors who run journals, and the reviewers who work with them, are people who are active in your field. They do these jobs as professional service and to support their academic habitat, rather than as employment. Ergo, they are people whose good opinion you should value and whose time you should be careful not to waste.
So just remember you are dealing with busy, overworked, colleagues and friends (even when they are anonymous) and remember also the Golden Rule—treat them as you would wish to be treated. And, voila, almost all the answers to questions students asked become clear:
Is it OK to submit to multiple journals simultaneously? Well, that creates unnecessary work for multiple colleagues—so no, and it’s against the rules.
Should I suggest the names of potential reviewers for my paper? Well, will that help the editor do their job? Of course it will, so of course you should do it. But see my blog post on how to do this well.
When is it OK to contact the editor with questions about dealing with reviews? Will it reduce her total workload to address your question off-line, rather than in a resubmitted manuscript? If so, yup, send the e-mail. It may take some time to address your inquiry, but if she is going to have to deal with a resubmitted manuscript, a quick inquiry will likely smooth the evaluation process and might save a round of revision—that would certainly involve more work than answering your e-mail.
Is it OK to submit a rough version of a manuscript to get external input before polishing a paper and resubmitting it? Will submitting a “rough” version create extra work for the editors and reviewers? Of course! So no, it’s not OK. You should submit the best version of the work you possibly can. That should involve getting friendly review before you submit officially, but the people you are asking for review should know that you are asking for pre-submission collegial review. You should make your paper as close to perfect as you can, recognizing that reviewers will still have criticisms and input. Some fields, such as physics, use “preprint servers” where you can post pre-submission versions of papers and invite comment—but that is equivalent to friendly review. Someone can choose to respond or not as they wish.
If a paper is declined, is it OK to just submit the manuscript to a new journal (without revising it to deal with the criticisms)? Well, imagine if the same reviewer got the new submission. Would they be annoyed at having to offer the same comments? Duh… Do you really think you won’t get the same reviewers? I know of one paper I reviewed three times for three journals before it eventually got good enough to publish! Reviewer comments are always worth considering. In my experience, when reviewers identify a “problem,” they are almost always right. They may be “wrong” about the solution they propose, but you don’t have to take their solution, as long as you have a good alternative. You can argue with reviewers, but don’t ever blow them off—after all, they are us. I know of another paper that I reviewed once, in which I identified a deeply fatal flaw in the methods (based on information that was in a paper they cited); it was rejected and should have been thrown away—the results and conclusions were most likely pure artifact. Instead, the authors submitted it elsewhere without paying any attention to the issues I identified (though I only found the new paper a number of years later). Have I forgotten who the authors are—or that I think they were dishonest to sweep the problem under the rug and publish a paper that they had reason to know was most certainly wrong? Nope. That is an extreme case, but reviewers are us, we have long memories, and we are likely to be asked to review your work in the future—or to write tenure and promotion letters for you! Treat the anonymous peer review community with respect. You may disagree with them, and they may be wrong, but it is still likely that they were trying to follow my reviewer’s motto, “Friends don’t let friends publish bullshit,” and so trying to be constructive.
What do you do when you think one of your reviews was completely off target and the reviewer inept? Certainly, it is possible that the reviewer (or the editor) just completely blew it. We’ve all seen those reviews, and I suspect we’ve all written them. To err is human (to forgive, canine). If you think a rejection was based on a seriously misguided review, call us on it. As a Chief Editor for Soil Biology & Biochemistry, I get one or two appeals a year, and I have appealed several decisions in my life. Once, when we contacted the editor of Nature about his rejection of Jeff Chambers’s paper on how old trees in the Amazon rainforest can be (they don’t make annual rings, so no one knew), the immediate response we got back included the phrase “I don’t know what I was thinking”; the paper was sent out for review and ultimately accepted. I am still in awe that the editor (whose name I’ve lost track of) was so forthright and honest about having had a brain fart and fixing the mistake—a “gold star” moment in journal editing.
When you get a bad review, remember that the brainless idiot of a reviewer was chosen by the editor. So first let the review sit for three days to cool your jets. Then consider whether the problem may not have been with the reviewer, but with what he was reviewing—your paper. Did he misunderstand because he’s an idiot, or because you were unclear? It’s unlikely that it was 100% the former. If you choose to appeal, be as considerate and respectful as you can, and get an outside reader to double-check how your e-mail will come across to the editor. Acknowledge that there may have been problems with the paper that led the reviewer astray, and note how you could fix them. Remember, the editor is us, so try to reduce unnecessary workload and hassle. Dealing with appeals is fully within my responsibility—if I screwed up, I’d rather have a chance to fix it. But dealing with an author who is irate, huffy, and obnoxious causes a lot of extra work and headaches I don’t deserve for just trying to do the best job I can. First I have to convince myself not to react with my initial inclination—to just say “F-off!” Then I have to sort out what may be valid argument from what is just peeve. Being nasty is also a lot more likely to motivate an editor to focus on justifying their decision, rather than reconsidering it. The editor may be human, but the role forces them to make decisions and act as god. You’re asking them the favor of reconsidering that decision.
There were several other questions that arose at the workshop: about impact factor, how to motivate yourself to deal with major revisions, and other important issues. But most questions could be sorted out by remembering that the editors and reviewers are your colleagues. If you have a question about the process, start by putting yourself in their shoes and consider how you would want to be treated. Do that, and you’ll be able to answer 90% of your own questions.
In Writing Science and in past posts, I’ve discussed survival and success strategies associated with writing. Yet there is another suite of success strategies that underlie our ability to produce those papers. Managing personal relationships with our support team is high on that list. I think we all know how important collaborators are in advancing our research careers. If you are a student or postdoc, I’m sure you’ve learned the hard way about “major professor management;” if you are a professor, you’ve learned about mentoring lab members.
The group of people we often don’t discuss very much is the staff. As faculty and researchers, we all recognize that we carry the mission of the University—we are the people leading the research, teaching the classes, and doing the service. Yet, though we may carry the mission of the University, we wouldn’t carry it very far without a large team behind us. I don’t prepare my budget forms. I don’t pay the vendors. I don’t maintain the files on graduate applications. I don’t fix the plumbing in my lab when it breaks. Bottom line—without the staff, I’d be sitting in an empty field pontificating like Plato.
Together, faculty and staff form a single team, each with different responsibilities, but with shared contributions. Many of the staff functions we rely on require knowledgeable and skilled professionals—and as systems become more complex, that need only increases. Yet many academics take the contributions of the staff for granted. For example, once, when I was serving on a search committee for a grants coordinator, a faculty member criticized one of the candidates: “Yes, they are willing to stay late and put in the extra time when a proposal crisis comes along, but they let you know they’re doing you a favor.” That comment struck me. We were talking about a staff member who has a 40-hour-a-week job without overtime; when she stays late to get your proposal out the door, she is doing you a favor. And it seems just common human decency to recognize it!
Absolutely, I want staff who are open to doing that favor when needed, but they don’t have to, and do so because they are committed to their job and to us. Keeping the staff on our side is essential to our success as academics. Don’t assume, ever, that staff functions just happen by magic!
No one rates a University on the quality of its staff, but I’ve worked in several universities, and I know my ability to perform as a faculty member is enhanced by the great people I have supporting me. Having a terrific team allows me to focus, as I should, on teaching, research, and service.
The staff works in an environment where all too often the motto might be “Perfection is invisible; anything less, complainable.” No one appreciates that. If you want your staff members to be there for you when you need them, be there for them. At the very least, say thank you and make sure they understand that you do appreciate them. They are our team members, our colleagues, and hopefully, our friends.
If you don’t recognize and appreciate your support staff and their contributions to your work and career, you are making a foolish and dangerous mistake. If you don’t show them that you do, you are making an almost equally large mistake, one that may hobble your academic success.
My version of the Golden Rule is “Take care of the people who take care of you.” That applies to all the people who support us in our jobs—colleagues, students, and importantly, the support staff.
For decades, natural scientists have been castigated for talking in jargon. But all fields have their own technical terms that to outsiders are “jargon.” So why do scientists catch so much flak over it? Several of my graduate students were recently taking an interdisciplinary seminar class and were criticized by social science and humanities students for using jargon—but those criticisms were leveled in language my students didn’t understand. Those other students were using language just as deeply “jargonish,” yet they had a hard time perceiving it that way. Why, I wondered?
Certainly, this was partially a classic “Curse of Knowledge” problem: we know what we know, and it’s hard to realize that others don’t. That is what launched the discussion—my students used language that was so intuitive for them, they didn’t realize their words would be jargon to others. But I think the disciplinary jargon divide might also have roots in how our disciplines create technical terms. My distinction between “jargon” and “technical term” (from Writing Science, pg. 147) is:
Jargon:
- A term that refers to a schema the reader does not hold.
- A term for which there is an adequate plain language equivalent.
Technical term:
- A term that refers to a schema the reader does hold.
- A term for which either there is no plain language equivalent, or where using one would be confusing.
In the natural sciences, we often create new terms, but when we do, we have historically reached for Latin and Greek to provide the roots for those terms.
When we use such terms, no one is ever going to mistake those for “plain language.” When a doctor told me “You have a defect in the cartilage of your lateral patellar facet” there was no mistaking that this was doctor-speak. Scientists deliberately developed the tradition of relying on Latin and Greek so that our terminology would be the same across languages and so when the orthopedist explained why my knee hurt it would be the same whether we were discussing “my knee,” “mon genou,” or “mi rodilla.”
But my impression is that scholars in the Social Sciences and the Humanities more commonly develop technical concepts, not by borrowing from Latin or Greek, but from English. Thus, it may be less likely that even the person using the word will recognize that it may be “jargon.” Consider a statement like:
“Parentheticals increase the spatial and textile volume of your prose, opening the breathing space for the reader and enlarging the referential sphere of your engagement with the material.”1
Here every single term draws from common English, but many have a somewhat different meaning in English and in “Humanities.” What is the “textile volume of prose” or the “referential sphere”?
Natural scientists certainly face this same problem—and we have some doozies. Think about the misunderstanding of the word “theory.” Many non-scientists use the word as anywhere from a reasonable suggestion to a complete wild-assed guess. None of the common uses are anywhere remotely near how it is used within the deep conservatism of Philosophy of Science: an idea that has stood up over decades to every sling and arrow of outrageous testing. Thus, a biologist accepts the “Theory of Evolution” as established fact that, only for lack of a time machine to go back and specifically observe past history, is not called a “Law” up there with “Newton’s Laws.” Yet a creationist in Kansas can look at the “Theory of Evolution” and argue, “It’s only a theory!”
When fields develop technical terms by borrowing words from common language, it may make them more immediately accessible and offer easier engagement—“invasive species” is fairly intuitive, but its everyday meaning may mislead a new reader about its ecological nuances. Borrowing may also make it more difficult for someone to separate the common word from the field-specific technical term.
So back to the seminar my students were talking about. Because social scientists and humanists often borrow their discipline-speak terms from English, they may have a harder time recognizing that they are being just as jargon-laden as the doctor was when he told me about my lateral patellar facet.
1 Hayot, E. 2014. The Elements of Academic Style: Writing for the Humanities. Columbia University Press, p. 180.
Please note—I really admire Hayot’s book. Some sections are inspired, notably Chapter 3 “Eight Strategies for Getting Writing Done,” which is the best I have ever read on the subject and by itself justifies buying the book. In fact, his discussion in that chapter of “virtuous procrastination” was probably worth the $20. But Hayot is a humanist writing for humanists—his language reflects that.
Mitch Wagener and I wrote this years ago when I was at the University of Alaska and he was a Ph.D. student. We originally published it online as part of the “First International Virtual Symposium on Mad Science.” That always bugged me because there was no “mad science” involved—we actually collected the data and it’s all real. I still list this as a “technical report” on my C.V. I hadn’t been able to find a copy of this for a long time, but one recently appeared and I thought this blog would be a good place to give it new life. I hope you find it entertaining.
The Production of Greenhouse Gases in Faculty Seminars
Stephen M. Wagener and Joshua P. Schimel
University of Alaska Fairbanks
Recently Hungadunga and McCormick (1991) observed that, in the natural course of their duties, academic biologists produce only a little less CO2 and CH4 than do feedlot cattle. As an interesting aside, they also found that only politicians ranked higher than cattle in the production of these gases. Unfortunately, university administrators were not tested.
It had long been observed at our institute that during faculty seminars the room—lovingly referred to as the Autoclave—gets stuffy and people often fall asleep. We named this phenomenon Seminar Narcolepsy Syndrome (SNS, pronounced snooze). It is also evident that some seminars cause a much higher level of SNS than do others. We hypothesized that the SNS might be caused by excessive CO2 in the seminar room. This raised several issues that we addressed in this experiment: Is SNS actually caused by CO2? Is this phenomenon related to the seminar topic? How might faculty seminars contribute to the production of the greenhouse gases CO2 and CH4?
During the fall and winter of 1991-1992 we took air samples at four locations in the seminar room at 15-minute intervals during faculty seminars. The sample takers were various graduate students and faculty. We also measured beginning and ending temperatures and counted the attendees. We analyzed samples using gas chromatography. We also wished to measure relative humidity and atmospheric mercaptans, but our GC was not set up to analyze mercaptans, and we were persuaded that using a sling psychrometer during the seminars would be disruptive.
The ornithology and mammalogy seminars generated the most carbon dioxide per capita. This is not surprising considering the excitement these subjects generate at our biological institute. Since people respire less while asleep, dips in CO2 during the plant ecology, stream ecology and invertebrate zoology seminars probably indicate people nodding off. The microbial ecology seminar produced by far the most methane, although there was a sharp drop 15 minutes before the end. This corresponds to certain people (probably mammalogists) leaving early. Methane concentrations actually went down during the plant ecology seminar.
Our initial hypothesis was that high CO2 induced SNS and caused the audience to fall asleep. In fact we found the opposite: mammalogy and ornithology seminars caused some serious heavy breathing and greatly accelerated CO2 production. Other subjects had both higher incidences of SNS and reduced CO2 build up. It is possible that at a microbiology institute, micrographs of bacterial conjugation would stimulate heavy breathing, while moose conjugation might induce SNS. We suspect that a seminar on human conjugation would induce heavy breathing in all academic institutes.
Spectators at plant ecology seminars appear to be facultative methanotrophs. The only other animals known to consume methane are several species of marine clam; this relationship may therefore require a major reevaluation of the place of plant ecologists in the phylogenetic tree. We also found that carbon dioxide accumulates faster in the east half of the seminar room, while methane increases faster in the west side. This could be due to the fact that certain people consistently sit in the same spot. The effect is that the seminar room can generate its own weather patterns.
We have the following recommendations for making faculty seminars more eco-friendly:
- Only boring seminars should be allowed. Not only would this reduce the production of greenhouse gases, but it would have the added benefit of reducing the overall stress level of attendees.
- Each attendee should be encouraged to bring a potted plant. Tomato plants would be ideal if one has a strong opinion about the seminar presenter. However, attendees should probably wait until after the seminar to get themselves potted.
- Room lights should remain on at all times to encourage photosynthesis. However, this could also have the negative and unrealistic effect of encouraging people to stay awake.
- No one should be allowed in the seminar room who had the burrito special for lunch.
Hungadunga, M.L. and F.G. McCormick. 1991. Human sources of greenhouse gases. North Dakota Journal of Natural Gas Production 107:23-45.
Table 1. Seminar topic, attendance, temperature, and gas concentrations.

Seminar Topic          Attendees   Temperature (°C)   Ending CO2 (%)   Ending CH4 (ppm)
Ornithology                   70                 27             0.15               2.19
Mammalogy                     60                 26             0.19
Plant Ecology                 75                 27             0.14               1.88
Plant Taxonomy                50                 26             0.10               2.37
Invertebrate Zoology          75                 26             0.15
Stream Ecology                75                 25             0.11               2.31
Biochemistry                  30                 26             0.04
Microbial Ecology             50                 26             0.09               4.42
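For readers who want to check the “per capita” claim, here is a short Python sketch (my illustration, not part of the original report) that recomputes ending CO2 per attendee from the Table 1 values:

```python
# Attendance and ending CO2 (%) transcribed from Table 1.
seminars = {
    "Ornithology": (70, 0.15),
    "Mammalogy": (60, 0.19),
    "Plant Ecology": (75, 0.14),
    "Plant Taxonomy": (50, 0.10),
    "Invertebrate Zoology": (75, 0.15),
    "Stream Ecology": (75, 0.11),
    "Biochemistry": (30, 0.04),
    "Microbial Ecology": (50, 0.09),
}

# Per-capita ending CO2, sorted from heaviest to lightest breathers.
per_capita = sorted(
    ((co2 / n, topic) for topic, (n, co2) in seminars.items()),
    reverse=True,
)
for value, topic in per_capita:
    print(f"{topic}: {value:.5f} % CO2 per attendee")
```

Mammalogy and ornithology do indeed top the list, consistent with the text; the SNS-afflicted stream ecology and biochemistry audiences bring up the rear.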