In Writing Science and in past posts, I’ve discussed survival and success strategies associated with writing. Yet there is another suite of success strategies that underlie our ability to produce those papers. Managing personal relationships with our support team is high on that list. I think we all know how important collaborators are in advancing our research careers. If you are a student or postdoc, I’m sure you’ve learned the hard way about “major professor management;” if you are a professor, you’ve learned about mentoring lab members.
The group of people we often don’t discuss very much is the staff. As faculty and researchers, we all recognize that we carry the mission of the University—we are the people leading the research, teaching the classes, and doing the service. Yet, though we may carry the mission of the University, we wouldn’t carry it very far without a large team behind us. I don’t prepare my budget forms. I don’t pay the vendors. I don’t maintain the files on graduate applications. I don’t fix the plumbing in my lab when it breaks. Bottom line—without the staff, I’d be sitting in an empty field pontificating like Plato.
Together faculty and staff form a single team, each with different responsibilities, but with shared contributions. Many of the staff functions we rely on require knowledgeable and skilled professionals—as systems become more complex, that need only increases. Yet, many academics take the contributions of the staff for granted. For example, once, when I was serving on a search committee for a grants coordinator, a faculty member criticized one of the candidates: “Yes, they are willing to stay late and put in the extra time when a proposal crisis comes along, but they let you know they’re doing you a favor.” That comment struck me. We were talking about a staff member who has a 40-hour-a-week job without overtime; when she stays late to get your proposal out the door, she is doing you a favor. And it seems just common human decency to recognize that!
Absolutely, I want staff who are open to doing that favor when needed, but they don’t have to, and do so because they are committed to their job and to us. Keeping the staff on our side is essential to our success as academics. Don’t assume, ever, that staff functions just happen by magic!
No one rates a University on the quality of its staff, but I’ve worked in several universities, and I know my ability to perform as a faculty member is enhanced by the great people I have supporting me. Having a terrific team allows me to focus, as I should, on teaching, research, and service.
The staff works in an environment where all too often the motto might be “Perfection is invisible; anything less, complainable.” No one appreciates that. If you want your staff members to be there for you when you need them, be there for them. At the very least, say thank you and make sure they understand that you do appreciate them. They are our team members, our colleagues, and hopefully, our friends.
If you don’t recognize and appreciate your support staff and their contributions to your work and career, you are making a foolish and dangerous mistake. If you don’t show them that you do, you are making an almost equally large mistake, one that may hobble your academic success.
My version of the Golden Rule is “Take care of the people who take care of you.” That applies to all the people who support us in our jobs—colleagues, students, and importantly, the support staff.
For decades, natural scientists have been castigated for talking in jargon. But all fields have their own technical terms that to others are “jargon.” So why do scientists catch so much flak over it? Several of my graduate students were recently taking an interdisciplinary seminar class and were criticized by social science and humanities students for using jargon—but those criticisms were leveled in language my students didn’t understand. Those other students were using language just as deeply “jargonish,” yet they had a hard time perceiving it that way. Why, I wondered?
Certainly, this was partially a classic “Curse of Knowledge” problem: we know what we know, and it’s hard to realize that others don’t. That is what launched the discussion—my students used language that was so intuitive for them, they didn’t realize their words would be jargon to others. But I think the disciplinary jargon divide might also have roots in how our disciplines create technical terms. My distinction between “technical term” and “jargon” (from Writing Science, pg. 147) is:
Jargon:
- A term that refers to a schema the reader does not hold.
- A term where there is an adequate plain language equivalent.

Technical term:
- A term that refers to a schema the reader does hold.
- A term where either there is no plain language equivalent, or where using it would be confusing.
In the natural sciences, we often create new terms, but when we do, we have historically reached for Latin and Greek to provide the roots for those terms.
When we use such terms, no one is ever going to mistake those for “plain language.” When a doctor told me “You have a defect in the cartilage of your lateral patellar facet” there was no mistaking that this was doctor-speak. Scientists deliberately developed the tradition of relying on Latin and Greek so that our terminology would be the same across languages and so when the orthopedist explained why my knee hurt it would be the same whether we were discussing “my knee,” “mon genou,” or “mi rodilla.”
But my impression is that scholars in the Social Sciences and the Humanities more commonly develop technical concepts, not by borrowing from Latin or Greek, but from English. Thus, it may be less likely that even the person using the word will recognize that it may be “jargon.” Consider a statement like:
“Parentheticals increase the spatial and textile volume of your prose, opening the breathing space for the reader and enlarging the referential sphere of your engagement with the material”1
Here every single term draws from common English, but many have a somewhat different meaning in English and in “Humanities.” What is the “textile volume of prose” or the “referential sphere”?
Natural scientists certainly face this same problem—and we have some doozies. Think about the misunderstanding of the word “theory.” Many non-scientists use the word to mean anything from a reasonable suggestion to a complete wild-assed guess. None of the common uses come anywhere near how it is used within the deep conservatism of the Philosophy of Science: an idea that has stood up over decades to every sling and arrow of outrageous testing. Thus, a biologist accepts the “Theory of Evolution” as established fact that, only for lack of a time machine to go back and specifically observe past history, is not called a “Law” up there with “Newton’s Laws.” Yet a creationist in Kansas can look at the “Theory of Evolution” and argue, “It’s only a theory!”
When fields develop technical terms by borrowing words from common language, it may make them more immediately tractable and easier to engage with—“invasive species” is fairly intuitive, though its everyday meaning may mislead a new reader about its ecological nuances. But borrowing may also make it more difficult for someone to separate the common word from the field-specific technical term.
So back to the seminar my students were taking. Because social scientists and humanists often borrow their discipline-speak terms from English, they may have a harder time recognizing that they are being just as jargon-laden as the doctor who told me about my patellar facet.
1 Hayot, E. The Elements of Academic Style: Writing for the Humanities. Columbia University Press, pg. 180.
Please note—I really admire Hayot’s book. Some sections are inspired, notably Chapter 3 “Eight Strategies for Getting Writing Done,” which is the best I have ever read on the subject and by itself justifies buying the book. In fact, his discussion in that chapter of “virtuous procrastination” was probably worth the $20. But Hayot is a humanist writing for humanists—his language reflects that.
Mitch Wagener and I wrote this years ago when I was at the University of Alaska and he was a Ph.D. student. We originally published it online as part of the “First International Virtual Symposium on Mad Science.” That always bugged me because there was no “mad science” involved—we actually collected the data and it’s all real. I still list this as a “technical report” on my C.V. I hadn’t been able to find a copy of this for a long time, but one recently appeared and I thought this blog would be a good place to give it new life. I hope you find it entertaining.
The Production of Greenhouse Gases in Faculty Seminars
Stephen M. Wagener and Joshua P. Schimel
University of Alaska Fairbanks
Recently Hungadunga and McCormick (1991) observed that, in the natural course of their duties, academic biologists produce only a little less CO2 and CH4 than do feedlot cattle. As an interesting aside, they also found that only politicians ranked higher than cattle in the production of these gases. Unfortunately, university administrators were not tested.
It had long been observed at our institute that during faculty seminars the room—lovingly referred to as the Autoclave—gets stuffy and people often fall asleep. We named this phenomenon Seminar Narcolepsy Syndrome (SNS, pronounced snooze). It is also evident that some seminars cause a much higher level of SNS than do others. We hypothesized that the SNS might be caused by excessive CO2 in the seminar room. This raised several issues that we addressed in this experiment: Is SNS actually caused by CO2? Is this phenomenon related to the seminar topic? How might faculty seminars contribute to the production of the greenhouse gases CO2 and CH4?
During the fall and winter of 1991-1992, we took air samples at four locations in the seminar room at 15-minute intervals during faculty seminars. The sample takers were various graduate students and faculty. We also measured beginning and ending temperatures and counted the attendees. We analyzed samples using gas chromatography. We also wished to measure relative humidity and atmospheric mercaptans, but our GC was not set up to analyze mercaptans and we were persuaded that using a sling psychrometer during the seminars would be disruptive.
The ornithology and mammalogy seminars generated the most carbon dioxide per capita. This is not surprising considering the excitement these subjects generate at our biological institute. Since people respire less while asleep, dips in CO2 during the plant ecology, stream ecology and invertebrate zoology seminars probably indicate people nodding off. The microbial ecology seminar produced by far the most methane, although there was a sharp drop 15 minutes before the end. This corresponds to certain people (probably mammalogists) leaving early. Methane concentrations actually went down during the plant ecology seminar.
Our initial hypothesis was that high CO2 induced SNS and caused the audience to fall asleep. In fact we found the opposite: mammalogy and ornithology seminars caused some serious heavy breathing and greatly accelerated CO2 production. Other subjects had both higher incidences of SNS and reduced CO2 build up. It is possible that at a microbiology institute, micrographs of bacterial conjugation would stimulate heavy breathing, while moose conjugation might induce SNS. We suspect that a seminar on human conjugation would induce heavy breathing in all academic institutes.
Spectators at plant ecology seminars appear to be facultative methanotrophs. The only other animals known to consume methane are several species of marine clam; this relationship may therefore require a major reevaluation of the place of plant ecologists in the phylogenetic tree. We also found that carbon dioxide accumulates faster in the east half of the seminar room, while methane increases faster in the west side. This could be due to the fact that certain people consistently sit in the same spot. The effect is that the seminar room can generate its own weather patterns.
We have the following recommendations for making faculty seminars more eco-friendly:
- Only boring seminars should be allowed. Not only would this reduce the production of greenhouse gases, but it would have the added benefit of reducing the overall stress level of attendees.
- Each attendee should be encouraged to bring a potted plant. Tomato plants would be ideal if one has a strong opinion about the seminar presenter. However, attendees should probably wait until after the seminar to get themselves potted.
- Room lights should remain on at all times to encourage photosynthesis. However, this could also have the negative and unrealistic effect of encouraging people to stay awake.
- No one should be allowed in the seminar room who had the burrito special for lunch.
Hungadunga, M.L. and F.G. McCormick. 1991. Human sources of greenhouse gases. North Dakota Journal of Natural Gas Production 107:23-45.
Table 1. Seminar topic, attendance, temperature, and gas concentrations
| Seminar Topic | Number of Attendees | Temperature (°C) | Ending CO2 (%) | Ending CH4 (ppm) |
|---|---|---|---|---|
| Ornithology | 70 | 27 | 0.15 | 2.19 |
| Mammalogy | 60 | 26 | 0.19 | |
| Plant Ecology | 75 | 27 | 0.14 | 1.88 |
| Plant Taxonomy | 50 | 26 | 0.10 | 2.37 |
| Invertebrate Zoology | 75 | 26 | 0.15 | |
| Stream Ecology | 75 | 25 | 0.11 | 2.31 |
| Biochemistry | 30 | 26 | 0.04 | |
| Microbial Ecology | 50 | 26 | 0.09 | 4.42 |
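As a quick sanity check on the per-capita claim in the text, one can divide each seminar's ending CO2 concentration from Table 1 by its attendance. This is only a back-of-the-envelope sketch: it ignores room volume, seminar length, and baseline air, all of which a real flux calculation would need.

```python
# Ending CO2 (%) and attendance from Table 1; seminars with missing CH4
# values are still usable here since we only need the CO2 column.
seminars = {
    "Ornithology": (70, 0.15),
    "Mammalogy": (60, 0.19),
    "Plant Ecology": (75, 0.14),
    "Plant Taxonomy": (50, 0.10),
    "Invertebrate Zoology": (75, 0.15),
    "Stream Ecology": (75, 0.11),
    "Biochemistry": (30, 0.04),
    "Microbial Ecology": (50, 0.09),
}

# CO2 per attendee, as a crude per-capita index
per_capita = {topic: co2 / n for topic, (n, co2) in seminars.items()}

# Rank topics from heaviest to lightest breathing
ranked = sorted(per_capita, key=per_capita.get, reverse=True)
print(ranked[:2])  # mammalogy and ornithology lead, as the text notes
```

Consistent with the discussion, the mammalogy and ornithology seminars top the list, while biochemistry brings up the rear.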
I’m one of the Chief Editors of a major international journal and I need to find reviewers for ~150 papers a year. A substantial fraction of those papers are in sub-fields I’m not deeply expert in. So we ask authors to suggest reviewers. I never rely entirely on that list, but I always start by looking at it for ideas and inspiration. While some reviewer lists are really helpful, others are useless. Why? What makes a good list of recommended reviewers?
Keep in mind several principles:
- I’m looking for people who I think I can trust to give objective, thoughtful, and critical reviews.
- I’m a senior researcher in my field and I know the big names.
- The most senior people in your field are very busy.
So how do these principles translate into a good list of recommended reviewers?
First, if you suggest the three biggest names in the field, I already know them! And I already know they are likely to say no. So by suggesting those gods, you’ve done me no help at all.
Alternatively, if you recommend people I suspect might be too close to you, I worry that they won’t give a critical review. For many U.S. and European scientists, I know the relationship structures, but if I’m unsure, I look for reviewers from different countries, states, or at least different universities. For example, I might ask someone from York to review a paper from Exeter (in the U.K.), or someone from Umeå to review a paper from Lund (in Sweden). But for an American paper, I’ll still usually try to avoid two U.S. reviewers; Soil Biology & Biochemistry is an international journal and we take that seriously. If I have reason to suspect the quality of your recommendations, I’m unlikely to use any of them.
So, if you are a scientist in a small country with a small research community, don’t recommend three reviewers from your country! If, for example, a Czech scientist recommends three people with .cz email addresses, I’m likely to disregard all of them, especially if they are names I don’t know well. Honestly, I’d even be suspicious of two .cz addresses.
On this matter, China remains the biggest question mark for me. China may be as large and with as dispersed a research community as the U.S., but the Chinese research community is still emerging on the international stage (at least in my field) and I don’t have as good a sense of where all the different units are or what the relationships are, which I do have for the U.S. and Europe. Chinese naming practices—few, but common, family names—also make it harder for outsiders to assess who’s who. There are many Lis, Zhus, etc. It’s like Wales, where half the population is named Jones, except the Chinese population is 6600 times larger. In contrast, I know only one other Schimel in ecology and he’s my brother. So if you’re Chinese, I’d recommend that you not suggest three Chinese reviewers (and maybe not even two).
O.K., so I’ve told you what not to do. What should you do? How do you put together a reviewer list that I, as editor, will find useful in helping me find good reviewers quickly?
Give a list of people who aren’t the obvious “usual suspects” in the broad field. In terms of seniority, mid-career researchers (e.g., Associate Professor level on the U.S. rank scale) are often ideal; junior faculty or even postdocs can also be great if they’ve done interesting and insightful work in your area. Often younger researchers do the best reviews, and the ideal is someone who’s had enough experience to develop vision and perspective, but who still has the time in their life to commit to doing a thoughtful review. The perfect name is one to which my response will be “Ah ha, of course! I hadn’t thought of her, but she’d be great.”
Give me three of those, and I will be grateful and impressed. Never a bad way for the editor to feel when he’s beginning the process of determining your paper’s fate.
P.S. Oh, and don’t recommend as a reviewer someone who is already one of the Chief Editors for the journal. It really doesn’t help me if I’m handling a paper as editor and someone listed me as a suggested reviewer.
Thanks to Andy Hoffman for a column that brought to my attention this letter (Chaos in the Brickyard), which was published in Science in 1963. Already, the author, Dr. Bernard Forscher, was concerned that science was increasingly focused on producing “bricks”–little pieces of information–rather than “edifices”–real bodies of useful knowledge. That was in 1963, people!
Dr. Forscher wrote this as a parable, but the idea is directly analogous to my distinction between data, information, knowledge, and understanding. Ultimately science is about edifices: knowledge and understanding. Yet many scientists never do more than make bricks–individual nuggets of information.
I think Dr. Forscher was visionary: our reward systems increasingly skew toward rewarding bricks. The more we focus these systems on simple quantitative metrics, such as the Impact Factor of the journal a paper was published in, the more Dr. Forscher’s vision comes true and the more we end up focused on “bricks for brick-ness’ sake” rather than on the ultimate goal of building structures.
Reward systems should target direct measures of a scientist’s contribution to building real structures of knowledge, rather than just their ability to produce bricks. That is something that peers can usually do easily, and sophisticated network mapping software can do with difficulty (e.g., Thomson Reuters Citation Laureates), but the simple metrics are mostly incapable of–they only measure bricks.
The concerns Dr. Forscher raised in 1963 have only become more extreme, as has the importance of his reminder: to have real impact as a scientist, you need to force yourself and your trainees to aim for edifices. The ultimate goal of science is to advance human understanding. We build our great edifices of understanding out of individual pieces of information (bricks), but a brick’s value is only as great as its contribution to an edifice. A brick, by itself, on its own and isolated, remains a block of clay and straw. The papers that ultimately matter are those that offer more than just information–they offer real knowledge and insight.
Andy Hoffman’s column is available at: http://chronicle.com/article/Isolated-Scholars-Making/151707/
This isn’t about “Writing Science”; rather, it is the ultimate in personal writing, by someone I respect deeply for his science but equally for his humanity and eloquence in the face of adversity. I repost it because these are people who are close to me, because the message is equally close, and because it shows the power of language. Alan, Diana, and Neva, the thoughts and wishes of all your friends are with you.
My wife Diana was recently diagnosed with a brain tumor, the second brain tumor to hit our family in fifteen months. They are unconnected events – lightning striking twice unimaginably. For the first one, still very much a part of our young daughter’s life, we shared a great deal here. We may do the same for this latest challenge, or we may not. It’s a bridge we have not yet crossed. But on World Cancer Day, here are a few words.
I’ve had a recurrent dream since last Friday. I am afloat on an equatorial sea, a sweep of beach to my back. Before me, lines of aquamarine rise from the depths in metronomic intervals as though the sea itself is but the rippling skin of some unseen leviathan.
The waves begin to build, each one just a touch higher than the last, and in the crystalline walls appear the faces of those…
Here’s another word I want to add to my pet peeve list: “Impact.” It’s overused to the point of meaninglessness, and it often has little meaning anyhow.
I blame Thomson Reuters for this overuse because they developed the “Impact Factor.” This, of course, is a way of assessing the average citation levels for journals. As the impact factor has become the dominating metric of the publishing world, it has elevated the word “impact” into something to strive for. So I see many people using “impact” in hopes of increasing their impact factor. Sorry, it doesn’t work that way.
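For readers who haven’t met the metric itself, the Impact Factor is simply an average citation rate for a journal. The standard two-year version is computed as:

```latex
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}
```

So a journal whose articles from the previous two years drew 400 citations this year, across 200 citable items, has an impact factor of 2.0. Note that nothing in the formula measures whether any given paper had influence; it is an average over the whole journal.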
I know some avowed language purists argue that “impact” should be restricted to a physical blow (its original meaning), but its figurative meaning (“the effective action of one thing or person upon another; the effect of such action; influence; impression” [OED]) has been in use since at least the early 1800s. Used well and carefully, it can be a great word.
No, my issues with “impact” are massive overuse and minimal clarity. When you try to express the impactfulness of your work by describing every effect and influence as an impact, the real impact of all your “impacts” is to deaden readers’ senses and reduce the impact of “impact.” OK, that’s overkill, but you get the point.
More important though, is that “impact” is a fuzzy verb (Writing Science section 14.2, pg. 137). It sounds specific and concrete, but it isn’t—rather, it says there was an effect, but not what the effect was. That is a problem that goes way beyond mere overuse, which is a crime against linguistic aesthetics and style. Being unclear is failing in communication.
Consider sentences such as the following:
“There was a strong impact of added ESX on the growth of Daphnia.”
“The rate of GDH activity was impacted by altered salinity.”
Here, “impact” is a synonym for “effect,” and is devoid of content. Did ESX increase or decrease Daphnia growth? If ESX were presented as a potential toxin, I might infer that the effects were bad, since “impact” suggests a strong result, but the word itself doesn’t tell me—thus it fails in its mission. Instead, tell me what happened: “ESX substantially reduced growth” or “GDH activity increased with salinity.”
You can sometimes use words like “influenced” or “altered” as an opening when the patterns of response were complex and you can’t capture the pattern with a single word, for example, “ESX altered the growth of Daphnia, initially increasing growth rate but later reducing it, such that overall yield was reduced.” But “impact” is a poor choice for that role, as it implies simple, direct, substantial action (e.g. an asteroid’s impact crater).
Remember the writer’s rule: “Show, don’t tell.” If you show us that there were interesting and significant influences and patterns in your results, you will convince us. Describing those influences as “impacts” to make them feel important won’t. Rather, it will feel like you’re trying too hard to speak for your results, which makes me suspicious. The role of language in science is to help the data speak for themselves. It’s to bring out and highlight what nature is showing us—not to try to impose a message upon nature.
“Impact” often feels like the author is trying to pump up their results, is heavily overused, and is surprisingly lacking in substance. That’s a trifecta that earns it this week’s place in Josh’s language pet peeves.