May 1, 2018 / jpschimel

How to write an effective proposal review

In a recent post, I discussed how to do a good manuscript review. I analogized that to battlefield medicine, where the first step is triage: determine whether the patient can be saved. But the truly critical step is the second one: treatment. If the “patient”—the paper—has potential, then your job as reviewer is to help make it as strong as possible. Submitted manuscripts always need revision and editing to reach their potential. Peer review provides a service to journals in their decision making, but the greater service is the one we provide each other.

Proposal review is different. It is almost entirely evaluative, with essentially no “treatment.” We don’t review proposals for their authors, but for the funding agency. That ultimately serves the community, because we want agencies to make good decisions, and so we help them with that. But our job is to tell the agency whether they should fund the project, not to tell the Principal Investigators1 (PIs) how to make the work better. The PIs will see your review, but they are not its audience—the review panel and program officers are.

In making recommendations, remember that research proposals are works of science fiction: the PIs are not going to do exactly what they wrote. A proposal isn’t a promise, but a plan, and the military maxim “no plan survives contact with the enemy” applies. The PIs may have great ideas, but nature won’t cooperate, or they’ll recruit a student or postdoc who takes the work in different directions. That’s the nature of science. In a research project, you must aim to achieve the project’s core goals, but it will mutate. If you knew enough to describe exactly what you would do over three years, you wouldn’t need to do it! We do the research because we don’t know all the answers. We rely on PIs to use their judgement to sort out glitches that arise.

To earn a funding recommendation, therefore, a proposal should be pretty awesome—awesome enough that you have confidence that a) it is worth doing, b) enough of it will likely work, and c) the PIs will be able to work around the parts that don’t and still achieve their goals. If major elements are likely to fail, or you lack confidence that the investigators will be able to solve the problems that arise, you should say so and recommend rejection. When you are reviewing a proposal, therefore, you must answer two questions:
1) Is the proposal exciting and novel enough to be worth investing limited resources?
2) Is the proposal technically sound enough to be doable?

PIs show the novelty of the questions by demonstrating the knowledge gap. This calls for clearly defining the boundaries of existing knowledge (not just saying “little is known about this”) and for framing clear, falsifiable hypotheses (not fluff like “increasing temperatures will alter the structure of forest communities,” but a statement of how they think it will alter them). PIs demonstrate that the work will likely succeed by clearly explaining the experimental design (the logic is often more important than the gory details, though), discussing methods in appropriate detail, describing how they will address risks and alternative strategies in case things don’t work, etc. The better the PIs have thought through the plan, the better positioned they are to cope when things go off track.

One challenge in reviewing is that since only the best proposals will be funded, reviewing is inherently relative: how does this one stack up against the competition? Since you aren’t reading those, you have to assume a baseline to compare against. That is why the first proposal I ever reviewed took several days; now it sometimes only takes an hour. I had to develop a reference standard for what a good proposal looks like—the job gets easier the more you review2.

Also, keep in mind that success rates have often sunk below 10%, which means that many strong proposals fail. This is a shift from when I started, when success rates were 20-30%. That sounded bad until I served on my first panels and realized that only about 40-50% of the proposals were worth funding, creating a “functional” funding rate closer to 50%. With two panels a year, that meant if a good proposal didn’t get funded this time, it had a strong shot next time. That’s no longer true. Now, many seriously good proposals are not going to succeed, not this time, likely not next time, and quite possibly not ever. Ouch. As reviewers, though, just keep pushing—if you read a proposal that you really think deserves funding, say so. Force the panels and program officers to make the hard calls about which great proposals to reject—that’s the job they signed on for. It also helps them argue for increased support to say “We were only able to fund a third of the ‘high priority’ proposals.”

Scores
I know how NSF defines rating scores3, but in my experience, NSF’s definitions don’t quite match reality, and their connection to reality has weakened as funding rates have dropped. Over the years, I’ve developed my own definitions that I believe more closely match how the scores work in practice.

Excellent: This is a very good proposal that deserves funding. Exciting questions and no major flaws. If I’m on the panel, I am going to fight to see that this one gets funded.
Very Good: This is a good proposal. The questions are interesting, but don’t blow me away, and there are likely some minor gaps. I’m not going to fight to see this funded, but it wouldn’t bother me if it were. Functionally, this is a neutral score, not really arguing strongly either way.
Good: This is a fair proposal; the ideas are valid but not exciting and/or the approaches are weak (but not fatally so). The proposal might produce some OK science, but I don’t think it should be funded and will say so, if not vociferously.
Fair: This is a poor proposal. It should absolutely not be funded, but I don’t want to be insulting about it. There are major gaps in the conceptual framing, weaknesses in the methods, and/or it seriously lacks novelty.
Poor: This score is not really for the program officer, but for the PI. For me, giving a “poor” is a deliberate act of meanness, giving a twist of the knife to an already lethal review. It says: I want you to hurt as much as I did for wasting my time reading this piece of crap! I would never assign “poor” to a junior investigator who just doesn’t know how to write a proposal. Nope, “poor” is reserved for people who should know better and for some bizarre reason submitted this “proposal” anyhow.

In just about every panel I’ve served on, there are only a few proposals that are so terrific that there is essentially unanimous agreement that they are Must Fund. Those would probably have rated so regardless of who was serving on the panel and are the true Excellent proposals. Most of us probably never write one. Then there are the proposals that define Very Good: these comprise a larger pool of strong proposals that deserve funding—but there isn’t likely to be enough money available to fund all of them. Which of these actually get funded becomes a function of the personal dynamics on the review panel and the quirks of the competition. Did someone become a strong advocate for the proposal? Were there three strong proposals about desert soil biological crusts? It’s not likely an NSF program would fund all three if there were also strong proposals about tropical forests or arctic tundra. Had any one of them been alone in the panel, it would likely have been funded, but with all three competing, two might well fail. When resources are limited, agencies make careful choices about how to optimize across areas of science, investigators, etc. I support that approach.

Broader Impacts
One required element of NSF proposals is Broader Impacts. These can include societal benefits, education, outreach, and a variety of other activities. Including this was an inspired move by NSF to encourage researchers to integrate their research more effectively with other missions of the NSF and of universities. When NSF says that broader impacts are co-equal with intellectual merit as a review criterion, however, sorry, they’re lying. We read proposals from the beginning, but broader impacts come at the end. We start evaluating with the first words we read, and if at any point we conclude a proposal is uncompetitive, nothing afterwards matters. If the questions are dull or flawed, the proposal is dead and nothing can save it—not a clever experiment and not education and outreach efforts! Because broader impacts activities are described after the actual research, they are inherently less important in how we assess a project.

Broader impacts may be seen as an equal criterion because a proposal will only get funded if all of its elements are excellent. A proposal is successful when you grab reviewers with exciting questions, and then don’t screw it up! The approaches must address the questions and the education and outreach activities must be well thought out, specific and effective. Great broader impacts won’t save bad science, but weak broader impacts will sink strong science. The relative strengths of broader impacts activities may also decide which scientifically awesome project makes it to the funding line; but they won’t prop up weak science.

To wrap up: to write a useful proposal review, remember you are making a recommendation (fund vs. don’t fund) to the funding agency, and then providing justification for that recommendation. If you think a proposal is super, why? What is novel? Why are the experiments so clever? Why is the inclusiveness part more than just “we’ll recruit underrepresented students from our local community college”? How have the PIs shown that this effort is woven into the research? As an ad hoc reviewer, bring your expertise to the table to argue to the panel what they should recommend. As a panelist, give the program officer the information and rationale they need to help them decide. Do those things well, and your reviews will be useful and appreciated.

—————————————————–

1Please do not call them “Principle Investigators”—that is one common error of language that drives me nuts: a “principle investigator” investigates “principles”: i.e. a philosopher, not a scientist! A “principal investigator” is the lead investigator on a project. When I see people careless with that language, I wonder: are they equally careless with their samples and data? Do you really want me asking that when I’m reviewing your proposal?

2When I was a Ph.D. student, my advisor, Mary Firestone, came to the lab group and said she’d just been invited to serve on the Ecosystem Program review panel (two panels a year for three years) and asked what we thought. We all said, “no, don’t do it—we already don’t see enough of you!” She responded with “You haven’t come up with anything I haven’t already thought of, so I’m going to do it.” We all wondered why she asked us if she was going to ignore our input. We were clueless and wrong; Mary was considerate to even check. By serving on review panels you learn how to write good proposals—as I learned when I started serving on panels! It’s a key part of developing a career. Mary understood that; we didn’t. Sorry for the ignorant ill thoughts, Mary.

3NSF Definitions of Review Scores
Excellent: Outstanding proposal in all respects; deserves highest priority for support.
Very Good: High quality proposal in nearly all respects; should be supported if at all possible.
Good: A quality proposal, worthy of support.
Fair: Proposal lacking in one or more critical aspects; key issues need to be addressed.
Poor: Proposal has serious deficiencies.

 
