Research Brief: Subject Expert Matters
Karim Lakhani (photo by Dana Maxson)
Every year, billions of dollars are granted to scientists based on the evaluation of peer-reviewed proposals. The final decisions about whose project gets funded, which study is published, or who gets hired rest entirely with expert committees, which are ubiquitous in science. Yet exactly how those experts deliberate, and what factors may influence or bias their determinations, has remained a mystery.
“To ensure that the decisions coming out of such committees result in the most valuable science, we must understand how they arrive at their decisions, and where biases may enter the process,” says Professor Karim Lakhani. The working paper “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review,” by Lakhani and a group of researchers from Harvard and the University of Michigan, is the first step toward breaking that code.
In 2017, the team conducted a field study in collaboration with Harvard Medical School, which announced a call for research proposals addressing human health problems. The researchers recruited 277 faculty members from seven medical schools to evaluate the proposals. A double-blind process allowed reviewers to see how others had graded each proposal and then offered them the option of changing their scores. About 47 percent of the time, reviewers decided to update their initial scores.
More important, the study turned up substantial disparities by gender and status: Individuals with particularly high academic status—“superstar” scientists—altered their scores 24 percent less often than those of lesser status, and female reviewers changed their scores 13 percent more often than their male counterparts. “The disproportionate influence of men and superstar scientists on collective evaluations can thus result in substantial bias toward ‘their’ applicants,” the paper states, potentially leaving proposals that could yield significant scientific contributions unnoticed and unfunded.
“Our study highlights that biases are likely to occur, even when everyone in the room is a successful expert, and even when interactions are anonymous,” says Lakhani. The study also generalizes to other settings where experts deliberate, he adds, from VC partners and funding committees to professional awards and admissions settings. Speaking from the perspective of the scientific community, Lakhani says this study represents an important starting point—and that more work must be done to help craft a fairer review process.