The Rankings Game
Depending on which business school ranking you consult, the best MBA program in 2010 was at the University of Chicago, HBS, the London Business School, or Stanford University. Among the five leading media rankings, HBS fell no lower than fourth place last year, compiling the best overall showing. Chicago and Stanford each disappeared from the top five in at least one ranking. And the London Business School showed up only once in the top five — as No. 1.
Two questions leap from this jumble. How can four schools all be No. 1 simultaneously? And how can the No. 1 school in one ranking fail to make the top five in others?
The answer lies in the methods the publications use to evaluate schools. The big five — Bloomberg Businessweek, U.S. News & World Report, Forbes, the Financial Times, and the Economist — each use very different rating systems. In fact, the methodologies differ so much that “the best” school in one ranking can fall far behind in others. So while the rankings game is great for selling publications, it’s confusing for readers.
Bloomberg Businessweek bases its ranking largely on “customer” satisfaction, giving equal weight to opinions of recent MBA graduates and corporate recruiters. U.S. News is alone in asking deans and MBA program administrators for their views on top schools, and it gives significant weight to undergrad GPAs and GMAT scores. Forbes relies solely on measuring monetary return on the two-year MBA educational investment. (See chart.)
The Financial Times and the Economist go global with their surveys, including schools in Europe and Asia. (Bloomberg Businessweek creates a separate list of the top international programs.) The FT emphasizes MBAs’ salary growth and career progression three years out. The Economist derives 80 percent of its data from a lengthy questionnaire filled out by the schools, heavily weighting student career services, diversity of corporate recruiting, and starting salaries.
With such a wide range of approaches, the results clearly depend on who is surveyed, what is asked, and how much weight is assigned to the various answers. “I think people would be surprised just how different the methodologies are,” says Brian Kenny, chief marketing and communications officer at HBS, whose office manages the data requests that arrive from the various publications. “All the rankings go to great lengths to explain their methodologies,” he adds, “but most people don’t have the time or inclination to go that deep. They just stop at the number.”
Behind the Numbers
Going beyond the numbers to look at the methodologies reveals weaknesses in each survey. Critics have often pointed out that the surveys used by Bloomberg Businessweek and U.S. News are vulnerable to bias because they rely so heavily on judgments from recent graduates, deans, and administrators. Students understand the value of graduating from a top-ranked institution and may grade their school accordingly. They can also be coached to give their school high marks. Critics also note that the ratings deans and MBA program administrators give to competitors for the U.S. News survey may be based more on reputation than on actual knowledge of the other schools’ MBA programs.
The emphasis on salaries in the Forbes and FT surveys strikes many as misguided. “Imagining what will happen to your career in all its dimensions — from job quality to earnings power to wealth creation potential — is more important,” says Senior Associate Dean for External Relations William Sahlman, the Dimitri V. d’Arbeloff–MBA Class of 1955 Professor of Business Administration. “The Forbes measure doesn’t make sense to me. Calculating return on investment based on salary five years out means Forbes rewards schools that mainly feed investment banks, private equity, and hedge funds. By implication, it also undervalues entrepreneurial careers or those in social enterprise.”
To a lesser degree, the FT also rewards schools that turn out MBAs with high-paying jobs in financial services, basing 40 percent of its ranking on salary growth three years out. Meanwhile, the paper’s editorial pages have frequently taken business schools to task for grooming too many MBAs more concerned with personal wealth than social value, notes Kenny.
The Economist, too, has its flaws. Critics note that 80 percent of its rankings are based on unaudited data submitted by the participating schools. That leaves the door wide open to self-serving submissions that fudge the numbers. The Economist also gathers data on the percentage of international and female students and the number of languages offered.
Meaningless Beauty Contests?
BusinessWeek fired the first shot in what became a media ranking “arms race” in November 1988 with publication of its cover story on “The Best B-Schools.” Up to that point, business schools built their reputations largely on the research productivity and scholarly reputations of their faculties. BusinessWeek’s customer-focused look at MBA programs was an instant hit; the issue flew off newsstand shelves. Two years later, U.S. News followed with its own business school ranking. The two publications ruled the market for nearly a decade before the FT, Forbes, the Economist, and the Wall Street Journal all launched their own rankings between 1999 and 2002. (The Wall Street Journal discontinued its recruiter-based survey after 2007.)
Prospective students made the rankings cash machines for the publishers, and for better or worse, the rankings came to dominate would-be students’ perceptions of business schools. As the media rankings gained popularity, criticism of their impact grew louder. Business schools complained about the time and expense involved in data collection. Deans complained that focusing on rank diverted prospective students from the truly important question: Which school is the best fit? Academics who studied the rankings challenged their methodology and warned that they forced schools to plow resources into short-term “fixes” to move up in the rankings rather than focusing on long-term educational quality.
As tensions rose, HBS and Wharton declared in 2004 that the media rankings were “meaningless beauty contests,” and they would no longer cooperate with any of the surveys. That meant saying no to all requests for data and assistance in contacting students and alumni. As an alternative to the media rankings, the two schools spearheaded an effort with the Graduate Management Admission Council (GMAC) to develop a database of standardized, objective information to facilitate comparison of MBA programs.
In hindsight, taking a principled stand against the rankings didn’t yield the expected benefits. The GMAC database, MBA.com, was launched in 2006, but it never gained the support needed to make it a truly viable alternative source of information. As for the HBS and Wharton boycotts, a ranking without the two perennial leaders would have questionable credibility in the marketplace, so the publications found ways to obtain data and survey students and alumni without the two schools’ cooperation. Not surprisingly, HBS and Wharton suffered in some rankings.
Beginning in 2009, the schools softened their official position and once again offered to cooperate with the rankings. Both assigned someone to forward online surveys to recent graduates and alumni, and to coordinate data collection from various departments and funnel it back to the publications. (HBS has a strict privacy policy that bars handing out student or alumni e-mail addresses to commercial enterprises.) “My feeling is that rankings do serve a purpose by aggregating information for people who are trying to make a difficult decision,” says Kenny, who came to HBS in 2008. But there’s a limit to their usefulness, he adds. “I don’t believe that a ranking number should ever drive someone’s decision about where to go to school.”
Sahlman agrees. In his estimation, “the rankings are a trivial piece of information in the overall scheme of the work prospective students should do to figure out what school is best for them.” High on his list of musts: visit the campus, sit in on a class, dine at the student cafeteria, and talk with students, faculty, and alumni. The bigger challenge, he continues, is trying to understand the extent to which any institution has a transformational impact on the students who go there. That impact unfolds over a career and cannot be measured by getting recent graduates’ impressions of their educational experience, asking recruiters to assess recent hires, or measuring short-term growth in salaries, he explains. “All these rankings are based on relatively short-term, superficial measures,” says Sahlman.
Despite their evident shortcomings, even the harshest critics acknowledge that business school rankings are here to stay. No doubt, students, alumni, and the schools themselves will continue to pay attention to them. But as Kenny and Sahlman point out, fans and critics alike should go behind the numbers to understand the methodologies employed. The rankings only answer the questions that are asked, and they arguably aren’t that helpful in understanding the real differences among the schools.
Your Comments
1. Rankings differ because ranking methodologies differ.
2. Each ranking methodology has its own shortcomings and weaknesses.
3. Points 1 and 2 lead some to state that "the rankings are a trivial piece of information."
But what about an equal-weighted average of the rankings from the various organizations (say, the four mentioned at the beginning of this article)? Would such a meta-analysis produce information that is more useful to an MBA applicant or potential employer? Intuitively, one thinks so.
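For illustration only, here is a minimal sketch of the equal-weighted meta-ranking the comment proposes. The school names and rank values below are hypothetical placeholders, not actual results from any of the publications discussed in the article.

```python
# Hypothetical sketch: an equal-weighted average of ranks across publications.
# Lower rank is better; all publications are weighted equally.

from statistics import mean

# Placeholder data: each school's rank in four publications (not real figures).
ranks = {
    "School A": [1, 2, 4, 3],
    "School B": [3, 1, 1, 5],
    "School C": [2, 5, 3, 1],
}

# Sort schools by their average rank across the four lists, best first.
meta_ranking = sorted(ranks.items(), key=lambda item: mean(item[1]))

for school, rs in meta_ranking:
    print(f"{school}: average rank {mean(rs):.2f}")
```

Such an average would smooth out the quirks of any single methodology, but it inherits the questions the article raises: the underlying surveys measure different things, and a convention is still needed for schools that appear in some lists but not others (for example, dropping them or assigning a penalty rank).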