The sabermetrics of university admissions

Many college applicants are awaiting decisions. They went through the process of getting recommendations, taking standardized tests, writing essays, doing volunteer work, filling out applications, being interviewed, and so on. The sloppy way in which students are admitted to these schools is at odds with the use of statistics elsewhere.

Consider the use of statistics in Major League Baseball (sabermetrics). This is standard operating procedure in how baseball teams select players, decide what to pay them, and so on. The same is true for the National Football League. The New England Patriots and Philadelphia Eagles, for example, use statistical methods in deciding whom to draft or trade and how much to pay players. In addition, they use statistics in deciding which plays to run. Ditto for NBA teams.

The same is not done for medical school admissions, at least not to the same degree. According to the Association of American Medical Colleges, medical school admissions committees holistically evaluate applicants. Holistic evaluation considers applicants as individuals. It involves looking at applicants’ diversity, life experiences, and personality, in part through essays and interviews, as well as looking at more traditional academic indicators such as grade point average (GPA) and standardized test scores. Committees usually consider applicants’ charitable work, performance in unstructured interviews, physician shadowing, race and ethnicity, recommendations, and undergraduate major. The different factors are then matched to the admission committee’s view of its community, school, and medicine in general.

These non-academic factors (consider, for example, physician shadowing) have not been validated. They either don’t predict performance or are not known to predict it. In addition, they water down consideration of factors that do predict performance and, in fact, do so reasonably well.

A number of studies have found that the combination of the Medical College Admission Test (MCAT) and undergraduate GPA is the best predictor of success in medical school. MCAT scores also correlate with medical board scores. The way to choose applicants who will be the best doctors is to rely on these factors. This is particularly true if these factors are combined with a personality test, such as one that looks for statistically validated personality features such as grit or conscientiousness.

The MCAT’s predictive power is similar to that of other standardized tests. Test scores predict undergraduate and graduate student performance reasonably well. They also predict job performance across a range of jobs. Even when restricted to gifted students, the SAT still predicts who will perform better. Standardized test scores even predict who will be a better professor.

The predictive power of standardized tests is unsurprising. Standardized test scores (such as the SAT) correlate with IQ, and IQ is a reasonably strong predictor of job performance. IQ predicts job performance better than do other measures such as interests, personality, reference checks, and interview performance. The more complex the job, the better IQ predicts performance.

Medical schools should consider only standardized test scores, GPA, and, perhaps, a couple of other validated factors such as personality-test scores and structured-interview scores. The same is true for universities admitting undergraduates, especially elite universities.

A defense of the holistic admissions system for medical school is that it predicts which physicians have better bedside manner, care more, or are more likely to serve poor or rural people. One problem with this defense is that there is no reason to think that admissions committees can outperform validated tests for desired personality features. If personality features are important (consider, for example, emotional intelligence), they should be tested in the best available way.

A second problem is the lack of evidence that committees can predict these features. For example, I can’t find any evidence that admissions staffers can predict bedside manner from a half-hour interview. A third problem is that an argument is needed as to why these factors are more important than ability. Physician shortage and medical error are costly. Medical error is one of the leading causes of death. Choosing a student whose demographics or numbers suggest they will work fewer hours or will be a less capable physician than their replacement is similar to replacing a second-round draft pick in the NFL with a sixth-round pick. It is unclear why a gain in bedside manner or caring is more important than greater ability.

A second argument for holistic admissions is that diversity is important and holism is a better way to achieve diversity than a purely statistical system. A problem with this argument is that if diversity is worth pursuing, then it is worth pursuing with formal weighting. By analogy, if an NFL team were required to hire Jewish players, it would make sense for the team to use the same statistics-based ranking that it uses to rank other players. This way a team might have to substitute a third-round draft pick for a first-round pick, but it avoids substituting a seventh-round pick for a first-round pick.

In addition, if the diversity-related cost (less merit per doctor and fewer doctors) outweighs the diversity-related benefit (for example, role models and stereotype elimination), then it fails a cost-benefit analysis. We have a general idea of how markets view diversity. Competitive markets put little if any value on it. Consider, for example, Hollywood, MLB, the NBA, and the NFL. In general, markets are better than other methods at weighing costs and benefits. The cost of diversity in medical school is worth considering. It would be helpful to know, for example, how many additional deaths and injuries from physician error would come about for a 1 percent increase in medical-class diversity and whether the tradeoff is worthwhile.

The case for using a statistics-based admissions system for undergraduate admissions is similar to that for medical school. University admissions should be simple (standardized test score, GPA, and, perhaps, a few other validated factors) and done by a computer. At the very least, it should be done in a way similar to what is done in professional sports.
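A computer-run, statistics-based ranking of the kind described above could be as simple as a weighted sum of a few validated factors. Here is a minimal sketch in Python; the weights, factor names, and normalization are hypothetical placeholders, not validated values. In a real system the weights would be estimated from data, for example by regressing later performance on each factor.

```python
# Minimal sketch of a statistics-based admissions ranking.
# Weights below are hypothetical illustrations, not validated values.

def composite_score(test_pct, gpa, personality_pct,
                    w_test=0.5, w_gpa=0.35, w_personality=0.15):
    """Combine factors (percentile scores on a 0-1 scale, GPA on a
    4.0 scale) into a single weighted composite score."""
    gpa_norm = gpa / 4.0  # normalize GPA to the 0-1 range
    return (w_test * test_pct
            + w_gpa * gpa_norm
            + w_personality * personality_pct)

# Hypothetical applicants, ranked from highest to lowest composite.
applicants = {
    "A": composite_score(test_pct=0.90, gpa=3.6, personality_pct=0.70),
    "B": composite_score(test_pct=0.75, gpa=3.9, personality_pct=0.85),
}
ranked = sorted(applicants, key=applicants.get, reverse=True)
```

The design choice mirrors the article's point: every factor that enters the ranking is explicit and carries a stated weight, so any preference (including a diversity preference) would appear as a visible term rather than as an opaque committee judgment.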

Stephen Kershnar is a philosophy professor at the State University of New York at Fredonia. Send comments to editorial@observertoday.com
