Against Prediction Markets in College Admissions
Chris Peterson is an admissions officer, teaching assistant, and researcher at MIT. He works at the intersection of digital strategy, new media, and social change.
In addition to his research affiliation with Civic, he is on the Board of the National Coalition Against Censorship, a Fellow at the National Center for Technology and Dispute Resolution, and the founder, owner, and sole-proprietor of BurgerMap.org.
He earned his B.A. in Critical Legal Studies from the University of Massachusetts, Amherst, where he completed his thesis on Facebook privacy and/as contextual integrity advised by Ethan Katsh and Alan Gaitenby. He earned his S.M. in Comparative Media Studies from MIT, where he completed his thesis on user-generated censorship advised by Ian Condry, Ethan Zuckerman, and Nancy Baym.
"Each standard and each category valorizes some point of view and silences another. This is not inherently a bad thing—indeed it is inescapable. But it is an ethical choice, and as such it is dangerous—not bad, but dangerous."
—Bowker and Star, Sorting Things Out, pp. 5-6*
GMU Professor Robin Hanson has an entry up at OvercomingBias called College Admissions Markets. Hanson argues for incorporating prediction markets into the college admissions process, writing:
The problem comes mainly from granting discretion to admissions personnel to make subjective judgements. One solution is to just use objective features like test scores…
As usual, my solution involves prediction markets. As I posted here five years ago, we could hide clearly identifying info about students, post their application packets to the web for all to see, and let anyone bet on the consequences of each student going to each school. Students might care about their chance of graduating, their income later, and some measure of satisfaction. Elite schools might care more about the chances of students being “successful” someday. Different schools might use different measures of success, such as with different weights for achievement in sports, politics, business, arts, etc. Schools could admit the students with the best chance to succeed by their measure, and students could apply to and then go to the school giving the best chance of achieving their goals. Or students could not go to school at all, if that was estimated to be best.
There are a lot of things wrong with college admissions. The "subjective" components of applications were born in sin to help keep Jews out of Harvard. The behavioral economics literature has devoted many volumes to the well-documented fallibility of experts. While I am a strong believer in affirmative action, the quota / points model rejected by the Supreme Court in Bakke is bad for a whole host of reasons. And there are always new places to look for information. I've written extensively about the challenges of making better admissions decisions here, here, here, and here.
With that said: Hanson's post is wrong, and it is wrong for two specific reasons.
The first reason is that Hanson does not explain why we should expect prediction markets to be good at this kind of work. Cass Sunstein, in his Infotopia: How Many Minds Produce Knowledge, painstakingly lays out the conditions under which prediction markets work well. He argues, based on both the literature and informal experiments, that "crowds" are not always better than "experts." A market may be very good at picking the weight of a cow, the number of marbles in a jar, or the value of a company; it may be very bad at estimating the number of light-years to a galaxy, or the number of students at a particular school. It is not a priori obvious which kind of task giving college admissions advice through a prediction market is, and Hanson does not give any reason why he assumes a market is well suited to it.
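Sunstein's distinction can be illustrated with a toy simulation. The sketch below is mine, not Sunstein's: it models a "cow-weight" task, where individual guesses are independently noisy but roughly unbiased, and a "light-years" task, where guessers share a common systematic bias (modeled here, arbitrarily, as a 40% underestimate). Averaging a large crowd cancels the first kind of error but not the second.

```python
import random

random.seed(0)

def crowd_estimate(true_value, n_guessers, bias=0.0, noise=0.25):
    """Average n independent guesses. Each guess equals the true value,
    shifted by a shared multiplicative bias, then perturbed by symmetric
    noise. Averaging cancels the noise but leaves the bias intact."""
    guesses = [
        true_value * (1 + bias) * (1 + random.uniform(-noise, noise))
        for _ in range(n_guessers)
    ]
    return sum(guesses) / len(guesses)

true_weight = 1200  # the "cow" task: errors independent and unbiased
fair = crowd_estimate(true_weight, 1000, bias=0.0)

# the "light-years" task: everyone shares the same wrong mental model,
# modeled as a common 40% underestimate
skewed = crowd_estimate(true_weight, 1000, bias=-0.4)

print(abs(fair - true_weight) / true_weight)    # small: noise averages out
print(abs(skewed - true_weight) / true_weight)  # roughly 0.4: bias remains
```

The open question Hanson skips is which regime college outcomes fall into: if bettors share the same cultural priors about what a "successful" applicant looks like, a market aggregates that shared bias rather than correcting it.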
Hanson, a leader in the field of prediction markets, surely knows this better than I (and probably Sunstein). Without any explanation, I would guess he sees this as, at worst, an experiment. Prediction markets can't hurt, he may think, and could well help, so let's build them and see. At the very worst, won't we just learn something new?
Well, no, because the second reason is that any configuration of college admissions which would provide market information for students while "[hiding] clearly identifying info" is either meaningless, misleading, or outright oppressive.
At one extreme, you might imagine posting only GPA and test scores, which would indeed be difficult to reattach as such to a given student. Yet it's not clear how or why this would provide any actionable information for the market. Schools do not all grade the same. Test scores vary, not only by parental income, but also by region and type of test: roughly speaking, kids on the coasts more commonly take and perform better on the SATs, while kids in the heartland more commonly take and perform better on the ACTs. GPA and score are quite literally meaningless without context. So what context do you begin to provide? Parental income? Region? Average school GPA? As soon as you add meaning to the data you also make it more identifiable.
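This trade-off is, in essence, the k-anonymity argument from the privacy literature: each context field you add splits the pool into smaller groups of indistinguishable records, and once someone's group has size one, they are re-identifiable. The sketch below uses invented, illustrative applicant records (every field name and value is hypothetical) to show the smallest group shrinking as fields are added.

```python
from collections import Counter

# Hypothetical applicant records. Fields and values are invented for
# illustration; "region" and "income" stand in for the kinds of context
# a market would demand to make GPA and scores meaningful.
applicants = [
    {"gpa": "3.9+", "sat": "1500+", "region": "Northeast", "income": "high"},
    {"gpa": "3.9+", "sat": "1500+", "region": "Midwest",   "income": "low"},
    {"gpa": "3.9+", "sat": "1500+", "region": "Northeast", "income": "low"},
    {"gpa": "3.5-3.9", "sat": "1300-1500", "region": "South", "income": "mid"},
    {"gpa": "3.5-3.9", "sat": "1300-1500", "region": "South", "income": "high"},
    {"gpa": "3.9+", "sat": "1500+", "region": "Northeast", "income": "high"},
]

def smallest_group(records, fields):
    """Size of the smallest equivalence class: how many records share the
    rarest combination of the given fields. A size of 1 means at least one
    person is uniquely identifiable from those fields alone."""
    counts = Counter(tuple(r[f] for f in fields) for r in records)
    return min(counts.values())

print(smallest_group(applicants, ["gpa", "sat"]))            # 2
print(smallest_group(applicants, ["gpa", "sat", "region"]))  # 1
```

Adding a single context field already isolates one record; a real application packet, with essays and recommendation letters, is far more identifying than any combination of structured fields.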
What about things beyond tests and scores? Things like academic and extracurricular achievements, essays, or letters of recommendation? All of these factors are supposed to differentiate students. But to differentiate students, to make them stand out, cuts directly against the idea of hiding identifying information, of keeping the public record anonymous. The more meaningful and useful you make the information (arguably from the moment you make it meaningful at all), the more likely it becomes that the student will be identified.
Which is wrong, and bad, because radical transparency always privileges the privileged and disadvantages the disadvantaged. If your achievements are mainstream and unobjectionable (a perfect SAT score, a place on the IMO team, a starring turn as quarterback of a top football team), then you may be perfectly happy to be identified with what society valorizes.
But what if you are subaltern in some way? When I was an admissions officer I read an application from a brilliant homeless undocumented immigrant. Her application was inextricably bound up in this. Her essays and letters of recommendation touched on her struggle. Her SAT scores were unremarkable for the general applicant pool, but absolutely sky-high for someone of her personal background. What if she had known there was a nontrivial chance her application was made public? Would she have written these essays? Would she have applied at all?
Of course this young woman was an exceptional case. But that's the point: all students who are admissible to highly selective colleges are exceptional. They are merely exceptional in different ways. Some are uncontroversially exceptional because of their social power and privilege. Others occupy a much more fraught position. Radical transparency, trained on them, would cause them to wither and hide those elements of their stories which could allow them to appear exceptional, further harming those who have already suffered. So while we have no reason to believe prediction markets would help college admissions, we do know that the information they require would be either meaningless or harmful.
As I said before, there are real problems with college admissions. But advocates of prediction markets are imposing a solution on the problem without inquiry, without articulation, and without care for the consequences. When I wrote A Modest Proposal for Disaster Markets I was satirizing just such an uncritical belief in a means without consideration of the execution or ends. Alas, in his posts and comments, Hanson seems committed to keeping the faith.
* h/t to CMS classmate Denise Cheng for the apropos quote.