Ok, not a blog post title likely to set your pulse racing, but with so much discussion in the arts world over the past few years on the uses of data, a caveat is in order. It is brought to mind by a story from Money magazine (a branch of Time), which has tried to make inroads on US News turf by doing some college rankings of its own. These new rankings are all about ‘value for money’ – what does it cost to attend, and what is the payoff in salary? Whatever your thoughts on that focus, my issue is with how the rankings are constructed.
Money’s method is given here.
The Chronicle of Higher Education assesses the new rankings as follows:
In a world full of frivolous rankings (colleges with the best weather!), Money set out to compile a highly objective one. The result is relatively heavy on outcomes data and light on subjective prestigery like the reputation surveys used by U.S. News & World Report. To develop the rankings, Money joined with Mark S. Schneider, a vice president at the American Institutes for Research and a former commissioner of the Department of Education’s National Center for Education Statistics.
The list ranks 665 colleges according to 17 factors in three categories. “Quality of education” includes each college’s six-year graduation rate, student-to-faculty ratio, and a “value-added” graduation rate, which reflects the difference between the actual rate and the expected rate based on students’ academic and economic backgrounds. “Affordability” includes borrowing by students and parents, student-loan default rates, and estimates of the average net price of a degree (based on a college’s sticker price, total institutional aid, tuition inflation, and average time to graduation). And “Outcomes” includes various measures of early- and mid-career earnings, based on raw data from Payscale.com.
Naturally, critics of rankings will find plenty to quibble with. The earnings data, as Money acknowledges, are self-reported by only those alumni who chose to complete the survey, so it’s not a true measure of an entire class’s average salary. The net-price measure is based on averages, which means the figure might be much higher or lower than what a given student ends up paying. The “quality of education” measure is based in part on the standardized-test scores of incoming students, a variable that says more about the socioeconomic characteristics of a college’s students than about anything else.
In short, all rankings are flawed, no matter how much precision they might imply. As one smart person has written, rankings ultimately reflect choices made by real, live human beings: “Who comes out on top, in any ranking system, is really about who is doing the ranking.”
Here is the problem. The Chronicle focuses on how different outcomes are measured – are they accurate? are they the best measure of the outcome in which you are interested? But that line of criticism suggests that if Money had found very, very accurate ways of measuring outcomes, the rankings would be useful. But they would not be. The problem is that the long list of outcomes is turned into an index through a weighting scheme that is entirely arbitrary. Accurate data does not solve the problem of how apples are to be added together with oranges. And frozen waffles and bottles of sesame oil. Look at the percentage weights given to factors by Money. They are completely drawn from thin air. And so the index is not objective in the least.
Even if you have a terrific set of accurate data – data that could be very useful to your organization in seeing where you are achieving successes and where you are falling behind – assigning arbitrary weights to roll it all into an ‘index’ loses that useful information for the sake of one single catch-all result. You will have made the reporting and understanding of the data worse.
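To see how much the arbitrary weights matter, here is a toy sketch in Python – the colleges, scores, and weights are all invented for illustration, not taken from Money’s actual data or factors – showing that the very same accurate numbers produce opposite rankings under two equally plausible-looking weighting schemes:

```python
# Toy illustration (invented data): three "colleges," scored accurately
# on two outcomes, rank differently under two arbitrary weightings.

scores = {
    # (graduation_rate_score, early_career_salary_score), both on 0-100
    "College A": (90, 60),
    "College B": (60, 90),
    "College C": (75, 75),
}

def rank(weights):
    """Rank colleges by a weighted sum of their outcome scores."""
    index = {
        name: weights[0] * grad + weights[1] * salary
        for name, (grad, salary) in scores.items()
    }
    return sorted(index, key=index.get, reverse=True)

print(rank((0.7, 0.3)))  # ['College A', 'College C', 'College B']
print(rank((0.3, 0.7)))  # ['College B', 'College C', 'College A']
```

Nothing in the data tells us whether graduation rates deserve 70 per cent of the weight or 30. Whoever picks the weights picks the winner.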
Are there implications in the arts? Yes, I’m looking at you, Americans for the Arts. It is great that AFTA has collected so much data. But the ‘National Arts Index’ gives us nothing of value: much better to look at the various series without trying to aggregate them through an arbitrary scheme (and giving 78 different data sources an equal share in the NAI is completely arbitrary).
Are all indices bad? No, not if there is a rationale for the weighting system. The Consumer Price Index covers prices of all the various items bought by consumers, but the weighting system makes sense: weights are assigned according to the proportion of the average consumer’s spending budget accounted for by each item. It actually does give us something interesting beyond just providing a list of the thousands of prices we face and how they changed from one year to the next. But the Money ranking of colleges, and the AFTA National Arts Index, don’t do that.
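For a sense of what a meaningful weighting looks like, here is a minimal CPI-style sketch – the budget shares and prices are made up for illustration, and the real CPI methodology is far more elaborate – in which each weight is simply the item’s share of the consumer’s budget:

```python
# Minimal sketch of a CPI-style index (invented numbers): each item's
# weight is its share of the average consumer's spending, so the index
# answers a concrete question: how much more does the same basket cost?

basket = {
    # item: (share_of_budget, price_last_year, price_this_year)
    "rent":      (0.40, 1000.00, 1040.00),
    "food":      (0.30,  400.00,  412.00),
    "transport": (0.20,  200.00,  210.00),
    "other":     (0.10,  150.00,  153.00),
}

# Weighted average of each item's price relative, weights = budget shares.
index = sum(share * (now / before)
            for share, before, now in basket.values())

print(f"Cost of the basket rose {100 * (index - 1):.1f}%")  # 3.7%
```

The weighted sum has a direct interpretation – the percentage change in the cost of the basket people actually buy – which is exactly what an arbitrary blend of graduation rates and salaries lacks.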
As more and more data become available, it becomes ever more clear: we need to know how to interpret it, what questions to ask of it, and, as we see here, the limits on how we can aggregate it.
Andrew Taylor says
Great post, Michael. Although you’re assuming that indices like those from Money and Americans for the Arts are intended to provide useful, actionable information to those being indexed and those who support their success. Money’s index is intended to attract readers and advertisers, and take a slice of the revenue and brand value of its competitors. Americans for the Arts’ index is intended to start conversations, particularly among advocates and their politicians.
I completely agree that the transparency, intelligence, and testability of the weighting assumptions are essential to the utility and value of the index. I’m just not sure that utility and policy value are primary goals here. I wish they were.