Two recent publications derive indices to rank cities by their cultural vibrancy – one from the National Center for Arts Research (NCAR) in the United States, and one from the European Commission for European cities. They share the same fundamental problem.
Each report chooses a selection of data series relating to cities’ cultural ecosystems and then weights them. The US study ranks cities on each measure and then applies weights to those ranks – the method is on page 7 here: measures of arts employment are weighted at 45%, spending on the arts at 45%, and (with a degree of double-counting of the previous entry) state and federal funding to local organizations at 10%. The European study combines arts measures with some demographic measures of diversity – the method is on page 14 here: ‘cultural vibrancy’ is weighted at 40%, ‘creative economy’ at 40%, and ‘enabling environment’ at 20%.
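To make the arithmetic concrete, here is a minimal sketch of this rank-then-weight aggregation in Python. The weights are the ones in the NCAR report; the city names and per-measure ranks are invented purely for illustration:

```python
# A minimal sketch of rank-then-weight aggregation, as described above.
# The weights come from the NCAR report; the city names and per-measure
# ranks below are invented for illustration only.

WEIGHTS = {
    "arts_employment": 0.45,
    "arts_spending": 0.45,
    "public_funding": 0.10,  # state and federal funding to local organizations
}

# Hypothetical per-measure ranks (1 = best) for three made-up cities.
cities = {
    "City A": {"arts_employment": 1, "arts_spending": 3, "public_funding": 3},
    "City B": {"arts_employment": 3, "arts_spending": 1, "public_funding": 1},
    "City C": {"arts_employment": 2, "arts_spending": 2, "public_funding": 2},
}

def composite(ranks: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted sum of per-measure ranks; a lower score is a better ranking."""
    return sum(weights[m] * r for m, r in ranks.items())

for name in sorted(cities, key=lambda c: composite(cities[c], WEIGHTS)):
    print(f"{name}: {composite(cities[name], WEIGHTS):.2f}")
# City B: 1.90
# City C: 2.00
# City A: 2.10
```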
The problem is not with the quality of the data. Many of the raw data series will be of interest.
The problem is in the very nature of the exercise. These city rankings carry a veneer of science and objectivity. Yet there are two layers of arbitrariness and subjectivity: the selection of which data series to include in the index, and the weights assigned to them. That Newark is ranked as more culturally vibrant than Chicago is a function of which data are included and how they are weighted. Different data and different weights would produce a different ordering (which is why there are so many different rankings of world universities – they count different things, and weight them differently). The rankings are subjective, and yet, because they are quantitative, they are meant to suggest something more.
It’s not a problem with the particular choices that were made. I might have chosen different series and different weights, and even though I have worked in cultural economics and arts policy for a few decades now, my choices would be no better than those made by NCAR or the EU. That each of these organizations may have consulted experts doesn’t change things. Weighting arts employment and arts spending in a city equally, as NCAR does, is arbitrary, and weighting them 55% to 35% instead of 45% to 45% would be neither better nor worse. In other words, I have no suggestion for improving the rankings – the problem is inherent in trying to form an index in the first place, and the result is as subjective as any ranking that forgoes numbers altogether and simply offers an informed, aesthetic judgment.
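A quick sketch makes the point. Using the same invented cities and ranks as above, the 45/45/10 weighting from the report and the hypothetical 55/35/10 alternative – neither more defensible than the other – produce opposite orderings:

```python
# The same invented cities and per-measure ranks as in the earlier sketch,
# scored under two weightings: the report's 45/45/10 and a hypothetical
# 55/35/10 alternative. Neither is more "correct", yet the ordering flips.

cities = {
    "City A": {"arts_employment": 1, "arts_spending": 3, "public_funding": 3},
    "City B": {"arts_employment": 3, "arts_spending": 1, "public_funding": 1},
    "City C": {"arts_employment": 2, "arts_spending": 2, "public_funding": 2},
}

schemes = {
    "45/45/10": {"arts_employment": 0.45, "arts_spending": 0.45, "public_funding": 0.10},
    "55/35/10": {"arts_employment": 0.55, "arts_spending": 0.35, "public_funding": 0.10},
}

for label, weights in schemes.items():
    # Sort by weighted rank sum; lower is better.
    order = sorted(cities, key=lambda c: sum(weights[m] * r for m, r in cities[c].items()))
    print(f"{label}: {' > '.join(order)}")
# 45/45/10: City B > City C > City A
# 55/35/10: City A > City C > City B
```

Which city “wins” is an artifact of the weights, not a fact about the cities.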
These rankings generate attention: “how did my city do?” But they give no direction to anyone working in policy; they won’t tell Chicago officials how to become as vibrant as Newark. Like university rankings (which I dealt with in a prior blog post), they are meant to attract “clicks”. But there is no there there: they are informative neither to social scientists nor to practitioners.
BPJ says
Yes. The arts and humanities must be themselves, rather than aspire to be the sciences.
V.Verlaine says
Sorry Michael, but I quite disagree. I believe that everything is calculable, and that a thing that cannot be measured doesn’t really exist. If we manage to form the right algorithm, then we may reach the right conclusions. What we should really do is estimate the influence of every value, and we should be extremely analytical. The amount of work may seem huge, but maybe it is worthwhile.