My last post criticizing a recent study on the distribution of the benefits of NEA funding generated a lot of commentary. I thank everyone for contributing, and will try to respond to at least some of the points raised.
First, I was not staking any claim on whether public funding of the arts is a good thing. As it turns out, I do think it is a good thing – I’ll post my thoughts on this later – but all I was doing in the initial post was saying that the recent study out of SMU on arts funding was flawed. That the study was trying to say something positive about the NEA, and that I criticized the methods of the study, does not imply that I want to say something negative about the NEA. I don’t.
On the media picking up the study: I think there is a problem with reporting on studies that come from universities and think tanks – “A new study says…”. The problem is that one study will never be definitive. The reporter with time on her hands needs to dig a little deeper, to find what the literature on the subject says. Any new study on the distributional impacts of arts funding, or the effects of increasing the minimum wage, or the benefits of pre-school education, ought to be reported in the context of what the profession has generally had to say on the matter. If the new study contradicts everything that has come before it, then the reporter should say that, and try to explain why the new study comes up with something different. If the literature is generally divided on a question, then the reporter should say that the new study is a contribution to an area where we still haven’t reached consensus. Provide context; do not just report, as a new set of facts, a document that happens to come with some numbers and charts.
On distribution: every government-funded activity, including regulation, has some distributional impact, whether the funding goes to the arts, schools, highways, flood control, health insurance, policing, you name it. Taxes are collected from various sources, and spent in a way that benefits some more than others. In general, I will venture to say there is a consensus that if the redistribution between taxes and spending happens to run from rich to poor, it is seen as a positive aspect of the program. It is not the only thing that matters (although sometimes redistribution is the goal itself), but it is something to be considered. If a spending program happens to mostly benefit the well-off, that doesn’t mean the program ought to be scrapped, but it is something that needs to be weighed when the policy is evaluated. Arts spending includes a lot of different sorts of things: some of it tends to benefit the well-off, and some is explicitly directed at people who otherwise wouldn’t get much access to the arts because of income or local offerings. Programs should be evaluated on their own terms – what is the goal? Who benefits? Is this the most cost-effective way to achieve those benefits?
Janis says
Academics should also stop punishing those of their number who can communicate clearly, and start valuing the ability to put their ideas into clear, engaging language that a non-academic can understand. No more, “Wow, that guy must be really smart, I didn’t understand a thing he said!” No more bumping an article in the peer review process because, since the paper explained itself so well, the ideas are too obvious and self-evident to be worth publication.
I say this because, realistically, a reporter cannot do a research survey before reporting on every single topic she writes about, not without becoming a researcher herself. In a cursory way, maybe … but overall, no. Not possible. Not when this reporter will probably be working on a very different topic next week, or the next day. Reporters can’t do the researchers’ jobs for them.
I do think that reporters can and should be more damned responsible for what they say. However, I also think that researchers can’t justify being too ticked off when their story is told wrong, given that so much of the research environment is geared to encourage obfuscation, leaving so many researchers utter failures at explaining for themselves what they’re doing.