I’m writing this from Chicago, on the first stop of our multi-city tour to disseminate the results of the intrinsic impact research. Over this past weekend, Alan Brown and I have been pulling together the presentation for these stops, and I’ve found myself thinking about and reacting to the wonderful coverage we received this past week from HowlRound, Jumper, You’ve Cott Mail and others—and to the response that coverage has received.
Last week, I had the good fortune to participate in the Weekly Howl, HowlRound’s curious (and curiously intense) weekly hour-long Twitter conversation. It’s a fascinating format, and if you haven’t taken part before, I encourage you to take a look: it happens Tuesdays from 3 to 4pm Eastern time, and to join, all you have to do is follow the hashtag #newplay. Anyway, I was there to discuss the intrinsic impact research and to answer questions, and among many very astute comments, multiple people (including both Rachel Grossman, formerly of Woolly Mammoth Theatre Company in Washington, DC, and Diane Ragsdale of Jumper—both contributors to the book) kept returning to one absolutely essential theme: what do we do with the data? How is this applicable?
Navel gazing is so easy, in a way, isn’t it? Simply thinking, proposing opinions, waxing on about the state of things. But eventually, if you’re really hoping for a big shift, you need to move beyond the theoretical. You need to describe the practical application.
In the case of intrinsic impact, seeing a way clear to practical application took quite a while. WolfBrown’s initial work on the subject was interesting but diffuse—it focused on multi-arts presenters and was conducted without setting up any comparative goals within the organization—and the same was true of our initial pilot study into impact assessment here in the Bay Area. Essentially, we measured the impact that the art had on the people watching—and also, incidentally, asked them separately to tell us beforehand what they thought the impact was going to be—but we didn’t stop to ask what the impact was supposed to be. What the arts organizations were looking for.
What we found was that the line back to actionable change was too tenuous. Yes, you might be able, as WolfBrown did in the Major University Presenters study, to see how a misaligned marketing campaign (in this case a classical poster for an experimental Shakespeare piece) directly affected a production’s resonance and impact—but the connection between the reported impacts and the systems from which the art emerged was insufficient.
So we added this wonderfully simple component: we asked the people who were making the art (and marketing it, and raising money for it, and selling it) what they thought the work would do. What the impact of the work was supposed to be. Moreover, we spent a tremendous amount of time sorting out how to convey the results to the people working on the art. We created an online dashboard component so that all of the graphs are at their fingertips, but we also came to understand that for many people—including, most notably, the artistic people we most wanted to engage in this work—the graphs needed translation. WolfBrown, thanks to a generous pilot grant from the Pew Center for Arts & Heritage, has now created a document to help organizations (and, in the future, representatives from Theatre Bay Area who will help interpret the data) get the most they can out of the information.
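To make that comparative component concrete, here is a minimal sketch of the kind of gap a dashboard like this surfaces. Everything in it is hypothetical: the dimension names, the scores, and the threshold are illustrative assumptions of mine, not WolfBrown’s actual instrument or data.

```python
# Hypothetical sketch: staff predict each impact dimension before opening,
# audiences rate the same dimensions afterward, and the gaps between the
# two are what start the conversation. All names and numbers are invented.
ANTICIPATED = {  # staff predictions, on an assumed 1-5 scale
    "captivation": 4.5,
    "emotional resonance": 4.0,
    "intellectual stimulation": 3.0,
}
REPORTED = {  # audience ratings, same assumed scale
    "captivation": 3.2,
    "emotional resonance": 4.1,
    "intellectual stimulation": 4.4,
}

for dimension, target in ANTICIPATED.items():
    actual = REPORTED[dimension]
    gap = actual - target
    flag = "aligned" if abs(gap) < 0.5 else "worth a conversation"
    print(f"{dimension:26} staff {target:.1f}  audience {actual:.1f}  "
          f"gap {gap:+.1f}  ({flag})")
```

The point isn’t the arithmetic; it’s that having a stated target for each show is what turns the audience numbers into something a staff can argue about productively.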
Ultimately, as Alan Brown is fond of saying, the data isn’t useful unless it instigates a conversation—a conversation internally to better understand whether, for example, there is mission misalignment between departments, and a conversation externally to more accurately convey the particular impacts of the work you do and to remind stakeholders of the specificity of your mission.
At Arena Stage, in Washington, DC, efforts are already underway to use the insights of their intrinsic impact surveying to adjust and augment their pre- and post-engagement materials. As an organization with a greater-than-usual debt to the particular, multi-cultural neighborhood in which it resides, Arena is using this information to try to understand on a deeper level how it can maximize the meaning and memory derived from the work.
In the Bay Area, City Lights Theater Company has taken its results—in particular, data showing that while the staff worried their audiences were being offended and made uncomfortable, the audiences themselves actually felt relatively safe—and is using them to work with its board to ensure that the company stays true to its mission of pushing audiences to the edge. Moreover, using word clouds, the staff at City Lights has been able to see where productions bring out the emotional reactions they were aiming for in their patrons and where they do not—and to address disconnects around a production by looking at unanswered questions submitted by the audience.
A couple of companies are toying with how to use the almost-instant data availability provided by the new online dashboard to create interactivity with their audiences during and after the show. Could, for example, a television be set up in the lobby showing an ever-evolving word cloud of emotional responses? Could conversation stations be set up around questions posed the previous night?
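For what it’s worth, that lobby-screen idea is technically modest. Here is a minimal sketch, assuming the dashboard can export responses as a CSV with a free-text column of emotional reactions; the file name, the “emotion” column, and the refresh interval are all assumptions of mine, not features of the actual tool.

```python
# Hypothetical sketch of an ever-evolving lobby word cloud: rebuild the
# image from the latest survey export on a timer. The CSV export and its
# "emotion" column are assumed; this is not the real dashboard's API.
import time

import pandas as pd
from wordcloud import WordCloud  # pip install wordcloud

def build_cloud(csv_path: str, out_path: str) -> None:
    responses = pd.read_csv(csv_path)
    # Pool every free-text emotional response into one corpus.
    text = " ".join(responses["emotion"].dropna().astype(str))
    WordCloud(width=1920, height=1080, background_color="white") \
        .generate(text).to_file(out_path)

if __name__ == "__main__":
    while True:
        build_cloud("responses.csv", "lobby_cloud.png")
        time.sleep(60)  # the lobby display simply shows the latest image
```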
While few if any organizations are using impact results to dictate changes in the work itself (nor, really, should they be), we have heard of cases where artistic leadership has come back to the core questions of impact assessment, formally or informally: why am I choosing this work to be a part of this season? What is the particular transformation I’m trying to make occur in my audience? How can I ensure that the long arc of engagement begins correctly and carries through long after the art is done?
I think practical applications will continue to crop up as more organizations begin to do this work. That is, in fact, part of why the Doris Duke Charitable Foundation awarded Theatre Bay Area a “Continuing Innovation” grant to provide subsidies to thirty groups seeking to try out this work. By testing the online tool, crafting materials that allow for better assessment of the data, and working with the arts organizations to develop practical, useful, transformative applications of impact assessment, we hope to continue driving such work into the mainstream.
Trevor O'Donnell says
“a conversation externally to more accurately convey the particular impacts of the work you do”
I like this line in particular because it suggests that the data can help arts organizations describe what they do in a way that motivates new audiences who are looking for that particular impact. “We know you want X, which is why we know you’ll like Y.”
The language of arts marketing is composed mostly of recycled pablum based on impact assumptions that were made half a century ago under radically different market conditions. It seems to me that we can use real, contemporary data about what audiences are looking for to describe what we do more persuasively. It doesn’t mean changing what we do, but rather changing the way we talk about what we do, to make certain that our promotional language resonates with what new audiences actually want.