…and Who Makes Value, Anyway?
a keynote speech by Andrew Taylor to the
New Jersey Theatre Alliance conference:
"Arts Alive: Staying Ahead of the Curve"
September 23, 2005
Hyatt Regency, New Brunswick, New Jersey
NOTE: This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License, which means that you may copy it, print it, distribute it to colleagues, paper your wall with it, or republish it in your own newsletters or web sites without the specific permission of the author. Just follow the basic rules of the license.
First of all, let me say out loud what many of you may be thinking right now: "measuring value," how awful. How clinical. How stale and stifling. It’s like one of George Carlin’s famous oxymorons: "jumbo shrimp," "military intelligence," and I often add another, "arts administration"—a word so small and dry alongside another that’s vast and expansive. I’ll ask you to suspend that distaste at the idea of measuring something so broad, complex, and personal as "value," however we define it. At the end of the session today, you can re-engage that distaste if you like.
But I’m going to suggest that we all measure value in our own ways. So we might as well bring those measures out into the open air.
With that, let’s get to it:
My statistics professor liked to tell the story of a man with his head in the freezer and his feet in the furnace. On average, it turned out, he was quite comfortable.
Such is the challenge of evaluation and evidence: they only barely describe—and often badly describe—the true nature of human experience. And what is broadly true for all human activity is particularly true of creative endeavor. An individual’s or a community’s interaction with creative expression and cultural experience has an impact, has a footprint, has a value. But it’s a slippery and elusive critter to track in ways that can be measured and described.
That’s the puzzle I’m here to talk with you about today—the puzzle of measuring value. And it’s a puzzle I hope you’ll play an active role in unraveling and reassembling. We’ll explore why we measure or evaluate the outcomes of creative activity. We’ll talk about how we have chosen to measure and evaluate in the various spheres that do such things—the individual, the organization, the community, the society. And we’ll talk about some of the traps that distract us along the way, and how we might escape them.
Since we’re all at the end of a journey together—this two-day convening that I’ve so enjoyed—I’m also hoping we can intermingle some of your comments and perspectives as we chew on this rather abstract stuff. And I’ll try to mix in some specifics of my own from the sessions and conversations I’ve experienced here.
But let’s start with some framing statements, so you all know what I’m talking about. When I say "evaluation," I’m talking about any conscious attempt to assess an action or initiative. Evaluation is the thoughtful gathering of evidence, to understand scope or scale, to count, to compare against some stated goal, perhaps.
Why do we bother to evaluate, to count, to measure, to assess, to compare? In short, we have no choice, because we are constantly choosing. As artists, organizations, or communities, we can’t do everything, nor would we want to. So we choose. And in the process of that choosing, our efforts to find feedback, information, insight, and assessment of how the world responds to our actions help us eventually—we hope—make more productive choices, and help others around us do so, as well.
The active artists among you know all about choice…nudging a vision against the constraints of reality to forge something new. All along the way in the creation of a new work or the invocation of an existing work, you choose, you change, you reconsider. You may not do so by measuring or counting, but you’re evaluating all along.
Managers of cultural institutions know all about choices, as well. Facing fixed resources of time, talent, energy, capital, and cash, you choose how to mix and mingle those limited elements, you choose how to frame the problem with your colleagues, you choose a range of possible approaches, and you choose when to choose another way—perhaps when faced with new constraints or unexpected twists in the road.
Funders certainly understand choice. There’s so much to do, and only so much at hand to do it with. How do you know when your choices have been successful? How do you know how to learn from past choices to make better ones in the next grant cycle? How do you know if the grant cycle is even the appropriate road?
And communities choose, too. They choose how to allocate their own collective resources. They choose how to frame the playing field—with laws, policies, regulations, and incentives—to encourage individual choices that contribute to the common good. And they choose representatives and agents to make those choices in their interest and on their behalf.
So, we choose—each of us individually and all of us together. And evaluation is an integral part of that choosing, whether we state it out loud or not, whether we understand how we do it, or not. And we’ve been choosing and evaluating ever since our ancestors found the capacity to do so.
You might ask, then, if we’ve been choosing and evaluating forever, why is measurement and evaluation such a hot topic now? Why is it bubbling up at regional meetings like this, around foundation board tables, and at national meetings of artists, arts leaders, and arts supporters? Believe me. It’s bubbling.
The answer has a few pieces that seem to be working in concert: constraint, complexity, and scale. First, constraint: for a dozen different reasons, many of the inputs and assets that make nonprofit cultural institutions work have smacked up against constraint. Endowments were severely impacted by the economic downturn following September 11, 2001. The personal wealth of major donors was impacted in similar ways. And so was the accumulated wealth of major foundations. The labor pool that fed the growth of our industry over the past decades has also begun a sharp decline. Stack onto that a decrease in leisure spending, and severe budget imbalances at city, state, and federal levels, and you’ve got what many have called the "perfect storm" of constraints—Katrina and Rita notwithstanding.
Then, throw complexity and scale into the mix: the massive growth in numbers and sophistication of nonprofit organizations in the past three decades—arts included. Even with the fairly radical growth in philanthropic resources since 1960, the scale, complexity, sophistication, and professionalism of the field have grown to keep pace, if not to pull ahead.
From that intersection of constraint, complexity, and scale has inevitably come the call to measure ever more, to choose based on evidence, and to evaluate our choices by some common criteria. This call most often comes from professional funders, but also from city, state, and national government officials seeking some “return on investment” ratios to help support their allocation of constrained public resources. At the same time, ever more professional and strategic cultural organizations are seeking their own measures of performance and success…driven either by executive leadership or by business-minded boards. Terms like “organizational effectiveness” and “capacity building” have crept into our conversations, both requiring benchmarks and progress measures.
So, what’s the problem? It sounds like a perfectly reasonable and responsible thing to do. When faced with constraint, why shouldn’t you increase your efforts to make good choices, and to confirm that those choices were good by measuring some results? The problem is captured in a quote that’s been attributed to Albert Einstein. Whether or not he actually said it, it shines the spotlight in the right direction:
“Not everything that counts can be counted.
And not everything that can be counted counts.”
For individuals, organizations, and groups that foster and capture cultural expression, the true power, value, and profound beauty of what we work for can easily get lost in the numbers, and our passion and purpose can get lost or distracted along with it.
Systems ecologist Donella Meadows expressed a similar challenge when she wrote: “We try to measure what we value. We come to value what we measure. The feedback process is common, inevitable, useful, and full of pitfalls” (Indicators and Information Systems for Sustainable Development, Sustainability Institute, 1998).
The arts world is certainly not alone in this challenge—which is either comforting or confounding. Many sectors and industries are confronting the lure of measurement and evaluation against the ephemeral and indescribable outcomes they truly value. It’s obvious in current discussions about rankings and recommendation engines on the web, which tend to highlight the popular and glib over the focused and insightful. It’s also obvious in education, especially K-12, where, according to many, measures have come to eclipse learning. Consider the perspective of psychologist Kenneth Keniston (as quoted in The Hurried Child, by David Elkind, 1988):
“We measure the success of schools not by the kinds of human beings they promote but by whatever increases in reading scores they chalk up. We have allowed quantitative standards, so central to the adult economic system, to become the principal yardstick for our definition of our children’s worth.”
There’s certainly evidence of this challenge here at this conference. Just a few of the statements I heard as I wandered from session to session prove the point: Said one participant, "we’re constantly trying to fit ourselves into what others want us to be." Said one funder on a panel discussion, "We’re moving away from relationship-based philanthropy," toward funding based on matrices and aligned with corporate brand. And said one member of the storytelling workshops, "I’m so busy, I’ve become disconnected from the stories of my own organization." Stories are a powerful form of feedback that often get lost in the struggle for measurable results.
In almost every conversation I heard, there seemed to be an effort to measure our outcomes by someone else’s criteria—by the criteria of K-12 education, for example, or economic development, or social services. And just as plants grow toward the light, we often bend our organizations toward the measures or the money that seem to shine most brightly. In doing so, we can distort our goals and our efforts away from the elements that give our work meaning and value in the first place. For example:
- The Bias of Time
Evaluation criteria and feedback measurements often emphasize the short-term over the glacial. A continuous decision process can bias us all toward measures that move quickly, rather than those that take generations to evolve. Consider Ralph Waldo Emerson’s belief that "The measure of a master is his success in bringing all men round to his opinion twenty years later."
- The Bias of Disconnection
As we search for the measurable outcomes of our actions, it’s too easy to assume that our separate and distinct actions caused the results we see in the world. In reality, meaning and value in any experience come from a complex web of previous experiences. Our efforts and organizations are lucky if we’re just a tiny sliver of the cause behind a meaningful moment.
- The Bias of Utility
Acts of measurement and evaluation continually draw us back into thinking about utility, about the concrete "usefulness" of what we provide in the world. Alexis de Tocqueville recognized this tendency in his analysis of America nearly two centuries ago. He said, “Democratic nations…will habitually prefer the useful to the beautiful, and they will require that the beautiful should be useful.”
So what do we do about these biases, and the challenge of having to measure what cannot be measured?
First off, we need to get used to it, and brace ourselves. This trend and this tendency aren’t going away anytime soon, and are likely to grow stronger as resources plateau and our nonprofit infrastructure continues to grow in size and sophistication.
Second, we can attempt to change the measures that define us. The arts world certainly has a unique opportunity to attempt this, due to our close and direct connections to the funders that support us. But many of the measures are beyond our direct control, as they are driven by larger sectors of society—public education, city planning and development, government, and such.
Third, we can make every effort to construct measures and evaluations of our own, to guide us against external measures that might distort what we do. These evaluations must grow from our mission, our purpose, and our internal compass. They must be established with grace and nuance to encourage our work, rather than diffuse and distract it.
Sounds like a tall order, and it is. But there are steps to get us there. I’ll suggest a few:
Step one: We must explore and consider who actually makes the value we seek to measure. Is the power and meaning of what we do as cultural institutions something delivered and received? Do our artists and organizations construct and complete it and then release it to the world? Or is the value and meaning we seek a co-construction, begun by us, perhaps, but always completed by someone else—our audience, our communities, our peers? Our measures of success will be derived from this core metaphor of what we do, so we must work continuously to understand it.
Step two: We must broaden and clarify our efforts to evaluate our own work, beginning with a simple question: "What evidence would we expect a successful effort to leave behind?" If our organizations were working at their highest capacity, what would be the observable residue of our actions? Would it be an audience that looks like the community in which we live? Would it be an extra second of silence after the curtain draws to a close? Would it be a greater number of artists contacting us to ask about joining our work? These need not be complicated and clinical metrics, but merely connected to our work as we define it.
Step three: Once this potential evidence has been considered and defined, we must then consider how to measure it in ways that help us rather than waste our time. Can we enlist our entire staff and board to ask a single question of patrons they encounter, and report back the results? Can we extract the information from data already being collected in our contact lists or box office activities or member services? Can we simply stand, watch, and listen as individuals engage our organization’s work, being receptive to whatever discoveries emerge? There are ways of observing and listening available to us from the worlds of design, anthropology, consumer research, sociology, human factors engineering, and a dozen other disciplines. What can we learn and apply from these existing ways of watching?
Step four: As we build our criteria and our capacity to evaluate them, we must always remember that the measure is not the goal. It’s important to watch for the footprints of what we do, but they are only footprints. It’s easy to forget about the giant that left them there.
What does an industry look like when its measures become its goals? Consider telecommunications. A colleague with expertise in the field tells me that the incremental cost of a phone call is now zero. What you’re paying for, he says, is the system required to track your call and bill you for it. It sounds bizarre, but I’m sure it also sounds eerily familiar to some arts managers out there who are increasingly expected to provide outcome measurements. It can feel as though your organization is more about measuring what you do than actually doing something. And that’s a most unpleasant place to be.
So, where are we? We can’t stop ourselves from measuring the value of what we do. We can’t stop others from determining criteria for their measures of us. We can’t deny the biases that come with these perspectives on the world—the compression of time, the illusion of causality, the lure of making things seem useful. What we can do is embrace this unsolvable problem as we embrace so many others in the process of creative expression and experience. Complexity and tension are our business; they’re the stuff of art.
We do things that count. Much of it can’t be counted. But the effort to discern, evaluate, measure, and assess is part of what keeps us connected. Let’s make it an open and dynamic element of how we do our work.
Edwin Taylor says:
This week we saw “Romeo and Juliet” in the new theater for the New Rep company, now in Watertown, MA. For a couple of years we have gone to their performances in a cramped space in a Newton Highlands church because of their at-the-edge productions that almost always successfully balance on the rim of the socially acceptable. This week I was deeply aware of the new, more luxurious and capable space, the cost, the yet-to-be-completed fund drive, the facilities for a wider range of performances. All of this can be evaluated, and the evaluation is important for the company’s continued health. It is the base.
But the evening belonged to a superb Romeo and Juliet, with an especially shining performance from Mercutio. (Have they moved away from the edgy and unsafe? The remainder of the season will tell.) The experience was immeasurable except in the lead coin of our increased loyalty. The churches of Florence, Italy, now charge admission as museums, which I gladly pay. But the contents, properly selected, are simply The Best. The Statue of Liberty stands on a solid base, but the up-thrust lamp tells it all.