Not long ago I spent some time with the executive director of a regional orchestra. She'd set some changes in motion: well thought-out new programming that might develop a new audience and root the orchestra more deeply in its community.
As she and I talked, I got curious about something I often wonder about, which is how the success of these changes might be measured. How do you know if they’re working? I’ve seen situations in which that question isn’t asked, leading to confusion a year or two down the road. A new project launches, with a worthy but — if we want to be rigorous — vague goal. Get the audience more involved with the orchestra. Make the orchestra more accessible, friendlier to its community.
These goals sound like we know what they mean, but do we? Well, if five years go by, and now you have people in your town beating down your doors, wanting to go to your concerts, talking about you online, telling you that the changes you’ve made have caught their attention — well, then you know that you’ve reached your goal of being friendlier.
But what if it’s not that clear-cut? What if the change isn’t night and day, but just night and twilight? What if ticket sales haven’t much risen, but you do see more hits on your website, and more comments online? Are you succeeding, or have you failed, because after five years you should have accomplished more?
And — something very important — how do you know after just a year whether you're headed in the right direction? A year might not be, and surely isn't, enough time to bring you major success, but shouldn't there be some way to measure whether you've started to succeed?
If you don’t think these things through, you might find yourself — and I’ve seen this happen — ready to cancel your project after just a year or two. That new concert series: People who go to it seem to love being there, but you wish there were more of them, and the series loses money. So then maybe you cancel it, especially if, for the moment, your finances seem a little troubled.
As I've said, I've seen that happen. But the decision to cancel may not make sense. The new concerts, after a year or two, might be doing exactly as well as you could have expected. Could have expected, that is, if you'd made some projections, and decided how well they should be doing after the first year, and after the second. Maybe your decision to cancel was premature.
Maybe the concerts were really a long-range plan, one that needed five years or more to mature. If you lose money during that time, you accept that, as an investment in your future, just as the Met Opera accepted losing money on their HD streams to movie theaters, during the first years of that project. (Which now makes a profit.)
Back to the executive director I had an enjoyable talk with. How, I asked her, would she judge whether her new programming seemed to be working? What would be best, I suggested, would be objective measurements, measures of success that could be quantified, so there’s never any doubt about how things stand.
The question made sense to her, and she had an answer. She suggested several measurements. If she was getting her community more interested in the orchestra, then hits on its website should rise. And — this, I thought, was a very fine metric — community groups of all sorts, including local businesses, should be contacting the orchestra, asking to join in the orchestra’s new projects. The orchestra already had partnered with a community business, in launching some new concerts. So if community interest continued to rise, then maybe more people in the community would want to be partners. Made sense to me!
So I suggested a way to formalize all this. Make a list, I suggested, of four or five metrics, things that could be counted, and that ought to be rising, if the orchestra's overall goals were being met. Revisit those metrics every few months. Are they all rising? Are they rising as much as you hoped they would? Are some of them rising, while others are flat?
And then decide how to act on this data. If nothing is rising, then you may not be meeting your goals. Should you revise the goals downward? Expect less? Or expect things to take longer than you’d at first thought?
Or maybe you’re not doing enough. Maybe you need to find ways to put your plans in higher gear.
And if some of your metrics are rising while others aren’t, then maybe the ones that aren’t rising weren’t realistic goals. Or maybe you just need to work on them harder. If you do work on them harder, a few months later you’ll have another review, to see if your work is paying off.
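For anyone who likes to see this review loop made concrete, here's a minimal sketch in Python. Everything in it is hypothetical — the metric names, the numbers, and the 5% "rising" threshold are illustrative stand-ins, not real orchestra data or a recommended cutoff:

```python
# Hypothetical snapshots of a few countable metrics, taken at two
# quarterly reviews. All names and figures are made up for illustration.
previous = {"website_hits": 12000, "online_comments": 85,
            "partner_inquiries": 2, "ticket_sales": 4100}
current = {"website_hits": 15500, "online_comments": 140,
           "partner_inquiries": 2, "ticket_sales": 4050}

def review(prev, curr, threshold=0.05):
    """Classify each metric as 'rising' or 'flat/falling', using a
    minimum fractional increase (here an assumed 5%) to count as rising."""
    report = {}
    for name, old in prev.items():
        change = (curr[name] - old) / old
        report[name] = "rising" if change >= threshold else "flat/falling"
    return report

print(review(previous, current))
```

Run on these made-up numbers, the report would show website hits and online comments rising while partner inquiries and ticket sales stay flat — exactly the mixed picture described above, where some metrics need more work, or more time, than others.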
If you operate this way, it seems to me that you’ll always know where you stand. And, because you know how you’re reviewing your progress, you can — at least to some extent — relax your mind about progress during the time between reviews. (One of the most important ways to be focused is to know when not to think about things on your plate. Which then depends on having some system to bring those things into focus at regular times, so you know that even if you don’t think about them now, you’re sure to think about them down the road, when you need to.)
The executive director I suggested this to seemed to find it helpful. I’m offering it here in hope that it might help others, too.
Doug says
Want to quickly build resentment with the musicians and artistic staff? Just let the executive director "set out some changes in programming." Works every time. Simple question: if the standards of symphonic music cannot "root the orchestra more deeply in its community," here's a suggestion: turn the hall into a Starbucks. That should even solve the financial situation.
Ron Nadel says
I agree, to bring about desired change and achieve a vision, a team needs objective criteria for making decisions and then measuring cause/effect. The sequence of implemented changes must be tied to an action plan with a schedule, instead of just a bunch of changes resulting from a new exciting goal.
Historically, this approach is NOT adopted by school boards, new CEOs looking for a “quick win”, politicians who are expected to show immediate results, and others who feel panicky and believe planning is not action.
It’s true that the risks of planning/defining criteria include analysis paralysis and committee thrashing. There needs to be a leader who defines the limits so decisions, progress, and course corrections are made.
Criteria and planning without action is like having a map and a souped up car with no fuel. Action without planning and criteria is a fast car with no driver, driving without a destination.
Linda Essig says
Greg: Jeffrey Nytch wrote an interesting piece in the first issue of Artivate about programming changes at the Pittsburgh New Music Ensemble that might be relevant to this topic: http://www.artivate.org/?p=88
Greg Sandow says
Wow, Linda — what a fabulous piece. Should be required reading for people in classical music. Thanks so very much for pointing it out to me. And to the rest of us. I'd happily recommend it to anyone interested in attracting a new audience.