Continuing from my previous post, about applying criteria to measure how well orchestras play…
Because, to begin with, we for the most part discuss how well orchestras play only in the most general way. We have an idea, let’s say, that Cleveland (or at least this used to be the belief) stands above most American orchestras. Or that Berlin might be the best orchestra in the world. But what exactly do we mean by that?
Or we think that San Francisco, under MTT, stands very high. But do we mean that their programming does, or their playing? How does their playing rank, compared to other American orchestras their size?
Compare this to what any baseball fan knows. You’re a Mets fan? If you’re serious about it, you know their strengths and weaknesses, position by position. Stellar shortstop, really good third baseman (though he’s injured), promising young first baseman (also injured), left fielder who forgot how to hit.
And if you’re even more serious, you can compare the Mets, position by position, with every other team in the National League. These comparisons are written up in detail by sportswriters.
Orchestras, you’d think, are more important than baseball teams, unless we think that great art isn’t important. But position by position orchestral comparisons — or, to use orchestral terms, section by section comparisons — just aren’t available. So the public, it seems to me, is largely in the dark about how orchestras compare to each other. (If you think I’m wrong about this, please point me to the writing that proves me wrong!)
So that’s the public side of this. But now let’s look at the inside view. How are orchestras harmed from the inside, by not knowing (or openly talking about) how they compare to each other?
Because the board now can’t properly govern the orchestra. How’s their orchestra doing? Well, you’d think one measure of that would be how well the orchestra plays. But the board may well not know that. I very much doubt that board members make detailed comparisons. Some might, of course. But are the comparisons openly talked about at board meetings?
If baseball teams had boards, of course the comparisons would be talked about.
And this strikes home with special force in smaller orchestras. At least the big ones tour, and get reviewed around the country, especially in New York, when they come to Lincoln Center or Carnegie Hall. So the board at least can read the reviews.
But if you’re on the board of (let’s say) the Des Moines Symphony, what kind of information do you have about other orchestras of its size? Is your orchestra playing as well as it should? Well, sure, there are regional differences (availability of musicians, how far the musicians have to drive to play concerts; if it’s a long distance, it might be harder to get the musicians you want). But still. Are you getting, within your limitations, the best musicians you can? And are they playing up to their ability, or maybe (with a good music director) even above their ability? (As happened, just for instance, when Mariss Jansons was music director in Oslo.)
How does the board judge these things? Wouldn’t it be helpful if they had exactly the kind of detailed information any Mets fan has about the Mets? Or (which ought to be readily available) about the minor league team in their city, if the city happens to have one?
But instead, I suspect (from everything I’ve heard and, in some cases, seen firsthand) that boards of smaller orchestras don’t often know how well — compared to other orchestras of the same size — their orchestra plays. If comparative section by section rankings were readily available…
And no, I’m not saying that — at least under present conditions — it would be easy to get those rankings. But, really now: If you were responsible for the health of an orchestra, wouldn’t you want to know how well (compared to other orchestras their size) they play?
Joe Shelby says
Well, repertoire matters, and hand-in-hand with that go the expectations of the audience (as well as of the music director and the orchestra itself). A local orchestra may not be expected to take on Ligeti or Takemitsu, or may be expected, based on the conductor, to take on new (generally tonal) music more often (Seattle under Schwarz, or Baltimore under Alsop, both of whom are champions of new composers, and new American composers at that).
So, too, the San Francisco you cite – I really don’t know MTT’s tastes beyond what shows up on the PBS shows, which are mostly early and late Romantic, or tonal 20th Century (Copland). Even his late-period Stravinsky recording was with the LSO. For those of us who don’t “live with the orchestra”, it’s hard to know how wide a range of material they play.
So in this, the baseball analogy does somewhat fall short. In baseball, everybody plays, well, baseball. Orchestras are judged by the quality of the “core” rep (the Beethoven cycle, the Brahms cycle, the Wagner operas, Stravinsky’s Rite, Debussy’s Faun), the diversity of works they play in a particular period, and the diversity of periods they can play, much of which is the decision of the board and the orchestra’s leads when they select a music director.
This is different again from baseball where the owner (representing the board) selects the manager who drives the emphasis from there. In orchestras, the members have a say in who they pick, which in turn has an impact on what they play as well as how well they play it.
Thus, a comparison of Des Moines vs. the NYPO is much more an apples-to-oranges comparison than simply comparing a minor league ball team with a major league one…and that’s even before considering the ways an orchestra can rise above its status under a talented leader, as Birmingham did under Rattle throughout the ’90s (and Rattle still knew his limits – Birmingham played a number of Mahler symphonies, but he never recorded the 9th with them…).
Phillip says
Well, the obvious difference between evaluating the performance of the Mets and the performance of an orchestra is that you have a very concrete tangible set of measurements for the former (wins vs. losses being the clearest, but also many statistical rankings of offensive and defensive performance) whereas comparing one orchestra to another, or one soloist to another for that matter, is ultimately subjective.
As for regional orchestras, my own (subjective, of course) take is that most are playing at levels unheard of in their history, a result of the abundance (oversupply, really) of outstanding young musicians our conservatories and top university music departments are churning out. There are, of course, variables between these orchestras based on many factors, the quality of the conductor being the most glaringly obvious but not the only one. Market effects play a role, but a diminishing one the higher on the pay scale one goes: the truth that Detroit Symphony players were understandably reluctant to acknowledge is that any difference in playing level there might be between an orchestra where everybody makes six figures and an orchestra with base pay of $70,000/year is going to be nearly imperceptible to 90% of the audience.
Joe Shelby says
Why would it be imperceptible to so many? Again, that comes back to the question of repertoire. Musicians have been playing the standard rep for so long (and we have so many reference points – again, how many Beethoven cycles do we need?) that the higher skills of an orchestra would really only shine in extremely difficult (read: contemporary) music, which the regional orchestras would rarely play.
So the difference in Beethoven’s 7th may be detected by only 10%, but how many orchestras would really take on W. Schuman’s 8th, or Vaughan Williams’s difficult 4th, or even Sibelius’s dark 4th, etc. etc.? The very fact that they might not take such works on itself defines a difference between the layers of the orchestras.
But then again, as I said before, how much of what an orchestra plays is defined by what the local audience is willing to listen to? If nobody shows up for a Schuman Symphony, no matter how brilliantly it might be performed, then why play it?
Tom Whittaker says
Greg,
All of your criteria for measuring an orchestra’s quality are valid. No debating that.
However, we start to enter the utopian world of artists with MFA, MM, etc. degrees once we try to apply your criteria, I fear.
First of all, no board member without an advanced music degree, or at least a vastly self-developed acuity about classical music, would have the faculties to analyze their own – or other, for comparison – orchestras using your criteria.
Purely apart from technical errors, such as faulty intonation, wrong rhythms and notes, the quality of an orchestra or a performance will be debatable among people with highly developed aural skills and knowledge of classical music performance and history.
Your attempt at analogizing a baseball team’s performance with that of an orchestra is nonsense. Sure, one could compile orchestra stats that enumerate number of off-pitch notes by instrument, instrument group, entire ensemble, foul entries by individual musicians, number of bad notes played, etc. etc., but it would be pretty meaningless. Ultimately, one of two baseball teams wins. An orchestra doesn’t win anything no matter how well or badly it plays.
Yes, it may win positive reviews, and it may win larger audiences, but those are ultimately irrelevant beyond that orchestra’s particular market, and its standards. And comparison between these market standards is meaningless.
Equally important, bad management can sink musically excellent orchestras, just as good management can’t save bad musicianship from disaster.
Does the fact that you can’t exhibit e.g. Serrano’s homoerotic photos in, say, Nashville (let alone Washington D.C.!) without causing an enormous cry of “foul” from the market, while such an exhibition would be welcomed in NYC, make Serrano a good, bad, worse or better artist (and can he be defined as an artist at all, based on the evaluation of diverse markets)?
While something that plays well in Peoria may play well everywhere else, the reverse does not hold true. Of what relevance is a live concert by the Des Moines Symphony given last week to me if I am in Birmingham, Alabama? It’s not like one of those orchestras will win the national series further down the road because they played better. I don’t even know how well they played, because I was at neither concert.
The audiences in either city are not as sophisticated as in New York, or Chicago or other major cities, so they are probably just as happy with their orchestra’s performance as someone in New York or Chicago is with theirs. If not, put on a CD and listen to the Berlin Philharmonic instead.
The whole idea of “quality” of orchestras is a nefarious and insidious concept, which nobody really can compare to any useful end. Its only use is for ambitious music directors or orchestra CEOs to prod boards or ministries of culture into approving higher budgets for the hiring of musicians who have a higher “perceived” quality as players at a given moment, such as a competition for a position, or in increasing the orchestra size.
Since we can’t see who won the concerts in last night’s world playoff series, please spare us any further fruitless and frustrating speculation on this matter. Your criteria are excellent, but your proposed methodologies and uses are pie-in-the-sky; their only value is academic speculation.