When faced with a complex and important decision in our lives, how do we choose? How do we filter the available options, weigh their various merits and costs, and navigate the series of decisions and actions required to move on?
It’s a question at the core of cultural management, even though our community’s choice to attend, donate, or volunteer may not be life-changing on its own. And it’s certainly a question at the core of decisions about complex, resource-intensive, and time-consuming commitments, like a college or university education, or a major gift campaign.
Increasingly, third-party ranking or evaluation agents are stepping in to help frame and filter the decision process. Charity Navigator runs the numbers on nonprofits for prospective donors, measuring their financial and organizational health through a complex point system. GuideStar offers a similar evaluation service, now linked with a direct opportunity to contribute. In the college marketplace, magazines (like U.S. News) and other information providers rank and cluster colleges and universities through their own algorithms.
These efforts to rank and filter are certainly necessary, as choice and reach expand ever faster for those with a decision to make. And yet, the underlying assumptions that drive these rankings pose a fundamental challenge to the systems they seek to inform.
Nowhere is this challenge more evident than in higher education. Rankings have been around for a long time, but the reach of the Internet and the exploding competition for resources have brought them to a new level of influence. Rising in the national rankings has become a core promise in alumni giving campaigns, and a key indicator of institutional success. Many curriculum and admissions decisions are now driven, in large part, by their impact on the rankings, and measured against the algorithms of these third-party assessors.
It is, perhaps, the inevitable result of an increasingly complex and connected system in which true value is impossible to define. But it is a challenge that will likely extend its reach into more markets (like arts and culture).
Many universities and colleges are beginning to push back, questioning the growing emphasis on external rankings over internally defined measures of success. One cluster is working through the Education Conservancy, which circulated an open letter to college and university presidents last May. That letter claimed the ranking system was flawed and dangerous because such rankings:
- imply a false precision and authority that is not warranted by the data they use;
- obscure important differences in educational mission in aligning institutions on a single scale;
- say nothing or very little about whether students are actually learning at particular colleges or universities;
- encourage wasteful spending and gamesmanship in institutions’ pursuing improved rankings;
- overlook the importance of a student in making education happen and overweight the importance of a university’s prestige in that process; and
- degrade for students the educational value of the college search process.
These rankings and algorithms can’t be ignored or stopped, but they most definitely should be understood by the leaders they might influence. To the extent that their evaluations align with your mission and reflect your larger purpose, all the better for you. But when the indicators downplay or fail to capture your organization’s unique value or impact, you’ll need to decide where to draw the line.