NOTE: I wrote this opinion piece on philanthropy and communications technology projects for The Chronicle of Philanthropy, where it was published in the March 7, 2002 issue. I include it here because it still rings true with the barriers and habits I continue to see among foundation-funded technology projects and other funded initiatives. — Andrew Taylor
We are all creatures of habit, so it should be no surprise that the philanthropic organizations we create are much the same way. Sometimes, however, the decisions of habit can lead us in entirely the wrong direction.
Consider a perennial philanthropy favorite: the feasibility study. Among foundations and grant-proposal writers, the feasibility study has become the habitual response to new ideas. Instead of taking active steps to build a solution, we commission a study to outline if and how that solution will work. Someone assembles relevant readings. Someone gathers experts together. Someone takes notes of their thoughts and responses and drafts a final report.
In most cases, it all makes perfect sense. Fiscal responsibility, appropriate allocation of resources, and an informed perspective of the issue at hand are all important in any project. But in certain cases — such as nonprofit communication-technology projects — this reflex response can lead to investment with no return.
As the Internet has grown, all of us in the nonprofit realm see the need to expand our ability to communicate with one another and our constituents. So, we bring our old tools to the table and set to work: framing a problem, posing a solution, and proposing a study to justify moving forward. But the decision to ‘study’ rests on premises that may no longer be true: that we understand the question to be answered, that we see the full range of possible responses, and that we can identify the individuals or organizations relevant to our analysis.
A basic truth about the Internet is that we can’t yet know the basic truth. In the face of this uncertainty, the feasibility study can certainly be comforting, but it can also leave us months behind the curve, with less money, energy, time, and flexibility remaining when we staple the final report. Worse, the process itself can reinforce ineffective results. Among the potential barriers:
Size and complexity. The high costs of a study — lots of time, effort, and individuals involved — can encourage final recommendations of equally large size and complexity. That impulse runs contrary to the dynamics of the Internet, where small changes often have the biggest impact.
Too much structure. Even in the early 19th century, Alexis de Tocqueville remarked on the American obsession with creating new associations — a national habit often reinforced by the feasibility study. When two or more people gather to think about a new idea, the reflex response is to create a new structure (such as a new Web portal, a new association of associations, or a new think tank), rather than building on systems already in place. This again runs contrary to the online world, where structure can actually limit innovation. For example, the MP3 audio-compression format didn’t create a new structure, but enabled a thousand new uses and users of audio content in ways and places we could never predict. The Internet itself has no official structure or central control, which is part of its explosive power and adaptability.
Inflexible evaluation standards. Feasibility studies, by their nature, contain the seeds of future evaluation — that is what makes them so useful in so many situations. But in a dynamic environment, actual impact and measurable outcomes may be entirely different from what we expect. When those outcomes are unknown and unknowable, predefined standards can inhibit discovery, redirection, and midcourse corrections, all of which are essential to communications-technology projects.
So what is a foundation to do? In this unknowable world, perhaps action can provide a better form of study.
Talk to any entrepreneur or start-up company manager and they will probably strike a similar theme: We didn’t know what we were doing until after we did it wrong.
In the high-speed, high-feedback world of online communications, direct engagement with users is often the only way to test an assumption, probe an opportunity, or pry at a perceived systemic flaw.
This approach is common in the software-development industry, where early (or ‘beta’) versions of software products are released for use and review long before the final product hits the market. In those projects, the user becomes a collaborator — exploring, commenting, complaining, stress testing, and recommending modifications to the system, or even suggesting alternate users and uses not considered by the original developer.
The same process, to a radical degree, is evident in the open-source software development world, where not only product previews but the entire source code of a project is made available to the world for direct enhancements, additions, integration with other software, and re-engineering.
Foundation-supported projects in online communications could clearly benefit from at least experimenting along similar lines. Instead of a study, a team could develop a working ‘beta’ version of a project, testing it among users to define the next steps. Even complex communications-technology projects can be broken down into essential components, each tested and refined with real users in the real world.
Two points are essential in making this connection among start-ups, software, and foundation technology projects. One is that planning and action are not mutually exclusive — the online world calls for action as planning, with all the same rigors of analysis applied at a different point in the process.
The second point is that, given the unknowable nature of our environment, both the study and the action are guaranteed to be wrong (either wrong by degree or by target audience or by underlying concept). Taking action gives us a more grounded and dynamic understanding of how the original idea might be adjusted for greater success.
A few examples of this approach in philanthropy do exist.
The eBase database, which provides nonprofit groups with a tool to manage memberships, donations, and activist information as well as e-mail communication, was developed in close collaboration with its nonprofit users. And Northwestern University’s Collaboratory Project works with open-source development tools to help elementary- and secondary-school teachers and students integrate the Internet into the learning experience. But such examples are buried in a sea of studies, and wave upon wave of studies yet to come.
If only a fraction of the time, money, energy, and expertise behind these studies could be directed to actually connecting ideas and users, just imagine the things we could learn. Of course, we would still be missing the target, since we can’t know where that target is, exactly. But at least we’d be casting off old habits, and giving the dart a chance.