Ah, the anthropomorphic fallacy. Is there any more comfortable way of avoiding having to deal with the darker impulses of human nature? And like a lot of other optimistic views of technology, What Technology Wants is steeped in it. Kelly even puts it in the title, straight off begging the question: does technology want anything?
Now, I should clarify up front that I don’t think anthropomorphization is necessarily a bad thing. It can be a useful way to illustrate ideas, an interesting lever to peel back assumptions, &c.–that is, as long as, on some level, we still acknowledge that it is a fallacy, that it’s a metaphor or an allegory we’re adopting for rhetorical convenience. Kelly’s Technium, his nummulosphere of technology that he proposes as the Seventh Kingdom of Life, is a great metaphor, a neat mind-bend that makes the provocative sweep of his book possible. The problem is, at least so far (I’m only halfway through the book-I had four concerts to review and a rehearsal and two services to play this past weekend, cut me some slack), Kelly doesn’t think it’s a metaphor. Every time he comes close to acknowledging it, he immediately falls back into it (as early as pp. 15-16, for example). Which means that he makes some equally unacknowledged assumptions-assumptions that consistently push aside the responsibility of human beings to, once in a while, not take the path of least resistance.
The most important one of these assumptions-and one that, idly skipping ahead, it seems he will maintain for the entire book-is that Progress is a Good Thing. Kelly introduces a bunch of metrics-longevity, urbanization, and so forth-that are, indeed, progressive, steadily increasing over time. Kelly then emphatically, if not literally, capitalizes the P in progress. On page 100, he cites the steady increase in life expectancy over the past century, asking, “If this is not an example of progress, then what is it?” What it is is a particular datum that is increasing over time. I think most everybody would be pleased with this increase (that is, when we’re not looking at long-term Social Security projections), but that is still opinion. Kelly thinks that it’s fact. A page later, he states his creed: “Progress is real.” No, it’s not. Progress is a belief system.
Which is not to say it’s not a useful belief system when it comes to making sense of the world. But, like any belief system, it brings with it the danger of ignoring anything that might disrupt its order. Chapter 4 of What Technology Wants sets up information as a force to balance entropy. Here’s how Kelly rather nicely defines entropy:
Each difference… becomes less different very quickly because every action leaks energy down the tilt. Difference within the universe is not free. It has to be maintained against the grain.
The thing is, the same thing happens with thinking vis-à-vis belief systems. Belief systems are the entropy of intellectual activity, shunting thought down more frictionless channels. Which is why, I think, Kelly goes on to talk about information as an impersonal entity. I think you can make a reasonable case that entropy exists outside of human observation, but information? Isn’t information pretty much defined as a signal that is useful to us in ordering our sense of the world? It’s why we call the other signals noise. But for Kelly, information is a work-around that maintains Progress in the face of the accelerating disorder of entropy.
One more example for today: on page 16, Kelly introduces us to the PR2 robot, programmed to find its own power source:
Before the software was perfected, a few unexpected “wants” emerged. One robot craved plugging in even when its batteries were full, and once a PR2 took off without properly unplugging, dragging its cord behind it, like a forgetful motorist pulling out of the gas station with the pump hose still in the tank. As its behavior becomes more complex, so will its desires.
This is a great anecdote. At first, it seems to confirm the idea that technology is evolving beyond our control in a life-like way-even developing the capacity for behavior analogous to human compulsion and irrationality. (That robot’s crazy!) But look closer: who says that such cravings and behavior meant that the PR2’s software was imperfect? That’s right-we do.
The PR2 and its software reminded me of one of my favorite books, Michel Foucault’s Madness and Civilization, in particular Foucault’s analysis of the factors that led to the institutional confinement of the insane:
[I]n the history of unreason, it marked a decisive event: the moment when madness was perceived on the social horizon of poverty, of incapacity for work, of inability to integrate with the group; the moment when madness began to rank among the problems of the city. The new meanings assigned to poverty, the importance given to the obligation to work, and all the ethical values that are linked to labor, ultimately determined the experience of madness and inflected its course. [p. 64, emphasis added]
We define madness-whether in other people or in the machines we build-in terms of the order we want to maintain, however consciously or unconsciously: another belief system.
Am I enjoying the book? Yeah, actually-Kelly tells a story that has great scope and cheerful ambition, he makes interesting connections, and he pretty consistently sparks deep thinking. I fully admit that I am a glass-half-empty kind of guy, but I also like entering the glass-half-full world, something that Kelly facilitates with straightforward fluency. The Technium, I think, is a good myth, in the sense of being a framework for making increased sense of the world-useful information, in other words. But every time I find myself thinking hey, wait a minute, I have to remind myself: Kelly actually believes it.
Marc Weidenbaum says
Yeah, I wrestled with this stuff, too, the literal-minded anthropomorphism of it all: tech follows the same evolutionary path as humans.
The one thing I’d say in the (light) defense of the anthropomorphism in Kelly’s thesis is that we are ourselves making the tech of which he speaks. I have no doubt that the technology we make is inherently humanoid — we made it, so why shouldn’t it follow from our imprint? I just don’t know if that’s the case for technology overall.
Just for starters, I’d like to see if man-made technology resembles man’s behavior/evolution in the way that, say, ant-made tech resembles ant behavior/evolution, and beaver-made tech resembles beaver behavior/evolution.
Brian M Rosen says
I think you’re missing one of his larger points. He’s stating quite explicitly that this isn’t a metaphor, that technology DOES want, every bit as much as a biological agent does. Around chapter 9 he makes a pretty fascinating case for it (identifying three vectors of evolution for both ‘LIFE’ and ‘TECHNIUM’). This is no garden-variety anthropomorphic fallacy at play here; this is a full-on assertion of a new pseudo-biological process.
And I’m sure by now you’ve gotten to the parts where he tackles technology as a ‘good thing’ and humanity’s responsibilities in the face of technology’s allure in much more detail. Maybe a follow-up post?
Matthew says
Brian: I did finally get there. He really does double down, doesn’t he? And the scope of the idea is impressive. But my suspicion of the Technium as a concept persists, partially because I’m a pessimist, but partially because Kelly keeps using the concept to veer the discussion away from the fact that what human beings want isn’t always that noble.
It’s interesting to compare chapter 8 of the book with the somewhat longer version on Kelly’s website; the difference is largely in the book’s downplaying (or even omitting) the economic and institutional structure that sprang up in the wake of the promulgation of Moore’s Law and how it dedicated a large part of the industry to maintaining its course. When Kelly asks, “Why don’t we see Moore’s Law type of growth in the performance of solar cells if this is simply a matter of believing in a self-fulfilling prophecy?”, he decides it’s because of scaling: “our entire new economy is built around technologies that scale down well,” unlike energy needs. But notice: it’s economics, not the technology itself. We still choose what we’re going to improve, and, based on the market and how human beings have chosen to invest in it—not the technology—semiconductors become cheap and ubiquitous while alternative energy doesn’t. And that, at least to me, implies convenience, or inertia, or being generally too lazy to dismantle fossil-fuel regulatory capture, or any of a host of other not-terribly-noble reasons, at least as much as technological inevitability. (Again: glass-half-empty. It’s a chronic condition.)
Lisa Hirsch says
He really does believe that Technology has an existence independent of humans? What does he say about human discovery and the choices humans have made about the uses of technology? Why is it that, for example, reproductive technologies are handled quite differently in Great Britain and in the US?
Does he think that Google web search and everything that came with it WANTED to be invented??