We are often negligent, we admit, about keeping up with Wired, the popular monthly magazine about high technology that describes itself as “the journal of record for the future.” But a news report about a long essay in the April 2000 issue caught our attention. Titled “Why the Future Doesn’t Need Us,” the twelve-thousand-word essay dilates on the dangers inherent in certain impending technological advances. Nothing surprising about that, you might say: crackpots have been warning about the dangers of technology at least since 1750, when Rousseau argued that “our minds have been corrupted in proportion as the arts and sciences have improved.”
What makes “Why the Future Doesn’t Need Us” newsworthy—as of this writing, The New York Times has already published three articles about the essay—is its provenance. It was written not by a disgruntled philosopher but by a working scientist. The author is Bill Joy, co-founder and chief scientist of Sun Microsystems, one of the companies whose technology (notwithstanding Al Gore’s claims to paternity) made the Internet’s World Wide Web possible. As Mr. Joy insists, he is hardly a Luddite. On the contrary, as his own work at Sun has demonstrated, he has always had “a strong belief in the value of the scientific search for truth and in the ability of great engineering to bring material progress.” Like most people, Mr. Joy gratefully acknowledges that scientific and technical advances have “immeasurably improved everyone’s life over the last couple hundred years.”
Nevertheless, Mr. Joy is worried. Recent breakthroughs in certain realms of scientific inquiry, he believes, pose unprecedented threats to humanity. He mentions in particular advances in genetic engineering, robotics, and what has come to be called “nanotechnology”—the effort to re-engineer the world at the most basic level, atom by atom, molecule by molecule. “The idea is,” one recent report on the subject explains,
that if humans could tell atoms how to arrange themselves and how to behave, many of the properties of a material could be controlled at will. Just as nature turns the carbon atoms of coal into diamond by changing their arrangement, so can properties such as color, strength and brittleness be determined at the atomic level.
Scientists believe that if they could learn how to make a brick atom by atom, its molecules could also be “instructed” to self-repair when a crack appeared, or to react to humidity in the air by becoming less or more porous.
In the last two hundred years, science has produced so many benisons that it is tempting simply to get on the wagon of change and equate the advance of technology with material progress. Notwithstanding some fearsome by-products—nuclear weapons, for example—the record thus far largely supports that equation. The problem is, Mr. Joy warns, that “the most compelling 21st-century technologies—robotics, genetic engineering, and nanotechnology—pose a different threat [from] the technologies that have come before.” It is not just that such technologies promise to grant mankind unprecedented power to intervene in the processes of nature: they also possess a novel and alarming potential for self-replication. Consequently, if the potential for constructive intervention by means of these new technologies is tremendous, so is the potential for destructive intervention, accidental as well as malicious. Mr. Joy cites the scientist Eric Drexler (coiner of the term “nanotechnology”), who imagined a number of catastrophic scenarios:
“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous “bacteria” could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation.
Furthermore, because harnessing many of these technologies—unlike nuclear technology—will require neither huge engineering facilities nor rare and dangerous materials, they are likely to be within the reach of small organizations or individuals. Mr. Joy concludes that “we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.”
We believe that much of what Mr. Joy has to say is alarmist. Much else—his appeals to the Dalai Lama and Henry David Thoreau, for example—strikes us as naïve and utopian. And yet, and yet … How minatory are the developments that Mr. Joy discusses? Are they more than the familiar gloomy prognostications that fill the pages of science fiction? As to the technical particulars, we, like other laymen, cannot pretend to know. The real issue, we think, concerns not any particular technological advance, but rather the attitude toward the world and toward humanity that our staggering technological success to date has fostered. It is an attitude of hubris, epitomized long ago by Descartes’s famous promise that, following his scientific method, man would make himself the “master and possessor of nature.” At last we seem to be on the threshold of really fulfilling that promise. But the admonitions voiced by Mr. Joy and others suggest that such mastery may come at an unacceptably steep price. What, after all, would it profit a man to gain mastery over the world that his ingenuity had rendered humanly uninhabitable?
We agree with P. G. Wodehouse’s Jeeves that in many respects Nietzsche was “fundamentally unsound.” Yet Nietzsche did see with startling clarity that science unguided by human values bred nihilism. “Science,” he wrote in an early fragment, “probes the processes of nature, but it can never command men. Science knows nothing of taste, love, pleasure, displeasure, exaltation, or exhaustion. Man must in some way interpret, and thereby evaluate, what he lives through and experiences.” The task, then, is to formulate values that can guide—and ultimately circumscribe—scientific inquiry and technological development. “Science,” Nietzsche continues, “is totally dependent upon philosophical opinions for all of its goals and methods, though it easily forgets this. But that philosophy which gains control also has to consider the problem of the level to which science should be permitted to develop: it has to determine value.” That this is easier said than done does not make it any less imperative.
Mr. Joy may exaggerate the particular threats that await humanity. Indeed, we suspect that the real threat may arise not from the instruments of power we forge but from the spiritual capitulations we accede to in wielding those instruments. For a culture weaned on technological triumph, one of the most difficult truths to acknowledge is that not every problem can be understood according to the calculus of scientific rationality. Nature is not simply a passive medium awaiting reconstruction according to our ingenuity. It is also an active collaborator in defining our humanity. To acknowledge that is to acknowledge that our humanity, like our power, will always have limits, and that we transgress those limits only at the expense of our humanity.
This article originally appeared in The New Criterion, Volume 18 Number 8, on page 1
Copyright © 2019 The New Criterion | www.newcriterion.com