Bill Joy, the chief scientist at Sun Microsystems, recently penned a treatise for Wired magazine in which he argues that technological advances are increasingly threatening the very nature of humanity.
Joy is questioning the long-term moral implications of his life’s work, and this is a valid pursuit for all who work to advance the science of technology.
But while Joy’s arguments are well thought out and forcefully presented, and while I have the greatest regard for the intellect of the man, I think his concerns are overstated and too apocalyptic. (Read his article at www.wired.com/wired/archive/8.04/joy_pr.html.)
Now, before I go further, allow me to lay out our respective qualifications for commenting on technology:
Bill Joy: created the Berkeley build of Unix, designed the Sparc microprocessor architecture and Sun’s Network File System, spearheaded the development of Java and Jini, and co-founded Sun Microsystems.
Me: I’m a journalist.
Okay, so there’s no question Joy is better qualified, that he is a tech visionary, and that he regularly does more smart things before breakfast than I do over a summer. But I believe his arguments are off target. Joy’s article details two basic concerns. One is that technology, and specifically genetic and nanotechnology, will enable the creation of new and terrible weapons. In the hands of governments or terrorists, these pose a deadly threat to life on the planet. I quibble not at all with this line of thinking.
It is his second thesis that I question: that artificial beings animated by machine intelligence will one day supplant humans. Either self-contained robots will push us out of the way, or we ourselves will become mechanized: a gradual process in which artificial enhancements are implanted into us one by one. These additions will multiply over time, eventually resulting in beings that owe more to silicon than to genes. At some point, we cease to be recognizably human.
Joy backs his view by referencing prominent thinkers and scientists, most notably in a chilling quote from Danny Hillis, co-founder of Thinking Machines Corp. Commenting on human bio-tech devices, Hillis said: “I am as fond of my body as anyone, but if I can be 200 with a body of silicon, I’ll take it.”
This quote and others like it raise an important question: are the sources upon which Joy bases his thesis representative of common opinion? I think not.
I, for one, do not wish to be composed of silicon. I would, in fact, rather die a natural death. That’s not to say I eschew all artificial aids. If my heart were injured and an advanced computerized component could keep my blood moving, I would grab that option. That is, however, not tantamount to welcoming existence as a robot.
But where is the line? At what exact point do I reject additional computerized aids because they make me less than human?
I don’t know the answer to that. But I do know that there is an answer. I am convinced that at some point I would feel fear – visceral fear – at the prospect of over-mechanization, and I believe most people share that emotion. And that will save us from simply blundering down the path that so concerns Joy.
The entire debate comes down to probabilities, because it is certainly possible that humanity will end in a mechanized dystopia of our own creation. It will one day be possible for robotics to replace much that is organic, possible that machine intelligence will surpass human intelligence. But that does not lead inevitably to the triumph of machine over human.
In the precarious balance of probability, I believe humanity will prosper, and that advanced technology will act as tools at our disposal, and not as our overlord.