Attempting to predict the future is a uniquely human trait that’s been with us since the dawn of time. The witches of Macbeth foretold his rise and fall. Tarot card, tea leaf and palm readers seemingly gaze into our future using clues we only partially understand. Newspaper horoscopes give us fodder for the day ahead. But all forms of forecasting have one great ally: vagueness. In this domain Nostradamus, the famed French physician and astrologer, is undoubtedly the master. After all, a deluded few have gone so far as to say he predicted a bug in the Pentium chip. His prediction was so vague that, hammered long and hard enough, his square peg fit nicely into a round hole.
Trying to forecast (futurists don’t like to use the word predict) what technology will be available 10, 20 or even 30 years from now is no easy task. Though technological advances move at a relatively steady pace, two factors make forecasting exceedingly difficult. The first is what some call the “wow” factor. This occurs when someone invents a process or technology that dramatically increases the speed of change. The printing press, light bulb and transistor all fit in this category. By definition they are unpredictable inventions. The second daunting factor is societal. The technology may be ready, but we may not be. The video phone, once touted as the wave of the future, tanked like there was no tomorrow.
Social and moral norms also change. Those today who would frown at the idea of a bio-implant may be replaced tomorrow by a generation that has no qualms about such bodily invasions. Not long ago, artificial insemination was viewed as playing God and shunned; today it is a common occurrence. In fact, understanding how humans react to technological advances is mostly a study in hindsight.
“Anybody who says they can predict the future is pulling your leg,” said Robert Mittman, senior research affiliate at the Institute for the Future in San Francisco. For those who care to try, he has some advice: “Never put a number and a date in the same sentence.”
Sir Arthur C. Clarke, author of 2001: A Space Odyssey and renowned futurist, broke that rule by predicting artificial intelligence will reach human levels by 2020. But he manages to protect himself using another forecasting motto: don’t outlive your predictions. Since Clarke will be 103 in 2020, he may not outlive his forecast. Even if he does, who is going to have the chutzpah to lambaste a knighted centenarian?
Daniel Burrus, Milwaukee-based technology forecaster and author of the book Technotrends, says the key to forecasting is to think in terms of “both/and.” Though new technology will arrive, it won’t necessarily spell the end of the old. The future will be digital, but there will still be analogue technology. There will be much more wireless technology, but that which is wired will continue to coexist, he said.
“You look at the driving forces of change and then pick a subject,” he said. As an example, he picked the world of books. In a decade or so, flexible, solar-powered displays will increase the use of e-books, but that does not mean the end of the book, he explained.
Remember how e-mail was going to reduce paper use? Xerox Corp. is using anthropologists to study why we have such an undeniable affinity for paper, in order to figure out how to replace it.
Even with the help of anthropology and science, the art of forecasting is, at best, an educated crapshoot.
In the early ’60s, the cartoon The Jetsons brought the idea of automated housecleaning into popular culture. Let’s face it, very few of us really like cleaning the house, doing the laundry or mowing the lawn. But if you are thinking about letting the dust settle until a machine is available to do your work, think again.
The cost and complexity of creating a machine capable of doing something as apparently simple as mopping the kitchen floor is daunting. Takashi Gomi, president of Applied AI Systems Inc. in Ottawa, who has been working in artificial intelligence and robotics for decades, said there is finally some humility entering the field. “I think the whole field is getting more modest about understanding the problem (and realizing) that human actions are far more complex than originally thought.” His company is working on developing robots capable of helping the elderly. The first attempts to create a robot capable of picking up a person failed miserably, in part, because of a lack of appreciation of the complexities of a simple human action, he explained.
Though there is no question household appliances will be more connected in the future, the idea that your fridge will order milk when you run out strikes many as absurd.
“We have seen efforts to do that…[but] I tend to be extraordinarily skeptical of technologically based services like that in consumer hands,” Mittman said.
“The complexity invites disaster…there are some things that are not worth doing,” said William Halal, a professor in the management sciences department of George Washington University in Washington, D.C.
Ian Angell, professor of information systems at the London School of Economics, agrees. “Why should my fridge connect to the Internet? How are you going to pay for this? It is just plain stupid.”
A self-avowed economic Darwinist, Angell said it is a question of whether people will pay for any added functionality. And a fridge that orders your food does not make a good business case.
But how will the home change? Most agree the clunky desktop monitor will be a thing of the past. Great leaps in organic display technology will allow large, flat, wirelessly connected screens to be placed around the house, probably by the end of the decade. Exactly how many we have and what we do with them is up for debate.
Rafik Loutfy, director of the Xerox Research Centre of Canada in Mississauga, Ont., foresees smart rooms that react to your entrance by adjusting the heating and lighting to your preferences. Monitors could even assess your emotional state with facial recognition software, and change the huge wall display to match your mood.
Burrus foresees what he calls an ultra-intelligent agent. It will be a Web-enabled entity that helps you with your daily tasks, from scheduling meetings and flights to reminding you of your daily workout. If you are at home, it will be a character on one of many large screens around the house. On the phone, it will be a voice.
the next frontier
All of this new interaction with technology hinges on the advent of far better voice recognition technology and AI, and this is where the debate and prognostication get interesting.
Angell is direct. “Artificial intelligence is a load of nonsense.” He said the problem lies in the belief, held by some, that human knowledge, and even our very existence, can be reduced to a series of zeros and ones. He believes it cannot.
Halal is slightly more reserved in his opinion of AI.
“There is a world of difference between a human and a machine,” he said. Machines will be powerful, but they will always lack the essence of humanity, what many call our spirit or soul, he said.
Mittman also agrees AI has a long, long way to go. “I have always been extremely skeptical of the capacity of speech recognition; it is a branch of AI where the vision is far from the technology.”
The problem with voice recognition technology is its constraints, he said: either the vocabulary used has to be limited (as in automated telephone directories) or the population using it has to be narrowly focused (because of differences in accents and dialects).
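To make the vocabulary constraint concrete, here is a toy sketch in Python; the command list and interpret function are invented for illustration, not any real product’s interface. A real recognizer works on acoustics, but the grammar side looks like this: an already-transcribed utterance is snapped to a small fixed command set, and anything outside it is simply rejected, much as an automated telephone directory would do.

    # Toy sketch of a limited-vocabulary front end (hypothetical example).
    # A real system matches acoustic models; here we only snap a
    # transcribed utterance to a small fixed command set and reject
    # everything else -- the "limited vocabulary" constraint above.
    import difflib

    COMMANDS = ["call home", "check messages", "directory assistance", "operator"]

    def interpret(utterance):
        """Return the closest allowed command, or reject the input."""
        match = difflib.get_close_matches(utterance.lower(), COMMANDS, n=1, cutoff=0.6)
        return match[0] if match else "Sorry, I did not catch that."

    print(interpret("chek mesages"))              # snaps to "check messages"
    print(interpret("book me a flight, please"))  # rejected: outside the vocabulary

Widening that command set, or the range of speakers it must serve, is exactly where the difficulty Mittman describes begins.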
Anybody who has used voice-controlled word processing knows how long it takes to train the software. The complexity of human speech has long been underestimated. Things we thought were not complex, such as learning to speak by age two, turn out to be very complex, Mittman said.
Regardless, Burrus said the advances will come.
“I believe we will have (in about 20 years) some great artificial intelligence software that is able to understand (spoken) context.”
Beng Ong, research fellow with Xerox Canada in Mississauga, Ont., agrees that language is complex but foresees much more powerful voice recognition technology within a few decades.
Voice recognition will allow us to command technology instead of being a slave to it, Ong said. He even envisions computer-generated voice translation. “I think it is doable, I don’t think it will be 30 years out,” he said, adding that user demand will help push technological development.
Mark Federman, chief strategist of the McLuhan Management Studies Program at the University of Toronto, looks at translation from a more philosophical angle. “We now use a human machine as a universal translator…and that is a perfectly valid technology,” he said. The fact that it is done by humans today does not preclude machines doing it tomorrow, though that may be years away.
Despite pitfalls and problems, there will be much more voice interaction with the technology of the future, but unfortunately no one is predicting the end of the keyboard.
There is an unexplained interaction between the brain, as it thinks, and the body, as it writes. It is a level of subconscious interconnectivity unparalleled in technology. Try dictating a letter. It is a harder task than you think. You continually rewind and delete to get exactly what you want, and quickly come to realize typing may have been faster. It is these completely human attributes that make predicting technological adoption so difficult.
communication
We are a visual and aural species, but for communication the aural side wins. Forecasting how we will communicate is a tough nut to crack. Everyone interviewed thinks humanity will be more connected than it is today, with constant access to the Internet through some form of wireless technology.
“There won’t be one device that does everything,” Burrus predicts.
“I think the present model of communication and computation is going to look primitive (in comparison),” Halal added.
But exactly what the devices will look like is open for debate. For the most part, no one wanted to be too specific in predicting how we will interact with the Internet. There is talk of small earpieces for an incoming voice interface, with microphones attached to glasses or lapels to transmit. All of this, of course, will allow us to contact anyone with a simple voice command.
Others envision a blurring of devices.
“I think that there is not going to be much of a distinction between phones and computers,” said Bill Mark, vice-president of information and computing sciences at SRI International in Menlo Park, Calif.
Taking all of this to the next level is the University of Toronto’s Steve Mann, known as the “wearable computer guy.” He invented the EyeTap, a headset with an eyepiece that transmits his world to others or transmits a virtual one to him. Whether humans will adapt to wearing a visor over their eyes (which would allow them to connect to other virtual worlds) is open for debate. Mann seems to think so. So too does Burrus.
“The wearable PC makes a lot of sense to me,” he said, though he is less certain about its exact form.
When asked how close he was to having eye control for his virtual world, Mann smiled. “That’s trivial.” The real quest is for thought control, so your mind can control the world emanating from your headset, he said.
His university class on how to be a cyborg is attended by the next generation of cutting-edge developers, those who will take wearable computer and cyborg technology to the next level.
But in predicting the demise of technology and what will rule in the decades to come, there is one interesting anecdote to pass on. When the students came to hand in their assignments, there were no CD-ROMs, URLs or even floppies. Many of the assignments were typed out, a few handwritten, and one was even printed on a dot-matrix printer.
This goes to show how difficult it is to predict new technology’s adoption by generations to come. It is, in part, due to two simple facts: human behaviour is unpredictable, and old technology dies hard.