INTRODUCTION
CONVERGING CURRENTS
Then, Now, and Tomorrow
What do these two events have in common?
I
Kimberly, a young woman who has recently married and sees herself as well launched toward a bright future, suddenly begins suffering severe headaches. Her doctor, finding no obvious cause, sends her to a specialist, who in turn sends her for a “CAT scan.” This is a test in which her head is positioned and held steady for several minutes inside a large machine while, as she sees it, nothing of any clear significance is happening. She is sent home, still as frightened and unsure as before.
A few days later her doctor calls. The CAT scan has been analyzed and a tumor has been found deep in her brain—but the doctors now know exactly where it is, how big it is, and what shape it is. They don’t know whether it’s malignant. If it is, it could kill her—and soon, if nothing is done about it.
But, thanks to the knowledge obtained from the CAT scan, they can target the tumor so precisely that they can eliminate it with very little damage to the surrounding tissue. The process is not fun, but a couple of months later she is pain-free and back on the road to that rosy future.
II
On the clear, sunny morning of September 11, 2001, a jetliner bound from Boston to Los Angeles swerves off its planned course and turns toward New York City. A few minutes later, it smashes into the North Tower of the World Trade Center, killing everyone on board and a great many in the tower. A few minutes after that, another plane similarly strikes the South Tower, and it becomes clear that the city—and through it, the country—has become the target of a massive terrorist attack carried out by a handful of individuals. The upper parts of both structures are engulfed by flames as their occupants frantically struggle to get out. Within a couple of hours, to the horror of watchers all over the country, both towers collapse. Thousands are killed, many more injured, and far more lives—and the economy of a nation—are massively disrupted. The entire mood of the country has been transformed in a fraction of a morning, in ways that will affect everyone and last a long, long time. As I write this, a few years later, it’s still far too early to tell just how profound, far-reaching, and permanent the effects will be.
And all because of the actions of a few fanatics.
At first glance, these two incidents might seem as different and unrelated as two occurrences could be. One is a tale of newfound hope for an individual, with no large-scale significance except that it’s representative of many such incidents, now happening so routinely that we tend to take them for granted. We forget how remarkable it is that so many people can now be saved who would have been written off as hopeless just a few decades ago. The other is a real-life horror story: thousands of lives destroyed and millions disrupted, in little more than an instant, by a few individuals not only willing to die for their beliefs, but also to kill thousands of innocents for them—and, unfortunately, with the physical means to do so.
The common denominator is that both the lifesaving tool called the CAT scan and the swift murder of thousands by a few individuals were made possible by the convergence of two or more seemingly separate technologies. The CAT scan (short for “computerized axial tomography,” and now usually further shortened to CT scan) is the result of intentionally bringing together the fields of x-ray imaging and electronic computation, which can analyze complex relationships among vast amounts of data in a reasonable amount of time. The World Trade Center disaster resulted from the technologies of aviation and large-scale building coming together in ways that the inventors of neither had in mind.
These convergences and others like them will continue to produce radical developments that will force us to make difficult, unprecedented choices. Many of those will have life-or-death significance, not just for individuals, but for civilization itself.
For example:
Should we do everything we can to increase human longevity? If we do, the worldwide problems already caused by population growth will be made even worse. Will longer life spans mean we have to do something drastic to reduce birth rates? Are we willing and able to do that?
Should we continue to build very tall buildings, or do they represent too much of a liability? How much freedom and privacy are we willing to give up to be safer from terrorists? Is privacy itself still a tenable ideal, or an outworn relic of an extinct past?
Can we get the advantages of large-scale power and communication networks without making ourselves vulnerable to catastrophic breakdowns caused by a single technical glitch or act of sabotage? Are very large cities still necessary, or even viable, as a basis for large-scale civilization? If not, what can replace them, and how can we get there from here? If individuals can “grow” whatever they need, that will eliminate their dependence on such large infrastructures, but it will also destroy the basis of our whole economic system. How can we ease the transition to a better one?
Under what conditions, if any, should we allow human cloning? Should we “preserve” our dearly departed as artificial intelligences that can simulate the personality (if not the physical form) of the deceased and continue to interact with the living? If so, should such “artificial citizens,” being in a sense continuations of “real” citizens, be allowed to vote?
To some people, the idea that we will actually face such choices may seem too fantastic, bizarre, or far-fetched to take seriously. Yet we have already had to confront such problems as the ethics of organ transplants (should faces be transplanted?) and abortions (at what point and to what extent should a fetus be considered a human being?). We are only beginning to create a body of law to deal with thorny dilemmas arising on the electronic frontier—problems like identity theft and what copyright means in a world where copying has become trivially easy and cheap. All of those would have seemed just as far-fetched a few short decades ago.
What we have seen so far is just the beginning. As technologies continue to converge, they will continue to produce new possibilities both exhilarating and horrifying. We will have to make choices to embrace the opportunities while avoiding the horrors.
The goal of this book is to think about how we can make such choices intelligently. The first step in doing this is to understand how converging technologies can lead to results that could never be anticipated by considering a single field in isolation. As a preview of how it works, let’s take a quick look at the broad outlines of what happened in my opening examples.
MEDICINE, X-RAYS, AND COMPUTERS
I won’t go into much detail about how a CT scan works just yet. For now, I will merely observe that three things were going on more or less concurrently in the nineteenth and the early and mid-twentieth centuries:
(1) Doctors were, as they had been for a very long time, trying to keep patients in good health by preventing disease and injuries and making repairs when something went wrong. A major problem they faced was that doing their job often required knowing what was going on deep inside the human body, and that was usually out of sight.
(2) In 1895 a German physicist named Wilhelm Conrad Röntgen, while looking for something else, serendipitously discovered a new kind of radiation, which soon came to be known as x-rays. Those turned out to be closely related to visible light, but were not visible to the eye. They did have one most intriguing new property, however. They passed easily through many materials that were opaque to visible light (such as skin and muscle), but were stopped or attenuated by other materials (such as bone or metal). Since those rays could be used to expose photographic film, they quickly became a diagnostic tool for medical doctors. If x-rays were passed through a patient’s body to a piece of film, parts of the film would be darkened more or less depending on how much of the radiation got through. This depended in turn on how much of what kinds of tissue or foreign matter it had to traverse. Thus the film formed a picture of the inside of the body, letting a doctor or dentist see such things as the exact shape and nature of a tumor, fracture, or cavity, or the location of a bullet. (A short sketch of this attenuation idea follows this list.)
(3) In the first half of the twentieth century, several researchers developed the first digital computers, machines that could do complicated mathematical calculations by some combination of automatic electrical and mechanical processes. The first working models used electromechanical switching devices called relays and were huge, slow, and of limited ability—quite “clunky” by today’s standards. But in the ensuing decades, workers found ways to dispense with macroscopic moving parts and do similar operations electronically. First they used vacuum tubes (now largely forgotten), followed by transistors and, later, integrated circuits, which combined huge numbers of microscopic transistors on a single small chip. The result was a steady, dramatic, and accelerating improvement in computing capabilities. Recent computers are far smaller, faster, and more powerful than those of past decades, which allows them to be applied to types of problems that were simply far too difficult before. One such application is the CT scan, the result of the confluence of medicine, x-ray technology, and computing technology.
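The physical fact at the heart of item (2), that the film darkens according to how much radiation gets through, and that this in turn depends on what the ray crossed, can be captured in a few lines of arithmetic. Here is a minimal sketch in Python; the attenuation coefficients are made-up illustrative numbers, not measured values, but the exponential falloff is the standard way physicists describe attenuation.

```python
import math

# Illustrative x-ray attenuation along a single ray path.
# Coefficients are hypothetical stand-ins: denser materials
# (bone, metal) attenuate the beam far more strongly than soft tissue.
attenuation_per_cm = {"soft tissue": 0.2, "bone": 0.5, "metal": 5.0}

def surviving_fraction(path):
    """Fraction of the original beam reaching the film after crossing
    the listed (material, thickness_in_cm) segments of the body."""
    total = sum(attenuation_per_cm[material] * thickness
                for material, thickness in path)
    return math.exp(-total)   # intensity falls off exponentially

# A ray crossing only soft tissue lets more radiation through (darker
# film) than one that also crosses bone (lighter film, which is why
# bones show up white on an x-ray).
print(surviving_fraction([("soft tissue", 10.0)]))               # ~0.14
print(surviving_fraction([("soft tissue", 8.0), ("bone", 2.0)])) # ~0.07
```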
In principle, CT scans could have been done almost from the beginning. The basic idea is that instead of making a conventional x-ray—a flat picture of the body as seen from one angle—you use x-rays to construct a three-dimensional image of everything inside the body. All you have to do is shoot x-rays through the body in many different directions and measure how much comes out the other side in each case. Since different materials absorb x-rays more or less strongly, there’s only one distribution of materials that would give the pattern of absorption that you measure. The problem is that figuring out what that distribution is requires solving many simultaneous equations for many unknown quantities. That can be done, but doing it with pencil and paper is extremely difficult and time-consuming. The patient would die of old age while waiting for the test results.
But if you have fast, powerful computers available, they can do just that sort of tedious “number crunching” quickly and very efficiently. Set up a machine to take a series of x-rays from different angles, program fast computers to solve those tangles of absorption equations in a short time, and you get one of the most powerful diagnostic tools in medicine.
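To make that sort of “number crunching” concrete, here is a deliberately tiny sketch in Python (my illustration, not a description of how real scanners are programmed). It treats a cross-section of the body as a grid of unknown attenuation values, writes one equation per ray by summing the values along the ray’s path, and lets the computer solve the resulting simultaneous equations. The numbers are invented, and a real scanner uses thousands of rays and specialized reconstruction algorithms, but the principle is the same.

```python
import numpy as np

# Toy CT reconstruction: recover a 2x2 grid of attenuation values
# from six "projections" (sums along ray paths) by solving the
# resulting simultaneous equations. All numbers are illustrative.

true_grid = np.array([[0.2, 1.0],
                      [0.3, 0.4]])      # pretend the 1.0 marks a dense tumor

# Each row of A marks which cells a ray passes through; the unknowns
# are the grid cells flattened to the vector [a, b, c, d].
A = np.array([
    [1, 1, 0, 0],   # ray across the top row
    [0, 0, 1, 1],   # ray across the bottom row
    [1, 0, 1, 0],   # ray down the left column
    [0, 1, 0, 1],   # ray down the right column
    [1, 0, 0, 1],   # ray along the main diagonal
    [0, 1, 1, 0],   # ray along the anti-diagonal
], dtype=float)

measured = A @ true_grid.ravel()        # what the detectors would report

# Solve the equations; least squares handles the redundant measurements.
recovered, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(recovered.reshape(2, 2))          # matches true_grid
```

Scale the same idea up to hundreds of thousands of rays and unknowns and it is easy to see why a pencil-and-paper version really would outlast the patient, while a fast computer handles it in moments.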
AIRPLANES AND BUILDINGS
In the World Trade Center incident, two technologies had developed independently, for quite different purposes. An incidental property of one of them—one that most sane people would try hard to avoid bringing into play—was used to attack an incidental vulnerability in the other.
Building very large structures isn’t easy, but it has a number of advantages if you can learn to do it. The original inspiration was the growth of crowded cities that needed lots of businesses to support their populations, but had relatively little land on which to put those businesses. Building up rather than out effectively multiplied the usable area many times over, allowing tens of thousands of people to work (or live) on a couple of blocks of land. Building high also posed a whole complex of related engineering challenges. Before structures like the Sears Tower or the World Trade Center could be built, engineers had to develop ways to support all that weight, to ventilate and heat the large volumes within, and to transport large numbers of people quickly and easily through large vertical distances. Once those problems were solved, the technology of building big was widely applied not only to bustling business centers but also to housing large populations.
Aviation, meanwhile, followed a path of its own, with a quite different, essentially simple goal: getting people or cargo from point A to point B. It’s hard to say how far forward such pioneers as Orville and Wilbur Wright were looking, but they surely would have been astounded at some of the developments to which their early experiments would eventually lead. Quite possibly they saw it purely as a technical challenge, a problem to be solved “because it’s there”: to get a manmade, self-propelled machine off the ground and keep it there long enough to go somewhere else. Once that possibility was demonstrated, large potential advantages began to suggest themselves. An airplane would not require good roads—or any roads—all the way from its point of departure to its destination. All it would need would be a few airports, and in the beginning, those didn’t have to be large or complicated. Since a plane en route would be above all the obstacles that a ground vehicle has to contend with—trees, buildings, hills, chasms, rivers—it could travel much faster.
Powered flight took a few years to catch on, but once it did, it attracted enough talent and money to grow exponentially. Within a few decades big, fast airplanes had become one of the world’s most important modes of transport for both people and goods. They also became weapons, at least indirectly. Fighter pilots machine-gunned one another in World War I dogfights; and Orville Wright, the Ohio bicycle repairman who made that first brief flight at Kitty Hawk, lived to see planes drop atomic bombs on Japanese cities in World War II.
And in 2001, planes themselves became weapons. A plane big enough to carry hundreds of passengers, fully fueled for a transcontinental flight at close to the speed of sound, is itself a powerful bomb, if used in the wrong way.
And the same tall buildings that enabled thousands of people to work in a small area became tempting targets, concentrating thousands of potential victims in a compact, sharply defined bull’s-eye.
RIVERS AND TRIBUTARIES OF CHANGE
Much of our past has been shaped by such convergences of what started out as independent lines of research or invention, and even more of our future will be shaped in this way. We live in what our ancestors, even quite recently, would have viewed as a “science fictional” age; but there’s an important difference between our present reality and much science fiction. Many science fiction writers have tried to follow the principle, perhaps first enunciated by H. G. Wells, of limiting themselves to one contrary-to-present-knowledge assumption per story—for example, to use cases from Wells’s own work, suppose someone developed a way to travel in time, or to make himself invisible. What has happened in reality (and is happening in more modern science fiction by writers seeking to imitate reality more believably) is that several new things develop concurrently, and important events grow out of their collision and interaction.
Try to look at my real-world examples from the viewpoint of a writer trying to imagine them before they happened—say, a writer working around 1870, shortly after the Civil War in the United States. Such a writer might have come up with a story in which everything was just as it was in the United States of 1870, except that somebody found a way to photograph the inside of the human body, or to build and fly airplanes, or to build huge buildings, or to make powerful computers that could be used for calculations too complicated for human bookkeepers. Such a story would follow Wells’s advice diligently, but it would describe a world little like the one we live in, and a considerably less interesting one. For the world we actually have at the beginning of the twenty-first century is the result of all of those things being done concurrently, by different people, and other people seeing ways to put them together to yield still other developments, even more surprising.
You could think of Wells’s model of a story as following a single stream from its source, and real history as a vast map of a complicated river system, with streamlets coming from many sources, then flowing together to form bigger rivers, with branches occasionally splitting off and later recombining. The medical use of x-rays is a minor convergence, the confluence of the currents of diagnostic medicine and x-ray imaging. The high-speed computers that make CT scans possible result from another merging of two currents: those of digital computation (which can be done with a wide variety of switching devices) and semiconductor physics (which provides a way to make very small switching devices). The CT scan itself comes from the merger of those two larger currents—medical use of x-rays and high-speed semiconductor-based computers—each of which itself grew from the convergence of at least two smaller ones.
This kind of effect will continue to shape our future, quite likely in an ever-accelerating way. With ordinary streams, as more of them flow together, the total flow becomes greater and faster, at least until the combined flow carves out a deeper channel. The analogy isn’t perfect, of course, but it does have at least a qualitative sort of validity. And when it comes to scientific and technological progress, there are additional effects contributing to that speedup.
In Engines of Creation, his first book about nanotechnology (a new kind of molecular-scale technology), K. Eric Drexler wrote about ever-increasing computer speed and power and their effect on human progress. With exponentially increasing speed, computers made it possible to do, in weeks or days or minutes, work that had never been done before because it would have taken too many person-years of drudgery. The results of those calculations suggested new problems to be tackled; and often those could be solved even faster, because while the first set was being done with last year’s computers, this year’s, an order of magnitude faster, were already being developed and becoming available for use.
And so on. Furthermore, whatever computers were currently available were being used not just by workers in field A, but also by others in fields B, C, and D. Sometimes somebody in one field would be clever enough to look across a boundary and see that a problem somebody had solved in a seemingly unrelated field could be applied to his own. To give an example that doesn’t even depend on advanced computers, most people have at least some inkling of what a hologram is: a photographic image so truly three-dimensional that you can literally move your head and look behind objects in it. The basic principle was developed by Dennis Gabor in 1947, but not much happened with it for several years because holograms were very difficult to make and view with light sources available at that time. While holograms languished as an academic curiosity, another research current involving Alfred Kastler, Charles Hard Townes, and Theodore Harold Maiman led to the development of a new kind of light source: the laser. As far as I know, none of them was even thinking about holograms; but once the laser was available, it wasn’t long before somebody noticed that it made holography relatively easy. Holography then promptly took off as a hot field of research. The first wave of intensive holography research all used lasers, but then it wasn’t long before people started looking for ways to make holograms that didn’t depend on lasers. Now it’s not uncommon to see holograms (of sorts) on magazine covers and credit cards.
Ever-accelerating computation and cross-fertilization among scientific disciplines can be expected to lead to dizzyingly rapid societal change. Science fiction writer Vernor Vinge, in his novel The Peace War, imagined the cross-linked graphs of progress in all fields of endeavor becoming so steep as to be practically vertical, with change that would once have taken years occurring in minutes.
In mathematics, the place where a graph becomes vertical is called a singularity. Vinge, who is also a mathematician, called that fictional moment when the graphs of historical change become effectively vertical the Singularity. The term has quickly come to be generally understood as one of the central concepts in contemporary science fiction.
But don’t think it has no relevance to the real world. “Vinge’s Singularity” is a phrase that was coined and came into wide use in a science fictional context, but Vinge was thinking very seriously about where we might really be headed. Eric Drexler, in his nonfictional chapters on accelerating change, was independently describing much the same thing, even if he wasn’t using the same words. If you need an example to make you take such concepts seriously, let me mention one more forecast that Drexler made.
In chapter 14 of Engines of Creation, titled “The Network of Knowledge,” Drexler described some of the limitations of information handling as it existed up to then, and envisioned a way to get beyond them. At that time, most published information was scattered around in paper books, journals, and newspapers. It was hard to find whatever bit of it you might want, and easy to publish nonsense with little chance of anyone’s ever seeing a retraction or rebuttal, even if one was published. Drexler’s vision (which he called hypertext, a word coined by Theodor Nelson) was of what amounted to a worldwide library, with the equivalent of thousands of libraries’ content instantly available from any of thousands of connected computers. Instead of a footnote advising the reader of a related reference that might be found in some obscure journal, an article might include a “link” that would take an interested reader directly and immediately to the cited article to see for himself exactly what it said. Readers could post their reactions right with the article, so that anyone reading it could also see everything anyone else had been interested enough to say about it. If the author changed his mind about something he’d written, he could post a correction right there where it couldn’t be missed.
What he was describing was, in other words, something very much like the present-day internet—but to most people in 1986, it still sounded pretty fantastic. Yet it’s already here, in such a highly developed and ubiquitous form that almost everyone now takes it for granted. And it has contributed markedly to a change in how research is done. Scientists no longer have to wait months to publish their work or read their colleagues’; they can search quickly and easily for anything that’s been done in their field; and they can argue about theories and experimental results almost as easily as if they were in the same room. All of which contributes to a general speedup and makes the Singularity that much more plausible.
Drexler wasn’t kidding about that—and he’s not kidding about the rest of it, either.
ULTIMATE PROMISE, ULTIMATE THREAT, OR BOTH?
Although the potentials of nanotechnology, for instance, may sound a lot like magic, we already have living proof that its principle can work. In fact, we are living proof that it can work.
The basic idea of nanotechnology is that instead of making things by cutting or otherwise shaping bulk material with tools that we can see and handle, extremely tiny machines—molecule-sized—could build just about anything by putting it together, atom by atom. We are a proof of that principle because that’s essentially what happens in living organisms. Biology is a sort of natural nanotechnology, and its existence proves that molecular-scale construction can work at least as well as it does in living systems. Nothing about it rules out the possibility that artificial nanomachines might do the same kind of thing in a much wider range of applications. If they can, it’s quite possible that we will eventually live in a radically transformed society in which practically anything can be cheaply grown rather than expensively manufactured. When Eric Drexler was publishing his initial speculations on nanotechnology (or “molecular engineering”) in the mid-eighties, many thought it sounded like the wildest sort of science fiction, but a great many real scientists are now actively working on it. I know several of them personally—including at least one who just a few years ago was a vocal skeptic, but is now quite busy doing nanotechnology.
Biology itself is, of course, another of the major currents of research that will influence our future. Everyone has read about the controversies over cloning, stem cell research, and genetic engineering. Some would prefer that those things just go away, because they raise the need for difficult, unfamiliar choices. But they won’t go away; once the capabilities exist, and the potential rewards in such areas as medicine are seen, those things will happen. The choices are about when and how.
Not surprisingly, our newfound computing abilities have played a major role in laying the groundwork for nanotechnology and biotechnology (biology-based technology). Much of our new understanding of biology and ability to manipulate it comes from the application of large-scale, high-speed computing to such problems as mapping the human genome. This is a problem that would have been far beyond the capabilities of pencil-and-paper biologists or mathematicians—not because it’s so hard, but because it’s so big.
We can foresee other powerfully influential convergences a bit farther down the road (or stream). If researchers do develop the atomically precise nanofactories that are currently seen as the ultimate goal of nanotechnology, those factories will need to be controlled by submicroscopic computers, which involves further development of the “computing” stream. Specialized nanomachines might also be developed for medical purposes, such as going into a living body to find and destroy cancer cells while leaving everything else alone.
Any powerful tool can do a great deal of good—or a great deal of harm. We have seen this repeatedly in the past with such developments as agriculture, airplanes, and nuclear energy. The tools that look likely to emerge from the convergence of research in computing, biology, and nanotechnology promise to be far more powerful than any we have known before. As such, they can transform our future lives in seemingly miraculous ways, or create nightmares almost beyond imagining. Can we learn to take advantage of the benefits while steering clear of the dangers? Maybe—but that ever-increasing speed of the rush of events, caused by faster and faster computers, faster and faster communication, and synergies between fields, means that those capabilities, with their attendant promises and perils, are likely to be upon us far sooner than we might expect. One early nanotechnology researcher, asked informally when we might expect a well-developed nanotechnology, said the optimistic estimate was thirty years; the pessimistic was ten.
The implication was not that nanotechnology is a bad thing that we can’t avoid and would like to put off as long as possible. Rather, it’s a potentially good thing that will involve such sweeping changes that we’ll need time to prepare for them and figure out how to deal with them. And the changes that might be produced by nanotechnology converging with other fields such as biotechnology and information technology dwarf even those anticipated by considering nanotechnology alone.
Of course, it isn’t really possible to consider nanotechnology alone. Making it work requires collaboration among materials scientists, information scientists, chemists, and physicists; and many of its potential applications involve other fields such as medicine. But nanotechnology itself was the most radical innovation in sight when Drexler wrote Engines of Creation. Anticipating the huge range of profound changes it could produce, and the speed with which those changes could happen, he and some colleagues established an organization called the Foresight Institute. The institute is dedicated to providing a clearinghouse for news about nanotechnology-related research and thinking about how we might best work toward reaping its rewards and avoiding its pitfalls.
Toward the end of 2002, the National Science Foundation published a report and hosted a conference on “Converging Technologies for Improving Human Performance.” The specific areas it singled out as major currents whose convergence would radically transform our future were grouped under the unpronounceable acronym NBIC: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science (the science of how we know, learn, and think). For the purposes of this book, we should keep in mind that those may not be the only important currents in even our near-term future, but they will almost certainly be among the most important, and they alone suggest astounding possibilities. The report foresees such things as direct connections between human brains and machines, tailored materials that adapt to changing environmental conditions, computers and environmental sensors integrated into everyday wear, and medical technologies that might eliminate such persistent problems as paralysis and blindness. Please bear in mind that this is not science fiction but a recent, serious attempt by scientists acting as such to foresee what might happen in the next few decades.
The report describes a “golden age” and a “new renaissance,” but will such a future really be that, or an unprecedented kind of horror—or something in between, with elements of both? As my opening examples show, powerful technologies can be used for powerful benefits or great harm. In my novel Argonaut, a convergence of powerful computing with nanotechnology and another technology called telepresence enables very small numbers of individuals to gather and use vast amounts of information quickly. This is fine if you’re the one trying to learn a lot, but not nearly so good if someone else is using the same methods to gather information for use against you. Can we reap the benefits of the coming convergences without the dangers? If so, how? How can we learn to make the most intelligent choices?
Radical change will soon be upon us and we shall have to make crucial, difficult decisions. What I shall try to do in this book is:
(1) Describe some of the convergences that have led to our present world, tracing the development of some major past threads, including the stories of the people who made them happen, to see how they evolved from their beginnings into pervasive parts of the world we now live in.
(2) Describe some of the major lines of research that seem most likely to shape our (relatively near) future, beginning with the facts to date and going on to educated speculation about where current trends might lead.
(3) Examine how some of these current trends may interact to produce radically new abilities, and consider both the benefits and the dangers these might lead to.
(4) Consider how we might scout out our future options and steer the ship wisely. In a sense, this last is perhaps the most important. Making those synergies happen will require breaking down barriers between scientific and technological fields, by such means as interdisciplinary education aimed at making scientists and engineers comfortable working across disciplinary boundaries and collaborating with colleagues in other fields. It will also pose political and economic challenges, such as how to get the benefits to the people who need them, how to minimize economic disruption, and how to prevent the abuse of new abilities for antisocial purposes.
Since those changes will affect everybody, it is important for as many of us as possible to have some understanding of what may be coming. So here we go; and we might as well begin with the story of computing, since that has already had such a profound impact on so many areas of life and will continue to converge with and influence just about everything else.
From The Coming Convergence: The Surprising Ways Diverse Technologies Interact to Shape Our World and Change the Future (Prometheus Books, 2008). Reprinted by permission of the publisher.