ARTICLE: Scientific American



WILL ROBOTS INHERIT THE EARTH?
Marvin Minsky


Yes, as we engineer replacement bodies and brains using nanotechnology. We will then live longer, possess greater wisdom and enjoy capabilities as yet unimagined.

MARVIN MINSKY, a pioneer in artificial intelligence and robotics, began his distinguished career studying mathematics, physics, biology and psychology at Harvard and Princeton universities. In 1951 he designed and built, with a colleague, the first neural network learning machine. That same decade he invented the now widely used confocal scanning microscope. Then, after moving down the river from Harvard to the Massachusetts Institute of Technology, he co-founded the Artificial Intelligence Laboratory and now also does research at the Media Laboratory. A laureate of the 1990 Japan Prize, he is Toshiba Professor of Media Arts and Sciences at M.I.T. Minsky has written numerous articles and books, the most recent being The Society of Mind, published in 1987, and The Turing Option, a science-fiction novel written with Harry Harrison in 1992.

Early to bed and early to rise, Makes a man healthy, wealthy, and wise. - Benjamin Franklin

Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives and improve our minds, we will need to change our bodies and brains. To that end, we first must consider how traditional Darwinian evolution brought us to where we are. Then we must imagine ways in which novel replacements for worn body parts might solve our problems of failing health. Next we must invent strategies to augment our brains and gain greater wisdom. Eventually, using nanotechnology, we will entirely replace our brains. Once delivered from the limitations of biology, we will decide the length of our lives - with the option of immortality - and choose among other, unimagined capabilities as well.

In such a future, attaining wealth will be easy; the trouble will be in controlling it. Obviously, such changes are difficult to envision, and many thinkers still argue that these advances are impossible, particularly in the domain of artificial intelligence. But the sciences needed to enact this transition are already in the making, and it is time to consider what this new world will be like.

COG, under construction at the Massachusetts Institute of Technology, will have mechanical eyes, ears and arms wired to a network of microprocessors that act as its brain. Cog's creators hope that by interacting with its environment the system will learn to recognize faces, track objects and otherwise respond to a host of visual and auditory stimuli, as would an infant. If the project succeeds, Cog will be the most sophisticated robot assembled to date.

Such a future cannot be realized through biology. In recent times we have learned much about health and how to maintain it. We have devised thousands of specific treatments for specific diseases and disabilities. Yet we do not seem to have increased the maximum length of our life span. Benjamin Franklin lived for 84 years, and except in popular legends and myths no one has ever lived twice that long. According to the estimates of Roy L. Walford, professor of pathology at the University of California at Los Angeles School of Medicine, the average human lifetime was about 22 years in ancient Rome, was about 50 in the developed countries in 1900, and today stands at about 75 in the U.S. Despite this increase, each of those curves seems to terminate sharply near 115 years. Centuries of improvements in health care have had no effect on that maximum.

Why are our life spans so limited? The answer is simple: natural selection favors the genes of those with the most descendants. Those numbers tend to grow exponentially with the number of generations, and so natural selection prefers the genes of those who reproduce at earlier ages. Evolution does not usually preserve genes that lengthen lives beyond that amount adults need to care for their young. Indeed, it may even favor offspring who do not have to compete with living parents. Such competition could promote the accretion of genes that cause death. For example, after spawning, the Mediterranean octopus promptly stops eating and starves itself. If a certain gland is removed, the octopus continues to eat and lives twice as long. Many other animals are programmed to die soon after they cease reproducing. Exceptions to this phenomenon include animals such as ourselves and elephants, whose progeny learn a great deal from the social transmission of accumulated knowledge.

We humans appear to be the longest-lived warm-blooded animals. What selective pressure might have led to our present longevity, which is almost twice that of our other primate relatives? The answer is related to wisdom. Among all mammals our infants are the most poorly equipped to survive by themselves. Perhaps we need not only parents but grandparents, too, to care for us and to pass on precious survival tips. Even with such advice there are many causes of mortality to which we might succumb. Some deaths result from infections. Our immune systems have evolved versatile ways to cope with most such diseases. Unhappily, those very same immune systems often injure us by treating various parts of ourselves as though they, too, were infectious invaders. This autoimmune blindness leads to diseases such as diabetes, multiple sclerosis, rheumatoid arthritis and many others.

We are also subject to injuries that our bodies cannot repair: accidents, dietary imbalances, chemical poisons, heat, radiation and sundry other influences can deform or chemically alter the molecules of our cells so that they are unable to function. Some of these errors get corrected by replacing defective molecules. Nevertheless, when the replacement rate is too low, errors build up. For example, when the proteins of the eyes' lenses lose their elasticity, we lose our ability to focus and need bifocal spectacles - a Franklin invention.

The major natural causes of death stem from the effects of inherited genes. These genes include those that seem to be largely responsible for heart disease and cancer, the two biggest causes of mortality, as well as countless other disorders, such as cystic fibrosis and sickle cell anemia. New technologies should be able to prevent some of these disorders by replacing those genes. Most likely, senescence is inevitable in all biological organisms. To be sure, certain species (including some varieties of fish, tortoises and lobsters) do not appear to show any systematic increase in mortality as they age. These animals seem to die mainly from external causes, such as predators or starvation. All the same, we have no records of animals that have lived for as long as 200 years - although this lack does not prove that none exist. Walford and many others believe a carefully designed diet, one seriously restricted in calories, can significantly increase a human's life span but cannot ultimately prevent death.

By learning more about our genes, we should be able to correct or at least postpone many conditions that still plague our later years. Yet even if we found a cure for each specific disease, we would still have to face the general problem of "wearing out." The normal function of every cell involves thousands of chemical processes, each of which sometimes makes random mistakes. Our bodies use many kinds of correction techniques, each triggered by a specific type of mistake. But those random errors happen in so many different ways that no low-level scheme can correct them all.

The problem is that our genetic systems were not designed for very long-term maintenance. The relation between genes and cells is exceedingly indirect; there are no blueprints or maps to guide our genes as they build or rebuild the body. To repair defects on larger scales, a body would need some kind of catalogue that specified which types of cells should be located where. In computer programs it is easy to install such redundancy. Many computers maintain unused copies of their most critical system programs and routinely check their integrity. No animals have evolved similar schemes, presumably because such algorithms cannot develop through natural selection. The trouble is that error correction would stop mutation, which would ultimately slow the rate of evolution of an animal's descendants so much that they would be unable to adapt to changes in their environments.

Could we live for several centuries simply by changing some number of genes? After all, we now differ from our relatives, the gorillas and chimpanzees, by only a few thousand genes - and yet we live almost twice as long. If we assume that only a small fraction of those new genes caused that increase in life span, then perhaps no more than 100 or so of those genes were involved. Even if this turned out to be true, though, it would not guarantee that we could gain another century by changing another 100 genes. We might need to change just a few of them - or we might have to change a good many more.

HUMAN LIFE SPAN has increased on average over time as economic conditions have improved. In ancient Rome (brown) the average lifetime was 22 years, in developed countries around 1900 (blue) it was 50, and now in the U.S. (dark blue) it stands at 75. Still, these curves share the same maximum. Even if we found cures for every plague (red), our bodies would probably wear out after roughly 115 years.

Making new genes and installing them are slowly becoming feasible. But we are already exploiting another approach to combat biological wear and tear: replacing each organ that threatens to fail with a biological or artificial substitute. Some replacements are already routine. Others are on the horizon. Hearts are merely clever pumps. Muscles and bones are motors and beams. Digestive systems are chemical reactors. Eventually, we will find ways to transplant or replace all these parts.

But when it comes to the brain, a transplant will not work. You cannot simply exchange your brain for another and remain the same person. You would lose the knowledge and the processes that constitute your identity. Nevertheless, we might be able to replace certain worn-out parts of brains by transplanting tissue-cultured fetal cells. This procedure would not restore lost knowledge, but that might not matter as much as it seems. We probably store each fragment of knowledge in several different places, in different forms. New parts of the brain could be retrained and reintegrated with the rest, and some of that might even happen spontaneously.

Even before our bodies wear out, I suspect that we often run into limitations in our brains' abilities. As a species, we seem to have reached a plateau in our intellectual development. There is no sign that we are getting smarter. Was Albert Einstein a better scientist than Isaac Newton or Archimedes? Has any playwright in recent years topped William Shakespeare or Euripides? We have learned a lot in 2,000 years, yet much ancient wisdom still seems sound, which makes me think we have not been making much progress. We still do not know how to resolve conflicts between individual goals and global interests. We are so bad at making important decisions that, whenever we can, we leave to chance what we are unsure about.

Why is our wisdom so limited? Is it because we do not have the time to learn very much or that we lack enough capacity? Is it because, according to popular accounts, we use only a fraction of our brains? Could better education help? Of course, but only to a point. Even our best prodigies learn no more than twice as quickly as the rest. Everything takes us too long to learn because our brains are so terribly slow. It would certainly help to have more time, but longevity is not enough. The brain, like other finite things, must reach some limits to what it can learn. We do not know what those limits are; perhaps our brains could keep learning for several more centuries. But at some point, we will need to increase their capacity.

The more we learn about our brains, the more ways we will find to improve them. Each brain has hundreds of specialized regions. We know only a little about what each one does or how it does it, but as soon as we find out how any one part works, researchers will try to devise ways to extend that part's capacity. They will also conceive of entirely new abilities that biology has never provided. As these inventions grow ever more prevalent, we will try to connect them to our brains, perhaps through millions of microscopic electrodes inserted into the great nerve bundle called the corpus callosum, the largest databus in the brain. With further advances, no part of the brain will be out-of-bounds for attaching new accessories. In the end, we will find ways to replace every part of the body and brain and thus repair all the defects and injuries that make our lives so brief.

Needless to say, in doing so we will be making ourselves into machines. Does this mean that machines will replace us? I do not feel that it makes much sense to think in terms of "us" and "them." I much prefer the attitude of Hans P. Moravec of Carnegie Mellon University, who suggests that we think of these future intelligent machines as our own "mind-children."

In the past we have tended to see ourselves as a final product of evolution, but our evolution has not ceased. Indeed, we are now evolving more rapidly, though not in the familiar, slow Darwinian way. It is time that we started to think about our new emerging identities. We can begin to design systems based on inventive kinds of "unnatural selection" that can advance explicit plans and goals and can also exploit the inheritance of acquired characteristics. It took a century for evolutionists to train themselves to avoid such ideas - biologists call them "teleological" and "Lamarckian" - but now we may have to change those rules.

ANTS from the species Lasius niger are shown here swarming. An L. niger queen ant is known to have lived for 27 years. No sexually reproducing animal on record has lived more than 200 years, but some may exist. Although certain species seem to die only from external causes, such as predation or starvation, senescence is probably inevitable in all biological organisms.

Almost all the knowledge we amass is embodied in various networks inside our brains. These networks consist of huge numbers of tiny nerve cells and smaller structures, called synapses, that control how signals jump from one nerve cell to another. To make a replacement of a human brain, we would need to know something about how each of the synapses relates to the two cells it joins. We would also have to know how each of those structures responds to the various electric fields, hormones, neurotransmitters, nutrients and other chemicals that are active in its neighborhood. A human brain contains trillions of synapses, so this is no small requirement.

Fortunately, we would not need to know every minute detail. If details were important, our brains would not work in the first place. In biological organisms, each system has generally evolved to be insensitive to most of what goes on in the smaller subsystems on which it depends. Therefore, to copy a functional brain it should suffice to replicate just enough of the function of each part to produce its important effects on other parts.

Suppose we wanted to copy a machine, such as a brain, that contained a trillion components. Today we could not do such a thing (even with the necessary knowledge) if we had to build each component separately. But if we had a million construction machines that could each build 1,000 parts per second, our task would take mere minutes. In the decades to come, new fabrication machines will make this possible. Most present-day manufacturing is based on shaping bulk materials. In contrast, nanotechnologists aim to build materials and machinery by placing each atom and molecule precisely where they want it.
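The arithmetic behind that estimate is easy to verify. The sketch below is purely illustrative: it multiplies out the figures given in the text (a trillion components, a million construction machines, 1,000 parts per second each).

```python
# Back-of-the-envelope check of the copying estimate in the text:
# 10^12 components shared among 10^6 machines at 1,000 parts/second each.
total_parts = 10**12
machines = 10**6
parts_per_second = 1_000

seconds = total_parts / (machines * parts_per_second)
minutes = seconds / 60
print(f"{seconds:.0f} seconds, or about {minutes:.0f} minutes")
```

At that rate the whole copy takes about a quarter of an hour: "mere minutes," as claimed.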

By such methods we could make truly identical parts and thus escape from the randomness that hinders conventionally made machines. Today, for example, when we try to etch very small circuits, the sizes of the wires vary so much that we cannot predict their electrical properties. If we could locate each atom exactly, however, the behavior of those wires would be indistinguishable. This capability would lead to new kinds of materials that current techniques could never make; we could endow them with enormous strength or novel quantum properties. These products in turn could lead to computers as small as synapses, having unparalleled speed and efficiency.

Once we can use these techniques to construct a general-purpose assembly machine that operates on atomic scales, further progress should be swift. If it took one week for such a machine to make a copy of itself, we could have a billion copies in less than a year. These devices would transform our world. For example, we could program them to fabricate efficient solar-energy collecting devices and attach these to nearby surfaces. Hence, the devices could power themselves. We would be able to grow fields of microfactories in much the same way that we now grow trees. In such a future we will have little trouble attaining wealth; our trouble will be in learning how to control it. In particular, we must always take care to maintain control over those things (such as ourselves) that might be able to reproduce themselves.
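The "billion copies in less than a year" follows directly from weekly doubling. The sketch below simply counts the doublings; it illustrates the arithmetic, not any actual assembler.

```python
# A self-copying machine that duplicates itself once a week doubles
# the population every week: 2**n machines after n weeks.
count = 1
weeks = 0
while count < 10**9:
    count *= 2
    weeks += 1

print(f"{count:,} machines after {weeks} weeks")
# → 1,073,741,824 machines after 30 weeks
```

Thirty weeks of doubling already exceeds a billion machines, comfortably under a year.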

If we want to consider augmenting our brains, we might first ask how much a person knows today. Thomas K. Landauer of Bellcore reviewed many experiments in which people were asked to read text, look at pictures and listen to words, sentences, short passages of music and nonsense syllables. They were later tested to see how much they remembered. In none of these situations were people able to learn, and later remember for any extended period, more than about two bits per second. If one could maintain that rate for 12 hours every day for 100 years, the total would be about three billion bits - less than what we can currently store on a regular five-inch compact disc. In a decade or so that amount should fit on a single computer chip.
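Landauer's figure is easy to multiply out. This sketch just repeats the arithmetic stated in the text: two bits per second, twelve hours a day, for a century.

```python
# Lifetime memory estimate at Landauer's measured learning rate.
bits_per_second = 2
hours_per_day = 12
days_per_year = 365
years = 100

learning_seconds = hours_per_day * 3600 * days_per_year * years
total_bits = bits_per_second * learning_seconds
print(f"{total_bits / 1e9:.1f} billion bits")  # → 3.2 billion bits
```

A standard compact disc holds about 650 megabytes, roughly 5.2 billion bits, so a lifetime of such memories would indeed fit with room to spare.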

Although these experiments do not much resemble what we do in real life, we do not have any hard evidence that people can learn more quickly. Despite common reports about people with "photographic memories," no one seems to have mastered, word for word, the contents of as few as 100 books or of a single major encyclopedia. The complete works of Shakespeare come to about 130 million bits. Landauer's limit implies that a person would need at least four years to memorize them. We have no well-founded estimates of how much information we require to perform skills such as painting or skiing, but I do not see any reason why these activities should not be similarly limited.
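The four-year figure follows from the same two-bits-per-second rate, assuming twelve hours of study a day; the quick check below uses only the numbers given in the text.

```python
# Time to memorize the complete works of Shakespeare at Landauer's rate.
shakespeare_bits = 130_000_000
bits_per_second = 2
study_seconds_per_day = 12 * 3600  # twelve hours of study daily

days = shakespeare_bits / (bits_per_second * study_seconds_per_day)
years = days / 365
print(f"about {years:.1f} years")  # → about 4.1 years
```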

The brain is believed to contain on the order of 100 trillion synapses, which should leave plenty of room for those few billion bits of reproducible memories. Someday, using nanotechnology, it should be feasible to build that much storage space into a package as small as a pea.
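Comparing the article's two numbers makes the point concrete; the ratio below is only an order-of-magnitude illustration.

```python
# How much synaptic "hardware" per bit of reproducible lifetime memory.
synapses = 100 * 10**12      # ~100 trillion synapses in the brain
lifetime_bits = 3 * 10**9    # lifetime memory estimate from Landauer's rate

headroom = synapses / lifetime_bits
print(f"roughly {headroom:,.0f} synapses per stored bit")
```

On these figures the brain has tens of thousands of synapses for every bit of reproducible memory, which is what "plenty of room" means here.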

ROBOTIC TREE-HAND, designed (but not yet built) by the author and, independently, by Hans P. Moravec of Carnegie Mellon University, is composed of many similar units of different sizes. Because of this uniformity, such robots should be easy to build in the future. At each smaller scale, a tree robot has twice as many units - a pattern not unlike that of the human frame.

Once we know what we need to do, our nanotechnologies should enable us to construct replacement bodies and brains that will not be constrained to work at the crawling pace of "real time." The events in our computer chips already happen millions of times faster than those in brain cells. Hence, we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years and each hour as long as an entire human lifetime.
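The time comparisons at the end of this paragraph are just ratios of a million-fold speedup against the seconds in a year; the sketch below checks both of them.

```python
# Subjective time for a mind running a million times faster than ours.
speedup = 10**6
seconds_per_year = 365 * 24 * 3600  # 31,536,000 seconds

half_minute = 30 * speedup / seconds_per_year   # subjective years in 30 s
one_hour = 3600 * speedup / seconds_per_year    # subjective years in 1 h
print(f"30 s feels like {half_minute:.2f} years; 1 h like {one_hour:.0f} years")
```

Half a minute corresponds to about 0.95 subjective years, and one hour to about 114 years, roughly a human lifetime, as the text says.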

But could such beings really exist? Many scholars from a variety of disciplines firmly maintain that machines will never have thoughts like ours, because no matter how we build them, they will always lack some vital ingredient. These thinkers refer to this missing essence by various names: sentience, consciousness, spirit or soul. Philosophers write entire books to prove that because of this deficiency, machines can never feel or understand the kinds of things that people do. Yet every proof in each of those books is flawed by assuming, in one way or another, what it purports to prove - the existence of some magical spark that has no detectable properties. I have no patience with such arguments. We should not be searching for any single missing part. Human thought has many ingredients, and every machine that we have ever built is missing dozens or hundreds of them! Compare what computers do today with what we call "thinking." Clearly, human thinking is far more flexible, resourceful and adaptable. When anything goes even slightly wrong within a present-day computer program, the machine will either come to a halt or generate worthless results. When a person thinks, things are constantly going wrong as well, yet such troubles rarely thwart us. Instead we simply try something else. We look at our problem differently and switch to another strategy. What empowers us to do this?

On my desk lies a textbook about the brain. Its index has approximately 6,000 lines that refer to hundreds of specialized structures. If you happen to injure some of these components, you could lose your ability to remember the names of animals. Another injury might leave you unable to make long-range plans. Another impairment could render you prone to suddenly utter dirty words because of damage to the machinery that normally censors that type of expression. We know from thousands of similar facts that the brain contains diverse machinery. Thus, your knowledge is represented in various forms that are stored in different regions of the brain, to be used by different processes. What are those representations like? We do not yet know.

But in the field of artificial intelligence, researchers have found several useful means to represent knowledge, each better suited to some purposes than to others. The most popular ones use collections of "if-then" rules. Other systems use structures called frames, which resemble forms that are to be filled out. Yet other programs use weblike networks or schemes that resemble trees or sequences of planlike scripts. Some systems store knowledge in language- like sentences or in expressions of mathematical logic. A programmer starts any new job by trying to decide which representation will best accomplish the task at hand. Typically a computer program uses a single representation, which, should it fail, can cause the system to break down. This shortcoming justifies the common complaint that computers do not really "understand" what they are doing.

What does it mean to understand? Many philosophers have declared that understanding (or meaning or consciousness) must be a basic, elemental ability that only a living mind can possess. To me, this claim appears to be a symptom of "physics envy" - that is, they are jealous of how well physical science has explained so much in terms of so few principles. Physicists have done very well by rejecting all explanations that seem too complicated and then searching instead for simple ones. Still, this method does not work when we are addressing the full complexity of the brain. Here is an abridgment of what I said about the ability to understand in my book The Society of Mind:

If you understand something in only one way, then you do not really understand it at all. This is because if something goes wrong you get stuck with a thought that just sits in your mind with nowhere to go. The secret of what anything means to us depends on how we have connected it to all the other things we know. This is why, when someone learns "by rote," we say that they do not really understand. However, if you have several different representations, when one approach fails you can try another. Of course, making too many indiscriminate connections will turn a mind to mush. But well-connected representations let you turn ideas around in your mind, to envision things from many perspectives, until you find one that works for you. And that is what we mean by thinking!

I think flexibility explains why, at the moment, thinking is easy for us and hard for computers. In The Society of Mind, I suggest that the brain rarely uses a single representation. Instead it always runs several scenarios in parallel so that multiple viewpoints are always available. Furthermore, each system is supervised by other, higher-level ones that keep track of their performance and reformulate problems when necessary. Because each part and process in the brain may have deficiencies, we should expect to find other parts that try to detect and correct such bugs.

In order to think effectively, you need multiple processes to help you describe, predict, explain, abstract and plan what your mind should do next. The reason we can think so well is not because we house mysterious sparklike talents and gifts but because we employ societies of agencies that work in concert to keep us from getting stuck. When we discover how these societies work, we can put them inside computers, too. Then if one procedure in a program gets stuck, another might suggest an alternative approach. If you saw a machine do things like that, you would certainly think it was conscious.

This article bears on our rights to have children, to change our genes and to die if we so wish. No popular ethical system yet, be it humanist or religion-based, has shown itself able to face the challenges that already confront us. How many people should occupy the earth? What sorts of people should they be? How should we share the available space? Clearly, we must change our ideas about making additional children. Individuals now are conceived by chance. Someday, instead, they could be "composed" in accord with considered desires and designs. Furthermore, when we build new brains, these need not start out the way ours do, with so little knowledge about the world. What kinds of things should our "mind-children" know? How many of them should we produce, and who should decide their attributes?

MICROMOTOR is shown here below a pinpoint. As ways are found to make even smaller devices, nanotechnologists will be able to build entire microfactories that, powered by light, can make copies of themselves in mere minutes.

Traditional systems of ethical thought are focused mainly on individuals, as though they were the only entities of value. Obviously, we must also consider the rights and the roles of larger-scale beings - such as the superpersons we term cultures and the great, growing systems called sciences - that help us understand the world. How many such entities do we want? Which are the kinds that we most need? We ought to be wary of ones that get locked into forms that resist all further growth. Some future options have never been seen: imagine a scheme that could review both your mentality and mine and then compile a new, merged mind based on that shared experience.

Whatever the unknown future may bring, we are already changing the rules that made us. Most of us will fear change, but others will surely want to escape from our present limitations. When I decided to write this article, I tried these ideas out on several groups. I was amazed to find that at least three quarters of the individuals with whom I spoke seemed to feel our life spans were already too long. "Why would anyone want to live for 500 years? Wouldn't it be boring? What if you outlived all your friends? What would you do with all that time?" they asked. It seemed as though they secretly feared that they did not deserve to live so long. I find it rather worrisome that so many people are resigned to die. Might not such people, who feel that they do not have much to lose, be dangerous?

My scientist friends showed few such concerns. "There are countless things that I want to find out and so many problems I want to solve that I could use many centuries," they said. Certainly immortality would seem unattractive if it meant endless infirmity, debility and dependency on others, but we are assuming a state of perfect health. Some people expressed a sounder concern - that the old ones must die because young ones are needed to weed out their worn-out ideas. Yet if it is true, as I fear, that we are approaching our intellectual limits, then that response is not a good answer. We would still be cut off from the larger ideas in those oceans of wisdom beyond our grasp.

Will robots inherit the earth? Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called evolution. Our job is to see that all this work shall not end up in meaningless waste.


FURTHER READING

MAXIMUM LIFE SPAN. Roy L. Walford. W. W. Norton and Company, 1983.

THE SOCIETY OF MIND. Marvin Minsky. Simon and Schuster, 1987.

MIND CHILDREN: THE FUTURE OF ROBOT AND HUMAN INTELLIGENCE. Hans Moravec. Harvard University Press, 1988.

NANOSYSTEMS. K. Eric Drexler. John Wiley & Sons, 1992.

THE TURING OPTION. Marvin Minsky and Harry Harrison. Warner Books, 1992.


