This approach of relying on examples — on massive amounts of data — rather than on cleverly composed rules, is a pervasive theme in modern A.I. work. It has been applied to closely related problems like speech recognition and to very different problems like robot navigation. IBM’s Watson system also relies on massive amounts of data, spread over hundreds of computers, as well as a sophisticated mechanism for combining evidence from multiple sources.
The current decade is a very exciting time for A.I. development because the economics of computer hardware has only recently made it possible to address many problems that would have been prohibitively expensive in the past. In addition, the development of wireless and cellular data networks means that these exciting new applications are no longer locked up in research labs; they are more likely to be available to everyone as services on the web.
A recent cover story in the Atlantic magazine claims that artificial intelligence and machines "will never beat the human mind". While it's easy to make trivial objections to this argument (e.g., "never" is a long time, and "the brain is itself a machine constructed of squishy, organic parts"), the claim has some validity.
We humans have predicted artificial intelligence and friendly talking robots for decades now, and they never arrived.
But I'm not pessimistic. Certainly we'll need another Einstein or Niels Bohr to make the next theoretical leap, and it's difficult to predict the timing of such creative insight. It could come next week, or in 100 years. For now, we're stuck in our current paradigm.
To get unstuck, we must open our minds to new approaches to the problem. Don't just write a computer program that "acts like a brain". Conduct a thought experiment, as Einstein might do:
And so on... Yes, the brain is a machine. We don't understand it yet; that's why it appears magical and unbeatable to us. For now.
Aristotle famously declared that the brain's purpose is to cool the blood. (It was left to Herophilus to point out that the brain actually generates a lot of heat, and is the true source of passions and the intellect.)
But it turns out the Peripatetic One was on to something after all. The next generation of computer chips (in order to satisfy Moore's Law) will resemble the brain's 3D design. As such, internal heat dissipation will be a big problem:
[Researchers seek] to understand how the latest chip cooling techniques can support a 3D chip architecture [with] an interconnect density from 100 to 10,000 connections per millimeter square. [They] believe ... the use of hair-thin, liquid cooling microchannels measuring only 50 microns in diameter ... are the missing links to achieving high-performance computing with future 3D chip stacks.
I'm interested in how our genes construct the brain to guide our behavior. What better way to study this than by simulating "behavioral genetics" in robots?
To construct an intelligent robot, you need to provide it with the following:
The robot's senses represent what's happening both outside and inside itself. If it sees food, that's external. If it moves its muscles or feels full after eating, that's internal. The robot's prime directive tells it "what's important", and we'll assume "stomach fullness" is an important outcome of its behavior.
First, the robot must recognize patterns in its senses. Stacked neural networks (and eventually memristors) can be used for this purpose. The robot's senses train the network, and, once trained, the network recognizes patterns, even patterns not identical to those in the training set.
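To make the generalization point concrete, here is a minimal sketch of the idea, assuming a single-layer perceptron in place of the stacked networks and memristors the text envisions, and two made-up 8-pixel "sense patterns". A one-pixel-corrupted pattern, never seen during training, is still recognized:

```python
import numpy as np

# Toy training set: 8-pixel sense patterns labeled 1 (food) or 0 (not food).
# These patterns are invented for illustration only.
food     = np.array([1, 1, 0, 0, 1, 1, 0, 0], dtype=float)
not_food = np.array([0, 0, 1, 1, 0, 0, 1, 1], dtype=float)
X = np.stack([food, not_food])
y = np.array([1.0, 0.0])

# Single-layer perceptron trained with the classic perceptron learning rule.
w = np.zeros(8)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

# A noisy variant of "food" -- not in the training set -- is still recognized.
noisy_food = food.copy()
noisy_food[2] = 1.0  # flip one pixel
print(1.0 if noisy_food @ w + b > 0 else 0.0)  # 1.0 (classified as food)
```

A real robot would need deeper, stacked layers to handle raw camera input, but the principle — train on examples, then recognize near-matches — is the same.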
Second, the robot must predict future patterns. This is not as hard as it seems. When trained in the previous step, the stacked neural network should be presented not only with the present state of the senses, but also with several past states, simultaneously. In other words, the neural network makes no distinction between past and present, as it associates inputs from its external senses and its internal senses (e.g. muscle tension and how full its stomach is).
We can simulate all this virtually in a computer program (no need to build a physical robot):
How can we make the robot learn? By training its neural network brain using the sensory inputs. But not only current sensory inputs: we must combine the present inputs and past inputs into a single neural network. If we assume the robot can remember the inputs from each of the last 8 clock ticks, the total number of inputs presented to the network will be 72 x 8 = 576 inputs. Past and present are thus combined into a single network. At the next clock tick, the oldest 72 "remembered inputs" are dropped, and the current 72 are added to the new training set.
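The sliding window described above — 72 inputs per tick, 8 ticks remembered, oldest frame dropped each tick — is easy to sketch. This is just the bookkeeping, not the learning itself; the counts come from the text:

```python
from collections import deque

INPUTS_PER_TICK = 72   # sensory inputs per clock tick, per the text
WINDOW_TICKS = 8       # the robot remembers the last 8 clock ticks

# A fixed-length window: appending a new frame silently drops the oldest one.
window = deque(maxlen=WINDOW_TICKS)

def tick(sensory_frame):
    """Add this tick's 72 inputs; return the flattened training vector."""
    assert len(sensory_frame) == INPUTS_PER_TICK
    window.append(list(sensory_frame))
    return [x for frame in window for x in frame]

# Warm up with zero frames, then feed one "real" frame.
for _ in range(WINDOW_TICKS):
    tick([0.0] * INPUTS_PER_TICK)
vec = tick([1.0] * INPUTS_PER_TICK)
print(len(vec))  # 576 = 72 x 8
```

The flattened 576-element vector is what would be presented to the network at each tick, so past and present really are a single undifferentiated input.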
The prime directive is hard-wired into the robot. It simply states that the robot will be "happier" if its stomach is full. We should assume a certain rate of digestion, whereby the stomach becomes less full over time unless it consumes more food. Obviously, if the robot can be trained to move its mouth muscles back and forth to catch food from the sky, it will achieve greater "happiness". If it doesn't learn this quickly enough, it will starve.
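The digestion model above can be stated in a few lines. The rate constants here are arbitrary placeholders, chosen only to show the shape of the rule (fullness decays every tick, eating restores it, and the value is clamped to a 0–1 range):

```python
DIGESTION_RATE = 0.05   # fullness lost per clock tick (assumed value)
FOOD_VALUE = 0.3        # fullness gained per piece of food caught (assumed)

def update_fullness(fullness, caught_food):
    """Advance one clock tick: digest a little, and eat if food was caught."""
    fullness -= DIGESTION_RATE
    if caught_food:
        fullness += FOOD_VALUE
    return min(1.0, max(0.0, fullness))  # clamp to [0, 1]

f = 1.0
for _ in range(10):            # ten ticks with no food: fullness drains
    f = update_fullness(f, False)
print(round(f, 2))  # 0.5
```

Feeding this fullness value back in as a heavily weighted input is what lets the prime directive shape the network's training.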
At first, the robot moves its mouth muscles randomly, back and forth. But eventually, given the high weight (importance) assigned to the prime directive input (i.e. stomach fullness), the robot learns to predict how to move its mouth left or right (one space per clock tick) to anticipate where the food will land a few ticks in the future.
Most interestingly, the robot uses prediction (or anticipation) to guide its own muscles. It predicts where its mouth will be in the future, and this very act of prediction "causes" its muscles to contract and its mouth to move to catch the food (or play ping pong).
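Once trained, the behavior the network converges on would reduce to something like the following hypothetical policy — a sketch only, standing in for what the network would actually learn: each tick, move one space toward the predicted landing column, and succeed only if the prediction leaves enough ticks to get there:

```python
def catch(mouth, food_col, ticks_to_land):
    """Move the mouth one space per tick toward the predicted landing column.

    Returns True if the mouth reaches the food's column before it lands.
    (Columns and tick counts are illustrative, not from the original text.)
    """
    for _ in range(ticks_to_land):
        if mouth < food_col:
            mouth += 1
        elif mouth > food_col:
            mouth -= 1
    return mouth == food_col

print(catch(mouth=2, food_col=6, ticks_to_land=5))  # True: 4 moves, 5 ticks
print(catch(mouth=0, food_col=9, ticks_to_land=4))  # False: too far away
```

This is the "prediction causes motion" loop in miniature: the predicted landing spot, not the food's current position, is what drives each muscle movement.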
This simulated robot, although simplistic, demonstrates a primitive form of behavioral genetics. Humans are more complex, and our hard-wired "prime directives" are subtle (and no two people have exactly the same prime directives), but the principle should be the same.
When trying to devise artificially intelligent robots, scientists spend too much time constructing physical parts (robotic hands, servos, battery packs), and too little time developing the intelligent programming and algorithms. This raises the question: Why build anything physical at all? Why not develop and simulate the robot body virtually, on a computer?
Lego, with its line of MindStorms robot kits, has done just that. Now you can experiment with new robot designs in silico, without the need for a soldering iron, or assembly of any physical pieces at all. All the components are simulated in a virtual world on a computer. Hopefully, this technology is the wave of the future, and scientists can return to working on the hard problem, which is artificial intelligence!
One downside of the Lego offering is its lack of physics and interaction. There's no gravity or collision detection between objects. These have been addressed in computer games ("physics engines"), and hopefully that technology gets absorbed into the experimental world of intelligent robot design.
If you wanted to build an intelligent robot from scratch, how would you do it? The answer is to give it genes, just like humans have!
Humans have many types of genes, which function at different levels. (Clearly, I'm using a more abstract definition of "gene" here than just a simple piece of DNA). Each level builds on the previous levels, and can be affected by higher levels:
Structure-building genes - The robot's body needs form. These "genes" would take the form of blueprints for mechanical parts, like bones, limbs and muscles.
Effector genes - These genes would monitor the position and stresses upon the body's structure, and develop a "model" of how the parts fit together. The effector genes are self-tuning, based on the movement and position of the body's structures. These genes also have to "advertise" their capabilities in some way, so that the "higher" genes can discover them. For example, the "limb effectors" must advertise any stresses they are feeling.
Sense genes - These genes are specialists in detecting changes in the environment, and may include "camera genes" and "touch sensor genes". They are carried upon the robot body's structure, and may have a close relationship with the effector genes. For example, the "camera genes" may work in conjunction with the muscles, to form a movable eye. The sense genes advertise their stream of outputs (raw images, touch, smells, etc), and make them available to the higher genes.
Map genes - These genes take the inputs of the sense genes, and try to make some sense of them, using various approaches. One approach is to identify features in the inputs, by determining which inputs occur at the same time. For example, if the robot's right hand rubs its left arm in a straight line, the sense gene outputs on the skin will be triggered sequentially. That sequential timing information can help the genes build a "map" of how the senses are arranged.
Pattern training genes - These genes are most active early in the robot's existence, to train the robot's cognitive powers. For example, these genes may direct the robot to focus its eyes on objects within 1-3 feet which have a roughly circular shape and small contrasting features. Once this pattern is located, the pattern training genes develop the capability of recognizing that circular shape in more detail. This is analogous to how a baby learns to recognize his mother in the first month of life. Humans can recognize faces quickly in a crowd, but can also quickly lose that ability if a specific part of the brain is damaged.
Motion genes - These genes set the body in systematic motion. The robot flails its limbs around somewhat randomly at first. Using this motion, the map genes can begin to understand how the senses fit together, and how the sense and effector genes interact. The motion genes can later be controlled and overridden by the higher genes.
Let's skip a few levels, so now we come to...
Context-establishing genes - Using all the powers of the "lower genes", the robot can recognize its context. Is it home with its parents, or in a social context with strangers? That sets the stage for how it will act differently, depending on the context.
Motivation genes - Once the robot has trained itself to recognize patterns, and has associated motions with changes in its environment, it needs a sense of focus and purpose. The motivation genes exploit the patterns and motions that have been developed, and lead it to prefer certain situations over others. This leads to increased "time-on-task" (and thus greater skill) in certain situations over others.
Social behavior genes - The robot must specialize to fill a specific niche in society. Will it be a follower or a leader? A "social hierarchy" behavior gene may exploit the fact that the robot can detect, for example, the face pattern of another robot whose eyes are not averted after 5 seconds. Depending on the social context, this may signal that the robot is standing in front of another (highly confident) robot (or in front of a mirror). The social behavior genes can be highly variable across the robot population. Some robots may feel stress when in this situation, yet other robots may feel highly motivated upon seeing other robots looking at them.
Consciousness genes - A robot has the illusion of free will when its programming runs according to its design. A robot designed to climb mountains will think it is choosing to climb mountains of its own free will. However, sometimes a robot's programmed desires are in conflict. A desire to climb mountains may be in conflict with a desire to nurture children who cannot climb. Since the robot can only be in one place at a time, another set of programs -- consciousness -- is required to negotiate among competing desires for bodily resources.
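The layered architecture above, with lower genes "advertising" their capabilities so higher genes can discover them, can be sketched as a class hierarchy. The class and attribute names here are my own invention, meant only to show the advertise-and-discover pattern, not a real design:

```python
class Gene:
    """Base level: every gene advertises its capabilities to higher genes."""
    def advertise(self):
        return {}

class EffectorGene(Gene):
    """Monitors position and stress on one structural part (e.g. a limb)."""
    def __init__(self, limb):
        self.limb = limb
        self.stress = 0.0
    def advertise(self):
        return {"limb": self.limb, "stress": self.stress}

class SenseGene(Gene):
    """Detects environmental changes (camera, touch, etc.)."""
    def __init__(self, kind):
        self.kind = kind
        self.output = None
    def advertise(self):
        return {"sense": self.kind, "output": self.output}

class MapGene(Gene):
    """Higher level: discovers lower genes via their advertisements and
    looks for outputs that co-occur, to build a map of the senses."""
    def __init__(self, lower_genes):
        self.lower = lower_genes
    def snapshot(self):
        return [g.advertise() for g in self.lower]

# Wire a tiny robot together: one limb effector, one camera, one map gene.
arm = EffectorGene("left_arm")
eye = SenseGene("camera")
cortex = MapGene([arm, eye])
print(len(cortex.snapshot()))  # 2 advertisements visible to the map level
```

Each higher level (pattern training, motivation, social behavior, consciousness) would be another class consuming the advertisements of the levels below it, never reaching down to their internals directly.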