http://www.sciam.com/specialissues/1198intelligence/1198yam.html


INTRODUCTION

Intelligence Considered

What does it mean to have brainpower?
A search for a definition of intelligence

 Philip Yam


For the past several years, the Sunday newspaper supplement Parade has featured a column called "Ask Marilyn." People are invited to query Marilyn vos Savant, who at age 10 had tested at a mental level of someone about 23 years old; that gave her an intelligence quotient of 228--the highest score ever recorded. IQ tests ask you to complete verbal and visual analogies, to envision paper after it has been folded and cut, and to deduce numerical sequences, among other similar tasks. So it is a bit perplexing when vos Savant fields such queries from the average Joe (whose IQ is 100) as, What's the difference between love and infatuation? Or what is the nature of luck and coincidence? It's not obvious how the capacity to visualize objects and to figure out numerical patterns suits one to answer questions that have eluded some of the best poets and philosophers.

 Clearly, intelligence encompasses more than a score on a test. Just what does it mean to be smart? How much of intelligence can be specified, and how much can we learn about it from neurobiology, genetics, ethology, computer science and other fields?

 The defining measure of intelligence in humans still seems to be the IQ score, even though IQ tests are not given as often as they used to be. The test comes primarily in two forms: the Stanford-Binet Intelligence Scale and the Wechsler Intelligence Scales (both come in adult and children's versions). Generally costing several hundred dollars, they are usually given only by psychologists, although variations of them populate bookstores and the World Wide Web. (Superhigh scores like vos Savant's are no longer possible, because scoring is now based on a statistical population distribution among age peers, rather than simply dividing the mental age by the chronological age and multiplying by 100.) Other standardized tests, such as the Scholastic Assessment Test (SAT) and the Graduate Record Exam (GRE), capture the main aspects of IQ tests.
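 To make that parenthetical concrete, here is a minimal Python sketch of the two scoring schemes; the sample values are hypothetical, and only the mean of 100 and the standard deviation of 15 are the conventional figures:

    def ratio_iq(mental_age, chronological_age):
        # Original ratio IQ: mental age divided by chronological age, times 100.
        # A 10-year-old testing at a mental level of about 23 scores near 228.
        return 100.0 * mental_age / chronological_age

    def deviation_iq(raw_score, peer_mean, peer_sd):
        # Modern deviation IQ: the raw score is placed on a normal curve of
        # same-age peers with mean 100 and standard deviation 15, so extreme
        # ratio-style scores such as 228 can no longer arise.
        return 100.0 + 15.0 * (raw_score - peer_mean) / peer_sd

    print(round(ratio_iq(22.8, 10)))   # 228, the ratio-style calculation
    print(deviation_iq(52, 40, 8))     # 122.5, a hypothetical deviation score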

 Such standardized tests may not assess all the important elements necessary to succeed in school and in life, argues Robert J. Sternberg. In his article "How Intelligent Is Intelligence Testing?", Sternberg notes that traditional tests best assess analytical and verbal skills but fail to measure creativity and practical knowledge, components also critical to problem solving and life success. Moreover, IQ tests do not necessarily predict so well once populations or situations change. Research has found that IQ predicted leadership skills when the tests were given under low-stress conditions, but under high-stress conditions, IQ was negatively correlated with leadership--that is, it predicted the opposite. Anyone who has toiled through college entrance exams will testify that test-taking skill also matters, whether it's knowing when to guess or what questions to skip.

 Sternberg has developed tests to measure the creative and practical sides of the mind. Some schools and businesses use them, and Sternberg has published work showing their predictive value in subsequent tasks, but they have yet to gain much acceptance in the mainstream testing business.

 Still, conventional standardized testing has leveled the field for most people--whatever their shortcomings, the exams provide some standard by which universities can select students. Contrast this with the time before World War II, when family background and attendance at elite prep schools were key requirements for selective colleges.

 That tests cannot capture all of a person's skills in a neat number is an important crux of the article by Howard Gardner. In "A Multiplicity of Intelligences," he espouses his view, developed in part after working with artists and musicians who had suffered strokes, that human intelligence is best thought of as consisting of several components, perhaps as many as nine. Components such as spatial and bodily-kinesthetic, embodied by, say, architect Frank Lloyd Wright and hockey player Wayne Gretzky, elude test measures. Gardner's classifications are not arbitrary; he draws from evolution, brain function, developmental biology and other disciplines.

 Gardner has been quite influential in education circles, where his theory is often required study for teachers-to-be. He feels, however, that some of his ideas are being misinterpreted. He mentions Daniel Goleman's best-seller, Emotional Intelligence, the central concept of which is based on multiple-intelligences theory. Gardner maintains that the theory should not be used to create a value system, as suggested in Goleman's book. People with high emotional quotients aren't necessarily well adjusted and kind to others--think Hannibal Lecter.

In Defense of IQ

 In sharp contrast to Sternberg and Gardner is Linda S. Gottfredson. In "The General Intelligence Factor," she makes the case for the psychologist's g--that is, a single factor for brains. Other elements, such as linguistic ability and mathematical skill, fall below g in the hierarchy of human skills. She argues that IQ scores are important predictors for both academic and life success and draws on biology to bolster her ideas.

 The concept of g has a long and stormy history. First proposed in the early part of this century, it has waxed and waned in popularity. Among the public and the media, the concept took a hard hit in 1981, when Stephen Jay Gould published his now classic The Mismeasure of Man. In it, he argues that early researchers (perhaps unconsciously) biased their measurements of intelligence based on race and points to shortcomings of those trying to substantiate g. For instance, he takes to task Catharine M. Cox's 1926 publication of deduced IQ scores of past historical figures. Gould notes that Cox based her estimates on written biographical accounts of a person's deeds. Unfortunately, the existence of such biographies correlated with the prominence of the family--poorer families were less likely to have documentation of their children's accomplishments. Hence, pioneering British physicist Michael Faraday, from a modest background, gets a surprisingly low childhood IQ score of 105 [see sidebar].

 Psychometricians (psychologists who apply statistics to measure intelligence) have a hostile view of Gould. According to critics, many of whom recently have written new reviews for the rerelease of Mismeasure, Gould does not grasp factor analysis--the statistical technique used to extract g. In a 1995 review published in the journal Intelligence, John B. Carroll of the University of North Carolina at Chapel Hill writes that "it is indeed odd that Gould continues to place the burden of his critique on factor analysis, the nature and purpose of which, I believe, he still fails to understand." This is one of the milder criticisms leveled at Gould by psychometricians.
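 Factor analysis itself is too involved to unpack here, but the intuition behind g can be sketched in a few lines of Python. The simulated scores below are invented for illustration (they come from no study mentioned in this issue); they share one latent ability, and the first principal component of their correlation matrix serves as a rough stand-in for the general factor that a full factor analysis would extract:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate 500 test-takers: one latent "general" ability plus test-specific noise.
    general = rng.normal(size=500)
    scores = np.column_stack([0.7 * general + 0.7 * rng.normal(size=500)
                              for _ in range(5)])

    # The "positive manifold": all five tests correlate positively with one another.
    corr = np.corrcoef(scores, rowvar=False)

    # First principal component of the correlation matrix, a crude stand-in for
    # the general factor extracted by factor analysis.
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    loadings = eigenvectors[:, -1] * np.sqrt(eigenvalues[-1])
    loadings *= np.sign(loadings.sum())   # fix the eigenvector's arbitrary sign

    print(np.round(corr, 2))      # positive correlations throughout
    print(np.round(loadings, 2))  # every test loads substantially on the one factor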

 The stormy debate about g stems from its political, racial and eugenics overtones. Historically, the idea of IQ has been used to justify excluding certain immigrant groups, to maintain status quo policies and even to sterilize some people. Scientists who hold that intelligence is strongly hereditary are often vilified by the general public, sometimes rightly and sometimes wrongly. One researcher whose poor public image is at odds with the judgment of his professional peers is Arthur R. Jensen of the University of California at Berkeley: even those working psychologists who disagree with him consider his investigations to be solid research.

 Modern genetic studies threaten to inflame the racial controversy even more. For example, this past May, Robert Plomin of the Institute of Psychiatry in London and several collaborators reported the discovery of a gene variation that is statistically linked with high intelligence. The variation lies on chromosome 6, within a gene that encodes a receptor for an insulinlike growth factor (specifically, IGF-2), which might affect the brain's metabolic rate.

 In some respects, the discovery is not truly surprising. Obviously, some people are born smarter than others. But note whom Plomin and his colleagues used as subjects: 50 students with high SAT scores. Strictly speaking, the researchers found a gene for performance on the SAT. True, SATs correlate with IQ scores, which in turn reflect g--which not everyone agrees is the sole indicator of smarts. Complicating the analyses is the fact that average SAT scores have been variable; they dipped in the 1980s but are now swinging back up. That could be the result of better schooling, because the SAT measures achievement more than inherent learning capacities (for which IQ tests are designed). But even IQ scores have not been as stable as was once thought. James R. Flynn of the University of Otago in New Zealand discovered that worldwide, IQ scores have been rising by about three points per decade--by a full standard deviation (15 points) in the past 50 years.

 Are we truly smarter than our grandparents? Researchers aren't sure just what has caused the rise. (Flynn himself, who is profiled in the January 1999 issue of Scientific American, doesn't think the rise is real.) Genetics clearly cannot operate on such a short time scale. Ulric Neisser of Cornell University thinks it may have to do with the increasing visual complexity of modern life. Images on television, billboards and computers have enriched the visual experience, making people more adept at handling the spatial aspects of IQ tests. So even though genes might play a substantial role in individual differences in IQ, the environment dictates how those genes are expressed.

 In part to probe the genetic-environment mechanisms, the American Psychological Association (APA) convened a task force of mainstream psychologists. They published a 1995 report, Intelligence: Knowns and Unknowns, which concluded that almost nothing can be said about the reason for the 15-point IQ difference between black and white Americans: "There is certainly no such support for a genetic interpretation. At this time, no one knows what is responsible for the differential."

 The APA report was sparked by the publication of The Bell Curve, by Charles Murray and Richard J. Herrnstein. The report actually does not disagree with the data presented in the book about IQ scores and the notion of g. The interpretation of the data, however, is a different story. To many scholars, The Bell Curve played on psychometric data to advance a politically conservative agenda--arguing, for instance, that g is largely inherited and thus that enrichment programs for disadvantaged youth are doomed to failure. As staff writer Tim Beardsley points out in "For Whom Did the Bell Curve Toll?", several interpretations are possible, and other studies have produced results that run counter to the dreary conclusions offered by Murray and Herrnstein. Although it engendered heated debate, the book ultimately had little impact on government policy.

Function and Form

 Even those who fall on the right end of the bell curve, however, do not necessarily have it easy. In "Uncommon Talents: Gifted Children, Prodigies and Savants," Ellen Winner explores the nature of children who are so mentally advanced that schools often do not know how to educate them. These whiz kids are expected to achieve on their own even though they often are misunderstood, ridiculed and neglected. Many are unevenly gifted, excelling in one field but remaining merely average in others. The most extreme cases are the so-called savants (formerly called idiot savants), who can perform astounding feats of calculation and memory despite having autism or autismlike symptoms. Studies of such people offer valuable insights into how the human brain works.

 Observations of brain-damaged patients have done much to identify the discrete functional areas of the brain [see past Scientific American articles, such as "The Split Brain Revisited," by Michael S. Gazzaniga, July 1998; "Emotion, Memory and the Brain," by Joseph LeDoux, June 1994; and the special issue Mind and Brain, September 1992]. Modern imaging technologies, such as positron-emission tomography (PET) and functional magnetic resonance imaging (fMRI), have helped investigators to map cognitive function onto structure [see "Visualizing the Mind," by Marcus E. Raichle; Scientific American, April 1994]. With such imaging, researchers can see how the brain "lights up" when certain cognitive tasks are performed, such as reciting numbers or recalling a visual scene.

 Structure and function are of particular interest to neurobiologists trying to boost the brainpower of the common person. Several researchers in fact have ties to pharmaceutical companies hoping to capitalize on what would seem to be a huge market in cognitive enhancers. In "Seeking 'Smart' Drugs," staff writer Marguerite Holloway reviews the diverse approaches. If you're a sea slug or a fruit fly, scientists can do wonders for your memory. Humans have somewhat limited choices at the moment; the vast majority of compounds now sold have no solid clinical basis. For instance, package labels of the popular herb ginkgo biloba overstate its efficacy: a study has shown that it has some modest benefits in Alzheimer's patients, but no study has indicated that ginkgo definitely helps healthy individuals. Prospective compounds, including modified estrogen and nerve growth factors, seem promising, but the best smart drug may already be in your kitchen: sugar, the energy source of neurons.

 The exploration of human intelligence naturally raises the question of how humans got to be intelligent in the first place. In "The Emergence of Intelligence" (updated since its appearance in the October 1994 issue of Scientific American), William H. Calvin puts forth a kind of 2001: A Space Odyssey hypothesis: that ballistic movement, whether it's pitching a baseball or throwing sticks and stones at black monoliths, is the key to intelligence, because a degree of foresight and planning is required to hit the target. And these ingredients may have permitted language, music and creativity to emerge, differentiating us from the rest of the world's fauna.

Do Animals Think?

 That's not to say that animals aren't intelligent. In "Reasoning in Animals," James L. Gould and Carol Grant Gould make a persuasive case that animals have some ability to solve problems. The examples they cite and the studies they describe make it unlikely that strict behaviorism--that animals' actions are dictated by conditioned responses--can explain it all. Of course, not everything an animal does is an act of cognition: many of the actions of animals are accomplished and restricted by instinct and genes.

 Language plays a role in the development of cognitive abilities, too, as suggested by Irene M. Pepperberg's article, "Talking with Alex: Logic and Speech in Parrots." Alex is the famous Grey parrot that can make requests and provide answers in a seemingly reasoned way. Alex is unique in part because he's a bird: other communicating animals have been primates, such as the chimpanzees Washoe and Kanzi and the gorilla Koko. Rigorously speaking, these animals are communicating through learned symbols and sounds; whether they are truly engaging in language, which permits planning and abstraction, remains to be proved.

 Besides language, another hallmark of intelligence may be self-awareness. Many investigators have grappled with human consciousness from a scientific perspective [see "The Puzzle of Conscious Experience," by David J. Chalmers; Scientific American, December 1995; and "The Problem of Consciousness," by Francis Crick and Christof Koch; Scientific American, September 1992]. But how can you tell if an animal is self-aware? In the late 1960s Gordon G. Gallup, Jr., devised a now classic test using mirrors. Gallup painted a red dot on the faces of anesthetized animals and then observed them when they awoke and noticed themselves in the mirror. An animal that starts poking at the red spot on its face seemingly indicates an awareness that it is seeing itself in the mirror, not another creature. Of all the animals tested in this way, only humans, chimpanzees and orangutans pass.

 With self-awareness comes the ability to take into account another creature's feelings--at least, that's the way it works in humans. Taking the pro side of the debate, "Can Animals Empathize?", Gallup reasons that chimps and orangutans have a sense of self, which they might use to model other creatures' mental states.

 Daniel J. Povinelli, however, remains skeptical (in the best traditions of scientific open-mindedness, he adopts the "maybe not" view). He tells how he tested chimpanzees under a variety of clever conditions to see if they understand that another creature cannot see them. It turns out that chimps will beg for food from a blindfolded person (who cannot see the chimps) just as readily as from a sighted individual. Such results suggest that chimps do not reason about another animal's state of mind--or even their own. That chimps pass the mirror test, then, does not convince Povinelli that they are self-aware; they may simply learn that the mirror image corresponds to their own body.

I, Robot

 If our closest relatives aren't self-aware, is there any chance that a computer can be? In seeking to make a machine that can pass the so-called Turing test--that is, produce responses that would be indistinguishable from those of humans--artificial intelligence has proved to be a substantial disappointment. Yet passing the Turing test may be an unfair measure of AI progress. In "On Computational Wings: Rethinking the Goals of Artificial Intelligence," Kenneth M. Ford and Patrick J. Hayes maintain that the obsession with the Turing test has led AI researchers down the wrong road. They draw an analogy with artificial flight: engineers for centuries tried to produce flying machines by mimicking the way birds soar. But modern aircraft obviously do not fly like birds, and fortunately so. From this argument, Ford and Hayes note that AI is effectively all around us--in instrumentation, in data-recognition tasks, in "expert" systems such as medical-diagnostic programs and in search software, such as intelligent agents, which roam cyberspace to retrieve information [see "Intelligent Software," by Pattie Maes; Scientific American, September 1995]. Several more formal AI projects exist. One is that of Douglas B. Lenat of Cycorp in Austin, Tex., who for more than a decade has been working on CYC, a project that aims to create a machine that can share and manage information that we humans might consider common sense [see "Artificial Intelligence," by Douglas B. Lenat; Scientific American, September 1995]. Another is that of Rodney Brooks and Lynn Andrea Stein of the Massachusetts Institute of Technology, whose team has produced Cog, a humanoid robot that its makers hope to endow with abilities of a conscious human, without its necessarily being conscious.

 A realm of AI that sparks intense, though perhaps unjustified, feelings of anxiety and human pride is game-playing machines. In "Computers, Games and the Real World," Matthew L. Ginsberg summarizes the main contests that machines are playing and how they fare against human competitors. Garry Kasparov's loss in a six-game match against IBM's Deep Blue last year may have inspired some soul searching. The point of game-playing computers, however, is not so much to best their makers as to explore which types of calculation are best suited to the architecture of the silicon chip. As Ginsberg reminds us, computers are designed not to replace us humans but to help us.
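 Ginsberg's article surveys the contests rather than the code, but the flavor of the "calculation" a game-playing machine performs can be conveyed by minimax search, the textbook technique underlying chess programs (Deep Blue adds pruning, tuned evaluation functions and special-purpose hardware on top of the same idea). The toy game in this Python sketch--players alternately remove one to three stones, and whoever takes the last stone wins--is invented purely for illustration:

    def legal_moves(stones):
        # A player may remove one, two or three stones (never more than remain).
        return [m for m in (1, 2, 3) if m <= stones]

    def minimax(stones, maximizing):
        # Returns +1 if the maximizing player can force a win from this position,
        # -1 if the opponent can. With no stones left, the side to move has lost,
        # because the previous player took the last stone.
        if stones == 0:
            return -1 if maximizing else +1
        outcomes = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
        return max(outcomes) if maximizing else min(outcomes)

    for n in range(1, 9):
        verdict = "win" if minimax(n, True) > 0 else "loss"
        print(f"{n} stones: forced {verdict} for the player to move")
    # Multiples of 4 come out as forced losses: the search "discovers" the strategy.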

 Indeed, life without computers is now hard to imagine. And the machines will only become more ubiquitous. In "Wearable Intelligence," Alex P. Pentland explains how devices such as keyboards, monitor screens, wireless transmitters and receivers are getting so small that we can physically wear them. Imagine reading e-mail on special eyeglasses as you walk down the street, while your shoes generate the electricity that powers your personal-area network for cellular communications. Two M.I.T. students, Thad Starner and Steve Mann, have spent time in such cyborg existences--Starner has been doing it since 1992. They look like less slick versions of the futuristic Borg creatures seen on the Star Trek series.

 A true melding of mind and machine is still far away, although the appeal apparently is irresistible. British Telecommunications has a project called Soul Catcher; the goal is to develop a computer that can be slipped into the brain to augment memory and other cognitive functions. Hans Moravec of Carnegie Mellon University and others have argued, somewhat disturbingly, that it should be possible to remove the brain and download its contents into a computer--and with it, one hopes, personality and consciousness.

 Connecting neurons to silicon is only in its infancy. Peter Fromherz and his colleagues at the Max Planck Institute of Biochemistry in Martinsried-München, Germany, have managed to connect the two and cause a neuron to fire when instructed by the computer chip [see illustration]. Granted, the neuron used in the experiment came from a leech. But in principle "there are no show-stoppers" to neural chips, says computer scientist Chris Diorio of the University of Washington, adding that "the electronics part is the easy part." The difficulty is the interface. Diorio was one of the organizers of a weeklong meeting this past August sponsored by Microsoft Research and the University of Washington that explored how biology might help create intelligent computer systems. Expert systems, notes co-organizer Eric Horvitz of Microsoft Research, do quite well in their rather singular tasks but cannot match an invertebrate in behavioral flexibility. "A leech becomes more risk taking when hungry," he notes. "How do you build a circuit that takes risk?" The hydrocarbon basis of neurons might also mean that the brain is more efficient with its constituent materials than a computer is with its silicon. "If we knew what a synapse was doing, we could mimic it," Diorio says, but "we don't have the mathematical foundation yet."
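 Horvitz's question has no settled answer, but the behavior he describes can at least be caricatured in software: let an internal hunger level tilt the choice between a safe payoff and a gamble. The Python sketch below is a hypothetical illustration of that idea, not anything presented at the meeting:

    import random

    def choose_forage_site(hunger, safe_payoff=1.0, risky_payoff=3.0, p_success=0.2):
        # Pick between a safe, low-yield site and a risky, high-yield one.
        # As hunger (0..1) rises, a bonus is added to the gamble's expected
        # value -- a crude analogue of a leech taking more risks when hungry.
        expected_risky = risky_payoff * p_success
        if expected_risky + hunger * expected_risky > safe_payoff:
            return risky_payoff if random.random() < p_success else 0.0
        return safe_payoff

    random.seed(1)
    for hunger in (0.0, 0.5, 1.0):
        mean = sum(choose_forage_site(hunger) for _ in range(1000)) / 1000
        print(f"hunger={hunger:.1f}  average payoff={mean:.2f}")
    # Only at high hunger does the agent gamble, accepting a lower average payoff
    # for a shot at the big reward.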

Beyond Earth
 
 

While we have much to learn from the neurons on Earth, we stand to gain even more if we could find neurons from other planets. In "Is There Intelligent Life Out There?", Guillermo A. Lemarchand reviews the history of the search for extraterrestrial intelligence, or SETI. The odds say that other technological civilizations are out there, so why haven't we made contact yet, government conspiracies notwithstanding? The answer is simple: astronomers have looked at only a tiny fraction of the sky--some 10^-16 of it. Almost all SETI funds have come from private sources, and time on radio telescopes is limited.

 One ingenious attempt to enlist help from amateurs is SETI@home. Interested parties would download a special screen saver for personal computers that, when running, would sift through data gathered from the Arecibo Radio Observatory in Puerto Rico (specifically, from Project SERENDIP). In other words, as you take a break from work, your PC would look for artificial signals from space.
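 The real SETI@home pipeline is far more elaborate, but its core task--sifting receiver data for narrowband power that natural sources rarely concentrate into a single frequency--can be sketched with a Fourier transform. In the Python sketch below, the sample rate, the injected tone and the detection threshold are all invented for illustration (the 1420 figure merely echoes the hydrogen line's famous 1420 MHz):

    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated receiver output: broadband noise plus a faint narrowband "beacon".
    sample_rate = 4096                                # samples per second
    t = np.arange(sample_rate) / sample_rate          # one second of data
    data = rng.normal(size=sample_rate) + 0.3 * np.sin(2 * np.pi * 1420.0 * t)

    # Power spectrum: a narrowband signal piles its energy into a single bin,
    # while the noise spreads evenly across all of them.
    spectrum = np.abs(np.fft.rfft(data)) ** 2
    freqs = np.fft.rfftfreq(data.size, d=1.0 / sample_rate)

    threshold = 20 * spectrum.mean()
    print("candidate narrowband frequencies (Hz):", freqs[spectrum > threshold])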

 Organizers estimate that 50,000 machines running the screen saver would rival all current SETI projects. At press time, investigators were still completing the software and looking for sponsorship: they need at least $200,000 to proceed to the final phases.

 Of course, there's the chance that we have already received alien greetings but haven't recognized them as such. In Lemarchand's view, sending salutations of our own may be the best way to make first contact. He proposes relying on a supernova, on the assumption that other civilizations would also turn their sights onto such relatively rare stellar explosions. Radio telescopes on Earth could send signals to nearby star systems that have good views of both Earth and the supernova [see illustration].

Defining Intelligence

 In the end, most of us would feel rather confident in identifying intelligent signals, be they from space, a machine, an animal or other people. An exact definition of intelligence is probably impossible, but the data at hand suggest at least one: an ability to handle complexity and solve problems in some useful context--whether it is finding the solution to a quadratic equation or obtaining just-out-of-arm's-reach bananas. The other issues surrounding intelligence--its neural and computational basis, its ultimate origins, its quantification--remain unresolved, controversial and, of course, political.

 No one would argue that it doesn't pay to be smart. The role that intelligence plays in modern society depends not on the amount of knowledge gained about it but on the values that a society chooses to emphasize--for the U.S., that includes fairness, equal opportunity, basic rights and tolerance. That intelligence studies could pervert these values is, ultimately, the root of anxiety about such research. Vigilance is critical and so is the need for a solid base of information by which to make informed judgments--a base to which, I hope, this issue has contributed.


The Author: Philip Yam, issue editor