The Physical Limits to Genius

The laws of physics may well prevent the human brain from evolving into an ever more powerful thinking machine

It is humbling to think that a honeybee, with its milligram-size brain, can perform tasks such as navigating mazes and landscapes on a par with mammals. A honeybee may be limited by having comparatively few neurons, but it surely seems to squeeze everything it can out of them. At the other extreme, an elephant, with its five-million-fold larger brain, suffers the inefficiencies of a sprawling Mesopotamian empire. Signals take more than 100 times as long to travel between opposite sides of its brain—and also from its brain to its foot, forcing the beast to rely less on reflexes, to move more slowly, and to devote 97 percent of the neurons in its brain to the cerebellum, which coordinates each step.

We humans may not occupy the dimensional extremes of elephants or honeybees, but what few people realize is that the laws of physics place tough constraints on our mental faculties as well. Anthropologists have speculated about anatomic roadblocks to brain expansion—for instance, whether a larger brain could fit through the birth canal of a bipedal human. If we assume, though, that evolution (or surgeons) can solve the birth-canal problem, then we are led to the cusp of some even more profound questions.

One might think, for example, that evolutionary processes could increase the number of neurons in our brain or boost the rate at which those neurons exchange information. Such changes could, in principle, make us smarter. But several recent lines of investigation, if taken together and followed to their logical conclusion, seem to suggest that such avenues for improvement would soon be blocked by physical limits. Ultimately those limits trace back to the very nature of neurons and the statistically noisy ways in which they communicate by chemical exchanges. “Information, noise and energy are inextricably linked,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge. “That connection exists at the thermodynamic level.”


Do the laws of thermodynamics, then, impose a limit on neuron-based intelligence, one that applies universally, whether in birds, primates, porpoises or praying mantises? This question apparently has never been asked in such broad terms, but the scientists interviewed for this article generally agree that it is a question worth contemplating.

“It's a very interesting point,” says Vijay Balasubramanian, a physicist who studies neural coding of information at the University of Pennsylvania. “I've never even seen this point discussed in science fiction.”

Intelligence is of course a loaded word: it is hard to measure and even to define. Still, it seems fair to say that by most metrics, humans are the most intelligent animals on earth. But as our brain has evolved, has it approached a hard limit to its ability to process information? Could there be some physical limit to the evolution of neuron-based intelligence—and not just for humans but for all of life as we know it?

That Hungry Tapeworm in Your Head
The most intuitively obvious way in which brains could get more powerful is by growing larger. And indeed, the possible connection between brain size and intelligence has fascinated scientists for more than 100 years. Biologists spent much of the late 19th century and the early 20th century exploring universal themes of life, mathematical laws related to body mass—and to brain mass in particular—that run across the animal kingdom. One advantage of size is that a larger brain can contain more neurons, which should enable it to grow in complexity as well. But it was clear even then that brain size alone did not determine intelligence: a cow's brain is well over 100 times the size of a mouse's, but the cow isn't any smarter.

Instead brains seem to expand with body size in order to carry out more trivial functions. Bigger bodies might, for example, impose more neural housekeeping chores unrelated to intelligence, such as monitoring larger numbers of tactile nerves, processing signals from bigger retinas and controlling more muscle fibers.

Eugene Dubois, the Dutch anatomist who discovered the skull of Homo erectus in Java in 1892, wanted a way to estimate the intelligence of animals based on the size of their fossil skulls, so he worked to define a precise mathematical relation between the brain size and body size of animals—under the assumption that animals with disproportionately large brains would also be smarter. Dubois and others amassed an ever growing database of brain and body weights. One classic treatise reported the body, organ and gland weights of 3,690 animals, from wood roaches and yellow-billed egrets to slugs and three-toed sloths.

Dubois's successors found that mammals' brains expand more slowly than their bodies do: brain mass scales as roughly the ¾ power of body mass. So a muskrat, with a body about 16 times as large as a mouse's, carries a brain only about eight times as big. From that insight came the tool that Dubois had sought: the encephalization quotient, the ratio of a species' measured average brain mass to the mass predicted by the power law. Humans have a quotient of 7.5, meaning our brains are 7.5 times as large as the law predicts. Bottlenose dolphins sit at 5.3; monkeys hover as high as 4.8; and oxen, unsurprisingly, lumber around at 0.5. In short, intelligence may depend on the amount of neural reserve left over after the brain's menial chores, such as minding skin sensations, are accounted for. Or to boil it down even more: when it comes to intelligence, bigger may be better, at least in superficial ways.
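
The scaling rule is simple enough to check with a few lines of arithmetic. The sketch below, in Python, uses a made-up round number for the mouse's body mass, reproduces the muskrat comparison and shows how the encephalization quotient is defined; it is an illustration of the idea, not the analysis Dubois's successors actually ran.

```python
# A minimal sketch of the 3/4-power rule described above; the mouse body
# mass is a round, illustrative number, not a measurement.
mouse_body_g = 40.0
muskrat_body_g = 16 * mouse_body_g                  # a body 16 times as large...

brain_ratio = (muskrat_body_g / mouse_body_g) ** 0.75
print(brain_ratio)                                  # 8.0 -> a brain only ~8 times as big

# The encephalization quotient then compares a measured brain mass with the
# mass the power law predicts for an animal of that body size:
def encephalization_quotient(measured_brain_g, predicted_brain_g):
    return measured_brain_g / predicted_brain_g     # humans come out near 7.5
```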

As brains expanded in mammals and birds, the organs almost certainly benefited from economies of scale. There is, for starters, the network effect: the more neural pathways that connect a pair of neurons, the more information each signal implicitly carries. That's one reason that neurons in larger brains can get away with firing fewer times a second.

Meanwhile, however, a competing trend may have kicked in. Balasubramanian argues that “it is very likely that there is a law of diminishing returns” to increasing intelligence indefinitely by adding new brain cells. Size carries burdens with it, the most obvious one being added energy consumption. In humans, the brain is already the hungriest part of our body: at 2 percent of our body weight, this greedy little tapeworm of an organ wolfs down 20 percent of the calories that we expend at rest. In newborns, it's an astounding 65 percent.

Staying in Touch
Much of the energetic burden of brain size comes from the organ's communication networks: in the human cortex, communication eats up four fifths of the energy expended. But it appears that as size increases, neuronal connectivity also becomes more challenging for subtler, structural reasons. In fact, even as biologists kept collecting data on brain mass in the early to mid-20th century, they also took on the more daunting challenge of defining the “design principles” of brains and explaining how those principles play out in brains of vastly different sizes.

A typical neuron extends an elongated tail called the axon. At its end, the axon branches out, and the tips of the branches form synapses, or contact points, with other cells. Like telegraph wires, axons can connect distant parts of the brain, but they can also bundle together to form nerves that link the central nervous system to far-flung parts of the body.

Pioneering neuroscientists measured the diameter of axons under microscopes. They counted the size and density of nerve cells, as well as the number of synapses per cell. They surveyed hundreds—sometimes thousands—of cells per brain in dozens of species. Eager to refine their mathematical curves by extending them to ever larger beasts, they even found ways to extract intact brains from whale carcasses. The five-hour process involved the use of a two-man lumberjack saw, an ax, a chisel and brute strength to open the top of the skull like a giant can of beans.

These studies revealed that as brains expand in size from species to species, several subtle but probably unsustainable changes happen. First, the average size of nerve cells increases, which allows each neuron to connect to more and more of its peers as the overall number of neurons in the brain grows. But because larger cells pack into the cerebral cortex less densely, the distance between cells increases, as does the length of axon required to connect them. And longer axons mean longer travel times for signals passing between cells. To compensate, the axons have to get thicker, because thicker axons carry signals faster.

Researchers have also found that as brains get bigger from species to species, they are divided into a larger and larger number of distinct areas. You can see those areas if you stain brain tissue and view it under a microscope: patches of the cortex turn different colors. These areas often correspond with specialized functions, such as speech comprehension or face recognition. The specialization unfolds in another dimension as well: equivalent areas in the left and right hemispheres take on separate functions—for example, spatial versus verbal reasoning.

For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth, says Mark Changizi, a theoretical neurobiologist at 2AI Labs in Boise, Idaho: specialization compensates for the connectivity problem that arises as brains get bigger. If a cow's brain, with 100 times as many neurons as a mouse's, kept the same design as the mouse brain, there would be no way for those neurons to be as well connected as they are in the mouse. Cows—and other large mammals—solve the problem by segregating like-functioned neurons into highly interconnected modules and reducing the number of direct long-distance connections among modules.

The specialization of the hemispheres similarly reduces the amount of information that must leap across long, interhemispheric axons from one side of the brain to the other. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger, Changizi argues. “It doesn't tell us that the brain is smarter.”

Jan Karbowski, a computational neuroscientist at the University of Warsaw in Poland, agrees. “Somehow brains have to optimize several parameters simultaneously, and there must be trade-offs,” he says. “If you want to improve one thing, you screw up something else.”

What happens, for example, if you expand the corpus callosum (the bundle of axons connecting right and left hemispheres) quickly enough to maintain constant connectivity as brains expand? And what if you further thicken those axons, so the transit delay for signals traveling between hemispheres does not increase as brains expand? The results would not be pretty. The corpus callosum would expand—and push the hemispheres apart—so quickly that any performance improvements would be neutralized.

Recent experiments refining the relation between axon width and conduction speed have brought these trade-offs into stark relief. Neurons do grow larger as brain size increases, Karbowski says, but not quickly enough to prevent a decrease in connectivity. And while axons do thicken as brains expand, it's not enough to offset the longer conduction delays.

There is a good reason that axons don't thicken more. Restraining their girth saves the brain both space and energy, Balasubramanian says. Double the width of an axon, and its energy expenditure doubles, but the velocity of its pulses goes up just 40 percent or so.
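
That 40 percent figure follows from a textbook idealization in which conduction speed in unmyelinated axons grows only as the square root of diameter, while the energy bill grows roughly in proportion to it. The short sketch below makes the trade-off concrete; it is a simplified model, not Balasubramanian's calculation.

```python
import math

# Idealized trade-off: speed ~ sqrt(diameter), energy ~ diameter.
# Both relations are textbook simplifications used here only for illustration.
def relative_speed(diameter_ratio):
    return math.sqrt(diameter_ratio)

def relative_energy(diameter_ratio):
    return diameter_ratio

for r in (1, 2, 4):
    print(f"diameter x{r}: energy x{relative_energy(r):.1f}, speed x{relative_speed(r):.2f}")
# Doubling the diameter doubles the energy bill but buys only about a
# 40 percent gain in speed (sqrt(2) ~= 1.41).
```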

Even with all of this corner cutting, the volume of white matter (the axons) still grows more quickly than the volume of gray matter (the main body of neurons containing the cell nucleus) as brains increase in size. To put it another way, as brains get bigger, more of their volume is devoted to wiring rather than to the parts of individual cells that do the actual computing. This alone suggests that scaling size up is ultimately unsustainable.

The Primacy of Primates
The seemingly intractable inefficiencies that big-brained beings face explain why a cow fails to squeeze any more smarts out of its grapefruit-size brain than a mouse does from its blueberry-size brain. But how is it that humans are so smart? Part of the answer seems to be that evolution has found impressive workarounds at the level of the brain's building blocks. When Suzana Herculano-Houzel, a neuroscientist at the Federal University of Rio de Janeiro in Brazil, surveyed the number and size of brain cells across 41 mammal species in 2014, she and her colleagues stumbled onto a game changer that probably gives humans an edge.

Herculano-Houzel found that cortical neurons in primates differ in an important way from those in most other mammals. In primates, only a few of these neurons grow much larger as the brain increases in size. These rare oversized neurons may shoulder the burden of keeping things well connected and allow the majority to remain small. This feature allows the brains of large primates (humans included) to stay dense. An owl monkey, for example, has about twice the brain mass of a marmoset, as well as roughly twice as many neurons. In contrast, a similar doubling of mass in the rodent brain boosts the neuron count by just 60 percent.

“It's a huge difference—one of the things that makes a primate a primate,” Herculano-Houzel says. Humans pack 86 billion neurons into 1.4 kilograms of brain, but a rodent that had followed its usual neuron-size scaling law to reach that number of neurons would now have to drag around a brain weighing 45 kilograms. And metabolically speaking, all that brain matter would eat the varmint out of house and home.
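
To see why the scaling rule matters so much, it helps to put rough numbers on it. The sketch below assumes the approximately linear neuron-to-mass relation reported for primates and a steeper power law (exponent near 1.6) for rodents, anchored on a mouse-like brain; the constants are illustrative stand-ins, not fitted values from Herculano-Houzel's papers.

```python
# Rough comparison of the two scaling regimes described above.
def primate_brain_kg(neurons_billions):
    # Primates: brain mass grows roughly in step with neuron count.
    return 1.4 * (neurons_billions / 86.0)

def rodent_brain_kg(neurons_billions, exponent=1.6):
    # Rodents: neurons themselves get bigger, so mass outruns neuron count.
    anchor_neurons_b, anchor_mass_kg = 0.07, 0.0004   # mouse-like anchor (illustrative)
    return anchor_mass_kg * (neurons_billions / anchor_neurons_b) ** exponent

print(primate_brain_kg(86))   # ~1.4 kg: the human brain
print(rodent_brain_kg(86))    # tens of kilograms under the rodent rule
```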

Having smaller, more densely packed neurons does seem to have a real impact on intelligence. In 2005 neurobiologists Gerhard Roth and Ursula Dicke, both at the University of Bremen in Germany, reviewed several traits that predict intelligence across species (as measured, roughly, by behavioral complexity) even more effectively than the encephalization quotient does. “The only tight correlation with intelligence,” Roth says, “is in the number of neurons in the cortex, plus the speed of neuronal activity,” which decreases with the distance between neurons and increases with the degree of myelination of axons. (Myelin is fatty insulation that lets axons transmit signals more quickly.)

If Roth is right, then primates' small neurons confer a double advantage. First, they allow a greater increase in cortical cell number as brains enlarge. Second, they allow faster communication because the cells pack more closely. Elephants are reasonably smart, but their neurons are six times larger than humans' and up to 40 times larger than other mammals', leading to inefficiencies. “The packing density of neurons is much lower,” Roth says, “which means that the distance between neurons is larger and the velocity of nerve impulses is much lower.”

A growing number of studies have revealed a similar pattern of variation within humans: people who have the quickest lines of communication among their brain areas also seem to be the brightest. One study, reported in 2014 by Emiliano Santarnecchi of Harvard Medical School, used functional magnetic resonance imaging to measure how directly different brain areas talk to one another—that is, whether they talk via a large or a small number of intermediary areas. Santarnecchi found that shorter paths between brain areas, and higher overall network efficiency, correlated with higher IQ.
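
The network measure behind such studies can be illustrated with a toy example. "Global efficiency" is essentially the average inverse shortest-path length between regions: the fewer intermediaries a signal must pass through, the higher the score. The sketch below applies the networkx library to two invented six-region networks; it is not the analysis pipeline Santarnecchi used.

```python
import networkx as nx

# Two toy "brain networks" of six areas each, invented for illustration.
direct = nx.complete_graph(6)         # every area talks to every other directly
chained = nx.path_graph(6)            # areas talk only through chains of intermediaries

# Global efficiency: average of 1/(shortest path length) over all pairs of areas.
print(nx.global_efficiency(direct))   # 1.0, the maximum
print(nx.global_efficiency(chained))  # well below 1.0
```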

Edward Bullmore, an imaging neuroscientist at the University of Cambridge, and his collaborators obtained similar results in 2009 using a different approach. They compared working memory (which is the ability to hold several numbers in one's memory at once) among 29 healthy people. The researchers then used magnetoencephalographic recordings from their subjects' scalps to estimate how quickly communication flowed between brain areas. People whose brains exhibited the most direct communication and the fastest neural chatter had the best working memory.

It is a momentous insight. We know that as brains get larger, they save space and energy by limiting the direct connections among regions. The large human brain contains relatively few of these long-distance connections. But Bullmore and Santarnecchi showed that these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse. “You pay a price for intelligence,” Bullmore concludes, “and the price is that you can't simply minimize wiring.”

Intelligence Design
If communication among neurons, and between brain areas, is really a major bottleneck that limits intelligence, then when evolution produces smaller neurons that pack together more densely and communicate faster, that should yield smarter brains. Brains might also become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses are inherently unreliable.

These proteins, known as ion channels, act like tiny valves to open and close pores in the cell membrane of the neuron. Open channels allow ions of sodium, potassium or calcium to flow into or out of neurons, producing the electrical signals by which these cells communicate. But the channels are so minuscule that mere thermal vibrations can flip them open or closed.

If you were to isolate a single ion channel on the surface of a nerve cell and then adjust the voltage across the channel to open or close it, you would find that this protein-activated switch does not flip on and off reliably like your kitchen light does. Instead it flutters open and shut unpredictably, like a sticky screen door on a windy day. Changing the voltage only influences the likelihood that it will open.

This may sound like a horrible evolutionary design flaw, but it's not a bug—it's a feature. Or rather the unavoidable price of having a sensitive, energy-efficient gate. “If you make the spring on the channel too loose,” Laughlin explains, “then the noise keeps on switching it”—the screen door in the wind. Cells could use stiffer proteins as channels to dampen that noise, he says, but that would force neurons to spend more energy to control the ion channel. The trade-off means that ion channels are reliable only when many of them are used in parallel to “vote” on whether or not a neuron should generate an impulse.

Here's the rub: voting becomes problematic as neurons get smaller. “When you reduce the size of neurons, you reduce the number of channels that are available to carry the signal,” Laughlin says. “And that increases the noise.”
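
A toy simulation makes the point vividly. In the sketch below, each channel flickers open spontaneously with a small probability per time step, and the axon "fires" by mistake whenever the fraction of open channels crosses a threshold. The probabilities and thresholds are invented for illustration and have nothing to do with Laughlin's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_spike_rate(n_channels, p_open=0.001, threshold_frac=0.02, steps=200_000):
    """Fraction of time steps on which random channel openings alone
    cross the firing threshold. All parameters are illustrative."""
    threshold = max(1, int(threshold_frac * n_channels))
    open_counts = rng.binomial(n_channels, p_open, size=steps)
    return float(np.mean(open_counts >= threshold))

for n in (1000, 100, 10):
    print(n, spurious_spike_rate(n))
# With many channels, a few thermal flickers never reach threshold; with only
# a handful, a single accidental opening is enough to trigger a false signal.
```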

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire.

The brain's smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 a second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.

This fundamental compromise among information, energy and noise is not unique to biology. It applies to everything from ham radios to computer chips. Transistors act as gatekeepers of electrical signals, just like ion channels do.

For five decades engineers have shrunk transistors steadily, cramming more and more onto chips to produce ever faster computers. Transistors in the latest chips have features that are just 14 nanometers across. At that size, it becomes very challenging to “dope” silicon uniformly. (Doping is the addition of small quantities of other elements to adjust a semiconductor's properties.) When features shrink below 10 nanometers, a transistor will be so small that the random presence or absence of a single doping atom of boron could cause it to behave unpredictably.

Engineers might circumvent these limitations by redesigning chips using new technologies. But life cannot start from scratch: it has to work within the framework that has evolved over 600 million years. And during that time a peculiar thing has happened: one particular grand plan has popped up over and over again.

The brains of the honeybee, the octopus, the crow and intelligent mammals look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement,” Roth says. Such convergence in anatomy and physiology usually suggests that a certain evolutionary solution has reached maturity, leaving little room for improvement.

Perhaps, then, life has arrived at an optimal neural blueprint. That blueprint is wired up through a step-by-step choreography in which cells in the growing embryo interact through signaling molecules and physical nudging. The blueprint may be so entrenched by evolution as to rule out any major changes in plan.

Bees Do It
So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Work by Herculano-Houzel suggests that we have already jumped through a major evolutionary hoop just to obtain the brain we have now.

It all comes down to calories. Animals can spend only so many hours eating per day. Primates thus faced a critical trade-off as they evolved larger brains, she says. They could expend their limited calories on a larger, more powerful body—or on a smarter brain. Gorillas, orangutans and chimpanzees maxed out their calories with various combinations of big, strong bodies and brains containing 20 to 40 billion neurons. Those brains consume around 9 percent of the total calories that they burn—which means they must spend up to eight hours a day foraging.

Humans, in contrast, sport brains packed with 86 billion neurons—and we devote a whopping 20 percent of our calories to feeding our heads. We can afford such an extravagant caloric luxury, Herculano-Houzel believes, only because our species developed a unique technology: the cooking fire.

Around 1.5 million years ago our ancestors began using fire to transform food. “That allows a jump in the amount of calories that you can get from your food that no other practice can achieve,” Herculano-Houzel says. Cooking makes it easier to digest plant foods and to extract calorie-dense fat from animal carcasses—for example, by stewing bones to extract marrow. It seems an unlikely coincidence that around the time our human ancestors conquered fire, they also finally broke through the caloric barrier and jumped from brains of perhaps 40 billion neurons (Homo habilis) to 60 billion (Homo erectus), and finally to 86 billion. Were it not for cooking, she says, “we would not be here.”
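
The caloric arithmetic can be sketched in a few lines, if one assumes the roughly constant per-neuron energy cost Herculano-Houzel has argued for elsewhere—on the order of 6 kilocalories per billion neurons per day. That figure is an outside assumption, not one stated in this article, so treat the output as a back-of-the-envelope illustration.

```python
# Back-of-the-envelope cost of running a brain, assuming a fixed cost per
# neuron (~6 kcal per billion neurons per day; an assumed round figure).
KCAL_PER_BILLION_NEURONS_PER_DAY = 6.0

def brain_kcal_per_day(neurons_billions):
    return neurons_billions * KCAL_PER_BILLION_NEURONS_PER_DAY

for label, neurons_b in [("great ape, ~30 billion neurons", 30),
                         ("human, ~86 billion neurons", 86)]:
    print(f"{label}: ~{brain_kcal_per_day(neurons_b):.0f} kcal/day for the brain alone")
# On a raw diet those extra neurons translate into extra hours of foraging;
# cooking raised the calories extractable per hour of eating and lifted the cap.
```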

But what about future human evolution? Laughlin doubts that there is any hard limit on brain function the way there is on the speed of light. “It's more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.”

The human mind, however, may have better ways of expanding without the need for further biological evolution. Through social interaction and language, we humans have learned to pool our intelligence into collective smarts.

And then there is technology. For millennia written language has enabled us to store information outside our body, effectively extending the capacity of our brains. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence. In a sense it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts.

 

MORE TO EXPLORE

 

Evolution of the Brain and Intelligence. Gerhard Roth and Ursula Dicke in Trends in Cognitive Sciences, Vol. 9, No. 5, pages 250–257; May 2005.

Cellular Scaling Rules for Primate Brains. Suzana Herculano-Houzel, Christine E. Collins, Peiyan Wong and Jon H. Kaas in Proceedings of the National Academy of Sciences USA, Vol. 104, No. 9, pages 3562–3567; February 27, 2007.

Efficiency of Weak Brain Connections Support General Cognitive Functioning. Emiliano Santarnecchi et al. in Human Brain Mapping, Vol. 35, No. 9, pages 4566–4582; September 2014.

Douglas Fox writes about biology, geology and climate science from California. He wrote our July 2021 article "The Carbon Rocks of Oman," about efforts to turn carbon dioxide into solid minerals.

This article was originally published with the title “The Limits of Intelligence” in SA Special Editions Vol. 24 No. 4s, p. 104
doi:10.1038/scientificamericanphysics1215-104