The New Yorker | Degrees of Freedom: A scientist’s work linking minds and machines helps a paralyzed woman escape her body.

By Raffi Khatchadourian

For eighteen years, Jan Scheuermann has been paralyzed from the neck down. She is six feet tall, and she spends all day and all night in a sophisticated, battery-powered wheelchair that cradles her—half sitting, half reclining—from head to toe. In effect, the chair has become an extension of her body. To navigate the world in it, Scheuermann manipulates a cork-tipped joystick with her chin. She can move in this way with remarkable agility, but her height, combined with the bulk of the chair and the unrelenting nature of gravity and matter, can limit her. Over the phone, though, it is possible to forget her paralysis. She has a soft voice, a wry sense of humor, and a warm, gentle manner. Sometimes when she speaks she pauses to inhale; the deliberate breaths are necessary because her lungs do not automatically pull in enough air, but a listener tends not to notice them. Across a fibre-optic network, her words are converted into weightless digital information. She floats to you.

When I first met Scheuermann, it was by phone. I had called her at home, in Pittsburgh, after learning that she had participated in a neuroscience experiment that allowed her to partially escape the confines of her paralyzed body. Scheuermann is one of a very few Americans to have experienced a direct brain-computer interface, a complex assemblage of technology—transistor-like cortical implants, wires, algorithmic decoders, robotics, all in their early stages of development—designed to fuse minds with machines. For decades, the idea of plugging a brain into a computer has been a mainstay of cyberpunk fiction, not biotechnology. (“I jack in and I’m not here,” a character explains in William Gibson’s 1984 novel, “Neuromancer.”) The human brain is the most complicated object in the known universe. A single brain contains more electrical connections than there are galaxies in space. Understanding the behavior of its eighty-six billion neurons is as formidable a scientific challenge as interstellar travel.

Scheuermann was not always paralyzed. The second of nine siblings, she grew up in Pittsburgh, in the nineteen-sixties and seventies. Her childhood took place in a self-contained, analog world: family, school, church, all within a few city blocks. Her father was a baker, and on Saturdays she worked at his doughnut shop, near her school. She loved reading mysteries; later, at the University of Pittsburgh, she studied nonfiction writing, and after graduating she founded a company, Deadly Affairs, that staged murder mysteries in clients’ homes. Guests wore outlandish costumes and acted out parts that she wrote, while she played Inspector Clueless, a befuddled detective who helped guide the narrative. She met her husband through the business, and moved to California to live with him in 1987. While developing Deadly Affairs, she became a contestant on “Wheel of Fortune” and other game shows. By the early nineties, she had two children. She was writing new scripts. Her life was as she wanted it.

In 1996, Scheuermann was in a client’s living room, orchestrating a production, when suddenly her legs felt heavy and numb. She thought of her father, who had died at the age of fifty-seven, from multiple sclerosis, a disease that tends to run in families. That night, Scheuermann tried not to become alarmed, assuring herself that the heaviness was only fatigue. With rest, the feeling dissipated, but soon it returned and began to spread. Her doctors ruled out M.S. but could offer no diagnosis, and the medical uncertainty inspired its own worries. Scheuermann began to use a cane, then a wheelchair. In 1998, fearing death, she moved with her family back to Pittsburgh so that her relatives could help care for her children.

In her home town, Scheuermann’s life began to change. Doctors there settled on a diagnosis, spinocerebellar degeneration, a rare ailment that ruins lines of communication between the brain and the spine; although there was no cure for it, the concreteness of the diagnosis offered a kind of relief. She was prescribed Prozac, which alleviated a gathering depression. After her children left for college, Scheuermann felt a deep emptiness, but, buoyed by the drug and by support from friends, she regained her bearings and wrote a humorous whodunnit, “Sharp as a Cucumber: A Brenda LaVoom Mystery.”

Then, in October, 2011, a friend sent her a YouTube clip of a young man, Tim Hemmes, who was paralyzed after a motorcycle accident. The video documented his work with researchers at the University of Pittsburgh, which had joined with the U.S. military in an unprecedented scientific effort—a program, with a budget of more than a hundred million dollars, to develop sophisticated prosthetics that could be controlled directly by the human brain. Microsensors embedded on a wafer were surgically placed between Hemmes’s skull and his brain, allowing him to manipulate a robotic arm—even to hold his girlfriend’s hand, for the first time in years. At the end of the clip, he told the camera, “I believe in my heart that this is the future. Anybody out there who has the courage, and the want, to try to do this—you gotta go for it!” The work with Hemmes was only a pilot study. The researchers, contemplating a genuine trial, needed a new subject. As soon as the video ended, Scheuermann vowed to sign up. “I was just so eager for this,” she told me. “I had one goal: to move that robotic arm with my mind!”

II.

The human animal is a creature of movement. For each of us, the gift of consciousness resides in a cellular vehicle, made from bone and blood, skin and fat, and driven by muscles—a body, as Walt Whitman put it, “cunning in tendon and nerve.” One cardiac muscle and countless smooth visceral muscles operate automatically within—the unseen engines of life. They are joined by hundreds of skeletal muscles, which can be commanded to run marathons, to perform music, to write, to speak.

How the mind instructs the body to move is a mystery that has preoccupied Andrew Schwartz, a neuroscientist at the University of Pittsburgh, for more than three decades. It might seem reasonable to expect that a relationship so fundamental would by now be known, but the chain of events that connects the firing of neurons to, say, a punch to the nose remains the subject of pitched scientific controversy.

Schwartz, whose research was central to helping Hemmes move the robotic arm, was a partisan in that controversy, and also a pioneer in the field of neural prosthetics. Since the nineteen-nineties, he had been competing with a small cadre of scientists to develop a system that could circumvent the body and translate raw mental activity into robotic movements. In a rarefied, often showy, sometimes bitter scientific milieu, he seldom sought attention. But his incremental approach had produced remarkable results, earning him a reputation as a rigorous researcher who was as comfortable with bioengineering as he was with neuroscience. The video of Hemmes was one artifact of an intellectual quest, combining Schwartz’s personal journey through the science of the brain with an effort to build the world’s most advanced anthropomorphic robotic arm, underwritten by the most heterodox part of the federal bureaucracy: the Defense Advanced Research Projects Agency.

Schwartz grew up outside Minneapolis. He has the compact physique, and the ruddy complexion, of a cycling enthusiast. He tends to maintain a quiet presence, but he holds strong scientific views, and they often surface. “Movements are beautiful, and I want to restore that beauty,” he once told me. “When engineers say, ‘Well, heck, you could do this with a claw and a magnet, to lift something up and transport it,’ I’m, like, ‘Yeah, but that’s not what I want to do!’ When we move, there is a very efficient, almost simplistic, elegant way to do it.”

Schwartz came to his life’s work as a sophomore at the University of Minnesota, when, in 1975, he persuaded a neurophysiologist to lend him lab space to test an idea that he hoped might heal spinal injuries. His experiment did not result in a cure, but it brought him into contact with researchers who were striving to discover the mathematical laws that define our bodies in motion—just as Isaac Newton had done for inanimate things. They called their field psychophysics. Because their object of study was human behavior, the physics part often hinted at biological riddles. Why was it, for instance, that a person drawing a figure eight in the air could never quite render the two loops in the same plane, no matter how carefully aligned they appeared to the human eye? As one of the field’s pioneers, R. S. Woodworth, wrote, in 1899, “Some doubt may arise whether my work is really psychological or physiological. The field of voluntary movement undoubtedly lies, like the field of sensation, in the borderland.”

Schwartz was drawn to the borderland. He devoured Woodworth’s papers, and traced the literature from there to the work of Paul Fitts, a psychologist at Wright-Patterson Air Force Base, in the nineteen-fifties. While trying to figure out the best design for a cockpit, Fitts had learned that the way a pilot’s arm reached for an instrument or a dial on a control panel could be precisely captured by an equation that took into account the target’s size and distance. Fitts’s law, as it was known, seemed to reveal a hidden mathematical order in the workings of the body. “That was the part that just hooked me,” Schwartz told me. The human arm, he believed, combined simplicity, power, subtlety, and grace—an evolutionary marvel that was a fundamental part of the human experience. Yet the cognitive mechanisms that caused it to work were unknown, posing what he called “the ultimate control problem.”
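The relationship Fitts found is compact enough to sketch in a few lines of code. The constants a and b below are illustrative placeholders (Fitts fit them empirically for each task), and the function name is invented for the example:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to reach a target of size `width`
    at `distance`, per Fitts's law: MT = a + b * log2(2D / W).
    The constants a and b here are illustrative, not measured values."""
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# A farther or smaller target is predictably slower to acquire.
near_large = fitts_movement_time(distance=10, width=5)    # difficulty: 2 bits
far_small = fitts_movement_time(distance=40, width=2.5)   # difficulty: 5 bits
```

The logarithm is what gives the law its power: halving a target's width, or doubling its distance, adds the same fixed increment of movement time, which is the hidden order that struck Schwartz.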

Schwartz decided to pursue a Ph.D. in neurophysiology, staying on at Minnesota. He proposed investigating arm gestures using monkeys, but the university was unable to offer him a primate, so instead he set up an experiment to investigate how cats recovered from forced interruptions to their movement—“tripping cats,” as he put it, wryly. With barely any funding, he scrounged. “Basically, they said, ‘Here’s fifty bucks,’ ” he recalled. “I would take old junk equipment and refurbish it.” When he needed a treadmill for his cats, he used a hand-cranked belt sander. The experience taught him how to engineer his own tools, how to experiment with animals, and how to think through scientific problems, but it was a diversion from his interest in the workings of the arm. As he was completing his degree, in 1983, an adviser asked if he had considered doing postdoctoral work, and recommended that he reach out to Apostolos Georgopoulos, a neuroscientist at Johns Hopkins University.

The year before, Georgopoulos had published a paper that offered a radical new idea of how the brain commands the body. It would take years of research to sort out whether he was onto something. But Schwartz was intrigued. Georgopoulos’s insights touched on just about everything of consequence that was human. Our sense of self, our sense of others, and the way we formulate ideas are often shaped by the way we move, by the way we expect others to move. If Georgopoulos had taken a step toward solving that problem, what would follow from it? After meeting him, Schwartz asked if he could join his lab—to help advance his revolution.

When Schwartz was preparing to begin his postdoc, the neuroscience of volitional movement was still in its infancy. The motor cortex—a part of the cerebral cortex that is directly associated with movement—had been discovered only a century earlier, in 1870, when two researchers in Berlin probed the brains of dogs strapped to a dressing table, provoking muscle twitches in response. The responsive area ran from the top of the head to the ears, like twin halves of a headband. In the early twentieth century, researchers often stimulated the motor cortex in anesthetized animals, and found that it contained divisions that corresponded to groups of muscles; body parts that required great dexterity, such as hands, occupied far more of its surface area than those which did not. Only in the nineteen-sixties did scientists figure out how to study the motor cortex in alert primates, observing what it did as the body moved.

During Schwartz’s college years, the prevailing view was that the motor cortex worked like an engineer, calculating the forces required to move joints and directing muscles in how to contract. Georgopoulos, instead, argued that the motor cortex concerned itself mainly with the geometry of human motion in space. In this view, it resembled an air-traffic controller, mapping the flight paths of limbs, while leaving other parts of the nervous system to issue instructions to muscles.

He had arrived at this conclusion by training monkeys to move their arms across a tabletop, tracing a pattern that resembled the spokes of an asterisk. While observing these motions, he tracked clusters of neurons in the motor cortex, and noticed that each cell appeared to be “tuned” to prefer a single direction, as though set to an internal compass. Neurons tuned to true north, say, fired with maximum excitement when the arm prepared to head north. If the arm moved north by northwest, they still fired, but thirty per cent less. This declining excitement was mathematically predictable, allowing Georgopoulos to decode the cells’ behavior. One neuron firing on its own did not always predict a gesture. But a few hundred firing together offered a robust signal, from which he could derive a vector describing where the arm was heading. The cells expressed this information milliseconds before the body acted. Perhaps, Georgopoulos thought, he was glimpsing the workings of the brain as it processed its intentions.
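The decoding idea can be illustrated with a toy simulation, not the lab's actual code: each model cell is cosine-tuned to a random preferred direction, and the population vector is the sum of those directions, each weighted by how far the cell's rate sits above baseline. The firing rates and the cell count are made up for the sketch.

```python
import math
import random

BASELINE, DEPTH = 20.0, 15.0  # illustrative firing rates (spikes per second)

def firing_rate(preferred, movement):
    """Cosine tuning: the cell fires hardest when the movement matches its
    preferred direction, and falls off predictably as the angle widens."""
    return BASELINE + DEPTH * math.cos(movement - preferred)

def population_vector(preferred_dirs, movement):
    """Weight each cell's preferred direction by its rate above baseline,
    sum the vectors, and read off where the arm is heading."""
    x = sum((firing_rate(p, movement) - BASELINE) * math.cos(p)
            for p in preferred_dirs)
    y = sum((firing_rate(p, movement) - BASELINE) * math.sin(p)
            for p in preferred_dirs)
    return math.atan2(y, x)

# One cell is an unreliable witness; a few hundred, summed, are robust.
random.seed(1)
cells = [random.uniform(0.0, 2.0 * math.pi) for _ in range(300)]
decoded = population_vector(cells, math.radians(60))
```

Cells tuned away from the movement fall below baseline and get negative weights, so they pull the estimate in the right direction too; that is why the resultant vector stays accurate even though the preferred directions are scattered at random.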

Schwartz could have been risking his career by embracing Georgopoulos’s ideas. But he found the research meticulous and engaging, and he was eager to push it forward. Recently married to his college sweetheart, Lisa Schroepfer, he moved with her to Maryland in 1984. She took a job at Johns Hopkins, writing medical news, while he devoted himself to lab work. Georgopoulos’s experiment had shown that directional tuning held in two dimensions; the next step was to test whether it worked in three-dimensional space. And so Schwartz prepared an experiment in which monkeys, placed inside a large sphere, reached out to targets mounted at the perimeter.

The work was laborious, and surgically invasive. First, a portion of the animal’s skull had to be removed. Then Schwartz carefully inserted a glass-coated electrode into the brain, to take readings while the monkey moved its arm. Because the distance of a few microns meant the difference between signal and noise, he had to peer intently into an oscilloscope as he placed the probe. “It’s kind of like being in a submarine, using sonar to figure out what is going on,” Schwartz told me. “You are in another world.” His probe could record from only one or two neurons at a time. After taking a reading, he hunted for another cell, repeating the process hundreds of times, while the animal performed the same action over and over. Later, he combined data from the trials, to get a sense of how the neurons worked in concert. After months of research, he found that the model did hold in 3-D. The results made the cover of Science.

For Schwartz, the paper was an academic triumph, but other neuroscientists greeted its ideas with intense skepticism. “I’d go around looking for jobs, giving lectures, and people would say, ‘This is just wrong! Because we know that the motor cortex is hooked to muscles.’ ” In 1987, Schwartz took a position at the Barrow Neurological Institute, a tiny, newly founded facility in Phoenix, Arizona. It was as far as one could get from the centers of American neuroscience, but he was drawn to the monastic intensity of the place. He and his wife moved into an apartment overlooking Barrow’s parking lot. “It was a hundred and ten degrees,” he told me. “You burned your feet on the parking lot. Every day I’d walk to my lab, come back for dinner, then go back to work.”

Schwartz was sure that Georgopoulos’s ideas could be taken further still. The research that he had done at Johns Hopkins was limited to arm movements that traced straight lines. To understand the cognition of movements as they actually were, curved and complex, it was necessary to know how the brain was behaving moment by moment—how it was making minute midcourse corrections, not merely for direction but also for speed.

Venus Williams hits a backhand. Picasso draws a swooping line. A soldier salutes a superior officer. An old woman writes a letter to her granddaughter. Our bodies in motion are defined by a unified alchemy of speed and direction, as we trace parabolas, lemniscates, trident curves, Poinsot’s spirals, and trajectories that have no mathematical names.

A trajectory is impossible to calculate without taking into account an object’s speed. Draw a circle. If you move too fast either upward or to the side—and if you don’t compensate by shifting direction as you accelerate—you will end up with an oval. Speed can bend an arc.

Hoping to devise a formula that expressed how velocity was encoded in the motor cortex, Schwartz trained monkeys to draw curved shapes on touch screens designed for A.T.M.s. (He later built his own tracking equipment.) During the course of tens of thousands of trials, he explored many possibilities, only to discover that the answer was right in front of him: the same neurons that were tuned for direction were also tuned for speed. Neurons produce only a simple signal—they fire or don’t fire—and yet they are able to express information about more than one thing at the same time. “That just blew me away,” Schwartz told me. In 1992, he began publishing papers documenting his work. One included a pair of exquisite diagrams: a trajectory representing a monkey’s spiralling finger superimposed over a trajectory derived from its motor cortex. The two arcing lines—one from the brain, the other from the body—were virtually identical.
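Those superimposed diagrams can be imitated, in spirit, by integrating a decoded direction and speed at each time-step. This is a schematic of the idea, not the published method, and every name in it is invented for the example:

```python
import math

def reconstruct(directions, speeds, dt=0.01):
    """Integrate per-time-step velocity vectors (a direction and a speed,
    both carried by the same cells) into a trajectory."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for direction, speed in zip(directions, speeds):
        x += speed * math.cos(direction) * dt
        y += speed * math.sin(direction) * dt
        path.append((x, y))
    return path

# One hundred decoded steps heading due east at unit speed
# should trace a straight line one unit long.
path = reconstruct([0.0] * 100, [1.0] * 100)
```

Feed it the tangent directions of a spiral and the matching speeds, and the integrated path curves accordingly; let the decoded speed drift, as in the circle-and-oval illustration above, and the reconstructed arc bends.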

Shortly after Schwartz began making his trajectories public, he received a call from an administrator at the National Institutes of Health who had seen them and could not believe that they were accurate. For years, the N.I.H. had been striving to develop brain implants to restore function lost to injury or disease; after reviewing Schwartz’s findings, the administrator wanted to know how the results could be used for a medical device. Schwartz suggested that they might be used to control robotic arms, in order to help paralyzed people feed themselves. “He said, ‘Do you think you could actually use this for prosthetics?’ ” Schwartz recalled. “And I said, ‘Sure.’ ”

The call put Schwartz on a new academic path, heading for the frontiers of bioengineering. He raised money to fund a large lab, at Arizona State University, one of the first to concentrate exclusively on integrating robotics directly with the brain. He staffed it with engineers, who were unencumbered by disputes over how the motor cortex functioned. They embraced his research as a step toward technology that could change people’s lives, and Schwartz found their enthusiasm thrilling. “It was freedom,” he told me.

Schwartz’s old way of conducting research—probing an exposed brain with a single electrode—had to change. An implant that a person could live with would have to be able to record from many neurons simultaneously and transmit the data without requiring an open skull. The N.I.H. had made remarkable strides in developing such technology, but the engineering hurdles were still formidable. The brain is crowded, damp, ever-shifting, salt-filled, and home to large cells that tend to encase foreign objects in scar tissue; a sensor lodged in the cortex had to be designed like a robot built for a misty jungle planet. At the same time, a device embedded in the brain presented many hazards to living tissue. One solution that bioengineers were pursuing involved bundles of microwires, which could read from a dozen neurons at a time; another was the microelectrode array—tine-studded squares, resembling doll-house hairbrushes, that could be pressed into the boggy surface of the brain.

Working with the N.I.H., Schwartz gained admission to a tight-knit society of researchers who treated the problem of neural implants like calculations for the load limit of a bridge. Once a year, about two hundred of them would congregate at the N.I.H.’s main library, in Bethesda, Maryland, where they spoke about esoteric problems that they had pursued with extreme doggedness. “There was this guy who would soak different electrodes and connectors in heated saline for years, and then he would report on how they were doing,” Schwartz recalled. The pioneers of the cochlear implant attended these workshops; other researchers were trying to build a prosthetic for the eye. Schwartz belonged to one of the smallest tribes, focussing on the motor cortex.

In order for Schwartz to make a dramatic presentation, he had to locate an anthropomorphic robotic arm. “A huge problem,” he told me. “People just didn’t make them.” His team reached out to venders around the world—even calling a company that built a mechanical avatar of Abraham Lincoln for Disneyland. Its arms were powered by a compressor the size of a refrigerator, and they cost two hundred thousand dollars apiece. The team kept looking. Eventually, it located a practical device, called the Zebra Zero. To connect it to a monkey’s motor cortex, Schwartz used a microwire implant, with thin metallic tendrils that had to be carefully threaded into the brain. “It takes twenty minutes,” he said. “As you do that, the brain puckers, so you don’t really know how far it goes in—because over time the brain will also un-pucker.” At the workshop in 1999, Schwartz screened a video of his progress: an implanted monkey pressed buttons, as the Zebra Zero, in another room, roughly mimicked its movements.

As the quality of the hardware improved, the prospect that brain-controlled robotics could one day be tested in a person also grew—and so did the competition. A team at Brown University was raising piles of venture capital and racing to gain F.D.A. approval for a new device: an array of ninety-six microelectrodes, known as the Utah array, which promised to be effective in the human motor cortex. The Brown team hoped to use it to link a paralyzed patient to a computer.

In 2002, Schwartz moved to the University of Pittsburgh, which let him know that it would support him should he decide to pursue clinical trials. For two years, he established himself there, setting up his primate lab. He was not poised to beat Brown’s effort, but he was still making remarkable headway, when he caught an unexpected break. The Defense Advanced Research Projects Agency had decided to launch the most ambitious effort in history to give people mastery of brain-controlled robotics, and it was curious whether he was interested in participating. Did Schwartz want to work for the military?

DARPA was created in 1958, after the Soviet launch of Sputnik shocked the American scientific establishment. With a mission to both anticipate technological surprises and create them, it had spent nearly sixty years funding projects that seemed torn from a comic book. In the sixties, DARPA had established the technical foundations of the Internet. It later aided in the development of speech-translation software, G.P.S. navigation, and airplanes invisible to radar. Less successfully, it sought to develop spacecraft propelled by nuclear bombs. As Schwartz told me, “Sci-fi, stars-in-your-eyes is what DARPA does.”

Since its inception, DARPA had asked if computers could be more closely coupled with minds. But its interest in embedding electronics directly in the cortex emerged only after the N.I.H. workshops demonstrated that the technology was mature enough. In 2002, the agency created a program, called Brain-Machine Interfaces, which laid a scientific foundation for the development of cognitive implants that could enhance soldiers. “The human is becoming the weakest link in Defense systems,” the agency noted—implying that biology itself needed an upgrade. A DARPA official speaking at an agency symposium encouraged attendees to visualize soldiers who could act as human lie detectors, or communicate by computer-aided telepathy.

By 2003, DARPA had spent millions of dollars on Brain-Machine Interfaces. But in the post-9/11 political climate—following a controversial DARPA surveillance program, along with the conflicts in Afghanistan and Iraq—the agency’s leadership sought to redefine its goals. The head of DARPA’s Defense Sciences Office at the time told me, “We had this interest in being able to move things with the brain, and it didn’t look like anyone was going to be too excited about flying airplanes with the technology.” The country was at war, and many soldiers returning from the front with missing arms were using replacements that were little more than hooks—technology that would have been recognizable during the Civil War. DARPA reasoned that it should focus its investment in brain-machine technology on making the wounded whole, rather than on building super-warriors. “Frankly, it made it easier,” the official said. “If someone said, ‘Why are you spending all this money?,’ it was kind of a dual-purpose thing. How could you argue we shouldn’t be?” DARPA officials arranged for their director to visit Walter Reed Army Medical Center, to meet soldiers who had lost their arms. Moved by the experience, he committed more than a hundred million dollars to a new program, called Revolutionizing Prosthetics.

In 2005, Schwartz travelled to a hotel in Maryland, for an introductory meeting about the new DARPA venture. It was hosted by Colonel Geoffrey Ling, an Army neurologist who had tended to amputees in Afghanistan, many of them children. The agency had recruited Ling after learning that he had been working on a “Star Trek”-like device that could scan internal organs without requiring surgery. At the meeting, Ling, wearing a crisp blue dress uniform, spoke with zealous intensity about the mission of rebuilding the wounded body—moving with the hyped-up gestures of a motivational speaker, while scientists and engineers sat before him nibbling refreshments.

Within just four years, Ling explained, DARPA wanted a robotic arm that functioned just like a real arm. It had to weigh as much as a real arm. It had to be as strong as a real arm. It had to operate as quietly as a real arm. It had to be modular—since not all amputees lost their limbs at the same spot—which meant that its battery could not be stashed in a single cavity. In short, the agency wanted the most advanced robotic arm ever made. At the same time, DARPA wanted to fund an aggressive neuroscience program to allow a human subject to use the arm with “neural control.” As Ling explained shortly after the meeting, “We want our soldiers to be able to play the piano. Not ‘Chopsticks,’ but a classical piece, like Brahms.” To achieve these aims, he was ready to bring together hundreds of researchers, from institutions across the country.

Schwartz told me that many of the researchers were skeptical, “whereas me, being the Minnesota Boy Scout, I was, like, ‘Yeah, give me the money and I’ll do it!’ ” He submitted a proposal, but Ling instead decided to bring in the Applied Physics Laboratory, a government mega-contractor based at Johns Hopkins. Although A.P.L. had virtually no experience with neuroscience, it had been building spacecraft and testing missiles since the Second World War, and Ling believed that he needed a contractor with expertise in large-scale engineering projects. As he saw it, A.P.L. would be managing the bioengineering equivalent of a lunar mission.

Still, Schwartz benefitted. Ling offered to fund his basic research on the side, explaining, “You’re my ace in the hole.” With a generous contract, Schwartz began new trials with monkeys, hoping to give them the ability to feed themselves using robotics controlled by their brains. Within a year, he was achieving remarkably lifelike results. An investigator would hold a marshmallow, and a monkey would grab it with a robotic arm, carry it to its mouth, and eat. One afternoon, Schwartz showed me a video of a trial. After devouring a marshmallow, a monkey moves the arm from its mouth to grab a new treat, but then stops and returns it, to lick off some residue. The intuitive decision indicated a fluid melding of brain and machine. “We took that as signs of embodiment,” he told me. “We saw stuff like that every day. It was amazing.”

In 2011, a DARPA representative came to observe Schwartz’s research, and afterward he casually asked, “Hey, would you guys be interested in doing some human work?” Although A.P.L. had made great progress in building an arm, its brain-control system had not got off the ground—and so Schwartz was brought on, along with a team from the University of Pittsburgh Medical Center. Finally, he was in.

A new A.P.L. arm—nearly half a million dollars’ worth of robotics—was shipped to Pittsburgh. It was like nothing the university had worked with before. Made from black carbon composite, with polished-aluminum detailing at its joints, it looked like a prop from “The Terminator.” Its proportions were based on a man who was five feet eight inches tall. Though it weighed only nine pounds, it could curl forty-five, and its fingers could apply about half that pressure to anything caught in their grip. Taken together, the joints in a human arm, from shoulder to fingertip, can make as many as thirty discrete motions—each one regarded as a “degree of freedom.” The new arm had an unprecedented twenty-six degrees, including a functioning thumb—essentially, a separate robot grafted onto the hand. Mike McLoughlin, who oversaw Revolutionizing Prosthetics at A.P.L., told me, “If you talk to people at A.P.L., they will tell you that this was as complicated a program as any spacecraft that we have done.”

Schwartz was by that time tentatively working with humans. In his study with Tim Hemmes, he had recorded from the motor cortex using a flexible wafer the size of two postage stamps—something like an EEG, but inside the skull. (To avoid the need to open Hemmes’s skull more than necessary, its wires were threaded down his neck and emerged through his chest.) The device could make only generalized readings of neurons, allowing Hemmes, in a thirty-day trial, to achieve just three degrees of freedom. But that was enough to move a few joints on a robotic arm, so when A.P.L.’s device arrived, near the end of the study, the researchers at Pittsburgh connected him to it at once. Hemmes could not control the robot’s hand or fingers, but he was able to use the arm to reach out to his girlfriend—a gesture forbidden to him for years. Standing before him, she gripped the carbon composite. Eyes welling, with her other hand over her heart, she softly said, “Baby.”

The team at Pittsburgh sent footage of the moment to DARPA. Regina Dugan, the agency’s director at the time, told me that when it arrived she and her deputy rushed over to Geoff Ling’s desk to watch it. “We looked at each other,” she said. “I remember there being silence, each of us with tears in our eyes, knowing something important had happened.” More than anything, the moment’s significance was about potential. Schwartz was not yet using a Utah array in human subjects, and the device was sure to yield more sophisticated results. “I always said, ‘Give me a human with a hundred electrodes in his head, and I can do all sorts of things,’ ” he told me. Then Jan Scheuermann called, eager to be that person.

III.

She had kept her desire to join the experiment a secret, even from her husband, who she feared might try to talk her out of it. He was a precise man, an engineer who had once worked for Borax. When he came home that evening, Scheuermann said nothing about it, even as the idea swelled inside her. She was living a meaningful life as a quadriplegic, but one of her condition’s most difficult limitations was that she was rarely in a position to help others. It was easy to feel that everyone around her—at church, or at home—was lifting her up. Here was a chance to be of use, to help advance the cause of science.

That evening, Scheuermann tried to watch a movie, but, as she later wrote in an unpublished memoir, “My Life as a Lab Rat,” she was too excited. “For years, I had had daydreams about waking up and suddenly being able to move again—to get out of my wheelchair and walk; to go upstairs and hug my husband and my two children; to dress myself, feed myself, and take myself to the bathroom; to make breakfast for my family; to dance, to run, to feel the wind in my hair!” she wrote. “Now, I had a new daydream that had a much more realistic possibility of happening. I imagined moving a robotic arm. I was not sure what all I could do with the arm, but I could certainly do what Tim had done—I could use it to touch my husband’s hand, and to gently touch my children’s cheeks. I envisioned doing just that for several hours before I could fall asleep.”

A few days later, a representative from the University of Pittsburgh called to explain that researchers needed to evaluate her to determine whether she was a good fit for the experiment. Not long afterward, she met with Jennifer Collinger, an assistant professor at Pittsburgh’s medical school, who would be directing the study. Scheuermann learned that the experiment would be more invasive than Hemmes’s: a device would penetrate her brain, and hardware would protrude from her head. “They said, ‘You know this includes voluntary brain surgery?’ ” she told me. “I said, ‘Yup, that’s O.K. I’m going to move that robotic arm!’ They said, ‘Well, these two pedestals will stick out of your head, about three-quarters of an inch, and it will be that way until we take them out.’ And I said, ‘O.K., sure. I want to move that robotic arm with my mind! ’ ”

The team explained that they hoped to achieve seven degrees of freedom with the arm—a goal that Schwartz had set for DARPA. “Then they asked me if I had a goal,” Scheuermann recalled. “I sensed they wanted me to say that I wanted to touch my children, or my husband. I said, ‘Yeah, I have a goal. I want to feed myself chocolate’—and I was waiting for them to laugh, but they didn’t laugh. They just looked at each other, and said, ‘Yeah, we should be able to do that.’ So my line said in jest became one of the goals of our study.”

That evening, she let her husband in on her secret by leaving the video of Hemmes on her computer screen. After watching it, he suggested that she apply. Cautiously, she told him that she already had—and, to her relief, he was excited, enumerating the qualities that would make her a good subject. In the coming days, Scheuermann was given an fMRI, to learn whether her motor cortex, after a decade of quadriplegia, was still functioning normally. (Amazingly, it was.) The researchers also fixed EEG sensors to her scalp. As they monitored her brain, she found that she was able to crudely move a cursor on a screen.

Schwartz met Scheuermann only briefly in the early interviews. The project for Revolutionizing Prosthetics involved a large team, including postdocs, a neurosurgeon, and experts on assistive technology. He was close to achieving a career-long ambition, but he was too preoccupied with the details to dwell on it. “Think about going to the moon,” he told me. “You have all these guys worried about scheduling and mechanics. You are more concerned with the minutiae, how everything is going to fit together.” The transition to human trials had brought many unknowns—from new equipment and more complex surgery to the inexperience of the researchers who had joined him. He said, “We had no guarantee we had the skill and capability to get this to work in a human.”

In February, 2012, after months of preparation, Scheuermann was ushered into a hospital room to be prepped for surgery. “Several people greeted me, and I’m sure that behind their masks, they were smiling,” she later recalled. “But I could not see their smiles; I could only see bright lights, gowned and masked figures, and trays of medical equipment. The solemnity of what was about to happen finally hit me.”

The successful implantation of a Utah array requires tremendous precision. After lasers were used to make a 3-D scan of Scheuermann’s head, a location was marked; part of her scalp was shaved, and the neurosurgeon—Elizabeth Tyler-Kabara, who had operated on Schwartz’s animals—cut back a flap of skin. With a drill, she began to cut around the site. Bone shavings piled up around the bit, like snow.

While the surgeon opened Scheuermann’s skull, the Utah arrays were kept on a tray nearby. They were four millimetres square—no wider than a “W” on this page—and manufactured from a block of silicon that had been sliced, chemically treated, and then etched in acid, until the surface resembled a minuscule bed of nails. Each studded square had been shipped to Pittsburgh with its pedestal, made from titanium: the plug port that would be mounted atop Scheuermann’s head. They were tethered together by a cable of ninety-six gold wires, one for each electrode.

Tyler-Kabara carried over a set, and carefully screwed the pedestal into the skull, while the array hung from a ball of beeswax mounted on Scheuermann’s scalp. Once the pedestal was attached, she placed the array face down on the naked cortex, with the microelectrodes poised to penetrate the brain. To push it in by hand risked damaging the tissue or misdirecting the device; a ballistic entry was necessary. A pneumatic injector the shape of a wand was positioned precisely atop the implant. With a forceful blast, it would shoot the array in at twenty-five miles per hour. The shot had to be timed to Scheuermann’s respiration. With each breath, her brain was rising and falling, as it floated in the shifting spinal fluid in her skull, and it was crucial to implant when the cortex rose to maximum height.

Tyler-Kabara asked one of the Pittsburgh researchers on the Revolutionizing Prosthetics team to press the button. Everyone paused, to internalize the rhythm of Scheuermann’s brain movements. Then, suddenly, the injector was triggered. The sound of valves opening and closing filled the operating theatre, along with the rush of compressed air through the injector, the noise a lightning-quick mechanical breath, culminating in a metallic clink. In an instant, the ninety-six electrodes were in, like a soccer cleat going into soft earth.

After the second set of microelectrodes was implanted, the excised portion of Scheuermann’s skull was returned—though it was bevelled at the seams to allow the wires to pass through to the pedestals. The scalp flap, sewn back on, had been carefully shaved to leave enough hair to mask the titanium implants, a little. Because the pedestals would occupy open wounds for the duration of the experiment, antibiotics would have to be applied frequently, to reduce the risk of a potentially lethal brain infection.

The moment Scheuermann awoke, in a recovery room, she felt a debilitating headache. “I hurt, I hurt,” she called out, and then she chastised her family for allowing her to undergo unnecessary brain surgery. The hospital administered a painkiller, and she fell asleep. The following morning, the pain subsided, and she asked for a handheld mirror, so she could see herself. Protruding from the top of her head were the two pedestals: cylinders reminiscent of Frankenstein’s monster, each the diameter of a quarter, and capped to prevent moisture from getting into the contact points. Scheuermann vowed to embrace them. She told herself they were instruments of exploration, and named them Lewis and Clark.

The arrays in Scheuermann’s head were like NASA probes in their first uncertain moments after touching down on a distant planet. The researchers had taken every precaution to insure that they were delivered to the right location on the wrinkled landscape of her brain. During the preparatory fMRI scans, they had asked her to imagine moving her hand and arm, in order to reveal the parts of the motor cortex that corresponded to them. But they couldn’t be sure what she was imagining. This uncertainty, combined with their lack of experience with implanting arrays in a human, meant that the tiny devices might well have been in the wrong place. As Schwartz told me, “Could have been in her face area—maybe we weren’t even in the motor cortex.”

The moment it was clear that Scheuermann was no longer in pain, Schwartz wanted to begin recording from the arrays. “I’m thinking of all the things that can go wrong,” he told me. What if the devices were poised to break? Or Scheuermann began to bleed? Or became infected? He had learned from Georgopoulos to record as quickly as possible, to put information in the bank. “My rule is that it is criminal if you have an opportunity to collect data and you don’t,” he said. “It was my obligation, after she risked her life.” He helped the team load equipment onto a cart to wheel to her room, in a rehab center near the hospital. “It’s, like, O.K.—I’m going to go in,” he said.

Scheuermann learned that the team was coming, and asked her health aide to bring a costume and help her get into it. When the researchers arrived, she was sitting there, poker-faced, wearing pink-and-gray mouse ears, whiskers, and a mouse nose. A tail snaked from the seat of her wheelchair. The scientists quickly began to smile—except for Schwartz, who was visibly displeased.

“This is funny shit, Andy,” she told him. “You should be laughing.”

“But you’re not a lab rat,” he said. “You’re our co-worker.”

Jennifer Collinger unscrewed the cap of one of the pedestals, and a cable was plugged in, connecting a single Utah array to the electronics on the cart. When a neuron fires, the pulse of electricity from one synapse to another can be converted into sound—something between a pop and a scratch. The sound of many neurons firing resembles static on a radio; one neuroscientist has called it a “cerebral symphony.” The team told Scheuermann that they were going to feed the recordings from her brain into a speaker. This way, they could learn instantly if the arrays were in the right place.

Standing before Scheuermann, Schwartz asked her to imagine moving her arms in various ways, but this produced no sound. Trying not to show it, she immediately worried that the surgery had been a failure. Then she imagined moving her index finger—in her tests with the EEG, this had triggered clearer signals—and the system responded with a few pops. To Scheuermann, they sounded “like Rice Krispies when the milk is poured over them,” and they sent the scientists into a flurry of restrained excitement. Schwartz held out a palm, like a boxing coach, and said, “Punch it!” Concentrating hard, Scheuermann imagined striking his hand. The speaker again softly popped. After moving his hand around, directing her to repeat the punch, Schwartz asked her to imagine turning her wrist. The speaker erupted with a symphonic neuronal burst. “There we go!” he said, gleefully. “Which way were you moving your wrist?”

“Up and down,” she said, and grinned.

“Oh, that’s beautiful!” he said.

Leaving the rehab center, Scheuermann was relieved and excited. At home, she had a note tacked up by her desk: “You are more than the body you live in.” Several weeks before the surgery, she had attended a family Christmas gathering, where relatives batted around responses that she could offer to strangers who asked why she had metal pipes sticking out of her skull. One suggested that she act as if the person were seeing things. Someone else made a “Star Trek” suggestion: she should say, “Resistance is futile. You will be assimilated.”

Scheuermann spent the weekend mostly sleeping. On Monday, she made her first trip to the lab, a windowless room at the University of Pittsburgh Medical Center, twenty minutes from her house. A.P.L.’s robotic arm was mounted on a scaffold of aluminum beams bolted to a wall. To protect against a malfunction that might cause the robotics to go haywire, the arm was designed to stop whenever it crossed an invisible perimeter of a few feet. The researchers made sure that Scheuermann was never inside that space.

Seeing the robot for the first time, Scheuermann had decided that it, too, needed a name. To her, it looked like a Hector, and she insisted that the researchers refer to it that way. (They tried to oblige.) “Hector and I had a discussion,” Scheuermann told me. “I would take credit for the victories, and, when I couldn’t do something, he would take the fall for the failures. He agreed—but that might be because I speak for him.”

In one corner of the laboratory was a mission-control center: six flat-screen monitors and a rack of electronics to process information from the Utah arrays. Four computers were set up to crunch the data from Scheuermann’s brain, which could be depicted on the monitors as a grid of waveforms, each one representing a millisecond’s worth of electrical activity from a neuron—the choral polyphony of cognition, divided into a collection of solo voices. The arrays recorded at a rate of thirty thousand times per second. Every six and a half hours that Scheuermann spent plugged in at the lab, the digital equivalent of James Cameron’s “Avatar”—as projected on the silver screen, in 3-D—would pass through the cables attached to Lewis and Clark. To make the torrent of data manageable, the system retained only the information that was scientifically relevant.
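The scale of that comparison can be checked with back-of-envelope arithmetic. The article gives the sampling rate and the electrode count; the sample depth and the assumption that both ninety-six-electrode arrays streamed at once are mine, and the variable names are illustrative:

```python
# Rough estimate of the raw data stream from the Utah arrays.
# Assumptions not stated in the article: 16-bit samples, and both
# ninety-six-electrode arrays recording simultaneously.

ELECTRODES_PER_ARRAY = 96
ARRAYS = 2                   # Lewis and Clark
SAMPLE_RATE_HZ = 30_000      # "thirty thousand times per second"
BYTES_PER_SAMPLE = 2         # assumed 16-bit analog-to-digital conversion

bytes_per_second = ELECTRODES_PER_ARRAY * ARRAYS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
session_seconds = 6.5 * 3600         # "every six and a half hours"
session_gigabytes = bytes_per_second * session_seconds / 1e9

print(f"{bytes_per_second / 1e6:.1f} MB/s, {session_gigabytes:.0f} GB per session")
# → 11.5 MB/s, 270 GB per session
```

Hundreds of gigabytes per session is indeed in the neighborhood of a feature film's worth of high-resolution 3-D video, which is why the system kept only the scientifically relevant slice of the stream.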

Scheuermann drove her wheelchair up to Hector and waited while the researchers connected her brain to a computer. Her health aide helped her with the Times crossword, which she kept attached to a clipboard that rested on a blanket draped over her lap.

Because the Utah arrays were inclined to shift in the gelatinous matter of the cortex as Scheuermann’s brain moved naturally within her skull, they took readings from an ever-changing population of neurons. At the start of each day, as many as thirty per cent of the cells could differ from the previous session. It was impossible to know how the new neurons were tuned for direction or speed, and so every day the system had to be recalibrated. For this, Scheuermann had to do very little. When we observe an action, our brains often respond to the behavior as if it were our own: if you watch a person use a screwdriver, some of the neurons in your motor cortex will appear to fire as if you were driving in the screw. (The motor cortex is often very active when we read.) To take advantage of this mirroring effect, the team had Scheuermann watch Hector act out motions dictated by a soft computerized voice—left, right, up, down. Quietly, a crowd of scientists watched her while she watched the robot.
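The recalibration step described above can be sketched in code. The article does not specify the team's actual decoding algorithm, so this is a minimal linear-decoder sketch, with simulated data standing in for real recordings: while Scheuermann watches the arm move, each unit's firing rate is regressed against the observed velocity, and the fitted tuning is then inverted to turn new firing patterns back into an intended velocity.

```python
# Sketch of daily recalibration with a simple linear decoder
# (an assumption; the team's real algorithm is not described here).
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration session: T time bins, N recorded units, 3-D velocity.
T, N = 500, 96
velocity = rng.standard_normal((T, 3))    # velocity of the arm she watches
tuning = rng.standard_normal((3, N))      # hidden directional tuning of the units
rates = velocity @ tuning + 0.1 * rng.standard_normal((T, N))  # noisy firing rates

# Fit: least-squares estimate of each unit's tuning from the mirroring data.
tuning_hat, *_ = np.linalg.lstsq(velocity, rates, rcond=None)

# Decode: map a new pattern of firing rates back to a velocity command.
decoder = np.linalg.pinv(tuning_hat)
intended = np.array([[1.0, 0.0, 0.0]])    # she imagines "move right"
decoded = (intended @ tuning + 0.1 * rng.standard_normal((1, N))) @ decoder
print(np.round(decoded, 2))               # close to [1, 0, 0]
```

Because as many as thirty per cent of the recorded units could change overnight, the fit had to be redone at the start of every session, from a fresh round of watching the robot.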

Schwartz told me that Scheuermann looked overwhelmed: the implants were still new to her, and the laboratory routines were unfamiliar. Moreover, although the mirroring effect allowed the researchers to estimate how her neurons were tuned, “it was such a crappy estimate that the arm would inevitably make a lot of mistakes.” To help Scheuermann through the beginning of the process, the team had developed two software tools: a directional filter that prevented her from causing the arm to veer off course, and an “auto-controller” that could help guide the arm toward a target. A delicate balance was necessary: too little computer assistance and Scheuermann risked losing her motivation; too much and she wouldn’t learn how to control Hector on her own. Giving her brain the ability to learn was crucial. With training, a person can gain volitional control over the firing of a single neuron to accomplish a goal.
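The balance between help and autonomy can be sketched as a blend of two velocities: the one decoded from the brain and an automatic one aimed at the target, mixed by an assistance gain. The function and parameter names here are illustrative, not the team's:

```python
# Sketch of shared control: blend the brain-decoded velocity with an
# auto-controller velocity toward the target. Names are assumptions.
import numpy as np

def shared_control(decoded_v, hand_pos, target_pos, assist=0.5):
    """assist=1.0: the computer drives the arm entirely;
    assist=0.0: the subject is on her own."""
    to_target = target_pos - hand_pos
    norm = np.linalg.norm(to_target)
    auto_v = to_target / norm if norm > 0 else np.zeros_like(to_target)
    auto_v *= np.linalg.norm(decoded_v)   # keep the user's intended speed
    return (1 - assist) * decoded_v + assist * auto_v

# A decoded command that points slightly off-target is nudged back on course.
decoded = np.array([1.0, 0.4, 0.0])
v = shared_control(decoded, hand_pos=np.zeros(3),
                   target_pos=np.array([1.0, 0.0, 0.0]), assist=0.5)
print(np.round(v, 2))
```

Lowering `assist` toward zero, as the team eventually did, hands the movement over to the subject entirely.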

In Scheuermann’s first trial, the tools were set so that the arm’s motion would be guided almost entirely by computer. Nonetheless, she strained to command the robot. “All along, I had envisioned nothing other than success,” she told me. “And when I first tried, and it didn’t work, I was suddenly struck: ‘Oh, my gosh—this wasn’t in my plan!’ And there were so many people in the room, and I realized that they are doing this with me. The project’s success depends upon whether I could do this, and I suddenly felt like there was this great weight on my shoulders, like I’ve got to do this.” The problem was a technical glitch—one of the computers had crashed—but Scheuermann, focussing only on her attempts to move the arm, thought that the failure was hers. She braced herself and tried again. “The second time I watched the training I was able to move the arm, and I just gasped, and I said, ‘I did it!’ ”

Moving the arm took all her concentration—thinking right, right, right, while visualizing her own arm moving right. She told the team, “There are a couple of times when I especially thought to punch hard and fast, and it came out hard and fast. I picture being in a boxing match.”

“I was kickboxing on Saturday,” one researcher said.

“Maybe you should try this,” she said.

Schwartz had encouraged her to set aside the careful, precise side of her temperament. He knew from his study of psychophysics that she was working against a fundamental biological obstacle. Each reaching movement can be divided into distinct phases. When we want to do something—pick up a coffee mug, say—our arms rapidly begin the gesture before our brains can make visual sense of what we are doing. Schwartz called this the “ballistic” phase, and he told me that it could compose as much as ninety per cent of the gesture. Once the brain can visually comprehend what is happening, the body begins to refine the movement. Just before our fingers make contact with the mug, vision plays a dominant role, making sure that our body is acting with precision. Because Scheuermann’s control of Hector was entirely dependent on her vision, she never benefitted from the ballistic stage. The final phase of motion, full of micro-deliberations and corrections, was her entire gestural process.

By the second day, many technical problems had been resolved, and Scheuermann began to gain command of the arm. Schwartz, working to get her to move boldly, at one point jumped beside Hector and urged her to slap his hand. While she guided the arm to his palm, Scheuermann sat in motionless concentration. “There we go!” he told her. They played cat and mouse. Touching the robotic hand, he said, “Can you feel me pushing back?”—a joke, because it was of course impossible. The room broke out into laughter. Scheuermann’s face was filled with joy. “Oh, yeah!” she said. In just hours, she had achieved what Hemmes had in a month. Schwartz pecked out an e-mail to DARPA on his phone: “Full 3D brain control!!!! High 5 all over the place!!!”

For more than a decade, Scheuermann had not been able to lift a finger. Now, suddenly, she could use the arm to reach out into the space around her. Later that afternoon, while she was directing Hector to various targets—blue blocks mounted on a board—the room grew quiet. She was enveloped by the hum of computer fans cooling hardware. At regular intervals, a gentle robotic voice instructed her to move to the next target. Raising the arm, Scheuermann felt a surge of emotion. She shut her eyes and tightened her lips. Despite her determination to maintain composure, a tear rolled down her face. The arm stopped its ascent. “You feeling O.K.?” a researcher asked.

Scheuermann was flooded with memory. She was suddenly back at her home in California, in 1998, at the onset of her paralysis. Speaking almost in a whisper, she told the researchers, “I’m just standing in the kitchen, and I am trying to reach up and get a bowl, and I can’t.” Later, she explained to me what she was remembering: “It was the first time I had noticed weakness in my arm, and it was a crushing blow—My God, what if this is spreading?” Scheuermann’s health aide wiped away her tears, and the experiment proceeded.

In the following days, her performance improved further. As the researchers were correcting their algorithms, her brain was correcting its responses to them. She began to refer to Hector as “my arm,” a slip of the tongue that soon became habit. “It happened without me realizing it,” she told me. “I just said, ‘Let me see if I can get my arm to do that.’ Or, ‘My arm’s not going that way.’ ”

At one point, Schwartz walked over to her. “You know what the cool thing to me is?” he said. “It’s starting to look like coördinated, graceful movements—which is what we’re after.” As he spoke, he arced his arm with exaggerated fluidity, to emphasize the idea. “That’s the whole point of doing this. We don’t want stupid-looking robot movements. We want your own movements, which I know are graceful!”

“Oh, absolutely! ” Scheuermann said.

Grinning, Schwartz assured her, “We’re getting there.”

Three times a week, Scheuermann made a pilgrimage to the lab and worked there for four hours. Within about a month, she was able to make a fist with Hector’s fingers, to pinch, and to grasp. After two months, the researchers turned off the assistive algorithms, giving her full control. At three months, she achieved Schwartz’s goal for DARPA, seven degrees of freedom—demonstrating that she could open and close the hand, while orienting it with turns of the wrist. She was progressing so quickly that the researchers could barely keep up with her. “I remember, I went for a walk and said to myself, ‘Damn, it really works!’ ” Schwartz told me. “It kind of dawned on me. It was like—wow.”

With each trip to the lab, Scheuermann’s movements grew more intuitive, and it became hard for her to describe what she was doing to move the arm. “I just learned to look at the target, and Hector went there,” she told me. She sensed that she was tapping into a long-neglected part of herself. “I had learned how to reach for an apple fifty-one years ago, and it still remembered—and that delighted me.”

Six months into the trials, the team was ready to allow her to feed herself. Someone was sent out to buy a few Dove chocolate bars. Because the robotic arm often overheated, they were frozen so that they wouldn’t goo up in its grip. “We tried it the day before Thanksgiving,” Scheuermann told me. “The bar was half unwrapped, and the first couple of times I hit the side of my mouth, or it was coming towards me and I bit air, and it went off somewhere.”

Schwartz saw in her the human instinct for self-preservation. “You can imagine, it’s rather frightening,” he told me. “You have this robot, you think you can control it, but maybe you can’t, and it generates forty pounds of thrust, and you are bringing it to your face. You could tell she was worried, but at the same time she wanted to eat the candy bar.” After several tries, Scheuermann got the chocolate close enough. “I was only able to get a nibble before Hector—unbidden by me—jerked away,” she said. “It was a small nibble. But it was the best chocolate ever! ” She lingered on that last word with a schoolgirl’s singsongy drawl, blending excitement with satisfaction. After her nibble, she had pumped Hector’s fist in celebration.

IV.

Since its inception, Revolutionizing Prosthetics had maintained the goal of writing sensory information into the brain—to give people like Jan Scheuermann the ability not only to manipulate a robotic arm but also to feel with it. Although this would make for a more natural prosthetic, it also posed profound questions. If extracting information from the brain could expand a person’s sense of agency, wouldn’t inserting information—interfering with the basic structure of cognition—risk diminishing it?

In the nineteen-sixties, José Delgado, a Spanish neuroscientist at Yale University, had designed a radio-controlled electrode that could be implanted deep inside an animal’s brain. With conditioning, Delgado found, the probe could be used to diminish aggressive behavior in monkeys. In 1964, he travelled to Spain and implanted his electrode in a bull; then, theatrically, he faced down the animal in a bullfighting arena, with nothing but a matador’s red cape and a handheld radio switch that controlled the device. As the bull charged, he activated the implant, dramatically causing the animal to appear to lose interest in him. Later, Delgado used the electrodes on psychiatric patients. According to Scientific American, “With the push of a button, he could evoke smiles, snarls, bliss, terror, hunger, garrulousness, lust.” In a book titled “Physical Control of the Mind: Toward a Psychocivilized Society,” Delgado imagined a utopia of machine-modulated cognition. One critic, testifying before Congress, described it as a harbinger of “technological totalitarianism.” It was evident that such technology could be misused. After the race riots of the late nineteen-sixties, two Harvard neurosurgeons proposed that neural electrodes could be used to quell social violence. In 1972, a Tulane psychiatrist used them to try to create “heterosexual arousal” in a gay man.

DARPA was conscious of this history. In the summer of 2012, a new director, Arati Prabhakar, arrived, and after she was briefed on the neuroscience research she quickly decided that the agency needed to pay greater attention to its societal impact. Neuroscientists outside the program were already developing a brain implant that could give a rodent the ability to perceive infrared light, and had even begun wiring animal brains together. “I don’t want to live in a world where technologists make up the answers,” she told her staff. She consolidated DARPA’s biological programs into a unit called the Biological Technologies Office, headed by Geoff Ling, and she instructed him to bring together an advisory panel of ethicists, philosophers, and practicing neuroscientists. “Like every powerful technology, this has the potential for good or for ill,” she told me. “That’s where we were: thinking about these issues—the wonder and the marvel of the research, and the weight of it.” With greater deliberation, the agency proceeded.

In 2014, the team at Pittsburgh took on a second quadriplegic subject, a twenty-seven-year-old named Nathan Copeland, to see if they could channel sensory information directly into his brain. The robotic arm that A.P.L. had designed could accommodate sensors on its fingertips, and, though Scheuermann did not have the right implants to use the feature, Copeland would: two additional Utah arrays, embedded in his somatosensory cortex. His nervous system would form a closed circuit with the robot.

The researchers knew that evoking natural sensations posed a greater technical challenge than moving the arm. While it is possible to splash electricity near a cluster of neurons and trigger some kind of sensory reaction, it is not easy to translate such stimulus into a nuanced perception—largely because today’s implants only crudely approximate the way neurons interact with one another. Sensory information is processed among brain cells in complex patterns that continually change. No known implant can work in that way. The most successful neural prosthesis, the cochlear implant, resembles its biological analogue about as closely as a manila folder resembles a laptop computer. Its limitations can be partly overcome because it is wired to the auditory nerve; by the time its information reaches higher structures in the brain, it has been refined. But, in Copeland’s case, the sensory data would be delivered directly to his cortex, which meant that it would not benefit from that natural process of refinement.

Although sensors were placed at the fingertips of the robotic arm, Copeland experienced their stimulation near the base of his biological fingers. Sometimes it felt like pressure applied to his skin; at other times, the pressure seemed to be emanating from within, from his bones. “So that’s kind of weird,” he told me. Wherever the sensations originated, they felt both familiar and strange. Copeland had invented a taxonomy of the tingles that he felt: “sparkly” or “rapid-tappy” or “drilly-buzzy” or “pinpointy.” I asked him about the sensation, and he told me, “It’s not like touching an electric fence. It’s not like menthol, a cool tingle. It’s not pins and needles—that has an uncomfortable component to it.” He sighed. “It tingled! I don’t know. It’s super-weird!”

When Copeland was able to feel with the arm, his performance with it seemed to improve. But, in general, his approach differed from Scheuermann’s. Whereas she strove for exactitude, he had a gamer’s instincts, and often tried to barrel through tasks. He was eager to test his abilities in nonscientific experiments, to plug his brain directly into Final Fantasy XIV—“just straight-up computer-game interface.” One afternoon, he told me, “I always say, why don’t we call Guinness, and do a bunch of stupid crap—there you go, world record, never been done! You just have to write it down: ‘O.K., he slung a pencil around!’ ” With that, he turned to his next task, an experiment that resembled a hearing test. At the sound of a beep, the researchers sent electrical pulses into his cortex, to measure his brain’s reaction. All Copeland had to do was sit there and convey what he felt.

It was not hard to see that immersion in a video game could offer someone who was paralyzed a profound sense of liberation. Before Scheuermann completed her experimental trials, she experienced something like it, and she told me it was her most significant moment at the laboratory. She was working with the Pittsburgh team—by then achieving ten degrees of freedom with the arm—when, one afternoon, Geoff Ling met with A.P.L.’s project director, Mike McLoughlin, at a café in Maryland to chat about the technology. “We were thinking beyond prosthetics,” McLoughlin told me. “Say you had a Nest thermostat—you might be able to communicate with that and change the temperature, turn lights on, work with your computer, drive your car. This starts to offer the potential to fundamentally change the way we interact with machines.”

As the two men batted around ideas, McLoughlin mentioned that A.P.L. had built a flight simulator for the F-35 fighter plane, and suggested that they plug Scheuermann’s brain into it. Ling agreed. As he saw it, DARPA’s role was to open doors, not to refine ideas to perfection. “We’re a defense agency, right?” he told me. “So, yes, we are doing something very wonderful to help people who have quadriplegia, but our thought also was: what demo can we use to show people where this could go?”

Plugging Scheuermann into the flight simulator would unquestionably get the military’s attention. Thought-controlled aviation is an idea with a history. In the nineteen-seventies, DARPA had considered such a process, using EEG. Tony Tether, a former DARPA director, told me that at high speeds the g-forces imposed on pilots can make physically maneuvering an aircraft impossible. “If all the pilot has to do to move the airplane is think, you can put him in a cocoon in that airplane, which would protect him at higher g’s, and therefore the airplane would be able to turn faster,” he said.

DARPA discontinued the line of inquiry; EEG was not good enough. But the idea lived on in a Cold War thriller, “Firefox,” which told the story of a fictional Soviet fighter plane, the MiG-31, in which, as one character explains, “You are literally plugged into the weapons-system.” “Firefox” became shorthand for a goal in military neuroscience: unifying warrior and weapon in a hybrid being. Karen Moxon, a pioneer of brain-computer interfaces, told me, “I did my Ph.D. in aerospace engineering at the University of Colorado. The Air Force cared about muscles, and growing plants in a space station, and Firefox.”

The emergence of drone warfare, in the past decade, inspired darpa officials to envision an updated role for brain-controlled aviation. A program manager who worked under Geoff Ling when the idea of the F-35 simulator was being developed told me, “It really is a twentieth-century notion to think of the function of the pilot as controlling only one airplane. In the future, it is highly likely that we will move to a one-to-many relationship, so that the pilot controls maybe fifteen or twenty assets—to be an air-battle commander, rather than a stick-and-throttle jockey.” With brain implants, one person could conceivably command a swarm.

Ling and McLoughlin had hoped to have Scheuermann try the F-35 simulator, but because the software was classified it could not be brought to Pittsburgh. Instead, the team purchased a similar commercial product, along with a simpler program for training. The first simulation was with a single-engine airplane, a Mooney Bravo. As with the robotic arm, Scheuermann began by visualizing: using imagined movements of her wrist in two degrees—left and right, up and down—to guide the plane. “It became instinctive very quickly—I would say within two minutes,” she told me. “If I wanted to go down, it just went down.” Her first real trial with the Mooney began in midflight; she seemed to hover above the tail wing. Even though the image onscreen suggested that she was looking down on the plane, she was overcome with the feeling that she was inhabiting it as it soared. “I was up and out of my chair—I was in the clouds, I was out of my broken body,” she told me. “I was flying, and it was even more exhilarating than eating chocolate—and that’s saying something.”

She flew through a gorge. Over Pittsburgh, she tried to find her house, but the simulator did not render the landscape in sufficient detail, so she decided instead to visit major world landmarks. Thinking that the Great Wall of China would be monotonous, she asked if she could explore France and Egypt. This was possible. As she flew, she noticed that the simulator not only allowed her to move fluidly through space but also offered up a universe of digital phantasms: objects without solidity, weight, or mass. Scheuermann took great pleasure in flying through things. “I took off from Charles de Gaulle Airport and flew through the Eiffel Tower,” she told me. “I flew through the pyramids, and over Alexandria!” The experience was rapturous. With the help of speech-transcription software, Scheuermann keeps a journal; that evening, back from the lab, she wrote:

I flew a plane today.

I freaking flew a plane today!

I am 54 years old, I’ve been a quadriplegic for 14 years, and I flew a plane today! In my mind, I’m still flying.

Andy Schwartz did not attend the session. In his view, Scheuermann had demonstrated far more sophisticated brain control with the robotic arm; by comparison, flying the airplane, using only two degrees of freedom, was scientifically empty, epitomizing the theatrical showiness that he had long avoided. But officials at A.P.L. were ecstatic. In addition to the Mooney, Scheuermann had flown the simulated F-35, though with greater difficulty. “Jan Scheuermann was able to fly,” McLoughlin told me. “She embodied that plane. That’s really powerful—really powerful.”

At darpa, Geoff Ling screened a video of Scheuermann flying, and described it in momentous terms. The experiment, he believed, prefigured evolutionary changes to the human organism. “Don’t you understand what has happened?” he told me. “We just got rid of the confines of our bodies. That is taking mankind to another level, brother! Can you imagine a body with four arms? Can you imagine having two more eyes? The body we have been given is a biological thing. We could totally break free of it.” Ling stepped down from his position in 2015, but his successor, Justin Sanchez, decided to keep pursuing the transhumanist lines of inquiry. darpa has since been investing in a widening portfolio of neural technologies, most significantly an effort to develop neural implants beyond the Utah array. A new program manager was hired to accelerate the development of brain implants in the private sector. “The door to the future was opened—tinged with some challenges,” he told me. “How do we solve the problem so that it is not just a hundred wires but something more powerful, with higher bandwidth? And can you make it wireless?”

In 2016, darpa began pushing the researchers at Pittsburgh for more flight-simulator experiments. A team from A.P.L. returned to the university with a new question: Could a pilot feel aspects of flight? Software was designed to relay hazards and preferred navigation paths directly into Nathan Copeland’s sensory cortex. Rather than looking at a dial or listening for commands, he would simply feel the information.

In other simulations, A.P.L. strove to give Copeland the ability to fly, simultaneously, two drones based at its headquarters in Maryland—a first step toward commanding a swarm. “We wanted to let Nathan control some real aircraft, quadcopters,” McLoughlin told me. The plan was to conduct the experiment remotely, with Copeland’s brain connected to the aircraft over the Internet. But the scientists at Pittsburgh resisted, arguing that, unlike the flight tests in virtual reality, which offered insight into how a paralyzed person could access a computer, commanding physical drones in a distant and uncontrolled setting was likely to yield little scientific information of value. Furthermore, it could violate ethical commitments to oversight boards, and divert lab time from the project’s core mission: assistive technology for the impaired. “They said, ‘That is not academic research,’ ” McLoughlin told me.

Soon afterward, A.P.L. demanded that the hardware that processed data from the Utah arrays be returned. While the university scrambled to acquire replacements, A.P.L. pushed ahead with brain-controlled aviation. In the new protocol, McLoughlin told me, human subjects will have Utah arrays implanted in both brain hemispheres, for experiments that could allow them to command multiple drones. Officials at darpa have taken to calling this line of research Mind Flight. When I asked Justin Sanchez about the agency’s shift away from the University of Pittsburgh, he said, “We knew that A.P.L. could explore these other problems—controlling aircraft, and things like that—so we gave them a shot. We just made a strategic management decision to go in that kind of a direction.”

The Pittsburgh researchers’ contract with Revolutionizing Prosthetics has now expired, but they have secured millions of dollars from the N.I.H. to continue and expand their project. Schwartz, meanwhile, has resumed his quest to decode the cognition of movement. The more he delves into the brain, he told me, the more complicated it appears—filled with nonlinear patterns that resemble changes in the weather, simultaneously noisy and orderly, with neurons passing on information in crosscurrents and feedback loops. The trajectories that he discovered might be regarded as thoughts in the raw, but he is reluctant to describe them that way. “There is a little nuance here,” he said. “Just because we get a very nice signal, it doesn’t mean that’s what the motor cortex is doing.” Beneath the detectable patterns, he suspected, lay a deeper order.

Schwartz has become especially curious about an unexpected difficulty that Scheuermann faced. Whenever she tried to pick up an object—a ball, a plastic cone—the array detected a surge of neuronal activity, and the arm backed away, as if repelled by a force field. “If I had her close her eyes, she could grip it,” Schwartz told me. “If I moved the object out of the way, then she could close the hand, too. So that showed pretty convincingly that something about the concept of object interaction was beyond our decoding capability.” To correct the problem, the team dampened the surge. “It kind of worked,” he explained. Since then, his lab has been trying to understand why Scheuermann’s brain reacted as it did. “It’s my passion now,” he told me. “The whole reason for moving your arm is so that you can actually do something.” A scientific journey that began with an investigation of the cognitive laws of motion was now leading him to new questions: How do we understand the objects around us—their weight, their fragility, their rigidity—before we reach out to touch them? In effect, how do we see the world?

V.

Jan Scheuermann was abruptly separated from Hector a few years ago. One day, researchers who were preparing to plug her into a computer noticed something alarming: her skin was pulling away from one of the pedestals, revealing a wire beneath the scalp. The opening elevated her risk of a deadly brain infection, and the researchers began to consider options: repair the skin with plastic surgery or terminate the experiment. Schwartz, who feared that the wire could carry bacteria into the brain, recommended that the devices be removed at once. Within two days, the researchers scheduled her return to the operating theatre.

Scheuermann appreciated that the researchers were treating her health as paramount, but she also recognized that terminating the experiment would mean that her role as a cognitive explorer would come to an end. The surgery was so sudden that she did not have any time to return to the laboratory—to use the brain interface one last time, or even to look at the equipment.

Her life as a lab rat had altered her view of herself. While the project was under way, she had watched footage of herself in the lab, wondering, Can this person who appears to be so diminished really be me? She watched one video several times, looking for clues. “Each time, I began to accept a bit more the fact that that woman was, in fact, me,” she wrote in her memoir. “I looked past the broken body and saw the shine in my eyes, heard the joy in my voice, and listened to my own enthusiasm.”

Soon after her pedestals—Lewis and Clark—were removed, her husband and children took her to the movies. “The sun was shining on the hillside across the river, highlighting the glorious colors of autumn,” she wrote. “I was just basking in the company of my family and my beautiful day. I remember thinking how beautiful life was, and how blessed I was. Then, in the twenty-minute drive home, it happened. I went from that blissful happiness to being a sobbing, blubbering mess. I was suddenly overwhelmed by the loss of Lewis and Clark and what their absence meant. It meant I would never control Hector again. It was all over. I might visit the lab, but I would never again be hooked up, would never again make Hector move. The full measure of that loss hit me, and I cried.” Scheuermann yearned to see the arm one last time, to speak to it. “I had to tell him that I would miss him, and I knew he would miss me. I thought that Hector needed to hear that we had had a wonderful time together, but that it was all right for him to have a good time with someone else now, and to achieve new things with that person. I didn’t want Hector to feel that he was betraying me by making a connection with a new subject. As I thought this over, I realized what I really needed was to tell myself all that.”

In time, her sense of loss dissipated. She was happy to learn that Copeland had broken some of her records. She thought of the two of them as twin explorers—No. 001 and No. 002—pushing forward into an uncharted zone of human experience. She was filled with a sense of gratitude and meaning. “I did it!” she told herself. “I moved a robotic arm just by thinking about it. I maneuvered Hector’s wrist and fingers, and we made technological history! Now I get to talk about it. I get to share with people the excitement of our study, the thrill that I experienced, and the advances that we made. How lucky—how blessed—can one girl get?”