Elon Musk’s Neuralink blurs lines between man, machine
This week Elon Musk’s company Neuralink showed off a robot that can connect electrodes to the brain more accurately than a human can. The company also promised brain implants that can capture far more neuron activity than has been possible until now.
To medical researchers, this raises the prospect of better care for patients. Electrodes have already been implanted to stimulate portions of the brain in more than 100,000 patients with certain diseases. But the technology also raises ethical questions. One is whether patient safety will take a front or back seat amid a flurry of corporate investment. Another concerns the potential blurring of boundaries between man and machine.
Mr. Musk argues the technology is imperative if humanity is going to keep up with artificial intelligence. He said that, “with a high-bandwidth brain-machine interface, … we can effectively have the option of merging with A.I.,” instead of being “left behind.”
Where some scientists warn against unwarranted fear of the technology, others emphasize the need for wider public debate about the ethics involved.
It sounds far-fetched: With a computer chip implanted in their brains, humans could boost their intelligence with instant access to the internet, write articles like this one by thinking it rather than typing, and communicate with each other without saying a thing – what entrepreneur Elon Musk calls “consensual telepathy.”
Of course, it’s not really telepathy. It’s radio waves transmitting data from one chip to another. And it’s still futuristic. But it raises important ethical questions, as academic researchers and industry scientists pursue a path that could lead to the merging of human thought with artificial intelligence through the routine use of brain implants.
This week, Mr. Musk’s company Neuralink revealed details of how its technology has pushed forward that future.
“It is a big jump,” says György Buzsáki, a neuroscientist at New York University’s Langone Medical Center. Other scientists have pioneered many of the techniques that Neuralink has used. “What is impressive is making an industrialized version of this procedure,” eventually perhaps creating a product that could speed the spread of the technology.
The entry of companies – and especially the flow of venture capital into the field – raises some important ethical issues. While some wrestle with big philosophical questions like the further blurring of boundaries between man and machine, scientists are focused on the more immediate questions of patient safety and corporate priorities.
For radically different reasons, doctors, academic researchers, and industry scientists are moving to plant increasingly sophisticated technology into the brain.
A flurry of new research
For doctors and many academics, the goal is to mitigate the effects of disease. For some four decades, they have worked on implants that stimulate portions of the brain to treat the symptoms of Parkinson’s, for example, and depression. More than 100,000 patients worldwide now have these implants. The systems are relatively straightforward. They zap the brain with small amounts of electricity.
Medical researchers are now working on more sophisticated systems that can detect and record when the brain’s neurons fire and, hopefully, interpret what that activity means. Early work with rats and monkeys suggests paralyzed people could use such systems to move a limb or control a computer in order to communicate.
A host of companies are moving in to supply this medical market with implants carrying 100 or so electrodes. Neuralink has created a 3,000-electrode implant that it says it can scale up to 10,000 electrodes. That jump in electrodes should allow its system to capture far more neuron activity.
The company also showed off a robot that can connect the electrodes to the brain more accurately than a human can. Mr. Musk wants permission from the U.S. Food and Drug Administration to have one of his chips implanted in a human patient by the end of next year.
The role of private companies in this kind of research and development is controversial. On the one hand, companies can standardize products and services, improving quality control and, thus, safety. And the influx of funds can speed up the research and deployment of devices, researchers and neuroscientists say. On the other hand, by focusing on products and profits, the companies risk giving a lower priority to patient safety.
That’s one reason François Berger, a neuro-oncologist now at a teaching hospital in Grenoble, France, left his job as director of a public-private partnership known as Clinatec. The safeguards for patients in the entrepreneurial environment weren’t high enough, he said in a 2018 interview. “We have an obligation to a slow science.”
“The thing that worries me is if they make a bad mistake,” says John Donoghue, a widely recognized neuroscientist, now at Brown University, who founded an early startup to work on computer-brain interfaces. “When somebody does something wrong, it can shut down the enthusiasm for the entire field, even when it’s not warranted.”
Humans in a race with A.I.?
The medical market is now large enough for companies to make a profit, Dr. Donoghue says. But some visionaries, like Mr. Musk, dream of a much larger market sometime in the future where ordinary people might opt for a brain implant to boost their intelligence in the way some now have their eyes lasered to improve their eyesight. For him, such technology is imperative if humanity is going to keep up with artificial intelligence.
“Even in a benign A.I. scenario, we will be left behind,” he said at Neuralink’s coming-out presentation Tuesday in San Francisco. But “with a high-bandwidth brain-machine interface, I think we can actually go along for the ride and we can effectively have the option of merging with A.I.”
“It’s different worlds,” says Helen Mayberg, a neurologist at Mount Sinai in New York who pioneered the use of deep-brain stimulation for treatment-resistant depression. To her, the imperative to move forward is clear: She says she gets multiple emails a day from people diagnosed with the disease wanting to receive the technology.
“Why are we talking about enhancement [of people who are well] when we’re not doing such a great job of even having delivery of care and parity of mental-health services?” she asks. “That’s a disconnect for me.”
And that future may be further off than many optimists believe. Even with the advances in A.I., linking it with a human will require solving multiple problems, including such mundane things as finding materials capable of functioning in a body for a decade or more, says Dr. Donoghue. Then there’s the market challenge: Will the technology add enough value that people will really want it?
“I use my phone for my short-term memory; I don’t need it plugged into my brain to do that,” he says. “Your mouth works at about the speed of thought. So you’re going to have to beat that [with an implant]. And you could be a little faster, but are you going to go around with all this hardware in your head just to be able to interact with your computer 20 percent faster? … I think we are really a long way off before you get a good enough interface that it’s going to give you a significant advantage.”
Other technologies, such as plastic surgery, have moved from strictly helping accident victims to enhancing body features for anyone. “Is it fair to society that some people look nicer because they can afford it? I don’t know,” says Dr. Buzsáki, the New York University neuroscientist. In the same vein, he says the spread of brain implants “has to be discussed by a wider group of people” than just scientists.
If nothing else, the presentations by companies like Neuralink will help bring that discussion to the fore. But the corporate activity also shows the risk that profit motives could dominate the discourse.
“I honestly believe in the separation between money and academic research,” says Dr. Buzsáki, who has worked in both worlds. “And the reason for that is that the moment money is involved, then that controls a lot. I’m not saying it’s overriding morals, but history says [that] most of the time it does.”