Elon Musk wants to read your mind. You shouldn't let him.
Back in February, the brain-computer interface (BCI) company Neuralink heralded the success of its AI-based implant Telepathy. “Patient is able to move a mouse around the screen by just thinking,” said founder Elon Musk. Musk called for more participants in Neuralink’s human-subjects trial, presenting a CGI video that laid out his vision of speed-reading patients’ brain waves so that they could not only regain capabilities lost to disease and injury but also boost their innate capacity to think and communicate through AI-based hyperintelligence. The video entreated quadriplegics to “redefine the boundaries of human capability.” Yet within weeks, the implant had malfunctioned and Neuralink was scrambling to save the experiment and its reputation.
In the wake of these setbacks, many people have been asking where neural AI is taking us. If companies like Neuralink want all of us to have AI in our brains, can safety and privacy truly be achieved? Who will get access to a person’s thought data? Will it be possible to draw a line between life-saving technology and purely elective enhancement?
From my experience tracking BCIs and AI in the market, I am certain that there is no limit to the invasiveness of these technologies. And though many companies presently tout their technology as an absolute medical necessity, there is no real division between medical therapy and brain enhancement.
Neurotech innovators like Neuralink, Blackrock Neurotech, BrainGate, and Synchron, the leading BCI makers that presently have FDA clearance to conduct clinical trials in quadriplegics, collect and process brain signals irrespective of what kinds of thoughts those signals encode. These signals span routine, automated thought processes as well as emotionally laden decision-making. The BCIs use AI to interpret the unique valence of each thought, reading what a user wants to say and do with their mind. There is no way to identify which neural pathways these trials implicate, because machine intelligence is designed to read signals exhaustively, using them as the reference dataset for all of its text and action prediction.
Machine intelligence is not selective and particular like human intelligence; it is quantitative and all-encompassing, based on systematic analysis of every tokenized data point it can get its hands on. Exercising machine intelligence on our minds opens them up to endless capitalization. Just as all of our online activity is tracked and mined for the algorithms that deliver targeted advertising, so too will our brain activity be once BCIs are in place.
Human intelligence, by contrast, is qualitative and iterative, based on scrupulous analysis of new and old information steeped in sociality and emotion. We are constantly acquiring new information from our social interactions and collective knowledge base, comparing it to the personal knowledge stored in our memories, and interpreting it in order to act in ways we believe benefit us and those we care about. Exercising human intelligence is simultaneously creative, rational, emotional, and social. Allowing companies to relentlessly tokenize and capitalize on that complexity will only give them carte blanche over our lives.
For quadriplegics who need to restore function in order to survive, the all-access, open-ended, medical-meets-enhancement surveillance of BCIs is a no-brainer. That is why the FDA has designated Telepathy, Blackrock Neurotech’s Neuralace, BrainGate’s BrainGate2, and Synchron’s Stentrode “breakthrough devices” and fast-tracked them for human testing and commercialization. Those who are not living with a debilitating or life-threatening illness, however, will soon be handing companies the passcode to all of their thoughts, memories, and feelings. The risks of that far outweigh any supposed rewards of brain enhancement.
BCI makers are crystal clear that the long-term goal of these technologies is a medical-enhancement cocktail for the masses, in which AI boosts mental and physical capacities and even controls the emotions of the consumer. Musk, for one, initially launched Neuralink as a way for billionaire “transhumanists” like himself to meld with the very technology they have created, technology they believe is superior to our intelligence and will one day usurp it. Perhaps because of the dystopian implications of this vision, less than a month into clinical trials, Musk refocused Telepathy on commonplace conditions such as eating disorders, mood disorders, and learning disabilities, taking what developers are calling a “whole-brain” approach to enhancing everyday human wellbeing.
Blackrock Neurotech financier Christian Angermayer has similarly touted Neuralace as a potential salve for anxiety, depression, and all the day-to-day challenges that prevent humans from putting their best foot forward. His “Next Human Agenda” to engineer hyperintelligent, “happy” humans drives all of his investing choices at his private investment firm Apeiron (Greek for “infinite”), and inspires the mission of his BCI-making VC firm re.Mind, which also finances gene therapies that are supposedly “enabling millions to find freedom, fulfillment, and happiness.”
Another of Blackrock Neurotech’s financiers, Machine Intelligence Research Institute and Singularity Summit funder Peter Thiel, similarly sees mass-marketed BCIs as the future of human intelligence, one in which humans rewire their brains to feel optimistic and content while enhancing their ability to sense meaning and potential in their lives. Promising an end to hype and hyperbole, Thiel has made bringing trial successes in neurological biotech to market a cornerstone of his investment framework.
Rather than redefine the boundaries of human capability, these investors actually seek to remove the final barrier to accessing your thoughts so that they can sell consumers more so-called “medical” enhancements.
AI in your brain may seem far-fetched, but these are the financial priorities of the leading investors of our time. I have been in meetings with company founders who complained about the FDA’s firewalls against innovative neurotech, and who called on consumers of their technologies to offer up open-access “Big Data” about themselves to fuel companies’ private drug and device development.
As it is, the FDA cannot fully prevent the whole-brain use cases that private funders are proposing. When device makers want to bring a new technology to market in the US, they file an Investigational Device Exemption application with the FDA. The agency first clears the company for clinical trials and later must approve the technology for commercialization. It prioritizes neurological devices like BCIs as high-risk, high-reward interventions for people living with ALS, Alzheimer’s, Parkinson’s, stroke, and spinal cord injuries. However, once trials are underway, companies engage in exploratory research beyond the immediate application as they interpret and stimulate signaling throughout the brain.
We can see this process unfolding with Neuralace, a radical leap from Blackrock Neurotech’s breakthrough-designated MoveAgain BCI, which successfully restored communication and the sense of touch to quadriplegic trial subjects. Set to come out later this year, Neuralace will achieve a new level of whole-brain analysis by dusting the brain with ultrasound-based “smart” chips. This decentralized implantation system will be able to read thoughts as well as stimulate activity in any region of the brain.
While none of these AI-based BCIs has yet been brought to market, they eventually will be, bringing with them immense power over our minds. At that point, it will be too late.
Want to learn more about the future (and present) of AI? Be sure to follow me on TikTok.