Brain-computer interfaces (BCIs) are technologies that directly connect the human nervous system to external computing devices via invasive or non-invasive electrodes. Invasive BCIs require surgical implantation of electrodes directly into the brain, while non-invasive BCIs like EEG headsets are worn externally, providing safer but typically less precise brain signal measurements. Some BCIs primarily function to deliver assistive stimuli directly to the brain, as in deep-brain stimulation for Parkinson’s disease and neuroprosthetic cochlear implants for hearing-impaired individuals. Others are designed to enable the brain to control external devices through mental commands alone, as when amputees control prosthetic limbs or paralyzed individuals move computer cursors with their thoughts.
While the central motivation for creating BCI technology is therapeutic, BCIs also have the potential to be used for enhancement purposes by non-impaired people. The consumer market for neuroenhancement can be traced back to at least 2007, when the company NeuroSky introduced the first commercially available, non-invasive BCI to the public (the ‘Mindset’), which was designed for gaming, education, and meditative uses. In 2016, the billionaire entrepreneur Elon Musk created Neuralink with the long-term vision of producing invasive, enhancement-based BCIs at scale. Musk believes that to remain relevant in an era of superintelligent AIs, it may be necessary for humans to undertake neuroenhancement via BCIs (i.e., ‘if you can’t beat them, join them’). While Neuralink has received the most headlines, it is far from the only company developing BCIs. Other prominent players in the invasive BCI space include Synchron, Paradromics, and Precision Neuroscience. Key companies working on non-invasive BCIs include OpenBCI, Neurable, NextMind, and Kernel. There is also the theoretical but promising field of neural nanorobotics, which proposes using nanorobots to record and manipulate neurons at the cellular level for both cognitive therapy and enhancement purposes.
The rise of the BCI industry suggests that we may in the future create something approximating what the philosopher Michael Lynch (2014, 2016) calls neuromedia: “Imagine you had the functions of your smartphone miniaturized to a cellular level and accessible by your neural network. That is, imagine that you had the ability to download and upload information to the internet merely by thinking about it, and that you could access, similar to the way you access information by memory, all the various information you now normally store on your phone: photos you’ve taken, contact names and addresses, and of course all the information on Wikipedia” (Lynch, 2014: p. 299). This quote from Lynch speaks to the potential of BCIs to facilitate enhanced human-computer interaction. All told, however, there are at least three holy grails of BCI-based neuroenhancement:
A) Enhanced human-computer interaction: BCI technology could dramatically increase the bandwidth of information transfer between the human mind and computational devices, allowing for instantaneous access to and control of digital information.
B) Enhanced human-human communication: BCI technology could enable direct brain-to-brain information transfer, allowing for a form of technological telepathy where thoughts and conscious experiences can be shared without the limitations of verbal and written language.
C) Direct neuromodulation: BCI technology could enable precise control over brain states, allowing for on-demand enhancement of mood, focus, creativity, or other cognitive and emotional capacities.
Before proceeding, I want to emphasize that there is no stark line between cognitive therapy and cognitive enhancement in the context of BCIs (or more generally for that matter). Consider transcranial direct current stimulation (tDCS), a non-invasive technique sometimes used to treat depression and chronic pain and to support stroke rehabilitation by applying weak electrical currents to specific areas of the brain. Researchers are currently exploring the potential of tDCS to enhance working memory in healthy individuals. Now, if a person with, say, borderline cognitive decline uses tDCS to improve their working memory, does this count as cognitive therapy or enhancement? Similarly, if a person with borderline ADHD symptoms uses neurofeedback techniques to improve their focus, does this constitute a kind of therapy or an enhancement of normal cognition? There is no definitive answer to these questions because the line between cognitive therapy and cognitive enhancement is (objectively) blurry.
The rest of the article offers an overview of the main topics in the philosophy of BCIs. First, I explore some metaphysical and epistemological questions surrounding personal identity, cognitive atrophy, extended cognition, and the possibility of overlapping minds via brain-to-brain interfaces. Then, I consider moral implications of BCIs related to normative ethical frameworks, neurorights, neurohacking, and neurocapitalism. Finally, I reflect on broader societal perspectives, touching on social justice concerns, religious viewpoints, and the geopolitical landscape of BCI development.
Metaphysics and Epistemology of Brain-Computer Interfaces
Personal Identity and Authenticity
A central metaphysical concern is the impact of BCIs on an individual’s personhood. One of the main worries here is that BCIs may undermine personhood by artificially altering essential “person-making” psychological or emotional characteristics of the user. For example, BCI-induced mood enhancement may compromise authenticity insofar as experiencing one's full self requires engaging with the full spectrum of human emotions in natural proportions, including negative ones. Relatedly, there are risks of possible unexpected psychological and emotional side effects of neuroenhancement analogous to the risks of unintended genetic side effects associated with genetic enhancement. Directly increasing one’s IQ via future neuromodulation techniques, for instance, could lead to increased anxiety or depression, as some studies have suggested a correlation between high intelligence and certain mental health disorders.
Cognitive Atrophy and Threats to Knowledge
Beyond their potentially profound impact on our emotional lives and personhood, enhancement BCIs promise to revolutionize how we acquire and process information, and in doing so, raise foundational questions about the nature of knowledge and cognitive ability. Even if a neuroenhancement device like neuromedia is epistemically reliable in the sense that it consistently generates true beliefs for the user (which is a big ‘if’), it does not necessarily follow that the device is a knowledge generator. Possessing true beliefs is not sufficient for knowledge; the beliefs must also be justified. The ‘epistemic justification’ condition is often taken to indicate that knowledge is a cognitive achievement of some kind, one that requires cognitive effort on the part of the knower. This traditional analysis of knowledge as something earned through cognitive effort is not easily reconcilable with the effortless access to information provided by a future device like neuromedia. If a person instantaneously counts as knowing everything on the internet the moment they implant a neuromedia device, then the concept of knowledge becomes devalued beyond recognition.
Beyond failing to expand our knowledge base, there is the worry that overreliance on BCIs for epistemic purposes could lead to the loss of existing knowledge and the atrophy of cognitive capacities and intellectual virtues. I expand upon the epistemic threat BCIs pose to intellectual virtue cultivation in previous work, focusing on the virtue of intellectual perseverance. The basic concern stems from the notion that cognitive abilities and intellectual virtues, like muscles, need exercise to remain sharp. If BCIs consistently perform cognitive tasks for their users, those users will, over time, lose the ability to perform these tasks independently.
Brain-Computer Interfaces and The Extended Mind
Proponents of the extended mind thesis offer a different way of conceptualizing the BCI-user relationship that has the capacity to alleviate some (not all) of the above metaphysical and epistemological concerns. The extended mind thesis, proposed by philosophers Andy Clark and David Chalmers, asserts that the mind is not confined to the brain but can extend into the external environment through the use of tools and technologies. Clark and Chalmers’ argument is based on what they call “the parity principle”, which holds that an external object (such as a BCI device) should be considered a genuine part of one’s mind if it plays a functional role in their cognitive system that an internal biological process would otherwise play. The extended mind thesis opens up the prospect of BCI-facilitated extended knowledge and extended personhood, leading to an ostensibly more techno-optimistic perspective on the human-BCI relationship. As Robert Clowes (2013) remarks, “From the [cognitive] internalist (and embedded) vantage-point it is as if our minds are being steadily off-loaded and dissipated into our tools, potentially with the result that we become less autonomous, less knowledgeable and arguably less interesting… For HEC [hypothesis of extended cognition] theorists this does not follow, for rather than being dissipated into our tools, we are incorporating them into us” (Clowes, 2013: p. 127). The extended mind interpretation of the BCI-user relationship is of course controversial, with some scholars arguing that this relationship is better understood through the lens of enactive, embodied, or embedded cognition.
Brain-to-Brain Interfaces and Overlapping Minds
Beyond individualistic extended cognition, brain-to-brain interfaces (BBIs) enable the possibility of collective cognition and overlapping minds. BBIs allow for direct communication between two or more brains, using brain scanning devices (e.g., EEG, fMRI) to read neural signals from one brain and neurostimulation techniques (e.g., transcranial magnetic stimulation, direct electrical stimulation) to write information into another brain. While still in its infancy, experiments like BrainNet have already demonstrated the capability for multiple humans to collaborate on simple tasks using brain-to-brain communication. John Danaher and Steve Petersen observe that while we are already living in a “hivemind society” in the broad sense that we participate in various forms of internet-facilitated collective agency, BBIs invite the possibility of hiveminds in the stronger sense that our individuality may literally dissolve into a broader collective consciousness. This possibility challenges traditional individualistic assumptions about the nature of consciousness, personal identity, and morality. Notable questions that arise here include: How (if at all) is the collective consciousness of a BBI-facilitated hivemind unified? If two or more people share the same thought or experience, who does that mental state ‘belong’ to? Do people who become part of a hivemind lose their independent moral status?
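Before going further, it may help to make the underlying mechanics concrete. What follows is a minimal, purely illustrative Python sketch of the read-decode-write loop described above; every function name (read_eeg_epoch, decode_intent, deliver_tms_pulse) is a hypothetical placeholder, not the interface of BrainNet or any real system.

```python
# A purely illustrative sketch of a brain-to-brain interface (BBI) loop:
# read a signal from a 'sender' brain, decode it, and write the result
# into a 'receiver' brain via stimulation. All names are hypothetical
# placeholders, not the API of BrainNet or any real system.

import random

def read_eeg_epoch(sender: str, n_samples: int = 256) -> list[float]:
    """Stand-in for acquiring a short EEG epoch from the sender's scalp."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]

def decode_intent(epoch: list[float]) -> str:
    """Stand-in classifier reducing the epoch to a binary yes/no decision,
    as in BrainNet-style collaborative tasks."""
    return "yes" if sum(epoch) > 0.0 else "no"

def deliver_tms_pulse(receiver: str, decision: str) -> None:
    """Stand-in for 'writing' the decision into the receiver's brain, e.g.,
    by evoking a phosphene via transcranial magnetic stimulation for 'yes'."""
    action = "phosphene pulse" if decision == "yes" else "no pulse"
    print(f"{receiver} receives: {action}")

# One pass of the loop: the sender's signal is read, decoded, and written onward.
epoch = read_eeg_epoch("sender-1")
deliver_tms_pulse("receiver-1", decode_intent(epoch))
```

Even this toy loop makes vivid how thin current BBI channels are: a single binary decision per pass, which is a long way from the rich thought-sharing that the hivemind scenarios above imagine.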
Interestingly, while science fiction often portrays hiveminds in a dystopian fashion (e.g., the Borg of Star Trek), Danaher and Petersen argue that hivemind societies may foster human flourishing, emphasizing the potential of BBIs to engender deeper forms of intimacy and empathy with others, and to improve the problem-solving abilities of groups. Additionally, BBIs could theoretically enable the direct downloading of memories, knowledge, motor skills, and languages from one individual to another, similar to the technology portrayed in the movie The Matrix. Or they could facilitate a collective human memory bank, where cultural knowledge can be preserved and transmitted to individuals across generations. Ultimately, the desirability of such hivemind societies will depend on one’s political ideology and whether one thinks the benefits of enhanced connectivity outweigh the potential risks to individuality and autonomy.
Neuroethics
BCIs and Normative Ethics
I turn now to ethical considerations surrounding brain-computer interfaces. First, on the level of normative ethics, one area of inquiry considers how neuromodulation (i.e., direct alteration of brain activity) challenges traditional normative ethical frameworks like hedonic consequentialism and virtue ethics. Hedonic consequentialism evaluates the moral value of an action based on the quantity and quality of pleasure it produces and pain it reduces. This view faces challenges when considering neuromodulation techniques that directly alter one’s mood or inject pleasure into one’s consciousness. For instance, imagine someone in poor health, living in poverty without a job, family, or a sense of purpose. Through neuromodulation, this person can artificially generate positive emotions, maintaining a constant state of happiness, or what they perceive as happiness, despite their unfortunate circumstances. Hedonic consequentialism would view this life as morally good, based solely on the individual’s pleasure, which seems to defy common sense. This thought experiment, therefore, represents a counterexample to hedonic consequentialism. To take a different normative approach: virtue ethics determines the value of an action based on the individual's moral and intellectual character. In the context of neuroenhancement, one may ask: If a person’s virtuous behavior is the result of BCI-facilitated moral enhancement, is it genuinely virtuous? Moreover, does enhancing certain cognitive traits through BCIs undermine the concept of character development through effort and experience?
Neurorights and Legal Concerns
Moving down now to the level of applied ethics and regulatory concerns: a crucial question in this vicinity is whether neurorights deserve their own unique conceptual category and legal protection or can be protected via the extension of existing legal frameworks. Arguments for sui generis neurorights emphasize the especially intimate and revealing nature of brain data. For instance, the collection of brain data raises the possibility of novel privacy violations like ‘mind-reading’ and ‘mind-control’ that appear to be qualitatively different from other types of privacy violations. The opposing perspective claims that brain data does not pose fundamentally different privacy concerns than other sensitive data like genetic information. Proponents of this view think that the right to neuroprivacy can be accounted for by existing privacy-related frameworks, and they caution against the risk of ‘rights inflation’, which occurs when the enactment of too many new rights functions to devalue rights protections in general.
Beyond the debate over whether sui generis neurorights are necessary, there is a further question of how existing rights frameworks should be applied to BCIs. For example, it is unclear whether BCIs should be protected under bodily autonomy rights or intellectual property rights. While BCIs clearly function as extensions of the body in various cases (e.g., neuroprosthetics), they also involve the generation and processing of data, which is generally construed as intellectual property. The line between bodily autonomy and intellectual property rights is further complicated by the aforementioned extended mind thesis, according to which BCIs literally extend one’s cognitive processes.
Neurohacking and Neurocapitalism
Regardless of where one stands in these legal debates, it is instructive to reflect further upon how BCIs present new versions of existing digital ethics threats. Neurohacking, for instance, presents a new version of computer hacking. As Marcello Ienca and Pim Haselager explain, at least four main types of neurohacking can be distinguished, corresponding to different phases of the BCI cycle: (1) input manipulation (e.g., presenting specific visual stimuli to extract sensitive brain data like PIN codes), (2) measurement manipulation (e.g., adding noise to disrupt BCI function), (3) decoding/classification manipulation (e.g., altering the interpretation of brain signals to change BCI outputs), and (4) feedback manipulation (e.g., providing false feedback to induce specific user behaviors).
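To make these four attack surfaces concrete, here is a minimal, purely illustrative Python sketch mapping each manipulation type onto a stage of a generic BCI cycle. All stage names and signatures are hypothetical placeholders, not the architecture of any real BCI software stack.

```python
# A purely illustrative sketch of Ienca and Haselager's four neurohacking
# types, mapped onto the stages of a generic BCI cycle. All names here are
# hypothetical placeholders; no real BCI system is implied.

def present_stimulus(stimulus: str) -> list[float]:
    """Stage 1: (1) input manipulation = the attacker chooses the stimulus,
    e.g., flashing digits to probe for a PIN-related brain response."""
    return [float(ord(ch)) for ch in stimulus]

def measure_signal(signal: list[float], injected_noise: float = 0.0) -> list[float]:
    """Stage 2: (2) measurement manipulation = the attacker adds noise
    to degrade or disrupt the recording."""
    return [sample + injected_noise for sample in signal]

def decode_signal(signal: list[float]) -> str:
    """Stage 3: (3) decoding/classification manipulation = the attacker
    alters this mapping so the BCI emits the wrong output."""
    return "move-left" if sum(signal) < 500.0 else "move-right"

def show_feedback(output: str, spoofed_output: str | None = None) -> str:
    """Stage 4: (4) feedback manipulation = the attacker shows the user
    false feedback to induce a chosen behavior."""
    return spoofed_output if spoofed_output is not None else output

# One honest pass through the cycle, then the same pass with spoofed feedback.
output = decode_signal(measure_signal(present_stimulus("probe")))
print(show_feedback(output))                              # true output
print(show_feedback(output, spoofed_output="move-left"))  # false feedback
```

The structural point of the sketch is that each stage of the cycle is a distinct trust boundary, so securing a BCI means securing all four, not just the device's communication link.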
Of course, brain data will be of interest not only to criminals but also to corporations and employers, which raises the possibility of neurocapitalism. Neurocapitalism can be construed as a new version (or the next iteration) of surveillance capitalism, a term coined by Harvard professor Shoshana Zuboff to describe the ad-based economic system governing the contemporary digital economy. Surveillance capitalism works by harvesting users’ digital footprints to create behavioral prediction models that are sold to advertisers, who use these models to engage in behavioral modification (e.g., influencing purchasing decisions). Surveillance capitalism raises numerous ethical concerns, including data ownership, digital privacy, and informed consent, all of which are poised to be exacerbated within neurocapitalism, where brain data, rather than behavioral data, becomes the primary currency.
Societal Perspectives: Social Justice, Religion, and Geopolitics
I will close this article by considering some broader societal considerations, covering the themes of social justice, religion, and geopolitics.
Social Concerns
At the core of social concerns surrounding BCI-based neuroenhancement technologies is the issue of social justice. As with any emerging technology, access will likely first be made available (if not limited entirely) to the wealthiest segments of society. In the context of future neuroenhancement, this “access privilege” could grant wealthy individuals an unprecedented cognitive advantage over the rest of society, creating an even bigger gap between the ‘haves’ and the ‘have nots’ than already exists. The codification of a general right to enhancement, or a specific right to neuroenhancement, could help mitigate these social inequality concerns. While beyond the scope of this article, there are also myriad social challenges tied to specific BCI applications across various sectors, such as the workplace, education, sports, healthcare, and the military. In education and sports, for example, neuroenhancement raises complex questions about fairness and the value we place on natural abilities.
BCIs and Religion
The intersection of religion and neurotechnology raises several important considerations. First, there is the question of how different religions view the permissibility of BCIs. The classic ‘playing God’ objection, which is often levied against many forms of human enhancement, can also be applied to neuroenhancement by religious practitioners. However, the relationship between religion and neurotechnology is not necessarily antagonistic. James Hughes argues for a synergy between neurotechnology and Buddhism, explaining how BCIs and neurofeedback techniques can be integrated into meditative practices to promote Buddhist ideals such as self-control, compassion, and altered states of consciousness. He is currently writing a book, entitled Cyborg Buddha, that builds on his previous work on the matter. More controversially, neurotechnologies have the potential to be used by religious institutions to enforce particular holy doctrines. For example, the age-old religious notion that certain thoughts are sinful could take on a new, literal form if future BCIs are deployed by religious authorities to monitor, and perhaps even modify, thought patterns deemed sinful or heretical.
BCIs and Geopolitics
The geopolitical landscape of BCI development is increasingly shaped by US-China competition. While the US currently leads in R&D and market share, China is investing heavily in BCI development and is establishing a BCI standardization committee to accelerate domestic innovation. This geopolitical competition is not just economic but also defense-related, as both countries view BCIs as a promising source of military leverage. In this technological arms race between the two countries, China has several significant advantages, including large colonies of research monkeys, fewer regulatory hurdles for human testing, and greater cultural acceptance among the populace of data collection by the government and employers. The last point is illustrated by the fact that Chinese state-owned companies have already implemented EEG headsets to monitor workers’ attention levels and emotional states, a practice that would likely face staunch privacy objections in the US.
Conclusion
This geopolitical technological arms race, along with the BCI-related threats of neurocapitalism, cognitive atrophy, compromised authenticity, and increased social inequality, can inspire an attitude of technological fatalism and defeatism. Consider how many people, despite various ethical reservations, are effectively compelled to use smartphones and social media to function in modern society. There is a legitimate worry that BCIs could follow a similar path. Despite the numerous philosophical concerns outlined in this article, BCIs might become so integral to workplace productivity or social communication that individuals feel they have no choice but to adopt them. This predicament reflects a collective action problem that I do not pretend to know how to solve. However, I do know that positive technological outcomes are less likely if we collectively resign ourselves to an attitude of technological fatalism and defeatism. As the philosopher Robert Mark Simpson states when discussing social media, “Technologies and practices that bubbled up into existence less than two decades ago are being imaginatively reified as nailed-in, loadbearing structures in humanity’s housing, as opposed to movable cultural furniture” (Simpson, 2022: 2). Simpson’s point is that although social media has become a permanent fixture in the cultural imagination, the technology is still relatively new, and it remains possible to shape its development (and to decide whether we want it at all). Similarly, I submit that we should view the advent of BCIs not as an oncoming technological wave destined to wreak havoc on society, but as a promising technological opportunity that we can control and shape to our values. Adopting such a moderate, agency-based, techno-optimistic outlook can elevate the likelihood of positive outcomes (even if only marginally) by encouraging proactive, value-sensitive design and regulatory approaches to BCI technology.