Cyborg Minds and the Right to Think Freely
Technoprogressive Policy Project
Orestis Palermos is an Assistant Professor of Philosophy at the University of Ioannina, Greece. He works on philosophy of mind and cognitive science, epistemology, philosophy of science and technology, and applied ethics. He is the author of the forthcoming Cyborg Rights: Extending Cognition, Ethics and the Law.
The notion of ‘cyborgs’—fusions of organisms and machines—is a fascinating one. For decades, paintings, films, and novels have introduced cyborgs to the public imagination as fictional beings, appearing in various forms and configurations. Around the turn of the millennium, however, philosophers began to entertain the idea that humans are, in fact, already cyborgs. On this view, our cyborg nature traces back through human evolution: humans may have been cyborgs all along. As Andy Clark puts it, humans are natural-born cyborgs.
If correct, this provocative idea means we have long been mistaken about where we end and the surrounding world begins. Far from being merely a philosophical concern, such a misunderstanding of our boundaries could have far-reaching consequences in the real world—for the laws, rights, and policies we aspire to, and even for the trajectory of humanity itself.
At first glance, this concern may seem inconsistent: if we are—and always have been—cyborgs, why should our cyborg nature suddenly become a cause for concern? If humans are indeed natural-born cyborgs, then surely societies have, over the millennia, been shaped to be cyborg-ready.
To some extent, this may be true. Yet not all technologies are equal in their capacity to extend us beyond our biological boundaries. Budding technologies—such as the Internet, brain-computer interfaces (BCIs), Artificial Intelligence (AI), and their various emerging combinations—are poised to trigger a Promethean shift: a new era in the symbiotic relationship between human biology and technology.
Over the past three decades, research in philosophy of mind and cognitive science has provided a useful framework for making sense of the cultural bifurcation I have in mind. The Hypothesis of Extended Cognition is the first step in grasping the shape and scope of our cyborg nature. On this view, cognition extends beyond the biological organism to encompass the tools we densely interact with to carry out “lower-level” cognitive tasks—such as perceiving the surrounding world, navigating through it, or performing arithmetic calculations.
Examples of such extension technologies include Tactile Visual Substitution Systems, which allow blind individuals to quasi-visually perceive the world by converting visual information from a mini camera into tactile stimulation on their skin; Satellite Navigation Systems that help users find their way through unfamiliar places; and even pen and paper, used to complete arithmetic tasks such as long multiplication.
Of course, brain-computer interfaces—such as Neuralink’s chips, which allow individuals to control computers directly with their thoughts, bypassing peripheral devices like the mouse and the keyboard—represent the most recent and perhaps most powerful development in cognitive extension.
Nevertheless, a key concern often raised at this point is how to tell extension technologies from mere tools. While the examples above may indeed represent genuine cases of cognitive extension, tools such as ladders, washing machines, desks, lamps, and countless other everyday artefacts must surely be excluded from counting as constitutive components of cognitive systems. Otherwise, the Hypothesis of Extended Cognition would result in ‘cognitive bloat,’ where all tools would be considered extensions of cognition—obviously, an absurd conclusion.
But how can we identify genuine instances of cognitive extension? Although several proposals have been put forward in the literature, the one I find most promising draws on the mathematical framework of Dynamical Systems Theory (DST), and in particular, the notion of a coupled system. According to DST, when two seemingly distinct systems are non-linearly related by engaging in ongoing bidirectional interactions with each other, they form a unified, coupled system comprising both. If we accept the reasonable assumption that cognitive systems are to be ultimately modelled as dynamical systems—given that most systems in nature have been successfully modelled in this way—then we may claim that artefacts count as components of (extended) cognitive systems when, and only when, users complete cognitive tasks by means of sustained bidirectional interactions with them.
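In schematic terms (a simplified sketch of the standard dynamical-systems formalism, not notation drawn from the text itself), two systems with states x and y form a coupled system when the rate of change of each depends on the current state of the other:

\[
\dot{x} = f(x, y), \qquad \dot{y} = g(y, x)
\]

If instead the influence runs one way only, so that \(\dot{x} = f(x, y)\) while \(\dot{y} = g(y)\), no unified system arises: x is merely driven by y, with no feedback in return.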
This distinction is helpful because the above condition is met by the most compelling examples of extension technologies, but not by clearly implausible cases such as desks, lamps, and ladders. In these latter instances, the causal influence flows in only one direction—from artefact to agent, but not the other way around.
The presence of ongoing bidirectional interactions between agent and artefact can therefore serve as a clear criterion of constitution—or integration—distinguishing genuine cases of cognitive extension from merely supportive tools.
The above outlines, in broad strokes, the first level of humanity’s biotechnological hybridity. This aspect of our potential for technological extension may, in fact, have been true for a long time, despite having remained unacknowledged until quite recently. As I explain in my recent book, Cyborg Rights, this extended dimension of our nature is not without practical, ethical, or legal implications—particularly for the distinction between mere property damage and assault, as well as the broader divide between property law and personal law.
Nevertheless, it is the next level of technological extendibility—only now on the verge of being realised—that highlights the scale and urgency of the challenges posed by emerging technologies.
This second way of potentially extending ourselves is known in the literature as the Extended Mind Thesis. On this view, humans may extend not only their cognitive systems, by employing tools to complete “lower-level” cognitive tasks, but also their mental states—such as their beliefs, thoughts, and desires—by integrating external artefacts into their mental lives.
The standard example of mental extension is the imaginary case of Otto, an individual with Alzheimer’s who uses a well-maintained notebook to make up for his waning biological memory. When Otto needs to store information, he meticulously records it in his notebook; when he needs to access information that he has previously come across, he reads it from the same external source. Clark and Chalmers argue that since the information in Otto’s notebook functions like that in his biological memory, he does not merely believe the content at the moment of reading it; he already believes the relevant information before retrieving it. In other words, Otto holds long-standing—or dispositional—beliefs that are partly constituted by components beyond his organism. Therefore, his dispositional beliefs extend beyond his biological self.
This is a radical idea, and it has been the target of several objections. While not all are equally problematic, one major concern that I address in Chapter 2 of Cyborg Rights is whether Otto and his notebook genuinely meet the criterion of ongoing bidirectional interaction. The issue here is that their engagement appears rather fragmented.
Nevertheless, even if we grant that Otto does interact densely and continuously with his notebook—constantly consulting it, flipping through its pages, writing, amending, cross-referencing, and reading information from it before nearly every action he undertakes—another significant worry arises: the static nature of the information it contains.
Unlike dispositional beliefs stored in biological minds, which stand ever ready to be updated in response to new stimuli, thoughts, or beliefs, the inscriptions in Otto’s notebook are fixed and informationally isolated. This lack of dynamicity and informational integration—both among the inscriptions themselves and between them and Otto’s perceptual states, as well as his biologically stored thoughts and beliefs—seems most decisively to disqualify the contents of his notebook from counting as genuine extended (dispositional) beliefs.
While this may suggest that the possibility of extended beliefs is off the table, recent technological developments indicate that belief extension may in fact be only a matter of time. The information stored in diary apps, note-taking apps, photo libraries, and GPS tools on smartphones is, to a significant extent, dynamic and informationally integrated.
On my phone, for instance, I can enter a reminder that will trigger when I leave a specific location—much like I might naturally recall where I parked my car upon leaving my house. Similarly, I automatically receive photo ‘memories’ of places I have visited, often shortly after I revisit them.
These features increasingly mimic the fluidity and contextual responsiveness of dispositional beliefs stored in biological memory—and they are bound to improve. As AI is increasingly employed to automatically draw associations between digitally stored information on personal devices, and as digitally stored content begins to interact directly with neural data via BCIs, the integration between biologically and technologically stored information will be greatly enhanced. Once this stage in the evolution of the human-machine merger is reached, it will be hard to deny the existence of extended minds.
This is an exciting prospect—but the likelihood of mental extension also calls for a critical assessment of its ethical and legal implications. A central concern involves mental privacy and mental integrity—two core components of the right to freedom of thought. Historically, it has been practically impossible for third parties to access or manipulate the detailed contents of another person’s mind—such as their beliefs—without their consent. If mental extension becomes a reality, there is no guarantee things will remain this way.
In Chapter 3 of Cyborg Rights, I turn to the issue of mental privacy first. When individuals choose not to share their thoughts, others can infer them only in broad strokes. The full, introspective content of one’s mind—accessible only to the individual to whom it belongs—has, until now, remained inaccessible to third parties.
Nevertheless, if beliefs begin to be stored on cognitively integrated devices, this protection may no longer hold. Through hacking or legal compulsion, third parties could gain access to users’ externally stored beliefs as readily as if those beliefs were their own.
Likewise, in Chapter 4, I note that we are facing a similar situation with respect to mental integrity. Until now, altering another person’s mind has only been possible indirectly—through argumentation, education, counselling, coaching, or in more questionable cases, marketing and propaganda. However acceptable or dubious these methods may be, they all operate via the individual’s communicative or sensory channels, preserving their capacity to resist or ignore the incoming influence. Moreover, such efforts to impact another’s mind are not guaranteed to produce specific changes, if any at all. In this way, individuals have historically maintained a significant degree of mental integrity.
But if elements of our belief systems become externally stored, this protection may be lost. Others could directly manipulate specific thoughts and beliefs by bypassing our communicative and sensory conduits entirely, targeting instead the underlying realisation base of the non-biological components of our extended minds. For instance, a hacker might selectively delete photo memories of a life event they wish me to forget. The possibility of this kind of threat to mental integrity is an urgent issue and must be taken seriously.
Fortunately, several means exist for safeguarding our freedom of thought in the age of extension technologies. States could implement appropriate legal protections both nationally and internationally—potentially through the ratification of new international treaties.
An important question here is whether such laws should be absolute or qualified. For instance, a country’s constitution might declare that there are absolutely no circumstances under which it is lawful to access or manipulate another person’s extension technologies without their consent. Alternatively, it could be decided that a limited set of exceptions—such as preventing terrorist attacks or other heinous crimes—may grant authorities non-consensual access to a suspect’s extended mind.
Beyond legal measures, and to match the level of mental privacy and integrity historically enjoyed—at least for the most part and in most situations—companies might employ cryptographic techniques and other technological safeguards to make it practically impossible for anyone, regardless of legal authority, to access or manipulate another person’s mind without their consent. More resolutely still, they could design systems that make such access and manipulation impossible under any circumstances.
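To make the technological route concrete, consider a minimal sketch (illustrative only, not a proposal from Cyborg Rights) of how such a safeguard might work: a user’s externally stored ‘beliefs’ are encrypted on their own device with a key derived from a passphrase only they know, so that the provider, a hacker, or a compelling authority holds nothing but unreadable ciphertext. The sketch is written in Python using the widely available cryptography package; the passphrase, the stored note, and all variable names are hypothetical.

```python
# Minimal sketch (illustrative only): client-side encryption of an
# externally stored note, so that only the key-holding user can read it.
import base64
import os

from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte symmetric key from a passphrase only the user knows."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))


salt = os.urandom(16)  # random salt, stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)  # hypothetical passphrase
vault = Fernet(key)

# What the provider stores: authenticated ciphertext, unreadable without the key.
ciphertext = vault.encrypt(b"Parked the car on Elm Street.")

# Reading the 'belief' back requires the user's key; tampering with the
# ciphertext raises InvalidToken, so covert manipulation is also detectable.
try:
    plaintext = vault.decrypt(ciphertext)
except InvalidToken:
    plaintext = None  # ciphertext was altered, or the key is wrong
```

Because the ciphertext is authenticated, any tampering with the stored content is detected at decryption rather than passing silently, which speaks to mental integrity as well as privacy. The trade-off is equally instructive: a design that makes access impossible under any circumstances also means that a forgotten passphrase leaves the corresponding ‘memories’ permanently unrecoverable.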
While it is unclear how we ought to proceed, one thing is certain: the risk posed by extension technologies, combined with the complexity of the issues at stake, presents an unprecedented challenge at both individual and societal levels. Fair and informed decisions about related policies and technological developments cannot—and should not—be left to a small group of individuals acting behind closed doors, even with the best of intentions. These and related issues must be openly debated by as many people, and from as many diverse backgrounds, as possible. We need to determine whether the values potentially embedded in the design of extension technologies—such as mental privacy and mental integrity—are universally shared, how strongly they are valued, and how they interact with other deeply held values, including safety, equality, equity, dignity, autonomy, and consent.




<<The presence of ongoing bidirectional interactions between agent and artefact can therefore serve as a clear criterion of constitution—or integration—distinguishing genuine cases of cognitive extension from merely supportive tools.>>
Thank you, Orestis.
I largely agree with these thoughts. One remark, however. I doubt we can always be sure of identifying this "presence of ongoing bidirectional interactions." In fact, whenever we try to "distinguish," we tend to overlook that much of what drives our decision is arbitrary or random.
Might it not be more accurate to treat this separation as a gray area, one within which we will have to arbitrate on a case-by-case basis, and where our distinctions will regularly be called into question?