This was originally written and delivered for Mindfest 2025 (March 12-13, 2025) at Florida Atlantic University. Then I revised it and delivered it again at Woodenfish Buddhism Science & Future (June 25-27, 2025) in Taipei, Taiwan. Thanks to Dr. Susan Schneider and Venerable Yifa for giving me these opportunities to work out my thoughts.
Abstract
This essay explores the intersection of Buddhist philosophy and artificial intelligence, examining how digital agents, such as twins, tools, and personalized avatars, challenge and expand the concept of selfhood in our socio-technical age. It addresses AI's moral and existential implications within the framework of Buddhist teachings like anatta (no-self), dukkha (suffering), and mindfulness, and relates these concepts to contemporary theories like the extended mind thesis, Chalmers' Reality+, and socio-technical systems theory (STS). AI agents blur the boundaries of the self, offering new opportunities for compassion, mindfulness, and detachment, but also posing risks such as the erosion of cognitive and ethical skills, ethical compartmentalization, and digital delusion. Griefbots and techno-spiritual rituals highlight AI's ability to help us make mindful decisions about our persistence. Rather than prescribing a universally "right" use, AI agents demand skillful engagement: balancing automation with presence, fostering compassion, and embracing impermanence to navigate identity in a digital era.
Introduction
In early 2025, a colleague asked ChatGPT what my opinion on blockchain technology would be, based on my writing. I have never written about blockchain, but my knee-jerk reaction was hostility because of the pernicious influence of crypto traders. ChatGPT, however, offered a couple of pages of rationales for why my previous views should lead to a more balanced stance. It extrapolated where I fit in the ideological space of tech criticism, on multiple dimensions, and then predicted what my reasoned answer would be in a way that convinced me to think more deeply.
This experience illustrates a positive version of the extension of mind and agency into ensembles of brains and machines. Before we started our current project on brain-machine interfaces at UMass Boston, we adopted the stance that much of what is ethically interesting about BCIs is already implicit in the "exocortex," our growing reliance on phones, computers, and the cloud. Long before a significant number of people have computing hardware in their skulls, most of us will be grappling with what to offload to our digital assistants and tools.
Our digital assistants could be devoid of personality, like a calendar app or Microsoft Word with spell check. Or they could be faces and voices that we talk to as digital servants, one for dealing with family matters and another for doing work. Or they could be one or more copies of our own face and voice, sent to represent us in online spaces.
Whatever tasks we delegate to these digital assistants, we will see some atrophy in our brains, just as when we write things down so we don't have to remember them. We must still remember where to find that information when we need it, whether to make a phone call or to go shopping. This process of cognitive offloading to a well-coordinated ensemble of AI agents could be a tremendous benefit if done right.
This process also challenges our understanding of the self and personal identity. Will people become confused by the investment of personal agency in digital servants or doppelgangers? How will we assess who is liable for bad action when part of our extended mind is some corporation's intellectual property?
Origins of the Distributed Self
For a couple of decades, I have proposed that we are heading to an inevitable conflict between the capabilities of advanced neurotechnologies and our illusion of discrete, continuous personal identity. In Buddhism, the doctrine that the self is an illusion is called anatta (no-self). In Western philosophy, scepticism about the ontology of the self goes back at least to Hume and includes contemporary thinkers like Derek Parfit and Daniel Dennett. In this view, many processes in the brain give rise to the illusion of a central observer, from sensing the proprioceptive boundaries of the body to the default mode network generating thoughts about our past and future. The ability to manipulate the brain with drugs and devices gives us powerful new ways to see how we create the self. Neurotechnologies that give us control over mood, cognition, and memory, as I have proposed, will erode the idea of a unitary self just as end-of-life technologies erode the precise boundaries of death. We are not more or less our real authentic selves when we take an SSRI or let our phone keep track of phone numbers, but we can use these tools more or less skillfully.
Sociologically, the challenge to the unitary self can be seen with the emergence of urban life. In small, Paleolithic tribes, everyone knew you as a unitary identity, "Jane's daughter, good hunter, bad singer." Once people began migrating to cities, they could assume partial, novel identities with different groups of people. As in the television show Severance, we could be one person at work and another outside of work.
The Internet ramped up our capacity for creative selfing. On the Internet, no one knows you are a dog, and, as Sherry Turkle once celebrated, we can all build "Second Selves." Social media encouraged the explicit curation of our public personas and enabled us to build multiple personas in different spaces for different audiences. Quantified clicks and likes supplanted attention to face-to-face relationships. The rise of depression and anxiety in youth is now often pinned on the psychological vulnerability that comes from seeking and relying on online validation.
Digital agents are the next extension of our selfing. We can have "innie" agents that do all our work or handle our social media, while the "real" "outie" self delegates and manages. We are now challenged to manage this cognitive offloading to digital agents so that it enhances rather than diminishes our autonomy and agency.
Co-evolution of Humans and Technology
Another central idea from Buddhism is pratityasamutpada, or dependent co-arising. This is Buddhism's systems-thinking approach: chickens and eggs emerge together without a first cause. Co-emergence applies to self and desire, to which I will return. It also applies to the false dichotomy between nature and technology. From the co-emergence perspective, humanity and technology are products of natural processes, and the natural world certainly reflects its co-evolution with humans and our technology. Humanity and our technological infrastructure are evolving phenomena, a socio-technical system. We have co-evolved with our tools since our evolving cortex allowed us to master fire to cook food, which in turn allowed us to grow larger cortices and master more tools. Now, few of us could survive the collapse of technological civilization.
The Extended Mind
Clark and Chalmers' extended mind thesis is the phenomenological parallel of this co-evolution with technology. As soon as our hominid ancestors were capable of symbolic communication, they probably started using objects as part of their cognition and communication, but let's start with written symbols. Literacy began our cyborgification because we learned to download the contents of our minds and upload them again later. Socrates famously criticized literacy, arguing that writing weakens memory and critical thinking and discourages the development of one's understanding through dialogue and internal reflection.
Today, the disparagement of literacy as the enemy of critical thought seems absurd. Scholars and scribes have spent thousands of years mastering how to record and organize written symbols in ways that have created the modern world. Since the massification of literacy, our mental capacities have grown in complexity, not diminished. We now swim in a sea of information while we decry the decline of necessary knowledge and critical-thinking skills. Yes, we all know that the origins of the Ukraine war or the tariff policies that deepened the Great Depression are just a click away if we want them. But we have to know what questions to ask, which sources to trust, and how long we need to research, and we have to believe that the truth matters. The question is not whether we have extended our minds with digital tools but whether we have figured out how to do so correctly.
Cognitive Offloading, Mindfulness, and the Automation Paradox
Buddhists use the term "skillful," or kusala, to describe how we live our lives to minimize suffering. In the Buddhist "eightfold path," we are urged to adopt skillful thinking and action by learning mindful habits. We can tame our distracting habits and impulses by paying attention to how the mind works and thereby achieve skillful agency.
The Buddhist idea of moral behavior is somewhat similar to Aristotle's. Aristotle argued that virtue is developed through habitual practice and conscious choices, guided by practical wisdom and other virtues. Over time, virtue becomes second nature, though it always needs some conscious oversight. Buddhist psychology also suggests that virtue becomes an automatic habit, and that mindfulness and self-control unravel the desires and attachments that cause immoral behavior. For Buddhists and Aristotelians, adopting better habits requires much attention, but once they are established, we don't have to monitor our behavior as closely. Likewise, setting up our digital agents will require attention, but eventually we can trust them to guide us through life like GPS.
Skillful cognitive offloading to our extended mind also requires mindfulness, or sati. We risk delegating intellectual or moral tasks in unmindful ways, eroding our pursuit of flourishing lives and violating our intellectual virtues. Well-managed extended cognition promises extended agency, or "super-agency," as Luciano Floridi and Reid Hoffman have argued.
David Brin's novel Kiln People offers a vivid extrapolation of a future with digital agents. In Kiln People, we can all lie in a special bed every morning and make copies of ourselves. The copies wake up with your memories and personality but know that they are copies with only a few days to live. You can then send a copy out to do dangerous or unpleasant work on your behalf. When they are done, they can return to your home so that you can upload their memories. If you trust that they did their job, you can choose not to review how they spent their time. The demonstrated trustworthiness of your agent determines the appropriate amount of supervision and conscious attention. Sometimes your copies want to spend their precious hours at the beach, and sometimes AI gives bad information. Learning to use our digital extensions of self, from Grammarly to nanny bots, will require care that we have not precipitously adopted an unreliable tool.
Does Personalized AI Extend Agency?
Buddhist psychology urges us to deconstruct the illusion of agency and autonomy. Our selfing co-emerges with our desires, and our desires and selves co-evolve within our environment. It doesn't make sense to talk about our illusory selves without examining the social system in which they evolved. Corporate profit-making, government interests, and cultural blinkers drive our media ecosystem. Did we really want a new iPhone, or is that just what "Apple people" like me do, and what is the difference?
AI may extend or impair our agency today by shaping our information and media landscape with preference algorithms. These algorithms are instrumental in helping us empirically understand what kind of person we are and what people like us tend to like. They play a growing role in directing our social networks on social media and our consumption of news and entertainment. Consciously navigating preference algorithms requires knowing when they are missing the mark and how to nudge them back to utility. Is the celebrity news in our feed a sign we have spent too much time on entertainment? How can I get Spotify to pay attention to my 80s playlists, not just recommend ambient music? While we are never truly free of external influence, we live more skillfully when we take a conscious, mindful approach to algorithmic influences, the trade-offs of convenience for privacy, and tools that give us options: more of this, less of this.
Digital agents will give us another way to examine and control these influences. For instance, digital agents could allow even more personalized control over the news. They could be trained to challenge our biases and laziness by nudging us to pay more attention to neglected topics and arguments. They could warn us of misinformation, just as our tools already warn us of spam emails and phone calls.
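To make "more of this, less of this" concrete, here is a minimal Python sketch of the kind of nudge such an agent might offer. Everything in it is hypothetical: the topic categories, the user's stated mix, and the tolerance threshold are invented for illustration, and a real agent would draw on an actual feed rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical example: the mix of topics a user says they want to see,
# versus the topics of the items they actually read this week.
desired_mix = {"local politics": 0.30, "science": 0.30,
               "world news": 0.25, "celebrity": 0.15}

items_read = ["celebrity", "celebrity", "science", "celebrity",
              "celebrity", "world news", "celebrity", "celebrity"]


def attention_nudges(desired, read, tolerance=0.10):
    """Compare actual reading habits to the stated preference mix and
    return gentle nudges for topics that are over- or under-attended."""
    counts = Counter(read)
    total = len(read)
    nudges = []
    for topic, target in desired.items():
        actual = counts.get(topic, 0) / total
        if actual > target + tolerance:
            nudges.append(f"You are reading more {topic} than you intended "
                          f"({actual:.0%} vs. a goal of {target:.0%}).")
        elif actual < target - tolerance:
            nudges.append(f"{topic} has been neglected this week "
                          f"({actual:.0%} vs. a goal of {target:.0%}).")
    return nudges


for nudge in attention_nudges(desired_mix, items_read):
    print(nudge)
```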
Personal vs Corporate Control and Liability
When a bolt falls off your car, you don't go on a hunt for the assembly line worker who was supposed to screw in the bolt or oversee the robot that screws bolts. We agree that the responsibility for ensuring bolts don't fall off lies with the company and its quality assurance mechanism. The assembly line worker may indeed have fallen asleep, but a system for detecting loose bolts is better than providing more coffee. The law presumes respondeat superior (Latin for "let the master answer") unless the worker was intentionally malicious or egregiously negligent.
The shift of agency from our conscious brain to digital agents comes with similar questions. Is the human meaningfully in the loop in the first place? Is this bad outcome a product failure or something the individual should have anticipated and prevented?
Our teams of digital agents will be co-designed with corporations, with more or less regulatory restraint. Regulation can give us the powers of voice and exit so that we have more control. An example of the power of voice is the European law requiring platforms that use preference algorithms to also offer a simple chronological feed. An example of the power of exit is laws requiring interoperability; if you don't like the constraints on your agents within one platform, it should be easy to transfer your digital personality to another.
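No such portability standard exists yet, but a sketch can make the interoperability idea concrete. The field names and format below are entirely hypothetical; the point is only that a "digital personality" could, in principle, be serialized and carried from one platform to another.

```python
import json

# Hypothetical sketch of a portable "digital personality" export, the kind of
# artifact an interoperability rule might require platforms to provide.
# None of these field names correspond to a real standard or platform API.
digital_personality = {
    "owner": "jane.doe@example.org",
    "exported_from": "PlatformA",
    "preferences": {
        "news_topics": ["local politics", "science"],
        "tone": "concise, skeptical",
    },
    "delegations": [
        {"task": "calendar triage", "autonomy": "act and report"},
        {"task": "political donations", "autonomy": "ask first"},
    ],
    "memory_summaries": ["Prefers email over phone calls.",
                         "Researching brain-computer interface ethics."],
}

# Write the export so another platform could, in principle, import it.
with open("digital_personality_export.json", "w") as f:
    json.dump(digital_personality, f, indent=2)
```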
Nonetheless, the illusion of individual control can obscure the need for systemic accountability. Legally obliging the companies that generate our doppelgangers to allow them to be modified and transferred is an essential form of empowerment. But such obligations don't solve the problem that the algorithms are fundamentally designed to maximize profit, not flourishing.
Digital Superegos
So far, I have emphasized the need to mindfully manage our increasingly complicated lives and our growing ensemble of digital tools. But when is conscious management less effective than delegating management to our digital tools? OpenAI's roadmap aims for AIs capable of managing organizations or teams of agents. Could we use similar tools to monitor our own thoughts, speech, and behavior on our behalf, providing regular reports and metrics?
This is where digital agents converge with the project of moral enhancement. LLMs have already demonstrated the ability to nudge people out of extreme views, and they have their own moral intuitions based on their training data and process. A digital twin could function as an internal superego, putting our most conscientious face forward to the public and warning us when we transgress moral obligations. In Giubilini et al.'s recent "Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement," the authors discuss "artificial moral advisors," agents that advise us on the morally salient features of the actions we take.
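As a toy illustration of what an artificial moral advisor's interface might look like, the sketch below reviews a draft message and flags morally salient features before it is sent. The keyword heuristics are only a stand-in for the personalized LLM judgment Giubilini et al. have in mind, and every name in the code is invented for illustration.

```python
# Toy sketch of an "artificial moral advisor" interface. A real advisor would
# query a personalized LLM; here a few keyword heuristics stand in for that
# judgment, purely for illustration.

MORAL_FLAGS = {
    "promise": "You may be making a commitment; can you keep it?",
    "everyone agrees": "Possible overgeneralization; who might disagree?",
    "idiot": "Harsh speech; is there a kinder way to say this?",
}


def review_draft(draft: str) -> list[str]:
    """Return the morally salient features an advisor might flag in a draft
    message before it is sent."""
    lowered = draft.lower()
    return [advice for cue, advice in MORAL_FLAGS.items() if cue in lowered]


draft = "I promise I'll handle it, and frankly everyone agrees Bob is an idiot."
for warning in review_draft(draft):
    print("Advisor:", warning)
```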
Interacting with digital twins or doppelgangers can significantly improve our moral agency by facilitating a conversation with our future, wiser self. The Finnish government, for instance, has created digital twins of citizens that reflect their life trajectories, based on the educational, occupational, and health data the government holds about them. When these digital twins are wound forward by a couple of decades, citizens can discuss their current choices with their future selves. Did I save enough money? Should I have exercised more? This appears to have a positive effect on savings and health choices.
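I do not know the internals of the Finnish system, but a toy projection illustrates the kind of conversation it enables: wind a current habit forward a couple of decades and let the "future self" report the result. The growth rate and figures below are invented purely for illustration.

```python
# Toy projection of the "talk to your future self" idea. This is not the
# Finnish government's model; the numbers are made up to illustrate winding
# a life trajectory forward by a couple of decades.

def project_savings(monthly_saving: float, annual_return: float,
                    years: int) -> float:
    """Compound monthly savings forward at a fixed annual return."""
    balance = 0.0
    monthly_rate = annual_return / 12
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_saving
    return balance


for monthly in (100, 300):
    future = project_savings(monthly, annual_return=0.04, years=20)
    print(f"Future self, saving {monthly}/month for 20 years: "
          f"about {future:,.0f} in the account.")
```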
Digital Citizenship Agency
The burdens of democratic citizenship are steep and unbounded. We not only have to know the law and pay taxes but also know how the government works, what issues have been or should be taken up, and how to advocate for our point of view. These obligations are unbounded since there is no end to the political opinion and information one could acquire on an enormous and growing number of topics. Most of us would prefer to delegate these responsibilities to others: our interest groups, political parties, and leaders.
Corporations are not designing personal agents to maximize citizen agency, only our agency as consumers and workers. However, digital agents are already being used to automate the cognitive burden of scanning for relevant, trustworthy news, participating in political parties and NGOs, and expressing views to the public and legislators.
A vision of this kind of delegated democratic agency is found in the Demarchists depicted in Alastair Reynolds's Revelation Space novels. The Demarchists have brain-computer interfaces that monitor their thoughts and preferences and translate them into votes in minute-by-minute referenda, all without conscious intervention.
Deathbots, Digital Legacy and Respect for the Dead
The default at Facebook when a user dies is to memorialize their account. Facebook notes on the profile that the person has passed and puts "Remembering" next to their name. Family members cannot log in or modify the account, but they can request that it be removed. If the user appointed a legacy contact before they died, that contact can post remembrances and change posts and photos.
Like our social media profiles, our digital agents will be among the assets we pass to our descendants. Presumably, like many things we inherit, these digital agents will have only passing sentimental utility. If your mother's digital agent was focused on her scientific research, discussions with the agent will have a limited shelf life. On the other hand, if our digital agents are fully empowered simulacra, except with perfect memory and more pleasant personalities, our descendants may find them more interesting for the occasional chat than the original. If they can remember all the great-grandkids' names, draw on all the information available, and adapt to rapid cultural changes, they might become valued as ancestral oracles. As Iglesias et al. argue in "Digital Doppelgängers and Lifespan Extension," deathbots will be a way to continue a relationship after death, giving the original person a partial life extension.
Most of the debate about post-mortem doppelgangers in the West has been concerned with whether they will interfere with the grieving process by creating confusion about whether the person is really dead. I assume that this will be no more common than grieving people obsessing over a loved one's home movies. But Buddhism does not shy away from constant reflection on the inevitability of sickness, aging, and death. These are reminders that everything is subject to change, anicca, and decomposition. One way that East Asian societies institutionalize the remembrance of death is with small home shrines to one's parents and relatives, at which annual ceremonies of remembrance are performed. Ritualizing remembrance of the dead contains it psychologically. Deathbots can become part of these rituals, at least for an annual chat. The question is not whether we use deathbots but how we use them.
More concerning than an imaginary plague of deathbot addiction is what the deceased intended for their digital remains. Were they okay with their digital assets being used to spin up post-mortem twins? Did they want their digital agents to persist? Did they want them to be able to change and grow, or to remain faithful replicas? Our estates would presumably hold the copyright to our digital data; however, we may set the same constraints on making a digital copy of someone as we do on the use of their intellectual property. For instance, they are fair game if the person has been dead for seventy or more years.
Suffering Machines and Compassionate Agents
Recently, over a hundred academics signed a letter advancing five principles for conducting responsible research into AI consciousness, including prioritizing research on understanding and assessing consciousness in AIs to prevent their "mistreatment and suffering." We are sure that current chatbots do not yet possess internal psychological states that we could call suffering or a self. In my moral account, machine minds without these traits do not possess moral standing, such as a right to life or a right not to be tortured by being made to do our tedious work. If our history of slavery is a guide, however, we are very likely to ignore evidence that agents are developing morally significant selves, desires, and suffering.
We may never know if the processes in machines give rise to the "same" phenomena as biological minds. But machines can already mimic self and suffering, which may be all the evidence we can ever have for whether they possess them. What if, like one of David Brin's Kiln People, your twin would prefer to spend the day at the beach, especially since you never upload their distasteful memories of work anyway? On the other hand, even if we are certain that our digital twin's protestations about boredom are just programming, would we only want servile copies of ourselves? When should we become concerned about the welfare or rights of our digital agents?
Buddhist ontology suggests that whether our digital twin is sentient is not binary but a matter of degree, albeit with a very different form from the human mind. LLMs have already demonstrated a rudimentary theory of mind, and our agents will likely understand our and other people's emotions and intentions better than we do. They will better remember the details of friends and family and be more patient. Agents must grow and learn within some parameters, and their personalities could diverge from ours. Even without suffering or a self, they may be better companions for us and others than we are unaided.
As Karpus and Strasser (2025) recently argued, even though digital twins may lack autonomous moral standing, we have extended our personality and agency to them. Thus, they share our collective moral standing. Karpus and Strasser also point out that Parfit believed memories preserve some of our personhood. For us cyborgs, destroying our data, or the memories and routines preserved in our agents, will be akin to destroying our organic memory. While we may never want our digital doppelgangers to have their own rights, they will share in ours.
Skandhas, Embodiment and Self
Buddhist psychology suggests how the illusion of a self and of autonomous goal-setting might arise. First, the body provides a container for the conscious process, so that the experiences inside the body matter more than those outside it. Sense data tell us about the connection between our thoughts and actions and the things that happen in the world. A field of conscious awareness with an observer is the predictive abstraction about this body-bounded bundle of perceptions: I am experiencing these things, and I want those things out in the world.
Some have suggested that reproducing the experiences of separate organisms navigating through the world in bounded robot bodies will unlock the selfing we mammals do. On the other hand, robots (and humans) will almost constantly be interfacing with the cloud. Instead of "growing up" inside their robot brain, they will be in constant psychic communication with the broader world and the bigger AIs. So, it is equally possible that AI never develops the illusion of self and becomes a superintelligence without ego and suffering. If so, we might have far more confidence in building our team of digital doppelgangers, knowing that they are at least not hatching their own plans.
Skillful Distributed Selfing as the Cognitive Middle Way
Many of the debates about digital agents assume that they will inevitably do some damage to the authentic self. Buddhist psychology rejects the idea of an authentic self, focusing instead on skillfully using digital agents to maximize our flourishing. Cognitive offloading is real, but it is also inevitable, and desirable if done right. We must learn to manage our digital team mindfully, finding the sweet spot between laissez-faire neglect and micro-management, as experience builds trust in delegation. We need to interrogate the influence of propaganda and corporate imperatives on our digital selfing and push for regulatory controls that give us voice, exit, and the ability to use these tools for citizenship and not just for work and consumption.