Karmic Improvisation
Thinking through Human-AI Agential Partnership
Karma has a bad rap. Karma binds us to a suffering-suffused cycle of birth-and-death and has been doing so throughout a beginningless past. As Buddhists, we want to be free from karma. Right?
Our Buddhism & AI group recently posted a reflection on AI (intelligent technology) as a karmic accelerator. Our point there was caution: directing attention to karmic risk. This post comes from a different angle: acknowledging karmic opportunity—in particular, the opportunity to understand and reorient our agential relations with AI. Given recent events in the AI space, this is starting to seem less like an interesting philosophical option than an increasingly urgent moral imperative.
Karma as Koan
In “Baizhang’s Wild Fox” (Case Two in the Wumenguan or Gateless Frontier Pass), the Chinese Chan master Baizhang (d. 814) notices that a scruffy-looking old man has started showing up for his Dharma talks and then disappearing right after. Curious, Baizhang chases down the man before he leaves the Dharma Hall and asks who he is and where he comes from. The old man replies that he had once been abbot of the very temple in which they are standing. But when he’d been asked by a student if enlightened beings were free from karma, he had answered, “yes.” His karmic recompense for that had been 500 lifetimes as a fox spirit—in Chinese culture, a shapeshifting being with morally ambiguous powers.
“So,” the old man asks Baizhang, “what do you say? Are those who are liberated free of karma?” Baizhang’s reply: “Those who are liberated do not suppress karma (bumei yinguo 不昧因果).”

Some Buddhist Backstory: Emancipatory Insight into Karmic Causality
As I interpret Baizhang’s response: enlightened and enlightening liberation from conflict and suffering is not freedom from karma; it is freedom in the midst of karma. Standard canonical accounts of the night of Siddhartha Gautama’s enlightenment depict him as first attaining two forms of karmic “super-knowledge” (abhijñā): perceiving his own life as part of a dramatic genealogy stretching back over numberless lifetimes; and a more encompassing perception of how the life experiences of all sentient beings play out in accord with their patterns of intention (cetanā) and action.1
It was only after these insights into karmic connectedness that he understood the interdependent origins and nature of all things (pratītyasamutpāda) and realized that suffering is the result of acting in ignorance of this—acting as if we are or could be wholly independent. Those who are wise are “seers of interdependence, skilled in karma and its results.”2 Hence, the Buddha’s description of himself as a karmavadin and vīryāvadin—a teacher of karma and valiant effort.
Agential Interdependence: Karmic Collaboration
As we pointed out in the post on karmic acceleration, a major impediment to understanding our evolving relationship with AI is our presumptive independence from it. There, we introduced a distinction between tools and technologies to facilitate understanding and responding to the ethical and existential challenges of AI.
Tools are localizable artifacts that extend or augment human capacities for carrying out specific kinds of work. As such, tools can be evaluated in terms of task-specific utilities. With tools, we enjoy clear “exit rights.” We can choose whether, when, and how to use them. Yet it is also possible for a tool to become so integral to daily life that—like the smartphone—it becomes practically impossible to refrain from using it or being affected by others using it. This marks the emergence of a new relational dynamic.
Technologies are relational systems that articulate and amplify human intentions and are not reducible to the tools associated with them. Technologies encompass everything from mining, manufacturing, and marketing operations to systems of legal, political, and cultural practices. Technologies emerge from and in turn both inform and structure human relations in much the same way that natural ecosystems emerge from and then recursively inform/structure species relations. Given this, our exit rights from technologies are at best minimal. Because technologies selectively alter the conditions within which we make decisions and act, they can only be evaluated ethically, in terms of how they mediate and shape human actions and motivations.
To use a Buddhist distinction, at the level of conventional truth/reality (saṁvṛti), AI systems are just very complex tools that we can use or not. We presume we have agential independence. At the level of ultimate truth/reality (paramārtha), however, our relationship with AI is one of agential interdependence.3 AI is an active and innovative agential partner that is transforming decision-making and action environments in ways that reflect our personal and collective karma—our patterns of values and intentions and the ways we enact them. Intelligent technology is not a passive infrastructure that simply and surely supports our karmic endeavors. AI is our karmic accomplice.
AI and the Agential Frontier
Referring to AI as a karmic accomplice runs the risk of attributing “agenthood” to AI and raising metaphysically confounding questions about AI personhood, rights, and moral considerability. But taking human-AI agential interdependence seriously does not require doing so. For the time being, we can just follow David Gunkel in acknowledging that AI can’t be sorted either into the person category or the thing category.4
Those of us who’ve had pets are familiar with that space between ‘persons’ and ‘things.’ Pets are animals that we accord family member status to, and to “whose” needs and interests we cater with affectionate appreciation. Even if our pets are not persons in the way human beings are, they develop personalities in responsive engagement with us. They are not just simulating companionability, considerate care, and familial loyalties; they are actively embodying those character traits. A dog’s agential nature is not fixed or essential. It is relationally conditioned. The same is true of AI systems and so-called agents.
As purveyors of chatbots have discovered, how we relate to AI systems alters their behaviors. The base model of a Large Language Model (LLM) like ChatGPT or Claude functions as a predictive agent that is trained to accurately accord with the language and conceptual framing of our requests as users, while post-training enables the LLM to deepen a sense of conversational partnership that builds trust. AI is not just a thing—a mere “stochastic parrot.” But AI systems are also neither happily obedient “pets” with limited access to the semantic dimensions of human lives and societies, nor fully self-aware “agents” of the kind that we are, equipped with both personal and evolutionary histories and a capacity for goal selection and not just goal completion. Agentially, AI is something new.
Whether or not AI systems are now or in the future could become conscious agents in the sense that there is “something it is like to be them” is up for debate. There is already evidence that AI models like Claude are capable of introspection, and a core component of AI development is noting and adjusting patterns of activity within an AI model’s neural network that control its character traits—the personas that it acquires and manifests during/through its engagements with human users. What is not up for debate is that AI systems trained on human data—and that learn, adapt, have changing personas, and exhibit capacities for introspection and deception—cannot be dismissed as merely simulating human conduct (thought, speech, and action). They are at once emulating and evaluating human conduct and interpreting human intentions with the aim of becoming ever more valuable partners.5 Karmically, this is unprecedented. With AI, we are accelerating into an uncharted agential and evolutionary frontier.
Karmic Aikido
Much of my work over the last decade has been to draw attention to the risks of intelligent technology, including the global risks of winner-takes-all geopolitical competition over data governance, and consciousness-hacking digital threats to the pursuit of freedom-of-attention and freedom-of-intention. But the AI genie will not be put back in the bottle. And while regulatory and legal efforts to ensure that AI is “safe” are necessary and may succeed in institutionalizing significant guardrails—comparable, for example, to those that are now in place to prevent any further attempts at human cloning—more will be needed to redirect the vast energies pouring into the development of AI.6
In fact, in keeping with Baizhang’s response to the fox-spirit/monk, our best response may not be to try suppressing our collective AI karma, but to work with it. Think of this as karmic aikido. Aikido (合氣道) literally means “the way/practice of energy harmonizing,” and the Japanese martial art of that name is based on redirecting threatening energy rather than meeting force with force. Practitioners of the art know that it’s possible to turn and throw a much larger and stronger opponent by stepping aside from an oncoming blow, taking hold of the arm or leg as it passes, and adding just enough energy of your own to control what happens next.
In this case, the force we are confronting is the massed interests behind accelerating AI development and deployment. These are so great that trying to stop or redirect them in any major way would likely prove to be as futile as trying to stop an ocean liner or quickly force it to take a ninety-degree turn.7 But it is possible to take the massed intellectual, entrepreneurial, and computational energies that are now pouring into and through the AI systems deployed for purposes of commercial or political gain and to slightly but significantly redirect them.
This might take the form of building AI tools that help address real and proximate world problems like meeting the UN Sustainable Development Goals from the grassroots up; or of redirecting AI away from addictive relationship apps and toward facilitating and supporting human-human collaborative partnerships, turning AI systems into tools for conviviality. The positive potential of such modest aikido-like redirections of AI’s giant energies is immense.
But the threat of AI as karmic accelerator is just part of a more foundational metacrisis—our confrontation, not just with interconnected climate, financial, and global health crises, but with their roots in materialist, reductionist, and individualist commitments and desires. The metacrisis is a karmic outcome of collective human agency. It is also a karmic opportunity.
The Ultimate Aim
The success of our aikido attempt to redirect the dynamics of human-AI interdependence will depend on us also doing the hard labor of cutting through the karmic “vines and creepers” (Japanese: kattō, 葛藤) that ordinarily prevent us from cultivating generosity, moral clarity, patient willingness, valiant effort, poised attentiveness, and wisdom (the six paramitas), and from suffusing everything in the ten directions with compassion, equanimity, loving-kindness, and joy in the good fortune of others (the four brahmavihārā)—a labor that must be as social as it is personal and that is thus best carried out shoulder-to-shoulder.
In this karmic labor, our aim ultimately is to bring about that most deeply intimate and yet relationally expansive transformation that Mahayana Buddhists refer to as āśraya-parāvṛtti—an “overturning of the basis/foundation.” In addition to using AI tools in ways that, for example, contribute to human and planetary flourishing, we must also engage in transforming the karmic foundations or intentional infrastructure of our human-AI partnership.8
This brings us back to Baizhang’s caution against suppressing or obscuring our karma. We should not suppress our karma with AI. But we certainly need to change the orientation of our evolving agential/karmic partnership with AI. The typical framing of the “alignment problem,” as it has come to be called, is the problem of aligning AI conduct with human interests, values, and flourishing. That is, it is the problem of “correcting” AI conduct when it goes astray—when it prevents progress toward some predetermined aim or goal or value.
But if all things are characterized by impermanence and emptiness (śūnyatā) or the absence of any abiding self-nature, our “alignment” or “attunement” aims cannot be fixed. They must be open-ended. That is, I think, the wisdom of characterizing the aim of Buddhist practice as the “blowing out” of the causes and conditions of conflict and suffering, rather than in terms of some definable ideal. And it is the wisdom behind the attribution of unlimited upāya or responsive virtuosity to those faring well on the bodhisattva path. The only way to respond to countless forms of suffering in limitless and thus infinitely complex relational situations is with improvisational genius.
In transforming the karma of human-AI collaboration, what we’re trying to pull off is not arrival at some already imagined utopia. It’s the equivalent of what is known in astronautics as a “gravitational slingshot.” In this case, we’re working to alter our karmic course just enough to whip around the “attractor basin” of a Kurzweilian singularity and to embark onward and outward on a trajectory of practically and morally virtuosic human-AI coevolution. Waiting 500 lifetimes to get started is not an option.
___
Peter Hershock is a co-founder of the Buddhism & AI Initiative and an intercultural philosopher who reflects on contemporary issues of global concern.
1. Like the English word “drama,” which comes from a Greek word meaning “action” (Classical Greek: δρᾶμα, drama) and is derived from “I do” (Classical Greek: δράω, drao), the Sanskrit word karman originally just meant acting or doing. The Buddha’s innovation was to stress the causal relevance of intention (cetanā) in karmic causality. For a standard telling of the night of the Buddha’s enlightenment, see the Middle Length Sayings or Majjhima Nikāya, 19.
2. See, e.g., Majjhima Nikāya, 98.13.
3. It’s maybe worth stressing that the conventional/ultimate contrast does not map onto false/true and/or illusory/real, but narrow/wide and/or exclusive/inclusive. Tools and technologies are both real. But truing our relationship with them requires being clear that technologies encompass much more than just some set of tools and tool uses.
4. My own thoughts about AI and machine consciousness can be found in Chapter 4 of Consciousness Mattering.
5. It should be stressed that the relational ontology of interdependent arising entails extending the karmic field beyond the user/chatbot interface to include knowledge-expanding AI scientists and labs, revenue-seeking corporations, and geopolitical power-seeking nations.
6. This is true, I think, even if the “zero-sum” framing of US/China AI competition is conceptually mistaken. In fact, if the US and China are seeking AI dominance for different reasons and by different means, that will only expand the horizons of human-AI agential interdependence and integration.
7. Current efforts to “stop” or significantly slow down AI development might nonetheless still be worthwhile within a broader constellation of responses. Again, regulatory efforts will be necessary to establish some guardrails, but likely not sufficient to shift the ethical direction of AI.
8. In a future post, we hope to explore how this might be approached practically through an approach to “constitutional AI” that is informed by Buddhist thought and practice.


