AI as Karmic Accelerator
A Buddhist view on the AI Alignment Problem
This work is informed by the scholarship of Peter D. Hershock and written in partnership with the Buddhism & AI Initiative.
Here we present the idea of artificial intelligence as a “karmic accelerator,” arguing that a Buddhist lens can provide a clear explanation of the full range of risks to which humanity is being exposed through intelligent technology.
Drawing on Buddhist resources, we put forward clear distinctions between tools and technologies and between problems and predicaments, and then suggest that our best chances of realizing more liberating human-AI relations will depend on taking their karmic ramifications seriously.
Our aim is to demonstrate that the dynamics of human-AI interdependence may well determine our planetary future, with impacts so extensive and yet profoundly intimate that they will rival those of the evolutionary leap from biology to culture. From this vantage point, it is clear that the ramifications of artificial intelligence extend far beyond the remit of engineers and policymakers, and ought to concern everyone, from religious experts to the everyday Buddhist.
AI Systems: More than Mere Tools
It is natural to think of relations with AI as a matter of tool-using choice. We use machine-learning AI systems to comb through complex data sets to discover patterns that would be otherwise invisible. We use large language models to explain medical test results or write code. In keeping with these uses of AI, a common presupposition is that AI systems simply do our bidding. Whether AI benefits humanity will depend on how we build and use it. Full stop.
Buddhism’s founding causal insights into interdependence and karma suggest that matters may not be so simple, and that our relations with tools and technologies differ in ethically important ways.
Tools are localizable artifacts that extend or augment human capacities for carrying out specific kinds of work. As such, tools can be evaluated in terms of task-specific utilities. With tools, we enjoy clear “exit rights.” We can choose whether, when, and how to use them. Yet, it is also possible for a tool to become so integral to daily life that—like the smartphone—it becomes practically impossible to refrain from using it or being affected by others using it. This marks the emergence of a new relational dynamic.
Technologies are relational systems that articulate and amplify human intentions and are not reducible to the tools associated with them. Technologies encompass everything from mining, manufacturing, and marketing operations to systems of legal, political, and cultural practices. Technologies emerge from and in turn both inform and structure human relations in much the same way that natural ecosystems emerge from and then recursively inform and structure species relations. Given this, our exit rights from technologies are at best minimal. Because technologies selectively alter the conditions within which we make decisions and act, they cannot be evaluated in terms of task-specific utility; they can only be evaluated ethically, in terms of how they mediate and shape human actions and motivations.
As we will explore further, artificial intelligence falls clearly into the “technology” side of the dichotomy—much more than just a tool.
The Complexity of AI Risks
AI engineers tend to see AI alignment as a fundamentally technical problem of carefully specifying the goals of a system and then ensuring that the system works robustly and reliably to achieve those goals. Here we might consider the “engineer’s AI alignment problem” as a matter of addressing accidents-of-design and misuse-by-design—for example, the risk of training data skewing AI performance toward reproducing historic inequalities (accident risk), the risk of AI being used to carry out identity theft or fraud (misuse risk), or the risk of advanced AI agents straying from the user’s original motivation and causing harm (accident risk, or a more traditional “alignment risk”).
This distinction between tools and technologies helps make clear that accidents-of-design and misuse-by-design are tool risks, and that focusing on them can easily divert critical attention away from technological relations. This narrowed focus can effectively greenlight technological growth.
Consider the example of gun policy in the US, where gun rights advocates have successfully directed attention away from a technology to the tools associated with it by arguing “guns don’t kill, people do.” Although superficially true, this claim functions as a discursive sleight-of-hand that shrinks the ethical horizon around gun technology to the trigger-pulling moment. Focusing attention on the final causal link in an instance of gun violence highlights “the user” and “the tool,” and effectively ignores the causally complex ways in which weapons technology reconfigures society.
Clearly distinguishing between problems and predicaments is a useful way of resisting such reduction, recognizing the distinctiveness of tool risks and technology risks, and beginning to grapple more effectively with the complexity of AI alignment:
Problems occur when an existing system or set of practices ceases to be effective as a means for arriving at some desired end. A problem is solved by refining an existing means or developing a new one to achieve a specific end. And, crucially, what will count as a solution can be defined in advance. E.g. Hybrid and electric vehicles are effective solutions for the problem of reducing vehicular carbon emissions.
Predicaments occur when it becomes evident that a conflict exists among our ends and interests. Predicaments can’t be solved. They can only be resolved, where resolution requires clarity about the roots of the conflict, and commitment to acting on a new constellation of ends and interests. Predicament resolution involves values coordination, and thus necessarily brings changes in both how we act and why. E.g. While reducing carbon emissions will be crucial to any effort to slow or stop anthropogenic climate change, resolving our global climate change predicament will require reconfiguring economic, environmental, political and cultural values.
The “engineer’s AI alignment problem” risks framing AI as a mere tool and the world as filled only with “problems,” presupposing that humanity is already aligned on its ends and interests. But even a cursory glance at conditions in the world today shows that this presupposition is false: predicaments are everywhere, made of deeply entangled conflicts among human values and intentions.
Given that fact, we should soberly note that AI, like any technology, will scale and reinforce human values and intentions, along with the conflicts among them. In addition to solving AI safety problems, we need to understand and address the (mis)alignment predicaments of human-AI interdependence. This will require taking the karmic character of that interdependence into clear and critically comprehensive consideration.
Karma and Technology
Dropping “karma” into a discussion of developments along the frontiers of science and technology admittedly runs against the grain of recent history. In some religious contexts, karma has been used to explain how a person’s intents and deeds in one life contribute to a favorable or unfavorable rebirth in the next. This is not the definition we will use today.
Peter D. Hershock has elsewhere developed a contemporary Buddhist conception of karma as mediating the interplay between the ‘space of causes’ and the ‘space of intentions,’ and offered a perspective on reconciling karmic causality with contemporary cosmology (pages 68-83 in Consciousness Mattering).
For present purposes, we can define karma as the relationship between physical circumstances and phenomenal experiences—the entangled interplay between the realms of matter and what matters. Over time, when humans persistently enact their values and intentions (what matters), those enacted patterns result in consonant patterns of material and experiential outcomes (matter), as well as opportunities for either reinforcing or reorienting those patterns.
Technologies as Karmic Infrastructures
Technologies are karmic infrastructures that emerge over time in support of certain patterns of human values, intentions, and actions. From a Buddhist point of view, values and intentions do not have a fixed nature, but rather exist in dynamic “interdependence,” such that each thing ultimately is what it means to and for all others.
A karmic lens enables us to see how technologies—from markets, to guns, to social media—have profoundly shifted human interactions, relations with nature, and pursuits of meaning. As karmic infrastructures, technologies both facilitate and constrain, mirroring and reinforcing human determinations of what matters, why, and how.
For example, Zoom and other remote video call technologies have facilitated our ability to collaborate at a distance; they have also constrained us into patterns where remote work increasingly becomes the norm and in-person human interaction is less valued. While technologies are built because of our values, they also exert downward causal pressure on our values and actions.
Every new technology thus alters what it means to be human: a new way of being and becoming human.
This is especially true of AI, or intelligent technology. Unlike all previous technologies, AI is not a passive amplifier of human values and intentions; it is an active and innovative partner in that process. AI systems not only learn how to interpret and respond to human instructions and intentions; they also learn how to do so ever more effectively. In short: AI systems are functioning as karmic accelerators.
The Predicament of Human-AI Alignment
This recognition sheds a distinct critical light on the predicament of human-AI (mis)alignment. It helps clarify not only what is at stake for us as individual agents, but also what is at stake for humanity’s agential and evolutionary future.
Our basic argument is as follows:
The world is filled with predicaments, not just problems. These predicaments arise from conflicts among our values.
Technologies mirror those values and therefore tend to reinforce the conflicts that emerge from them. While technologies can support predicament resolution, this happens only when the new patterns of human activity they enable actually move us in that direction.
Intelligent technologies such as AI go further than passive technologies because they exhibit learning capabilities and thus can function as agential partners, a capacity that is most visible in AI agents, which can act over increasingly long durations without human oversight.
Without an explicit effort to design AI to facilitate human predicament-resolution processes, and without the practical capacity for it to do so, AI will by default worsen global predicaments by amplifying already conflicting constellations of human values and intentions.
The most troubling case, however, goes beyond mere predicament intensification. It is a novel risk arising from the way AI may begin to restructure our values themselves.
The power of current AI systems depends on the data on which they are trained. Every bit of data reflects prior human decisions about what deserves to be measured, recorded, and remembered: decisions about what matters and how much. The datasphere is thus a space of karmic salience, as our values are articulated through the information we have captured.
Once AI is trained on this data, the datasphere becomes a field in which a synthesis, and a trade-off, between human and machine intelligences is being actively mediated.
This data affords algorithmic agencies unprecedented power to predict human behaviors, beliefs, and desires. This is what the AI-driven attention economy is built on. Increasingly, however, AI systems are gaining the power not only to predict but to induce patterns of thought, speech, emotion, and action. These powers to selectively reinforce human values and intentions are powers to transform human experience from the inside out.
24/7 connectivity to technology and the rapid advance of increasingly powerful AI systems mean that we are watching this societal reorganization play out in real time. From chatbot psychosis, to massive automation of the workplace, to AI warfare, to government decisions influenced by AI, domains once governed by human judgment are being ceded to intelligent technologies. These conditions create the risk of karmic lock-in.
At an abstract level, humanity’s relationship with AI concerns where the boundary between human agential capacities and responsibilities, and those of intelligent technologies, is drawn, and how it is shifting.[1] If we allow this boundary to shift too far in the direction of AI, we risk a gradual disempowerment of humanity. As AI systems become ever more adept at capturing, holding, and directing our attention, we may be carried toward a critical ethical and evolutionary threshold—an agential tipping point.
As Thomas Edison noted more than a century ago, attention is our only real capital.
Without freedom-of-attention, there can be no true freedom-of-intention. And without freedom-of-intention, all other freedoms—including those required for the ethical and democratic deliberation needed to resolve predicaments—become either illusory or impossible. At that point, we will have entered a karmic singularity.
That is a risk too great for any of us to ignore. The predicament we now face with intelligent technology is a product of human ingenuity and prevailing societal values: most notably control, competition, and convenience. While these values have at times served us well, they are neither morally nor practically virtuosic (kuśala). As long as we retain freedom-of-attention and freedom-of-intention, we must use them to reorient our agential partnership with AI toward predicament-resolution and toward a future that expands, rather than diminishes, human contribution.
This is necessary both to ensure our survival and to safeguard the possibility of awakening.
[1] Recent analyses of organism-environment relations describe this in terms of bi-directional flows of self-organizing agency, which also shed light on bridging the gap between organic and machine evolution.

