
Who Owns Judgment in the Age of AI?

AI governance is often framed as a problem of control, safety, or ethics. This essay argues that the deeper risk is structural: the quiet disappearance of human judgment. As AI systems optimize decisions at scale, responsibility dissolves unless judgment is intentionally designed. The question is no longer what AI can do—but who, if anyone, is still deciding.

The quiet disappearance of accountable decision-making—and why it matters more than the technology itself


There is a peculiar kind of crisis that arrives without announcement. It does not manifest as a system failure or a dramatic collapse. Instead, it emerges as a gradual hollowing—a slow evacuation of substance from structures that continue to appear functional. The forms remain. The signatures are still collected. The procedures still execute. But somewhere along the way, the thing that was supposed to be happening inside those forms has quietly departed.

This is the nature of the crisis now unfolding around artificial intelligence. It is not, fundamentally, a crisis of capability or control. It is a crisis of judgment—or more precisely, a crisis of judgment's disappearance.

In January 2026, Reuters reported that the U.S. Department of Defense and Anthropic, a prominent AI research company, had reached an impasse over the terms of a contract worth up to $200 million. The dispute centered on safeguards: Anthropic sought to maintain restrictions preventing its AI systems from being used for autonomous weapons targeting and domestic surveillance. The Department of Defense countered that as long as U.S. law is followed, commercial AI technology should be deployable regardless of a company's usage policies.

On its surface, this reads as a familiar story—a tension between national security imperatives and corporate responsibility, between state power and private constraint. But to read it this way is to miss what makes the conflict significant. This is not a disagreement about policy. It is a symptom of a deeper structural problem: the absence of any shared framework for determining where judgment should reside when AI systems are involved.

The question is not whether AI should be used in military contexts. The question is who judges—and whether anyone does at all.


The Disappearance That Precedes Failure

We tend to imagine institutional failures as events—moments when something breaks, someone errs, a process goes wrong. This framing encourages us to look for culprits and causes, to assign blame and implement fixes. But the most consequential failures are rarely so legible. They manifest not as breakdowns but as absences, not as errors but as evacuations.

Consider what happens when an organization adopts an AI system for consequential decisions. The system is introduced as a tool—an aid to human judgment, not a replacement for it. Early on, human operators review the system's outputs, assess its recommendations, and make final determinations. The human remains, nominally, in control.

But efficiency has its own momentum. Over time, the system's recommendations are overturned less frequently. Review processes become abbreviated. The cognitive effort required to genuinely evaluate each output—rather than simply ratify it—begins to feel like friction. The human operator becomes, in practice, a checkpoint rather than a decision-maker: present in the process, but no longer substantively engaged with it.

This is not a failure of the system. The system is performing exactly as designed. What has failed is the design of judgment itself—or rather, the failure to design for it at all.

The conflict between the Department of Defense and Anthropic illuminates this void. When the DoD argues that legal compliance should be sufficient grounds for deployment, it implicitly asserts that the question of judgment has already been settled: the state judges, the law constrains, and no further architecture is required. When Anthropic insists on additional safeguards, it implicitly argues that legal compliance alone cannot guarantee that judgment is actually occurring—that something more is needed to ensure that consequential decisions are not merely processed but genuinely made.

Neither party is wrong in its own terms. What is missing is a shared understanding of what judgment means in contexts where AI mediates action, and where that judgment should be located.


Optimization Is Not Judgment

Part of the difficulty lies in a category confusion that pervades contemporary discourse about AI. We speak of AI systems "deciding" and "determining" and "judging," as if these were equivalent to human acts of the same name. They are not.

When an AI system selects an output from a range of possibilities, it is performing optimization: identifying the response that best satisfies a given objective function based on patterns extracted from training data. This is a powerful capability, and it can produce outputs that closely resemble what a skilled human judge might produce. But resemblance is not equivalence.

Judgment, in the meaningful sense, involves more than selecting an optimal output. It involves taking ownership of a decision—assuming responsibility for its consequences, being accountable for its reasoning, and accepting that one could have chosen otherwise. Judgment implies a subject who judges: an entity capable of being held to account, of explaining itself, of bearing the weight of having chosen.

AI systems are not such subjects. They do not bear responsibility; they execute functions. They do not choose; they compute. When we speak of AI "judgment," we are using the word metaphorically—and the metaphor obscures a critical absence.

This absence matters because our institutions are built on the assumption that judgment exists. Chains of command presuppose that someone commands. Accountability structures presuppose that someone is accountable. Legal frameworks presuppose that actions are taken by agents capable of intent, negligence, or culpability. When AI systems mediate more and more of the actions within these structures, the presumption of judgment persists even as its substance drains away.

The result is a kind of procedural theater: the forms of accountability without its content. Signatures are collected from humans who did not meaningfully review. Approvals are granted by officers who did not substantively assess. The paper trail suggests that judgment occurred, but the judgment itself was never truly exercised.


The Illusion of the Loop

Proponents of AI deployment in high-stakes contexts often invoke the principle of "human-in-the-loop"—the idea that a human being remains part of the decision process, ensuring that ultimate authority rests with a person rather than a machine. This principle is meant to preserve judgment, to guarantee that however sophisticated the AI, a human still decides.

In practice, the loop is often a fiction.

The problem is not that humans are removed from processes. They remain—formally, procedurally, organizationally. The problem is that the conditions for meaningful human judgment are progressively eroded. The speed at which AI systems operate often exceeds the speed at which humans can thoughtfully assess. The volume of decisions mediated by AI often overwhelms the capacity for substantive review. The complexity of AI outputs often obscures the reasoning in ways that make genuine evaluation impractical.

When a human operator is asked to approve or reject an AI recommendation under time pressure, with limited context, and with a strong institutional expectation of deference to the system, what kind of judgment is being exercised? The human is in the loop, but the loop has become a rubber stamp.

This is not a technological problem. It is a design problem. Systems are built without adequate consideration of what would be required for the humans within them to exercise genuine judgment. The assumption seems to be that inserting a human into a process is sufficient—that the mere presence of a person guarantees that judgment is happening. But presence is not judgment. Judgment requires conditions: time, information, authority, and the real possibility of deciding otherwise.

When those conditions are absent, the human-in-the-loop becomes a procedural alibi—a way of claiming that judgment occurred without ensuring that it did.


Responsibility Without a Subject

The consequences of this erosion are not abstract. They manifest in a specific and troubling form: the evaporation of responsibility.

Traditional accountability depends on being able to trace an outcome to a decision, and a decision to a decider. When something goes wrong, we ask: Who chose this? Who approved it? Who is responsible? These questions presuppose that answers exist—that somewhere in the chain of events, a subject made a judgment and can be held to account for it.

AI complicates this structure in a fundamental way. When an outcome results from an AI system's optimization, processed through a procedural loop with nominal human involvement, where does responsibility reside? The AI cannot be held responsible; it is not a moral or legal subject. The human operator may not have substantively judged; they may have merely ratified an output they could not fully evaluate. The organization deployed the system, but deployment is diffuse—distributed across procurement, implementation, and operation. The developers created the system, but they did not determine its use.

Responsibility becomes a hot potato that no one can hold. It passes through the system, touching each node briefly, but never coming to rest. When pressed, each actor can point elsewhere: the operator points to the system; the organization points to legal compliance; the developers point to terms of service. The outcome occurred, but no one decided it. The consequence landed, but no one is accountable.

This is not a failure mode of AI systems. It is the predictable result of deploying such systems without designing for judgment. When judgment is not explicitly located—when no one is designated as the subject who decides and bears responsibility—it simply dissipates. The organization continues to function. Outputs continue to be produced. But judgment, in any meaningful sense, has exited the building.


Decision Design: Locating Judgment Intentionally

The problem, then, is not AI itself. The problem is the absence of deliberate design around where judgment should reside, who should exercise it, and under what conditions.

This is the domain of what might be called Decision Design: the intentional architecture of judgment within organizations and systems. Decision Design asks not merely "what should we decide?" but "who decides, where, and how do we ensure that the decision is genuinely made?"

Traditional organizational design addresses questions of authority and hierarchy. Decision Design extends this to address the specific challenges introduced by AI mediation. It requires making explicit what is often left implicit: the boundaries of automated processing, the conditions for human intervention, the criteria for escalation, and the mechanisms for accountability.

Decision Design begins with a recognition that not all choices are equivalent. Some outputs can be safely optimized—routine calculations, pattern matching, data retrieval. Others require something more: the application of context, the exercise of discretion, the assumption of responsibility. The task of Decision Design is to distinguish these categories and to build structures that honor the distinction.

This is not a matter of simply inserting humans into processes. It is a matter of designing the conditions under which meaningful human judgment can occur. It means ensuring that humans have the time, information, and authority to genuinely assess rather than merely approve. It means creating institutional space for disagreement with AI recommendations—and ensuring that such disagreement carries no implicit penalty. It means defining, explicitly, the scope within which AI operates autonomously and the threshold beyond which human judgment is required.
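
What those conditions look like in code will differ from one organization to the next, but the structural intent can be made concrete. The sketch below is a deliberately minimal illustration, not a reference implementation: every name, field, and threshold in it is hypothetical, and it assumes nothing beyond the conditions named above.

```python
# Minimal sketch: the conditions for judgment treated as checkable structure.
# All names and thresholds are illustrative, not an existing framework.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReviewConditions:
    """What was actually available to the human asked to decide."""
    minutes_available: int        # time the reviewer genuinely had
    context_provided: bool        # inputs and reasoning, not just the output
    authority_to_reject: bool     # rejection is a live, recognized option
    penalty_for_rejection: bool   # True if disagreement carries an implicit cost


def is_judgment(conditions: ReviewConditions, minutes_required: int) -> bool:
    """A sign-off counts as judgment only if the conditions for it held."""
    return (
        conditions.minutes_available >= minutes_required
        and conditions.context_provided
        and conditions.authority_to_reject
        and not conditions.penalty_for_rejection
    )


# A signature collected under these conditions is ratification, not judgment,
# and the record should say so rather than imply that someone decided.
rushed = ReviewConditions(
    minutes_available=2,
    context_provided=False,
    authority_to_reject=True,
    penalty_for_rejection=True,
)
assert not is_judgment(rushed, minutes_required=15)
```

The value of such a structure lies less in the logic than in what it records: an approval granted without the conditions for judgment is logged as what it is, a ratification rather than a decision.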


Decision Boundaries: The Line That Must Be Drawn

Central to Decision Design is the concept of a Decision Boundary: an explicit demarcation of where automated processing ends and human judgment must begin.

A Decision Boundary is not a vague commitment to human oversight. It is a concrete specification: for this category of action, under these conditions, a human being must substantively evaluate and decide. The boundary is not aspirational; it is structural. It is built into the system, not added as an afterthought.
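
What "built into the system" might mean can be sketched directly, on the assumption that the boundary is expressed in software at all. The categories, role names, and review windows below are invented for illustration; the argument does not depend on this particular form.

```python
# Sketch of a Decision Boundary as an explicit, inspectable specification.
# Categories, roles, and review windows are hypothetical examples.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ActionCategory(Enum):
    ROUTINE_CALCULATION = auto()      # safe to optimize automatically
    DATA_RETRIEVAL = auto()
    CONSEQUENTIAL_TARGETING = auto()
    DOMESTIC_SURVEILLANCE = auto()


@dataclass(frozen=True)
class HumanDecisionRequired:
    """Inside the boundary: a named human must substantively decide."""
    accountable_role: str        # who bears responsibility for the outcome
    minimum_review_minutes: int  # time reserved for genuine assessment


# The boundary itself: explicit and reviewable, built into the system
# rather than stated in a policy document sitting beside it.
DECISION_BOUNDARY: dict[ActionCategory, Optional[HumanDecisionRequired]] = {
    ActionCategory.ROUTINE_CALCULATION: None,
    ActionCategory.DATA_RETRIEVAL: None,
    ActionCategory.CONSEQUENTIAL_TARGETING: HumanDecisionRequired(
        accountable_role="commanding officer", minimum_review_minutes=30),
    ActionCategory.DOMESTIC_SURVEILLANCE: HumanDecisionRequired(
        accountable_role="authorizing counsel", minimum_review_minutes=60),
}


def route(category: ActionCategory, ai_recommendation: str) -> str:
    """Either execute an automated output or hold it for human judgment."""
    requirement = DECISION_BOUNDARY[category]
    if requirement is None:
        return f"execute: {ai_recommendation}"
    # The AI output crosses the boundary as an input to a decision,
    # not as the decision itself.
    return (f"hold for {requirement.accountable_role} "
            f"(review window: {requirement.minimum_review_minutes} min)")
```

The specifics are disposable. What matters is that the line is written down as structure that can be inspected and audited, not as an intention recorded somewhere outside the system.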

The absence of Decision Boundaries is what allows judgment to dissolve. When no explicit line exists, efficiency pulls relentlessly toward automation. The boundary between "AI recommends, human decides" and "AI decides, human ratifies" blurs. The ratification itself becomes perfunctory. Eventually, the human presence becomes ceremonial—a vestige of a principle that no longer operates.

Decision Boundaries resist this drift by making the location of judgment explicit and non-negotiable. They answer, in advance, the question of who judges. They create structural accountability: if an outcome falls within the boundary of human judgment, a human is responsible—genuinely, substantively, not merely procedurally.

The conflict between the Department of Defense and Anthropic can be understood as a dispute over Decision Boundaries. Anthropic seeks to establish boundaries that prevent certain uses—autonomous targeting, domestic surveillance—from falling within the scope of automated processing. The DoD resists the imposition of such boundaries by a private actor, arguing that the state alone has authority to determine where the lines are drawn.

This is a legitimate disagreement about who draws boundaries. But it should not obscure the prior point: that boundaries must be drawn. The alternative is not freedom or flexibility. The alternative is dissolution—the quiet evacuation of judgment from processes that continue to produce outcomes without anyone deciding them.


The Structural Imperative

There is a temptation, when confronting these issues, to reach for familiar frameworks: ethics, regulation, safety. We want to know what is right, what should be prohibited, how risk can be mitigated. These are not unimportant questions. But they are secondary to a more fundamental one.

The fundamental question is structural: Have we designed our systems so that judgment occurs? Not whether it should occur, or whether we wish it to occur, but whether the conditions for its occurrence actually exist.

Ethics tells us that humans should remain accountable. Structure determines whether they can. Regulation tells us that certain uses are impermissible. Structure determines whether judgment is actually exercised in distinguishing the permissible from the impermissible. Safety tells us that risks should be managed. Structure determines whether anyone is positioned to recognize and respond to risk.

Without the structural architecture of Decision Design and Decision Boundaries, ethical commitments become aspirational, regulation becomes procedural, and safety becomes statistical. The words remain; the substance departs.


A Quiet Dissolution

There is no dramatic moment when judgment disappears from an organization or an institution. There is no alarm, no crisis, no visible failure. The systems continue to operate. The outputs continue to be generated. The processes continue to execute.

What disappears is something harder to see: the presence of a subject who decides. The organization still acts, but no one acts within it—not in the sense that matters. Choices are made, but no one chooses. Consequences unfold, but no one bears them.

This is the condition toward which we are drifting, and the conflict between the Department of Defense and Anthropic is one of its early tremors. It is not, in the end, a story about military AI or corporate ethics or government overreach. It is a story about the disappearance of judgment—and about whether we will notice before it is gone.

The question is not whether AI will be used in consequential domains. It will. The question is whether, in those domains, anyone will still be judging—or whether we will have constructed elaborate systems in which outcomes occur without decisions, consequences land without responsibility, and the forms of accountability persist long after its substance has departed.

Decision Design is not a solution to this problem. It is a recognition that the problem exists—that judgment does not persist automatically, that it must be designed for, and that in its absence, something essential is lost.

The challenge is not to stop AI. The challenge is to ensure that when AI is used, someone is still there—genuinely, substantively, accountably—to judge.

The alternative is a world of optimized outcomes and evaporated responsibility. A world where systems function and no one decides. A world where the question "Who judged?" has no answer—because no one did.


This essay is part of Insynergy's ongoing research into Decision Design—the structural architecture of judgment in organizations navigating advanced AI systems.