
The Question They Didn’t Ask at Davos

As the AGI debate accelerates, a deeper question remains unasked: where does human judgment reside when AI surpasses us cognitively? This essay introduces Decision Design as the missing structural layer.

Why the AGI debate is missing its most important dimension


In January 2026, two of the world’s most influential figures in artificial intelligence sat together on stage at the World Economic Forum in Davos. Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind had not appeared together publicly in over a year. Their topic: “The Day After AGI.”

The conversation that followed was remarkable for its candor. Amodei suggested that AI systems capable of exceeding human performance across virtually all cognitive domains could arrive within one to two years. Hassabis, more cautious, placed genuine artificial general intelligence five to ten years away. Yet despite this difference in timeline, both agreed on something more fundamental: this technology will not behave like previous technologies. It will not quietly integrate into existing systems. It will not leave our institutions unchanged.

Hassabis put it simply: “After the arrival of AGI, we are in uncharted territory.”

The discussion ranged across familiar terrain—the acceleration of scientific discovery, the risks of misuse, the specter of authoritarian control, the disruption of labor markets. Both leaders acknowledged that the systems they are building could, if mishandled, concentrate power in dangerous ways or escape meaningful human oversight entirely.

And yet, for all its gravity, the conversation at Davos left something essential unexamined.


What Was Discussed—and What Was Not

The Davos dialogue addressed two categories of concern with considerable depth.

The first was capability. How powerful will these systems become? How quickly? Amodei spoke of AI that could conduct Nobel-level scientific research within two years. Hassabis described systems that could accelerate discovery across biology, physics, and mathematics. Both acknowledged the recursive nature of the challenge: AI systems are increasingly involved in designing their successors, compressing what might have been decades of progress into years or months.

The second was risk. Amodei has written extensively about the dangers of autonomous systems that exceed human intelligence, the potential for biological or chemical weapons development, and the threat of AI-enabled authoritarianism. Hassabis has emphasized the need to scale governance alongside capability. Neither dismissed these concerns.

What neither addressed, at least not directly, was a question that sits beneath both capability and risk:

Once AI systems become more capable than humans across most cognitive domains, where does human judgment actually reside?

This is not a question about control in the technical sense. It is not about alignment or safety protocols or regulatory frameworks, though all of these matter. It is a question about structure—about where, in an organization or a society, the act of judgment takes place, and who bears responsibility for its consequences.

The Davos conversation assumed that humans would remain “in the loop.” But it did not ask what it means to be in the loop when the loop moves faster than human cognition, processes more information than human attention can absorb, and generates outputs that human expertise cannot fully evaluate.


Judgment Without a Subject

Consider what happens when an organization deploys AI to support decision-making. At first, the AI provides recommendations. Humans review them, accept or reject them, and bear responsibility for the outcomes. The structure of judgment remains intact.

But efficiency creates pressure. If the AI’s recommendations prove reliable, the review process shortens. Exceptions are handled quickly. Trust accumulates. Over time, the human role shifts from judgment to oversight, from oversight to ratification, from ratification to passive acceptance.

At no point does anyone decide to remove human judgment from the process. It simply erodes—gradually, invisibly, under the weight of speed and scale.

This erosion is not a failure of governance. It is the natural consequence of optimizing for performance without attending to the structure of judgment itself. And it creates a peculiar condition: decisions are made, but no one is deciding. Outcomes are produced, but no one is responsible.

We might call this “judgment without a subject.”

It is not a hypothetical concern. It is already present in algorithmic lending, in automated content moderation, in high-frequency trading systems. The AGI era will not introduce this phenomenon; it will universalize it. When AI systems can perform virtually any cognitive task more effectively than humans, the pressure to defer to them will become overwhelming. And the structures that once located judgment in human hands will dissolve unless they are actively maintained.


The Limits of Control

The prevailing response to this challenge has been to emphasize control. Amodei has described Anthropic’s work on “Constitutional AI,” which attempts to instill values and principles into AI systems at the level of identity and character. Hassabis has called for governance that scales with capability. Regulators are beginning to require transparency and disclosure.

These efforts are valuable. But they share a common limitation: they focus on what AI systems do, not on where judgment is located.

Regulation can prohibit certain AI behaviors. It cannot specify who judges in their absence. Transparency can reveal how a system reaches its outputs. It cannot establish who bears responsibility for acting on them. Even the most sophisticated alignment techniques—training AI to behave according to human values—cannot resolve the structural question. An AI system that perfectly reflects human values still cannot bear responsibility. It cannot be held accountable in any meaningful sense. It cannot learn from failure in the way that institutions and individuals do.

Responsibility is not a property that can be engineered into a system. It is a relationship between an agent and the consequences of its choices. AI systems, however capable, do not stand in this relationship. They optimize. They generate. They predict. But they do not answer for what they produce.

This is why the language of “control” is insufficient. Control implies a subject who controls and an object that is controlled. But when the object exceeds the subject in capability across most relevant dimensions, the relationship becomes unstable. Control becomes nominal—a formality rather than a reality.

What is needed is not better control, but better design.


From Control to Design

The alternative to control is structure. Rather than asking how to keep humans in control of AI, we should ask how to design systems in which judgment has a clear location and responsibility has a clear subject.

This is a different kind of problem. It is not primarily technical. It is organizational, institutional, and ultimately philosophical. It requires thinking carefully about which decisions must remain human—not because AI cannot perform them, but because responsibility cannot otherwise be assigned. It requires understanding that the boundary between AI capability and human judgment will not maintain itself. Efficiency will erode it unless something actively resists.

We call this work Decision Design.

Decision Design is the intentional structuring of where decisions are made, by whom, and with what accountability. It is not governance, though governance depends on it. It is not ethics, though ethical action presupposes it. It is not AI safety, though safety measures are incomplete without it. Decision Design is the structural layer beneath all of these—the foundation that determines whether judgment has a subject or becomes an orphan of optimization.

At the heart of Decision Design is a concept we call the Decision Boundary: the line between decisions that may be delegated to AI and decisions that must remain with humans. This boundary is not fixed. It will shift as AI capabilities evolve and as organizations learn what can and cannot be safely delegated. But it must be actively designed and maintained. Left unattended, it will retreat steadily under the pressure of efficiency, until judgment has no clear home.

The Decision Boundary is defined not by capability but by responsibility. The question is not “Can AI do this better?” but “Can responsibility for this decision be coherently assigned if AI makes it?” Where the answer is no, the boundary must hold.
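To make this concrete, here is a purely illustrative sketch of how an organization might render its Decision Boundary explicit rather than implicit: a small registry that routes each decision type either to AI or to a named accountable human, and that defaults to human judgment when no rule has been set. Nothing here is drawn from the Davos discussion or from any existing framework; the class names, decision types, and roles are hypothetical assumptions, offered only to show what "actively designed and maintained" might look like in practice.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Route(Enum):
    DELEGATE_TO_AI = auto()   # AI may decide; a human owner still answers for aggregate outcomes
    HUMAN_DECIDES = auto()    # responsibility cannot otherwise be coherently assigned


@dataclass(frozen=True)
class BoundaryRule:
    decision_type: str        # e.g. "inventory_reorder"
    route: Route
    accountable_owner: str    # the named role that answers for the consequences
    rationale: str            # why the boundary sits here: responsibility, not capability


class DecisionBoundary:
    """A registry that makes the boundary explicit, reviewable, and owned."""

    def __init__(self) -> None:
        self._rules: dict[str, BoundaryRule] = {}

    def set_rule(self, rule: BoundaryRule) -> None:
        self._rules[rule.decision_type] = rule

    def route(self, decision_type: str) -> BoundaryRule:
        # Unknown decision types default to human judgment: the boundary should
        # retreat only by explicit design, never by omission or drift.
        return self._rules.get(
            decision_type,
            BoundaryRule(
                decision_type,
                Route.HUMAN_DECIDES,
                accountable_owner="unassigned (requires review)",
                rationale="No rule defined; default is human judgment.",
            ),
        )


if __name__ == "__main__":
    boundary = DecisionBoundary()
    boundary.set_rule(BoundaryRule(
        decision_type="inventory_reorder",
        route=Route.DELEGATE_TO_AI,
        accountable_owner="Head of Supply Chain",
        rationale="Reversible, auditable outcomes; the owner answers for aggregate results.",
    ))
    boundary.set_rule(BoundaryRule(
        decision_type="loan_denial",
        route=Route.HUMAN_DECIDES,
        accountable_owner="Chief Credit Officer",
        rationale="Responsibility to the affected applicant cannot be assigned to a model.",
    ))

    for decision in ("inventory_reorder", "loan_denial", "employee_termination"):
        rule = boundary.route(decision)
        print(f"{decision}: {rule.route.name} (owner: {rule.accountable_owner})")
```

The point of such a sketch is not the code itself but the design choice it encodes: every decision type has a named owner and a stated rationale, and the absence of a rule resolves toward human judgment rather than toward delegation by default.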


The Stakes of Structure

The implications of this perspective extend well beyond individual organizations.

At the enterprise level, Decision Design determines whether AI adoption creates competitive advantage or structural fragility. Organizations that deploy AI without attending to the location of judgment may gain efficiency in the short term, but they accumulate a different kind of risk: the inability to account for their own decisions. When something goes wrong—and something always does—they will find that no one is responsible, because responsibility was never assigned.

At the societal level, the stakes are higher still. Democratic institutions depend on the premise that decisions affecting citizens can be traced to accountable actors. If AI systems increasingly make or shape those decisions, and if no one can coherently be held responsible for the outcomes, then the link between decision and accountability dissolves. This is not a problem that regulation alone can solve. It requires rethinking how institutions are structured, where judgment is located, and how responsibility is assigned.

Amodei has warned of AI-enabled authoritarianism—systems of surveillance and control that no human population could resist. But there is a subtler danger that does not require malice: the gradual displacement of judgment from institutions that once possessed it, leaving behind structures that function but do not decide, that produce outcomes but bear no responsibility.

This is not a dystopia of control. It is a dystopia of absence.


Where the Conversation Begins

The Davos discussion was valuable for what it surfaced: the recognition, from those closest to the technology, that AI will not be a normal technology, that its effects will be profound, and that the window for shaping those effects is narrow.

But the conversation that matters most has barely begun.

It is not a conversation about how powerful AI will become. That trajectory is largely set. It is not a conversation about what risks AI poses. Those are increasingly well documented. It is a conversation about structure—about where judgment will reside once capability is no longer the limiting factor, and about who will bear responsibility when systems more capable than any human produce outcomes that affect us all.

In the AGI era, competitive advantage will not belong to those with the most advanced AI. Access to capability will equalize quickly. Advantage will belong to those who design judgment structures that remain coherent under pressure—organizations and institutions that know where decisions are made, who makes them, and who answers for the consequences.

This is the work of Decision Design. It begins with a simple recognition: that the displacement of human judgment is not an accident to be prevented, but a tendency to be resisted. That resistance requires structure. And structure requires design.

The question is not whether AI will transform decision-making. It will. The question is whether we will design that transformation, or merely undergo it.


This is the first in a series of Insights from Insynergy exploring the structural challenges of decision-making in the age of advanced AI.

A Japanese version of this essay is available on note.
