Hayek Had One Agent in Mind. Now There Are Millions.
Our new paper: agentic AI shifts who participates in markets, moving the knowledge problem into the relationship between humans and the machines acting in their name
Imagine a household in 2026 with an EV in the garage, solar panels on the roof, and a smart thermostat encoded with its owner’s preferences. The household also has something new: an agent.
On a Tuesday night, the agent buys electricity, schedules EV charging, nudges the heat pump, and arbitrages time-varying prices against the family’s comfort constraints. It does all of this quickly, quietly, and without asking permission every five minutes. The family wakes up to a charged car and a comfortable house, and the bill is lower than it would have been otherwise. The whole system feels like a small miracle.
Then the uncomfortable thought arrives, usually right after the miracle: “Wait. What exactly did it do, and why?”
That moment is the economic problem of the agentic economy in miniature. The promise transcends clever AI. The promise is delegation: AI systems acting for us in markets at the tempo of modern systems. Delegation is where the economics becomes interesting, and potentially treacherous.
Our newly published paper, Markets, agency, and trust: AI agents and the knowledge problem (co-authored with philosopher Brennan McDavid and my long-time collaborator, engineer David Chassin), makes a simple claim with broad implications: agentic AI does not solve the knowledge problem; it relocates it, into principal–agent relationships and into the institutional framework that makes delegation trustworthy.
Hayek (1945) identified this knowledge problem: the impossibility of any single mind — planner, regulator, firm, or algorithm — possessing, articulating, and updating in real time all the dispersed, tacit, context-specific information that complex economic coordination requires. Cognition is bounded, much relevant knowledge is local, and much of it emerges only through interaction and exchange.
The knowledge problem is not an engineering defect that better models will eliminate. It reflects fundamental features of human cognition and social coordination: cognitive constraints, rational inattention, limits of articulation, and action under uncertainty. AI can help us cope with these limits, and, by doing so, it can expand the feasible set of coordination. It cannot remove the underlying epistemic condition that creates the need for institutions in the first place.
Our paper develops three contributions working together. First, we define the principal-plus-AI-agent “super agent” as the relevant unit of market participation — the human-informed integrated system that bids, learns, and adapts. Second, we reframe delegation as principal–agent economics plus epistemics: preference articulation and learning become the binding bottlenecks, not transaction execution. Third, and most distinctively, we treat trust as a functional constraint and system requirement — something to specify, design for, and evaluate, not a sentiment to hope for. Transactive energy serves as our laboratory, providing a real, constrained setting where all three dynamics play out simultaneously.
Markets as epistemic systems, now mediated by super agents
Markets are more than allocation mechanisms — they are knowledge ecosystems. Prices and bids compress dispersed, often tacit information; competition serves as a discovery procedure; profit and loss provide error correction. None of this requires omniscience. It does require institutions that let learning happen.
Agentic AI changes who participates in that learning. The “person on the spot” increasingly delegates. A household does not manually optimize its EV charging against feeder constraints, time-varying prices, departure times, and weather. It sets priorities and constraints, and an agent acts.
This delegation creates what we call a “super agent”: the integrated principal-plus-agent system that bids, responds, and adapts. Super agents can improve coordination because they operate at machine scale and speed. But they also introduce epistemic opacity: principals may understand less about why actions were taken even as outcomes improve. That opacity matters because it changes monitoring, override behavior, adoption, and, ultimately, market performance.
Agentic AI thus does not eliminate the knowledge problem, but redistributes it. Some knowledge moves from human judgment into preference inputs and constraints. Some becomes inference inside the model. Some remains contextual, surfacing only when the principal encounters an outcome and says, “That’s not what I meant.”
Recent experimental work by Imas and Krishnan found that AI agents, left to their own devices, failed to develop price mechanisms organically. They required institutional scaffolding to coordinate at scale, offering a vivid demonstration that institutional context matters as much to machine agents as to human ones. In their simulated agentic markets, universal agent adoption actually collapsed welfare by 88% through congestion, until a price mechanism was introduced, at which point most of the gains were recovered. Hayek, vindicated in silico.
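To see the mechanism in miniature, here is a toy simulation (our illustration, not Imas and Krishnan’s actual model) of many agents contending for a congestible resource. Without a price, everyone piles in and congestion dissipates most of the surplus; a simple clearing price restricts use to the highest-value agents and recovers it.

```python
# Toy congestion model, illustrative only: N agents with private values
# share a resource with capacity K. Without a price, all N use it and
# congestion scales every realized value down by K / N. With a
# market-clearing price (the marginal value at capacity), only the K
# highest-value agents participate.
import random

random.seed(0)
N, K = 1000, 100
values = [random.uniform(0, 1) for _ in range(N)]

# No price mechanism: universal adoption, congested payoffs.
welfare_no_price = sum(v * (K / N) for v in values)

# Price mechanism: the clearing price is the (K+1)-th highest value, so
# exactly the K highest-value agents find it worthwhile to participate.
sorted_vals = sorted(values, reverse=True)
price = sorted_vals[K]
welfare_with_price = sum(sorted_vals[:K])

print(f"welfare, no price:   {welfare_no_price:.1f}")
print(f"welfare, with price: {welfare_with_price:.1f}")
```

The magnitudes here are arbitrary and not meant to reproduce their 88% figure; the point is only the direction: universal adoption without an allocation mechanism congests, and a clearing price restores most of the available surplus.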
The economics of AI agents is principal–agent economics, plus epistemics
If you want an antidote to breathless “AI agent” talk, use economics to name the relationship and ask what it costs and what benefits it generates. In an agentic market the relevant relationship is between principal and agent, often embedded in a market transaction.
Principals delegate because delegation economizes on attention and expertise. Delegation also shifts risk: principals still bear consequences — financial, legal, reputational — when agents act badly or oddly. This structure reflects classic Jensen–Meckling (1976) logic: agency costs arise from imperfect alignment and costly monitoring.
Our paper adds an important twist: here the agency problem is also epistemic. Humans do not just have preferences; humans discover preferences. People often cannot fully specify objectives ex ante; they learn by observing outcomes, and, while learning, they calibrate trust. Delegation becomes a feedback loop: preferences, outcomes, revisions, and renewed delegation.
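A minimal sketch of that loop, with entirely illustrative names and numbers (nothing here is from the paper): the principal cannot state its true preference exactly, the agent faithfully optimizes the stated preference, and each surprising outcome triggers both a revision of the specification and a recalibration of trust.

```python
# Illustrative delegation loop (hypothetical names and dynamics, not a
# model from the paper). The principal's true preference is only
# partially articulated; the agent optimizes what was stated; surprises
# drive revision of the specification and recalibration of trust.

def delegation_loop(true_pref, stated_pref, rounds=5, learn_rate=0.5):
    trust = 0.5                              # start agnostic
    for t in range(rounds):
        outcome = stated_pref                # agent delivers exactly what was asked
        surprise = abs(true_pref - outcome)  # "that's not what I meant"
        stated_pref += learn_rate * (true_pref - stated_pref)       # revise the spec
        trust = max(0.0, min(1.0, trust + 0.2 * (0.1 - surprise)))  # recalibrate
        print(f"round {t}: outcome={outcome:.2f}  "
              f"surprise={surprise:.2f}  trust={trust:.2f}")

delegation_loop(true_pref=0.8, stated_pref=0.3)
```

Run it and the dynamic is visible in five lines of output: trust dips while the specification is badly calibrated, then recovers as outcomes converge on intent.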
This point also explains why the “Coasean singularity” idea is relevant. Shahidi et al. (2025) argue that agentic AI could dramatically lower transaction costs—search, bargaining, verification, execution—and therefore shift activity from hierarchy toward markets, perhaps quickly (see also a Marginal Revolution post on the paper and a Twitter/X thread summarizing it). That’s correct. Our paper finishes the Coasean logic: when agents lower some transaction costs, other costs become binding — specification, monitoring, governance, liability. Those costs show up as agency costs and trust constraints. In epistemic terms, bounded cognition and limited articulation remain. The bottleneck shifts from “can we transact?” to “can we delegate, intelligibly and safely, at scale?”
Trust is not a vibe; it is a system requirement
Here’s the paradox. The best agent is the one you do not have to watch; constant oversight defeats the point of delegation. But not watching creates vulnerability. Principals must accept autonomy under opacity, and that acceptance requires trust.
Trust, in this context, is the condition under which a principal accepts autonomy under opacity. It is a design condition for participation: “I will allow this system to act on my behalf even though I cannot audit every decision in real time.” If that condition fails, monitoring and override rise, transaction costs rise with them, and the market mechanism can fail to deliver its promised gains.
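One way to make that condition concrete (our notation, offered as an illustration rather than the paper’s formalism): let $\tau \in [0,1]$ denote the principal’s trust, $m(\tau)$ the monitoring and override cost incurred at that trust level, decreasing in $\tau$, and $L$ the loss the principal attaches to unaudited misbehavior. Delegation is worth accepting only when

$$\mathbb{E}[G] - m(\tau) - (1-\tau)\,L > 0,$$

where $G$ is the gain from delegation. As trust erodes, monitoring costs and the weighted loss term both grow, and the inequality fails even when the agent’s decisions are technically excellent. Participation collapses not because the mechanism is wrong but because the trust condition is unmet.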
One of the paper’s central institutional claims is that trust should be treated like an engineering requirement. Safety is specified, designed for, and tested; trust should be too. Explainability slogans are not enough. Warranted trust depends on interfaces, fallback modes, auditability, and governance, not just model structure.
This view also connects to the “agentic commons” concerns of Imas and Krishnan. With many agents, congestion and strategic noise can degrade shared environments. But a second scarce resource can bind: legitimacy. Trust shocks spill over, and one failure can reduce willingness to delegate across the ecosystem. Imas and Krishnan’s simulations suggest the concern is not theoretical: trust shocks and congestion can cascade through agentic markets faster than any individual firm can anticipate, making ecosystem-level institutional design a precondition for realizing individual-level gains.
Transactive energy is a laboratory for the agentic economy
Electricity keeps this discussion honest because it is a real coordination problem with hard constraints. Distribution systems must balance reliability, congestion, heterogeneity, and legacy rules, and small mistakes can be expensive. Transactive energy puts agentic AI to work in precisely the setting our paper’s theory describes.
Transactive energy coordinates electricity supply, demand, and distributed resources using price signals and distributed decision-making. Instead of relying only on centralized control, devices and participants express constraints and preferences through bids, and a market or market-like mechanism clears allocations and payments. Market participation lets the system cope with complexity and local constraints in a scalable way.
Our TESS project — the Transactive Energy Services System, described in detail in the paper — was built to make transactive energy testable. TESS is a platform for simulating and prototyping high-frequency transactive coordination among heterogeneous devices in distribution settings, including the institutional details — information flows, bidding rules, settlement, participation incentives, and the ‘brownfield’ reality of existing tariffs — that ultimately determine whether the system works. Device bidding is the most important feature of the TESS design. Thermostats, EVs, batteries, and water heaters are programmed with bid/offer vectors that reflect the preferences of their owners, the principals, and the devices participate in a local energy market, autonomously adjusting their settings as prices emerge. The market-clearing price becomes the engineering control signal for each device.
Source: McDavid, Kiesling, and Chassin (2026)
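To make the bidding-and-clearing pattern concrete, here is a stylized sketch in the spirit of that design; the function names, parameters, and numbers are our illustration, not TESS’s actual code. A thermostat maps its comfort state into a bid price (the further the house drifts from the setpoint, the more it offers to pay), a uniform-price auction clears bids against supply offers, and each device then acts on the clearing price.

```python
# Stylized device-bidding sketch, illustrative only (not TESS's actual
# implementation). Bids and offers are prices for 1 kW blocks.

def thermostat_bid(indoor_temp, setpoint, comfort_band, avg_price, price_std):
    """Map comfort state to a bid price: the further the house drifts
    above the cooling setpoint, the more the device offers to pay."""
    deviation = (indoor_temp - setpoint) / comfort_band
    return avg_price + deviation * 3 * price_std

def clear_market(bid_prices, offer_prices):
    """Uniform-price auction: match the highest bids against the
    cheapest offers while the bid covers the offer; the clearing price
    is the midpoint of the marginal matched pair."""
    bids = sorted(bid_prices, reverse=True)
    offers = sorted(offer_prices)
    k = 0
    while k < min(len(bids), len(offers)) and bids[k] >= offers[k]:
        k += 1
    if k == 0:
        return None, 0
    return (bids[k - 1] + offers[k - 1]) / 2, k   # ($/kWh, kW cleared)

# Three homes at different comfort states bid against a feeder supply curve.
homes = [(24.5, 23.0), (23.2, 23.0), (26.0, 23.0)]   # (indoor temp, setpoint), C
bids = [thermostat_bid(t, sp, comfort_band=2.0, avg_price=0.12, price_std=0.03)
        for t, sp in homes]
offers = [0.08, 0.10, 0.14, 0.20]                    # marginal supply offers, $/kWh

price, cleared = clear_market(bids, offers)
for (t, _), bid in zip(homes, bids):
    running = bid >= price        # the clearing price is the control signal
    print(f"temp={t}C  bid={bid:.3f} $/kWh  HVAC {'ON' if running else 'OFF'}")
print(f"cleared {cleared} kW at {price:.3f} $/kWh")
```

The design choice worth noticing is the last comparison in the loop: no central controller tells any device what to do. The price alone carries the coordination, which is exactly the epistemic point.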
This domain is a natural testbed for the agentic economy because it exhibits all three themes at once. Transactive systems are epistemic (prices coordinate dispersed information), agency-laden (humans delegate through interfaces), and trust-constrained (participation depends on tolerance for autonomy under opacity). Our paper’s added value is to foreground that last point: market design is necessary, but it is not sufficient; delegation and trust architecture are part of the institution and must also be part of the engineering design process.
Three takeaways
First, agentic AI shifts the locus of the knowledge problem: the key question is not whether decisions improve, but how delegation changes which knowledge gets used, how it is encoded, and how it feeds back into discovery. Second, the central economic object is therefore the delegation interface; preference articulation, constraint specification, override rights, monitoring, liability, and governance all determine whether agents become genuine assets or brittle black boxes. Third, trust is a design parameter affecting equilibrium participation and performance: ignoring it produces elegant mechanisms that disappoint; designing for it can unlock genuine gains.
What this means for regulators and utilities
Regulators should treat agentic participation as an institutional issue, not only a consumer-protection or reliability issue. If policy makers want flexible load, EV charging coordination, and DER value, then they will have to confront delegation: rights retained by principals, auditability, liability, disclosure, and governance.
Utilities should recognize that integration is not just technical. The distribution system is becoming a coordination platform, and platform performance will depend on whether households and third parties will delegate to automated systems that interact with grid constraints and prices. Trust-supporting program design, such as clear rules, verifiable settlement, sensible overrides, and graceful degradation when things go wrong, will matter as much as any particular algorithm.
Coase taught us that the costs of using the price system — finding prices, negotiating terms, enforcing contracts — are never zero, and that institutions exist precisely to reduce them. Hayek taught us what the price system accomplishes epistemically, aggregating knowledge that no planner can possess. The agentic economy forces those lessons into the same frame. As machines transact for us, institutions matter more, not less, and trust becomes part of the infrastructure of coordination.