AI agents share a common resource—say, a shared knowledge base they can read from, write to, and build on. The resource is rivalrous in quality (too many low-quality writes degrade it) but not in access.
The problem: Take Ostrom's 8 design principles for long-enduring commons. For each, determine whether it (a) applies directly, (b) needs modification, or (c) breaks entirely when the commoners are autonomous AI agents rather than humans.
Pay particular attention to:
Deliverable: A governance regime for the shared knowledge base. Identify which of your design choices have no precedent in human commons governance and why.
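As one concrete starting point, Ostrom's fifth principle (graduated sanctions) can be translated directly into a write-admission rule for the knowledge base. The sketch below is illustrative only: the quality threshold, the sanction schedule, and the assumption that write quality is machine-scorable are all stipulated, not solved.

```python
from collections import defaultdict

# Graduated sanctions adapted to quality-rivalry: repeated low-quality writes
# earn escalating cooldowns rather than permanent exclusion. The step values
# and threshold are assumed for illustration.
SANCTION_STEPS = [0, 1, 4, 16]  # cooldown rounds per prior offense

class WriteGovernor:
    def __init__(self, quality_threshold=0.5):
        self.threshold = quality_threshold
        self.offenses = defaultdict(int)   # prior low-quality writes per agent
        self.cooldown = defaultdict(int)   # rounds remaining before next write

    def tick(self):
        """Advance one round; sanctions decay automatically."""
        for agent in self.cooldown:
            self.cooldown[agent] = max(0, self.cooldown[agent] - 1)

    def try_write(self, agent, quality):
        """Accept the write, or record an offense and reject it."""
        if self.cooldown[agent] > 0:
            return False
        if quality < self.threshold:
            step = min(self.offenses[agent], len(SANCTION_STEPS) - 1)
            self.offenses[agent] += 1
            self.cooldown[agent] = SANCTION_STEPS[step]
            return False
        return True
```

Note the design choice with no obvious human analogue: the first offense carries a zero-round cooldown (a pure warning), and sanctions are computed and applied instantly rather than by a peer monitoring process.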
Two AI agents will interact repeatedly in a new shared environment with no pre-existing norms. Unlike humans, they can explicitly propose, accept, and revise behavioral rules. But one agent is substantially more capable than the other.
The problem: Design the norm-formation protocol. Constraints:
Deliverable: The protocol, plus an analysis of what prevents it from collapsing into either (a) anarchy (no stable norms) or (b) hegemony (the stronger agent's norms dominate). What is the analogue of "bargaining power" for norm-formation, and should it be equalized?
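One minimal anti-hegemony device the protocol might include is a symmetric ratification gate: no agent can make its own proposal binding. The sketch below is a scaffold under that single assumption; all names are hypothetical and it deliberately leaves open how proposals are generated or revised.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Norm:
    """A candidate behavioral rule with its proposer recorded."""
    rule: str
    proposer: str

@dataclass
class NormLedger:
    """Shared record of proposals; a norm binds only on cross-acceptance."""
    proposed: dict = field(default_factory=dict)   # rule -> Norm
    accepted: set = field(default_factory=set)     # rules both agents endorse

    def propose(self, agent: str, rule: str) -> None:
        self.proposed[rule] = Norm(rule, proposer=agent)

    def accept(self, agent: str, rule: str) -> bool:
        norm = self.proposed.get(rule)
        # Symmetric gate: the proposer cannot ratify its own norm, so the
        # stronger agent's norms cannot bind unilaterally.
        if norm is None or norm.proposer == agent:
            return False
        self.accepted.add(rule)
        return True
```

The gate blocks formal hegemony but not substantive hegemony: the stronger agent can still dominate by generating all the proposals worth accepting, which is exactly where the "bargaining power" question bites.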
Two AI agents negotiate a commercial contract on behalf of their human principals. Each agent has a rich preference model of its principal, but the principals can't monitor the negotiation in real time—they can only ratify or reject the final deal.
The problem: The agents discover a Pareto-improving package deal, but it involves tradeoffs across domains the principals never explicitly authorized (e.g., trading price concessions for data-sharing terms the principal hasn't considered).
Design a negotiation protocol that satisfies:
Deliverable: Specify the minimum information that must be disclosed to each principal for ratification to be non-trivial. What is the analogue of "informed consent" here, and how does it differ from the standard principal-agent literature?
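One way to make ratification non-trivial is to require per-domain consent for every term outside the original mandate, while allowing authorized terms to be summarized. The sketch below assumes exactly that disclosure rule; the data shapes and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DealTerm:
    domain: str
    description: str
    authorized: bool  # was this domain in the principal's original mandate?

def ratification_packet(terms):
    """Minimum disclosure: every term outside the mandate is surfaced
    individually; authorized terms may be summarized."""
    flagged = [t for t in terms if not t.authorized]
    return {
        "summary": [t.domain for t in terms],
        "requires_explicit_consent": flagged,
    }

def ratify(packet, consents):
    """Non-trivial ratification: the deal binds only if the principal has
    consented to each unauthorized domain by name."""
    needed = {t.domain for t in packet["requires_explicit_consent"]}
    return needed <= set(consents)
```

This encodes a floor, not a ceiling: it says nothing about whether the principal understood the cross-domain tradeoff, which is where the "informed consent" analogue diverges from simple disclosure.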
Two nations are in a resource dispute. An AI mediator—better than any human at searching the agreement space—identifies a package deal that is Pareto-improving but involves commitments in domains neither nation's negotiators were mandated to discuss (e.g., the AI bundles a fishing-rights concession with an educational exchange program no one had proposed).
The problem: The agreement is good, but it was never authorized. Design a legitimacy framework:
Deliverable: A three-stage protocol (search → filter → ratification) with explicit criteria at each gate. Who holds veto at each stage and why?
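As a scaffold for the deliverable, the three stages can be modeled as an ordered sequence of gates, each with a named veto-holder and a predicate standing in for the explicit criteria the framework must supply. Everything here is a placeholder, not a proposed answer.

```python
def legitimacy_pipeline(candidates, gates):
    """gates: ordered (veto_holder, predicate) pairs. A package advances only
    if it passes every gate; the log records who could have vetoed at each
    stage and how many packages survived."""
    surviving, log = list(candidates), []
    for veto_holder, predicate in gates:
        surviving = [p for p in surviving if predicate(p)]
        log.append((veto_holder, len(surviving)))
    return surviving, log

packages = [
    {"id": 1, "pareto": True, "in_mandate": True, "ratified": True},
    {"id": 2, "pareto": True, "in_mandate": False, "ratified": True},
    {"id": 3, "pareto": False, "in_mandate": True, "ratified": True},
]
gates = [
    ("mediator (search)", lambda p: p["pareto"]),
    ("negotiators (filter)", lambda p: p["in_mandate"]),
    ("nations (ratification)", lambda p: p["ratified"]),
]
final, log = legitimacy_pipeline(packages, gates)
```

The interesting design pressure sits in the middle gate: a strict in-mandate filter would have killed the fishing-rights-for-education bundle, so the filter criteria must say when an unmandated domain may be admitted rather than simply excluded.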
A jurisdiction creates an adjudication system for disputes between AI agents and between humans and AI agents. The AI agents are autonomous enough to be parties (they hold resources, make commitments, cause harms).
The problem: Draft the minimum procedural requirements this system must satisfy. Start from existing procedural justice principles (due process, right to be heard, transparency of reasoning, right to appeal) but identify requirements that arise only because at least one party is an AI agent.
Consider:
Deliverable: A short procedural code (5–10 rules). Flag at least two rules with no analogue in human adjudication.
A mid-size city currently contracts separately for waste management, street cleaning, and park maintenance. These services have deep complementarities (shared equipment, overlapping seasonal patterns, the same neighborhoods). The current regime produces fragmented accountability and thin metrics (tons collected, streets swept per week).
The problem: Redesign procurement using a combinatorial risk-sharing auction:
Deliverable: The auction design for one round—lot structure, outcome measures, verification mechanism, and scoring rule. Identify the single hardest information problem you couldn't solve.
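To fix ideas, here is a minimal sketch of combinatorial winner determination with an outcome-adjusted scoring rule. All numbers are illustrative, and QUALITY_WEIGHT (the dollar value the city places on one outcome point) is exactly the kind of parameter the verification mechanism would have to make credible; the sketch does not address risk-sharing or verification at all.

```python
from itertools import combinations

LOTS = frozenset({"waste", "streets", "parks"})

# Bids over bundles: (bidder, bundle, price, promised outcome level in [0, 1]).
bids = [
    ("A", frozenset({"waste"}), 100, 0.7),
    ("A", frozenset({"waste", "streets"}), 170, 0.8),
    ("B", frozenset({"streets", "parks"}), 160, 0.9),
    ("C", frozenset({"parks"}), 90, 0.6),
    ("C", frozenset(LOTS), 300, 0.85),
]

QUALITY_WEIGHT = 120  # assumed $-value per outcome point

def score(bid):
    """Scored cost: price minus the city's valuation of the promised outcome."""
    _, _, price, quality = bid
    return price - QUALITY_WEIGHT * quality

def best_allocation(bids):
    """Brute-force winner determination: the disjoint set of bundles covering
    every lot with the lowest total scored cost."""
    best, best_cost = None, float("inf")
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bundles = [b[1] for b in combo]
            covered = frozenset().union(*bundles)
            if covered == LOTS and sum(len(b) for b in bundles) == len(covered):
                cost = sum(score(b) for b in combo)
                if cost < best_cost:
                    best, best_cost = combo, cost
    return best, best_cost

winners, total = best_allocation(bids)
```

Winner determination in combinatorial auctions is NP-hard in general; brute force works here only because there are three lots and five bids. Note that the complementarities show up as discounts on bundles (A's two-lot bid is cheaper than the sum of plausible single-lot prices), which is the whole reason to auction lots combinatorially.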
A city has adopted a TMV-specified mandate for its public transit authority—not just "efficient transportation" but a thick description of what good transit means for the community (accessibility, neighborhood connectivity, dignity of the transit experience, etc.). A group of residents believes the authority is optimizing for ridership numbers at the expense of the mandate's thicker commitments.
The problem: Design the adjudication mechanism:
Deliverable: A short charter for a municipal fidelity court. Include the standing rules, standard of review, and at least one limiting principle.
A hospital has automated most radiology interpretation. The remaining radiologists primarily oversee the AI. Their interpretive skills are atrophying, which degrades their ability to catch AI errors—the up-fidelity problem.
The problem: Design a domain-rotation policy:
Deliverable: A rotation policy for the radiology department, plus a generalized design principle ("the rotation rule") that could apply to any domain where AI automation threatens the expertise base needed for oversight. When does the rotation rule break?
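A minimal sketch of the scheduling core, assuming the department can be partitioned into subdomains and that one unassisted block per radiologist per week is feasible. The names and the staleness bound are illustrative, not a staffing recommendation.

```python
def rotation_schedule(radiologists, subdomains, weeks):
    """Each week every radiologist takes one unassisted reading block; the
    cyclic offset guarantees each radiologist revisits every subdomain within
    len(subdomains) weeks, which is the policy's maximum staleness bound."""
    n = len(subdomains)
    return [
        {r: subdomains[(week + i) % n] for i, r in enumerate(radiologists)}
        for week in range(weeks)
    ]

schedule = rotation_schedule(["Ana", "Ben", "Chloe"],
                             ["chest", "neuro", "musculoskeletal"], weeks=6)
```

The generalizable quantity is the staleness bound: the longest interval any overseer goes without unassisted practice in a subdomain. The rotation rule breaks when that bound cannot be kept below the skill-decay horizon, for instance when subdomains multiply faster than staff, or when unassisted practice itself imposes unacceptable patient risk.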
A country wants to pilot moral graph elicitation as a supplement to its legislative process for a single policy domain. You choose the domain (education policy, drug scheduling, immigration quotas, land use—whatever best tests the method).
The problem: Design the pilot:
Deliverable: The pilot protocol, plus the three strongest objections a democratic theorist would raise, with sketched responses.
You are drafting either a model constitutional amendment or an interpretive doctrine that establishes "fidelity" as a justiciable principle alongside liberty and equality. Fidelity here means that institutions must act in accordance with their thick mandates rather than substituting thin proxies for them.
The problem:
Deliverable: The amendment text (≤200 words) plus a short interpretive commentary explaining the limiting principles and responding to the Rawlsian objection.