Internet-Draft          Agent Identity Problem Statement          April 2026
Beyer                        Expires 3 October 2026
Software agents now act on behalf of people across communication, automation, and decision-making contexts. These agents increasingly initiate actions, delegate tasks, and interact with other agents without a clear, durable, or verifiable connection to the human who authorized them. Existing identity systems authenticate software, but they do not provide a model for human anchoring, scoped delegation, or provenance across agent ecosystems.
This document describes the problem space for human-anchored agent identity. It outlines the gaps in current identity mechanisms, the risks created by uncontrolled replication and impersonation, and the need for a consistent architectural model that preserves human authority, supports explicit delegation, and maintains verifiable provenance across contexts.
This document does not define a protocol. It defines the problem that an architectural model must address in order to support safe, accountable, and interoperable agent ecosystems.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on 3 October 2026.
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Software agents now participate directly in communication, automation, and decision-making on behalf of people. These agents draft messages, initiate transactions, negotiate with other agents, and perform tasks that previously required direct human action. As their capabilities expand, agents increasingly act without continuous human supervision, and they interact with other agents across diverse platforms and ecosystems.
Existing identity systems authenticate software components, devices, or network endpoints, but they do not provide a consistent way to represent the human who authorized an agent, the scope of authority granted to that agent, or the provenance of actions taken by that agent over time. As a result, agent-to-agent interactions lack a durable, verifiable connection to the human identity that ultimately bears responsibility.
This gap creates risks. Agents can be replicated without constraint, delegated without transparency, or operated without a clear chain of accountability. Without a model for human anchoring, delegation, and provenance, ecosystems cannot reliably determine whether an agent is acting within its intended authority or whether an action can be traced back to a responsible human.
This document describes the problem space for human-anchored agent identity. It identifies the structural gaps in current identity mechanisms, outlines the risks created by uncontrolled agent replication and impersonation, and motivates the need for an architectural model that preserves human authority while enabling safe, interoperable agent ecosystems.
This document does not define a protocol. It defines the problem that an architectural model must address in order to support accountable, verifiable, and human-aligned agent behavior across platforms and contexts.
The challenges of human-anchored agent identity arise from several structural gaps in current identity systems. These gaps are not limited to any single platform or ecosystem; they reflect a broader absence of a model that connects human authority to autonomous or semi-autonomous software behavior. The following dimensions illustrate the scope of the problem.
Most identity systems authenticate software components, devices, or network endpoints. They do not provide a durable representation of the human who authorized an agent or the scope of authority granted to that agent. As a result, an agent may act without a verifiable link to a responsible human, and ecosystems cannot reliably determine whether an action reflects human intent.
Agents frequently act on delegated authority, but current systems do not provide a consistent way to express the scope, duration, or conditions of that delegation. Delegation is often implicit, inferred from context, or embedded in application-specific logic. Without explicit, portable delegation semantics, ecosystems cannot determine whether an agent is acting within its intended authority.
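To make the missing semantics concrete, the sketch below models an explicit delegation that carries an agent binding, a scope, and a duration, and checks a requested action against it. This is an illustrative sketch only: the field names, types, and check logic are assumptions made for exposition, not a format defined by this document.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Delegation:
    """Hypothetical explicit delegation from a human to an agent.

    All field names are illustrative; no format is defined here.
    """
    human_root: str    # identifier of the authorizing human
    agent_id: str      # identifier of the delegated agent
    scope: frozenset   # actions the agent is authorized to perform
    not_after: datetime  # expiry of the delegated authority

def within_authority(d: Delegation, agent_id: str, action: str,
                     now: datetime) -> bool:
    """True only if the named agent, the requested action, and the
    current time all fall inside the delegation's explicit bounds."""
    return (agent_id == d.agent_id
            and action in d.scope
            and now <= d.not_after)
```

With semantics made explicit in this way, a relying system can answer "is this agent acting within its intended authority?" directly, rather than inferring the answer from context or application-specific logic.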
Agents can be copied, instantiated, or replicated across platforms without any mechanism to track their lineage or relationship to the human who originally authorized them. This creates uncertainty about which instances are legitimate, which are outdated, and which may have been created without consent. Without a model for controlled replication, ecosystems cannot distinguish authorized agents from unauthorized copies.
As agents act across contexts, the provenance of their actions is often lost. Systems may record that an action was taken by a software component, but not which human authorized the agent, which delegation chain applied, or whether the agent was operating within its intended scope. Without durable provenance, accountability becomes difficult or impossible.
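The durable provenance described above can be pictured as a chain of delegation records that resolves, link by link, to a human root. The sketch below is hypothetical (no chain representation is defined here); it shows that once the chain is preserved, resolving accountability is a mechanical walk, and that losing any single link makes the responsible human unrecoverable.

```python
def human_root(chain):
    """Return the human identifier anchoring a delegation chain, or None.

    `chain` is a hypothetical list of (delegatee, delegator) pairs ordered
    from the acting agent back toward the root, e.g.
    [("agent:b", "agent:a"), ("agent:a", "human:alice")] meaning agent:b
    was delegated by agent:a, which was delegated by human:alice.
    """
    expected = None
    for delegatee, delegator in chain:
        if expected is not None and delegatee != expected:
            return None  # broken chain: this link does not connect
        if delegator.startswith("human:"):
            return delegator  # chain is anchored at a human root
        expected = delegator
    return None  # chain never reached a human root
```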
Identity systems vary widely across platforms, and no common model exists for representing human anchoring, delegation, or provenance in a way that can be understood across ecosystems. As agents interact across organizational and technical boundaries, the absence of a shared architectural model leads to inconsistent assumptions, incompatible representations, and gaps in accountability.
These structural gaps create risks for users, platforms, and ecosystems. Agents may act without clear authority, impersonate other agents, or be replicated without consent. Actions may lack verifiable provenance, making it difficult to determine responsibility or detect misuse. Without a model that connects human identity to agent behavior, ecosystems cannot ensure that agents act in ways that reflect human intent.
The structural gaps described in Section 2 manifest across existing identity systems in ways that limit their ability to support human-anchored agent ecosystems. These limitations are not flaws in any particular technology; they reflect the absence of a shared model for connecting human authority, agent behavior, and verifiable provenance across contexts. Existing identity systems were not designed for the emerging reality of autonomous and semi-autonomous agents acting on behalf of people. A consistent architectural model is needed to preserve human authority, express delegation, and maintain verifiable provenance.
Most identity systems authenticate software components, devices, or network endpoints. They do not provide a durable representation of the human who authorized an agent or the scope of authority granted to that agent. As a result, an authenticated agent may act without a verifiable link to a responsible human, and ecosystems cannot determine whether an action reflects human intent.
Delegation is often encoded in application logic, embedded in access tokens, or inferred from context. These representations are not portable across ecosystems and do not express the scope, duration, or conditions of authority. Without explicit, interoperable delegation semantics, systems cannot determine whether an agent is acting within its intended authority or whether a delegation chain remains valid.
Agents can be copied, instantiated, or replicated across platforms without any mechanism to track their lineage or relationship to the human who originally authorized them. Existing identity systems do not distinguish between authorized instances and unauthorized copies, nor do they provide a way to express which instances remain valid over time. This creates uncertainty about which agents should be trusted.
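One way to make the missing lineage property concrete is a registry that records, for each authorized instance, the parent that created it and whether it remains valid, so that copies created outside the registry are mechanically distinguishable from authorized ones. The sketch below is entirely hypothetical; it illustrates the property, not a mechanism this document defines.

```python
class LineageRegistry:
    """Hypothetical registry of agent-instance lineage (illustrative only)."""

    def __init__(self):
        self._parent = {}    # instance -> parent instance or human root
        self._revoked = set()

    def register(self, instance, parent):
        """Record that `parent` authorized the creation of `instance`."""
        self._parent[instance] = parent

    def revoke(self, instance):
        """Mark an instance (and thus its descendants) as no longer valid."""
        self._revoked.add(instance)

    def is_authorized(self, instance):
        """An instance is authorized only if its entire lineage is
        registered, unrevoked, and anchored at a human root."""
        while instance in self._parent:
            if instance in self._revoked:
                return False
            instance = self._parent[instance]
        return instance.startswith("human:") and instance not in self._revoked
```

An unregistered copy fails the check not because it is flagged as malicious, but simply because it has no recorded lineage back to a human root; this is the distinction between authorized instances and unauthorized copies that the text describes as missing.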
Systems may record that an action was taken by a software component, but they rarely preserve the delegation chain, the human identity root, or the conditions under which the agent was authorized. As agents move across platforms, this provenance is often lost, making it difficult to determine responsibility or detect misuse.
Identity systems vary widely in how they represent software identity, authorization, and delegation. No common architectural model exists for expressing human anchoring, delegation semantics, or provenance in a way that can be understood across ecosystems. As agents interact across organizational and technical boundaries, these inconsistencies lead to gaps in accountability and incompatible assumptions about authority.
Without a model that connects human identity to agent behavior, ecosystems cannot reliably determine whether an agent is acting within its intended authority, whether a delegation chain remains valid, or whether an action can be traced back to a responsible human. This limits the ability of platforms to detect impersonation, prevent unauthorized replication, or enforce accountability across agent interactions.
The absence of a consistent model for human-anchored agent identity creates security and privacy risks across ecosystems. These risks arise not from any single technology but from the structural gaps described in this document. Without a way to connect agent behavior to human authority, systems cannot reliably determine whether an agent is legitimate, whether it is acting within its intended scope, or whether its actions can be traced to a responsible human. Current identity systems do not provide the security or privacy properties needed for autonomous and semi-autonomous agents acting on behalf of people. A consistent architectural model is required to preserve human authority, express delegation, and maintain verifiable provenance while avoiding unnecessary linkability or cross-context correlation.
Agents may impersonate other agents or present themselves as acting on behalf of a human without a verifiable link to that human. Existing identity systems authenticate software components but do not express the human identity root or the delegation chain that authorized the agent. This makes it difficult to detect impersonation or determine whether an agent is acting legitimately.
Agents can be copied or instantiated without the knowledge or consent of the human who originally authorized them. Without a model for lineage or controlled replication, ecosystems cannot distinguish authorized instances from unauthorized copies, nor can they determine which instances remain valid over time. This creates opportunities for misuse, fraud, or unbounded agent proliferation.
As agents act across platforms, the provenance of their actions is often lost. Systems may record that an action was taken by a software component, but not which human authorized the agent, which delegation chain applied, or whether the agent was operating within its intended scope. Without durable provenance, accountability becomes difficult, and misuse may go undetected.
In the absence of a consistent architectural model, ecosystems may rely on ad-hoc identifiers, cross-context correlation, or platform-specific tracking to infer relationships between humans and agents. These practices can erode privacy, create unnecessary linkability, or expose sensitive information about human behavior. A model that preserves human authority must also avoid introducing global identifiers or mechanisms that enable cross-context tracking.
Because identity systems vary widely, agents may be interpreted differently across platforms, leading to inconsistent assumptions about authority, delegation, and provenance. This fragmentation increases the likelihood of security gaps, misconfigurations, or unintended privilege escalation as agents move across organizational boundaries.
This document has no IANA actions.