An Identity Security taxonomy for Agentic AI

by Anand Srinivas

Agentic AI is a fundamentally new paradigm. AI agents can interact with various tools and act dynamically and probabilistically as they encounter new inputs. That means they end up falling somewhere between an application and a user in terms of how they operate. Indeed, the interaction with other applications is what gives agentic AI its power; however, this also has implications for identity security and access management.

One complicating factor in this new paradigm is that AI agents come in a variety of forms; for example, they can support an individual user with a simple task or serve broader use cases that require access to sensitive data. We’ve found it helpful to develop a simple taxonomy for agentic AI that guides the specific security measures that must be considered for each agent.

At 1Password, we have a set of AI security principles that apply across this entire taxonomy. The taxonomy is intended to help determine the additional, specific security and access management needs that must be considered based on the agent’s intended actions. We break this down into three distinct categories:

  • What type of AI agent is it, and how does it interact with the world?
  • Where is the agent running?
  • Who is the agent running on behalf of?

What type of AI agent is it, and how does it interact with the world?

There are two broad ways in which agentic AI can interact with other applications, tools, and services. The first is to mimic how a person would operate, most likely through a browser. Alternatively, agentic AI can use programmatic means, such as APIs, MCP, or other non-browser mechanisms to access services.

While an agent may use these methods either serially or in parallel, for simplicity we’ll assume the agent falls into one camp or the other. The key point is that the method profoundly shapes the identity security requirements. For example, browser-based credentials are often very different from those used by programmatic AI. Where a browser agent might need usernames and passwords, passkeys, or even modern frameworks like WebMCP, a programmatic agent might need API keys, MCP, or other means to interact. Moreover, a browser is an execution environment that needs protective measures, such as a credential manager, to fill credentials securely on behalf of the agentic AI. In contrast, programmatic AI relies on other mechanisms, such as APIs, to deliver those secrets securely. This leads to our first classification:

What type of AI agent is it?

  • Browser AI agent - interacts with applications and services via a browser.
  • Programmatic AI agent - interacts with applications and services via APIs, MCP, agent-to-agent (A2A), etc.
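The relationship between agent type and credential mechanism described above can be sketched in code. This is a minimal illustration; the enum, dictionary, and function names are hypothetical and not part of any real product API:

```python
# Illustrative sketch: credential mechanisms by agent type, as described above.
# All names here (AgentType, CREDENTIAL_MECHANISMS, mechanisms_for) are hypothetical.
from enum import Enum

class AgentType(Enum):
    BROWSER = "browser"            # interacts via a browser
    PROGRAMMATIC = "programmatic"  # interacts via APIs, MCP, A2A, etc.

# Credential mechanisms each agent type typically relies on.
CREDENTIAL_MECHANISMS = {
    AgentType.BROWSER: ["username/password", "passkey", "WebMCP"],
    AgentType.PROGRAMMATIC: ["API key", "MCP", "agent-to-agent (A2A)"],
}

def mechanisms_for(agent_type: AgentType) -> list[str]:
    """Return the credential mechanisms typically needed by this agent type."""
    return CREDENTIAL_MECHANISMS[agent_type]
```

The point of the split is that each branch implies a different delivery path: a credential manager that fills into the browser on one side, secrets delivered over an API on the other.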

Where is the agent running?

Many AI agents run on endpoints, such as a person’s laptop or smartphone. On the other hand, a growing number of AI agents are being deployed remotely in public or private clouds. This delineation matters from an identity security standpoint. In the first case, the agent is running in a user’s local, trusted environment. From an enterprise perspective, these devices are typically managed and protected by tools such as MDM and EDR. By contrast, a remote deployment implies that the AI agent operates from a source outside a user’s trusted environment.

Additionally, there is a strong implication (though not always true) that in the local case, a person is actively present. In contrast, in the remote case, the workload is likely running autonomously and/or asynchronously. These distinctions are critical to understanding how the agent gets its authority (e.g., does it simply inherit the user’s credentials?), how it accesses secrets (e.g., via a secrets or privileged access management solution?), and other relevant questions around agentic identity (e.g., how to make the agent’s identity explicit and distinct from the user’s identity?). It leads to the next classification:

Where is the agent running?

  • Endpoint - running on a device or workstation.
  • Remote - running in a private or public cloud.

Who is the agent running on behalf of?

AI agents can be used by individuals to vibe code or to automate various tasks. A company can use them internally to automate tasks, run testing pipelines, or host internal applications. Finally, they can also be used in production, customer-facing applications. Each of these scenarios differs in the authority behind the running agent and in the access and credentials the agent relies on to interact with the tools and services it needs. Thus, a third, extremely important classification is:

Who is the agent running on behalf of?

  • Employee - the agent accomplishes a task for an individual employee.
  • Company - Internal - the agent is used for an internal company use case.
  • Company - External - the agent is used in an external-facing production environment.
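Putting the three classifications together, an agent can be described as one point in a three-axis taxonomy. The sketch below is illustrative only; the class and enum names are hypothetical, not a 1Password API:

```python
# Illustrative sketch of the three-axis taxonomy; all names are hypothetical.
from dataclasses import dataclass
from enum import Enum

class AgentType(Enum):
    BROWSER = "browser"            # interacts via a browser
    PROGRAMMATIC = "programmatic"  # interacts via APIs, MCP, A2A, etc.

class RunLocation(Enum):
    ENDPOINT = "endpoint"          # a device or workstation
    REMOTE = "remote"              # a private or public cloud

class Principal(Enum):
    EMPLOYEE = "employee"                  # a task for an individual employee
    COMPANY_INTERNAL = "company-internal"  # an internal company use case
    COMPANY_EXTERNAL = "company-external"  # an external-facing production environment

@dataclass(frozen=True)
class AgentProfile:
    """One point in the taxonomy: agent type x run location x principal."""
    agent_type: AgentType
    location: RunLocation
    principal: Principal

# Example: an agent on an employee's laptop automating a task in their browser.
profile = AgentProfile(AgentType.BROWSER, RunLocation.ENDPOINT, Principal.EMPLOYEE)
```

Each distinct profile then drives a different set of security questions: which credential types apply, where secrets are delivered, and whose authority the agent inherits.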

Examples of the taxonomy in the real world

While there are similarities across use cases, applying the taxonomy quickly makes clear how different use cases can carry different security requirements. Here are some examples mapped to the taxonomy:

Browser AI Agents

  • An agent is running on an endpoint, on behalf of an employee, to automate a task in that employee’s browser.
  • An agent is running remotely, on behalf of an employee, to automate tasks via a browser while the employee is offline.
  • An agent is running remotely, on behalf of the company, as part of an internal application for employees, operating via a browser while employees are offline.
  • An agent is running remotely, on behalf of the company, as part of an external, customer-facing application via a browser.

Programmatic AI Agents

  • An agent is running on an endpoint, on behalf of an employee, for vibe coding or general software development.
  • An agent is running remotely, on behalf of an employee, to automate a task while the employee is offline.
  • An agent is running remotely, on behalf of the company, as part of a deployed internal application.
  • An agent is running remotely, on behalf of the company, as part of an external, customer-facing application in production.

Applying the taxonomy to agentic AI access management

The recently announced Secure Agentic Autofill is a 1Password capability focused on securing credentials when used with a browser AI agent. We can apply the taxonomy to categorize the use cases this feature addresses:

  • Browser AI agent: 1Password can securely deliver credentials into a secure extension in the remote browser, which then fills the credentials on behalf of the AI agent.
  • Remote: The credentials are end-to-end encrypted over the network, and the solution will support synchronous and asynchronous use cases.
  • Employee, Company, or Customer: All of these use cases are supported, with the nuance being exactly whose vault the credentials are coming from and what the appropriate human-in-the-loop authorization process is.

This taxonomy, along with our security principles, provides a starting point to identify ways to balance productivity and security for our customers.

Why an agentic AI taxonomy matters for identity security

The classifications above provide clear guidance on the security requirements that must be implemented for organizations to adopt agentic AI safely. By mapping the agent type to where it is running and who it is running on behalf of, you can quickly and easily understand what is required to secure agentic AI.

Anand Srinivas - VP, Product & AI