The security principles guiding 1Password’s approach to AI

by Anand Srinivas

AI is transforming the way we work, opening immense opportunities for automation, intelligent decision-making, and productivity gains. But that opportunity comes with equally great responsibility, especially where security is involved. AI systems can now act on behalf of users, access sensitive data across tools, and make decisions without direct oversight, all of which carry security implications.

Building AI you can trust

One broader principle we embrace at 1Password is the “principle of yes.” It’s the idea that security must enable individuals and employees to do their jobs. This underlying principle is also true of AI agents. Our goal is to enable AI agents to do what they’re designed to, but in a way that is trustworthy, secure, and follows best practices.

At 1Password, we strive to make security effortless and universal. When it comes to AI, that means enabling organizations to use AI tools effectively without compromising our core security values of privacy, transparency, and trust.

As we empower our customers to securely adopt AI, we are building around a clear set of principles. Below are the security principles that will guide how we build, adopt, and integrate AI—today and in the future.

Secrets stay secret

Encryption is the foundation of our trust model. Any interaction involving credentials must preserve 1Password’s zero-knowledge architecture, no exceptions.

Authorization must be deterministic, not probabilistic

LLMs are not authorization engines. While they can assist in interpreting user intent, access decisions must be governed by predictable, rule-based flows.

Furthermore, users should see a familiar, deterministic authorization prompt rather than an ambiguous chat message, where the LLM could even mislead you into granting access to Y while appearing to ask for X. Users must always see exactly what they are granting access to, and that prompt should come from a deterministic, system-level mechanism operated by a trusted party (e.g. your OS or 1Password), not non-deterministically in-chat.
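As a minimal sketch of this idea (all names here are illustrative, not 1Password APIs): an LLM may *propose* an action, but the actual access decision comes from a rule-based check that always produces the same answer for the same input.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    agent_id: str
    vault: str
    item: str
    action: str  # e.g. "read"

# Deterministic allow-list of (agent, vault, action) tuples.
# In practice this would be an admin-managed policy, not a literal.
POLICY = {
    ("deploy-bot", "prod-secrets", "read"),
}

def decide(req: AccessRequest) -> bool:
    """Same input always yields the same answer: no model inference involved."""
    return (req.agent_id, req.vault, req.action) in POLICY

# The LLM's suggestion is only ever an *input* to this check,
# never the check itself.
assert decide(AccessRequest("deploy-bot", "prod-secrets", "db-password", "read"))
assert not decide(AccessRequest("rogue-bot", "prod-secrets", "db-password", "read"))
```

Because the decision is a pure lookup, it can be audited, tested, and reasoned about in ways a probabilistic model output cannot.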

Raw credentials should never enter the LLM context

LLMs operate in untrusted inference environments, with open-ended context windows and memory. Raw secrets have no place in prompts, embeddings, or fine-tuning data.
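One common pattern for keeping secrets out of model context is to let the model work only with opaque secret *references*, resolving them to real values in a trusted runtime after approval. The sketch below assumes a hypothetical `op://`-style reference scheme and an illustrative in-memory store; neither is a real 1Password interface.

```python
import re

# Illustrative secret store; a real system would resolve references
# against an encrypted vault, never a plain dict.
SECRETS = {"op://prod/db/password": "s3cr3t-value"}

REF = re.compile(r"op://[\w./-]+")

def render_for_llm(command: str) -> str:
    """What the model sees: references stay opaque and reveal nothing."""
    return command

def resolve_outside_llm(command: str) -> str:
    """Run by a trusted runtime after approval; output never re-enters
    the model's context window."""
    return REF.sub(lambda m: SECRETS[m.group(0)], command)

cmd = "psql --password op://prod/db/password"
assert "s3cr3t-value" not in render_for_llm(cmd)
assert resolve_outside_llm(cmd) == "psql --password s3cr3t-value"
```

The key property is directional: resolution happens strictly downstream of the model, so no prompt, embedding, or transcript ever contains the raw value.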

Auditability must be built in

Every action involving credential access, by a user or an AI agent, should leave an audit trail. Because AI agents are capable of taking action and may have access to sensitive data, organizations need visibility into what was accessed, what actions took place, and the context in which the agent was approved to act.
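A structured audit record might capture that context like this. The field names are purely illustrative; the point is that each access event ties the actor, the action, and the approval together in a queryable form.

```python
import json
import time

def audit_event(actor: str, actor_type: str, item: str, action: str,
                approved_by: str) -> str:
    """Emit one audit record as a JSON line (hypothetical schema)."""
    event = {
        "ts": time.time(),
        "actor": actor,
        "actor_type": actor_type,   # "user" or "agent"
        "item": item,
        "action": action,
        "approved_by": approved_by, # who granted the agent this access
    }
    return json.dumps(event)

line = audit_event("deploy-bot", "agent", "prod-db-password",
                   "read", "alice@example.com")
assert json.loads(line)["actor_type"] == "agent"
```

Recording `approved_by` alongside the action is what lets an organization reconstruct not just *what* an agent did, but *why it was allowed to*.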

Show what AI can see – and what it can’t

Users deserve clarity about how AI is used in 1Password products, including what data is accessed, when, and why.

Least privilege and minimum exposure by default

Agentic systems must follow the same access discipline we expect of humans: only what’s needed, only when needed.
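Least privilege for an agent can be expressed as a grant that is scoped to a single item and bounded in time: nothing broader than needed, nothing longer than needed. This is a hypothetical sketch, not a 1Password data model.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    agent_id: str
    item: str
    expires_at: float  # Unix timestamp

def is_valid(grant: Grant, agent_id: str, item: str, now: float) -> bool:
    """Only the named agent, only the named item, only until expiry."""
    return (grant.agent_id == agent_id
            and grant.item == item
            and now < grant.expires_at)

# A five-minute grant for one specific credential.
g = Grant("deploy-bot", "prod-db-password", expires_at=time.time() + 300)
assert is_valid(g, "deploy-bot", "prod-db-password", time.time())
assert not is_valid(g, "deploy-bot", "prod-api-key", time.time())      # wrong item
assert not is_valid(g, "deploy-bot", "prod-db-password",
                    time.time() + 600)                                 # expired
```

Expiry by default means forgotten grants fail closed rather than lingering as standing access.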

Security and usability are co-requirements

Security is only effective if it’s usable. Our goal is to build secure-by-default experiences that feel intuitive for users working with or through AI.

1Password & Security-first AI

Security is not a bolt-on at 1Password; it's built into everything we do. We take the security of AI seriously as we continue to deliver best-in-class access management, a commitment we reflected in a recent blog post on why we won't expose raw credentials via MCP. It's also why existing approaches that rely on separate, siloed tools for privileged access and secrets management no longer work: these concepts need to be unified into a common scheme for managing both user and agentic AI access.

As we bring the power of AI into our platform, these principles ensure that innovation never comes at the expense of trust. We believe the use of AI must be private, transparent, and secure by default—and we’re making it so.

Anand Srinivas - VP, Product & AI