Going deeper: Agentic AI Security
AI agents are rapidly evolving from experimental assistants into production infrastructure. They orchestrate processes, access systems, and make independent decisions across system boundaries. While many discussions focus on models, use cases, and integrations, one crucial dimension often remains underexposed: security.
The uncomfortable reality is that many current AI architectures are not sufficiently thought through from a security perspective. The reason for this lies not in individual implementation details, but in a fundamental misunderstanding: AI agents are treated like traditional applications, even though they behave fundamentally differently.
This is exactly where the concept of Agentic AI Security comes in.
Autonomy is changing security logic
Traditional software follows defined workflows. A user initiates a process, the application executes it, and the underlying permissions can be modeled according to this logic. Access patterns are predictable, and interactions are clearly structured. AI agents break this model.
An agent is given a goal and decides for itself how to achieve it. It analyzes context, evaluates options, and interacts with different systems. The specific sequence of actions only emerges at runtime. This shifts control from static logic to dynamic decision-making processes.
From a technical perspective, this means: A system’s behavior is no longer fully known at design time. And this is precisely what the vast majority of security models are not designed for.
AI agents as a new identity class
A central aspect of this shift is the role of identity.
Traditional IAM systems distinguish between human users, technical services, and devices. AI agents do not fit into any of these categories. They do not act like humans, nor do they follow deterministic processes like traditional services. Instead, they make decisions independently and dynamically choose which actions to perform.
In practice, they can best be described as “reasoning-driven identities”. This new identity class has two key characteristics:
- First, their behavior is context-dependent.
- Second, their scope of action is not fully predictable.
As a result, it is no longer sufficient to grant static permissions. Security must be based on dynamic behavior.
Why classic OAuth models reach their limits
OAuth2 and OpenID Connect remain central building blocks of modern security architectures. However, they were designed for a different interaction model: known clients, clearly defined resources, and relatively stable access patterns. In agent-driven systems, this model shifts.
An agent is not a traditional client. It decides for itself which resources to use and which actions are necessary. The set of possible interactions is no longer fully known in advance. Traditional scopes thus lose their precision. The central question of authorization changes fundamentally.
It is no longer a matter of whether a system is allowed to access an API.
The question is whether an agent is permitted to perform a specific action in a given context.
In MCP-based architectures, this shift becomes particularly clear, as capabilities are modeled as tools. Authorization thus takes place at the level of individual actions rather than at the system level.
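Tool-level authorization can be sketched as follows. This is a minimal, illustrative example: the names `ToolCall`, `AGENT_GRANTS`, and `authorize_tool_call` are hypothetical and not part of the MCP specification; the point is that grants attach to individual actions, not to whole systems.

```python
# Hedged sketch: per-tool authorization in an MCP-style server.
# All identifiers here are illustrative, not taken from any real framework.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ToolCall:
    agent_id: str
    tool: str
    arguments: dict = field(default_factory=dict)


# Permissions are granted per tool (i.e. per action), not per system.
AGENT_GRANTS = {
    "invoice-agent": {"read_invoice", "create_draft_invoice"},
}


def authorize_tool_call(call: ToolCall) -> bool:
    """Allow a call only if this agent holds a grant for this specific tool."""
    return call.tool in AGENT_GRANTS.get(call.agent_id, set())


# The agent may read invoices ...
assert authorize_tool_call(ToolCall("invoice-agent", "read_invoice"))
# ... but not delete them, even though both tools sit behind the same API.
assert not authorize_tool_call(ToolCall("invoice-agent", "delete_invoice"))
```

The design choice to authorize each tool invocation individually replaces the coarse "may this client call this API?" question with the finer-grained question the article raises.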
Delegation and traceability as a core issue
Another critical issue is delegation. In most scenarios, AI agents act on behalf of a user, yet they decide independently how to carry out the task. This creates a duality of identities: the user as the initiator and the agent as the executing entity.
Without a clear delegation model, several risks arise. Actions can no longer be unambiguously assigned, permissions can be implicitly expanded, and auditability is lost.
Technically, this problem can be solved using advanced token models, such as token exchange and actor claims. In this approach, the token represents both the agent’s identity and the user’s identity. This is the only way to determine on whose behalf an action is being performed.
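This dual identity can be sketched with the `act` (actor) claim defined by OAuth2 Token Exchange (RFC 8693). The issuer URL, subject, and audience values below are made-up examples; the claim structure itself follows the spec.

```python
# Sketch of a delegated access-token payload after a token exchange:
# "sub" carries the initiating user, "act" carries the executing agent
# (per RFC 8693). Concrete values are illustrative.
delegated_token = {
    "iss": "https://idp.example.com",          # issuing identity provider
    "sub": "user-42",                          # the user (initiator)
    "aud": "https://api.example.com/orders",   # target system (audience)
    "act": {"sub": "agent-7"},                 # the agent (acting party)
    "scope": "orders:read",
}


def acting_party(token: dict) -> str:
    """The identity that actually performs the action."""
    return token.get("act", {}).get("sub", token["sub"])


def on_behalf_of(token: dict) -> str:
    """The identity on whose behalf the action is performed."""
    return token["sub"]


assert acting_party(delegated_token) == "agent-7"
assert on_behalf_of(delegated_token) == "user-42"
```

Because both identities travel in one token, every downstream system and every audit entry can distinguish initiator from executor.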
This separation is essential for any serious Agentic AI Security architecture.
From static authorization to context-based policies
Another fundamental shift concerns the nature of authorization.
Role-based models quickly reach their limits because they lack the flexibility to map dynamic decisions. Instead, attribute- and policy-based approaches are gaining importance.
Authorization becomes a runtime decision that takes several factors into account: identity, context, target system, and specific action.
Specifically, this means:
No action by an agent is permitted per se; each one is evaluated against rules that apply situationally. Such rules can, for example, stipulate that an agent may only act within certain projects or has read-only access to sensitive data.
Technologically, this leads to a greater decoupling of policy and application. Policy engines become central components that make authorization decisions, while systems merely enforce these decisions.
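The decoupling of policy and enforcement can be sketched as a small in-process policy engine. Real deployments would typically use a dedicated engine such as OPA or Cedar; the functions, field names, and context attributes below are illustrative assumptions that mirror the two example rules above.

```python
# Hedged sketch of policy-based access control (PBAC) at runtime.
# The policy engine decides; target systems merely enforce the decision.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Request:
    agent_id: str
    action: str            # e.g. "read" or "write"
    resource: str          # e.g. "project:alpha/report"
    context: dict = field(default_factory=dict)


def policy_project_scope(req: Request) -> bool:
    # Rule 1: an agent may only act within projects assigned to it.
    project = req.resource.split("/")[0]
    return project in req.context.get("assigned_projects", [])


def policy_sensitive_read_only(req: Request) -> bool:
    # Rule 2: sensitive data is read-only for agents.
    if req.context.get("sensitivity") == "high":
        return req.action == "read"
    return True


POLICIES = [policy_project_scope, policy_sensitive_read_only]


def decide(req: Request) -> bool:
    """Central authorization decision: all applicable policies must allow."""
    return all(policy(req) for policy in POLICIES)


ctx = {"assigned_projects": ["project:alpha"], "sensitivity": "high"}
assert decide(Request("agent-7", "read", "project:alpha/report", ctx))
assert not decide(Request("agent-7", "write", "project:alpha/report", ctx))
```

The same request can thus be allowed in one context and denied in another, which is exactly what static role assignments cannot express.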
The real challenge: cross-system interactions
Complexity continues to rise as agents combine multiple systems with one another.
A single workflow can span different security domains. In doing so, identities, permissions, and contextual information must be transferred consistently.
This is where mechanisms such as token exchange, audience-specific tokens, and standardized claims come into play. They enable identity contexts to be propagated across system boundaries without losing control.
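Such a hand-off between security domains can be sketched as a token-exchange request per RFC 8693. The endpoint, audience URL, and token placeholders are illustrative; the parameter names follow the specification.

```python
# Sketch of the form parameters an agent could send to the identity
# provider's token endpoint to obtain an audience-specific token for the
# next security domain (RFC 8693). Placeholder values are not real tokens.
token_exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<user-delegated token>",   # identity being propagated
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "actor_token": "<agent token>",              # the agent making the call
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://billing.example.com",   # the target security domain
}
```

The resulting token is valid only for the named audience, so a token minted for one system cannot be replayed against another.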
At the same time, it must be ensured that every action remains traceable. Auditability is not an optional feature in agent-driven systems, but a mandatory requirement.
A solution approach: identity as a control plane
The challenges described cannot be solved through piecemeal adjustments. They require a structural approach.
The central idea is to no longer view identity as a supporting function, but rather as the control plane for Agentic AI Security. Such an approach encompasses several core components:
- First, a clear identity is required for each agent. This is implemented via established mechanisms such as OAuth2 Client Credentials or Workload Identity Federation.
- Building on this, delegation must be clearly mapped so that it is always traceable whether an agent is acting in its own context or on behalf of a user.
- Another central component is policy-based authorization, which makes decisions at runtime. This is where models such as PBAC (Policy-Based Access Control) come into play, which can respond flexibly to context.
- This is supplemented by mechanisms for the secure transfer of identity contexts across system boundaries, such as through token exchange.
- Finally, end-to-end auditing is necessary to ensure that all actions can be transparently traced.
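The components above converge in the audit trail. The sketch below shows one possible shape of an audit record that ties agent identity, delegation, the tool-level action, and the policy decision together; all field names are illustrative assumptions, not a standard schema.

```python
# Hedged sketch of an end-to-end audit record for one agent action.
# Field names are illustrative; the point is that every entry answers
# "who executed what, on whose behalf, and was it allowed?"
import json
from datetime import datetime, timezone


def audit_record(agent_id: str, user_id: str, tool: str, decision: str) -> str:
    """Serialize one traceable entry per agent action."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": agent_id,         # who executed: the agent identity
        "on_behalf_of": user_id,   # who initiated: the delegating user
        "action": tool,            # tool-level action, not just the API
        "decision": decision,      # allow/deny from the policy engine
    })


entry = json.loads(audit_record("agent-7", "user-42",
                                "create_draft_invoice", "allow"))
assert entry["actor"] == "agent-7"
assert entry["on_behalf_of"] == "user-42"
```

Without the delegation fields, the log could only say that "something" created an invoice; with them, responsibility remains assignable.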
Architectural principles for Agentic AI Security
Clear architectural principles can be derived from these requirements.
- First, agents should be treated as independent identities, not as extensions of existing services.
- Second, every action performed by an agent must be explicitly authorized, ideally at the tool or action level.
- Third, delegation should always be explicitly modeled to establish clear responsibilities.
- Fourth, identity contexts must be propagated consistently across systems.
- Finally, every interaction must be auditable.
Companies that rely on Agentic AI Security lay the foundation for secure and scalable AI infrastructures.
Conclusion: Agentic AI Security – control becomes the decisive factor
AI agents are transforming not only applications but the very foundations of IT security. Systems are becoming more dynamic, decisions are made at runtime, and traditional models are reaching their limits.
Agentic AI Security describes the necessary transformation to make these systems controllable. The goal is not to slow down innovation, but to make it securely scalable in the first place.
Companies that want to use AI productively must therefore ask themselves not only what their systems are capable of, but also under what conditions they are permitted to act.
After all, it’s not an agent’s capabilities that determine its value, but control over them.
Learn more about Agentic AI Security
Learn more about an exemplary identity control plane at AI Agent for Identities.
Talk to us about how you can strengthen security in your company: Schedule a free consultation with our experts.