Things that are invisible until they’re not.
That’s the nature of Identity Security. When it comes to risk, the ideal outcome is to feel completely safe — protected by things working quietly in the background, never knowing they were there. In Identity Security, we learn to think about controls the way an architect thinks about load-bearing walls: invisible by design, catastrophic if wrong. The controls that protect an organization — the policies, the access rules, the governance structures built over years — require diligence to maintain but exist mostly in the background. Nobody notices them when they’re working. Everyone notices when they’re not.
For more than two decades, IT General Controls have been the foundation of enterprise compliance. If you’ve worked in IT Audit, you know the territory well: identity governance, change management, and the software development lifecycle. These are the controls that auditors test year after year — central to SOX compliance, and woven into the governance expectations of nearly every regulatory framework that touches enterprise technology.
They’ve held up remarkably well. Until now.
The rapid adoption of AI Agents is creating pressure on each of these control domains — but it’s also exposing a deeper structural problem: the inconsistency that comes with governing identities in silos, where different people, different processes, and different tools each manage their piece without a consistent approach or unified view. The proliferation of nonhuman identities was already straining that fragmented landscape. AI Agents are arriving into it now — and without a unified, strategic approach to identity security with visibility across the entire enterprise, you can’t see where the weaknesses are until something gives way.
Let’s walk through what’s changing, domain by domain.
Identity Governance Controls: The LCM and UAR Problem
The two identity governance controls that appear most consistently in ITGC scopes are Identity Lifecycle Management and User Access Reviews.
Identity Lifecycle Management has traditionally governed what happens when an employee joins, moves within, or leaves an organization. Provisioning, access changes, deprovisioning — the joiner-mover-leaver model. It’s well understood, well tooled, and well tested.
But the identity landscape has changed dramatically. Today, organizations must extend these tried and tested methods far beyond the employee. We live in a world where employees, contractors, suppliers, customers, service accounts, machine identities, bots, and now AI Agents are all connecting to our systems — the systems where our most coveted data and critical business functions exist. Securing the modern enterprise means governing every identity that touches your systems — not just the humans, but the nonhuman identities alongside them.
Nonhuman identities don’t fit neatly into the traditional LCM model — but they should. A service account, a machine identity, or an AI Agent deployed into a production business process is, functionally, an identity with access. It was provisioned. It has entitlements. It can act. And yet most organizations today have no formal process for governing that lifecycle — many don’t even know these identities exist.
Nonhuman identities are created informally, given broad access to ensure they work, and rarely deprovisioned when their purpose ends. That broad access — granted to make things work, not to be secure — makes them among the most attractive targets for attackers. When a breach occurs, nonhuman identities are frequently the first door hackers try. They’re overprivileged, under-monitored, and rarely noticed until it’s too late.
The discipline of LCM needs to be extended to nonhuman identities: formal provisioning workflows, documented access grants, and a clear process for what happens when a service account, bot, or agent is modified, repurposed, or retired.
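To make that concrete, here is a minimal sketch of what a governed nonhuman identity might look like as a record, with an explicit lifecycle and a retirement step that actually revokes access. The states and field names are illustrative assumptions, not any particular product’s model:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class LifecycleState(Enum):
    REQUESTED = "requested"      # formally requested and documented
    PROVISIONED = "provisioned"  # entitlements granted via an approved workflow
    ACTIVE = "active"            # in use, subject to periodic review
    RETIRED = "retired"          # purpose ended, entitlements revoked


@dataclass
class NonHumanIdentity:
    """One record per service account, machine identity, bot, or AI Agent."""
    name: str
    owner: str                   # a named human accountable for this identity
    purpose: str                 # why it exists, in plain language
    entitlements: set[str] = field(default_factory=set)
    state: LifecycleState = LifecycleState.REQUESTED
    last_reviewed: date | None = None


def retire(identity: NonHumanIdentity) -> None:
    """The 'leaver' step of joiner-mover-leaver, applied to an NHI:
    revoke everything rather than disabling and forgetting."""
    identity.entitlements.clear()
    identity.state = LifecycleState.RETIRED
```

The point isn’t the code; it’s that every nonhuman identity gets the same named owner, documented purpose, and explicit end-of-life that an employee account would.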
User Access Reviews exist to ensure that human access remains appropriate over time — that least privilege is maintained and that entitlements haven’t crept beyond what’s needed. The same logic applies to nonhuman identities, but almost no one is conducting periodic reviews of nonhuman access today. What can this service account do? What can this agent access? Is it still appropriate? Who owns it? Has its access or scope expanded since it was first deployed?
These are questions your auditors are already trained to ask about humans. They will start asking them about nonhuman identities. But the challenge runs deeper than reviewing each identity in isolation.
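Reviewing each identity in isolation is at least tractable: once an inventory exists, the basic checks can run on a schedule. A minimal sketch, reusing the hypothetical `NonHumanIdentity` record from above; the 90-day window is an illustrative policy, not a prescription:

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # illustrative policy, not a prescription


def flag_for_review(identities: list[NonHumanIdentity], today: date) -> list[str]:
    """Ask the same questions a UAR asks about humans."""
    findings = []
    for nhi in identities:
        if not nhi.owner:
            findings.append(f"{nhi.name}: no accountable owner")
        if nhi.last_reviewed is None or today - nhi.last_reviewed > REVIEW_WINDOW:
            findings.append(f"{nhi.name}: access not reviewed within the window")
        if "*" in nhi.entitlements:  # wildcard grants: made to work, not to be secure
            findings.append(f"{nhi.name}: wildcard entitlement, review scope")
    return findings
```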
When an employee is granted access to an AI Agent, that agent may be capable of performing actions that exceed what the employee is authorized to do directly — their access review may look clean, but through the agent their effective access is far broader.
Consider a scenario: a marketing analyst is granted access to a new AI Agent designed to optimize ad spend. The analyst’s direct permissions prevent them from modifying financial records. However, the agent itself has broad permissions to adjust budget allocations in the company’s ERP system. An attacker who compromises the analyst’s credentials now has an indirect, unmonitored path to manipulate financial data — a risk that a traditional user access review would never uncover.
In multi-agent scenarios, where agents have access to other agents, the complexity compounds further. Total visibility across the enterprise isn’t optional. It’s the only way to understand the full blast radius of every identity in your environment.
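One way to see why this requires a graph view rather than per-identity lists: effective access is everything reachable through the chain of grants. A minimal sketch with hypothetical edge data mirroring the ad-spend scenario above; in practice the edges would be derived from your IdP, cloud entitlements, and agent platforms:

```python
from collections import defaultdict

# Each edge means "this identity can invoke or act through this target".
# Hypothetical example data mirroring the ad-spend scenario.
access: defaultdict[str, set[str]] = defaultdict(set)
access["analyst@corp"].add("adspend-agent")        # human granted use of an agent
access["adspend-agent"].add("erp:budget-write")    # the agent's own entitlement
access["adspend-agent"].add("reporting-agent")     # multi-agent: agents reaching agents
access["reporting-agent"].add("warehouse:read-all")


def effective_access(identity: str) -> set[str]:
    """Everything reachable from an identity, directly or through agents."""
    seen: set[str] = set()
    frontier = [identity]
    while frontier:
        node = frontier.pop()
        for target in access[node]:
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen


# The analyst's direct grant is one agent, but their effective access
# includes ERP budget writes and warehouse reads. That is the blast radius
# a per-identity review never sees.
print(sorted(effective_access("analyst@corp")))
```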
Change Management and SDLC Controls: Enter the ADLC
Change management controls exist for a simple reason: changes to production systems that aren’t properly reviewed, tested, and approved create risk. The same is true of SDLC controls — governing how software is developed, who reviews it, what environments it passes through before reaching production.
AI Agents are, in most cases, being built and deployed like applications. They are designed to perform specific functions within key business processes — in many cases, processes that are material to regulatory compliance. They should be subject to the same rigor we apply to any other software entering a production environment.
To address this, organizations must establish a new control domain: the Agentic Development Lifecycle (ADLC). This framework applies the time-tested rigor of the SDLC to the unique challenges posed by AI Agents (see the sketch after this list):
- How was this agent created, and was the process documented?
- Who reviewed and approved it before it was given access to production systems?
- What access was it granted, and is that access proportionate to its function?
- What business processes does it touch, and are any of those material to compliance?
- Who is responsible for it, and who reviews it on an ongoing basis?
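One way to operationalize those questions, sketched below with hypothetical field names that should be mapped onto whatever change-management tooling already exists: make the answers required deployment metadata, and gate production access on their presence.

```python
from dataclasses import dataclass, field


@dataclass
class AgentRegistration:
    """The ADLC questions above, captured as required deployment metadata."""
    agent_name: str
    built_with: str      # how the agent was created, and where that's documented
    approved_by: str     # who reviewed it before it received production access
    owner: str           # who is responsible for it on an ongoing basis
    entitlements: set[str] = field(default_factory=set)
    business_processes: set[str] = field(default_factory=set)
    compliance_material: bool = False  # does it touch processes material to compliance?


def gate_deployment(reg: AgentRegistration) -> None:
    """A simple preventive control: no documented answers, no production access."""
    missing = [name for name in ("built_with", "approved_by", "owner")
               if not getattr(reg, name)]
    if not reg.business_processes:
        missing.append("business_processes")
    if missing:
        raise PermissionError(f"ADLC record incomplete: {', '.join(missing)}")
```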
The absence of formal answers to these questions is an audit finding waiting to be written. As regulators and auditors develop more specific guidance on AI governance, organizations that haven’t built ADLC controls will find themselves scrambling to retrofit them. Building them now, modeled on the SDLC frameworks already in place, is the more defensible path.
But the controls themselves aren’t enough if they exist in isolation. Just as identity governance for humans requires a unified, consistent approach across the organization, so does the development and lifecycle management of AI Agents. A patchwork of team-by-team practices — each group building and deploying agents their own way — creates exactly the kind of inconsistency that auditors find and adversaries exploit. The goal isn’t just to have ADLC controls. It’s to have one coherent framework that applies everywhere agents are built and deployed.
The Frame Hasn’t Changed. The Actors Have.
The ITGCs that have governed enterprise technology for the past two decades are not obsolete. The underlying questions they ask — who has access, was it approved, is it still appropriate, can it be abused — are exactly the right questions. They just need to be asked about a new class of identity.
Nonhuman identities — service accounts, machine identities, bots, and AI Agents — are embedded in our organizations. They’re taking on roles in material business processes. They’re being given access that can affect financial reporting, customer data, and operational integrity. The governance frameworks that apply to every other identity should apply to them too.
Soon the expectations will move into monitoring AI Agents as well. NIST guidance points in the same direction: it is not enough to know what happened; you must be able to prove it. Organizations must require agents to record their actions and intentions in an immutable and verifiable manner, with traceability back to the original human authorization.
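A common pattern for that kind of tamper-evident record is a hash chain: each entry commits to the previous one, so any after-the-fact edit or deletion breaks verification. A minimal sketch, not a full provenance system; the fields shown are assumptions about what such a record might carry:

```python
import hashlib
import json


def append_entry(log: list[dict], action: str, intent: str, authorized_by: str) -> None:
    """Append an entry that commits to the previous entry's hash."""
    body = {
        "action": action,
        "intent": intent,
        "authorized_by": authorized_by,  # traceability to the original human grant
        "prev": log[-1]["hash"] if log else "genesis",
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)


def verify(log: list[dict]) -> bool:
    """Recompute the chain; any altered, inserted, or removed entry breaks it."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or expected != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

In production this would live in an append-only store with signatures, but even the sketch captures the requirement: the agent’s actions, its stated intent, and the human authorization behind them, verifiable after the fact.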
The load-bearing walls are still there. The question is whether we’re building around them — or through them.
For leaders in audit and security, the first step is to start asking the right questions. Begin with a simple one: Do we have a complete inventory of every nonhuman identity and AI Agent in our environment, and do we know what each one can access?
That’s the first step on the difficult journey toward securing your agentic enterprise.