CISOs Need an AI Traffic Policy, Not a Blocking Rule

I have been thinking a lot about a category shift that most engineering and security teams have not yet internalized. Automated traffic is now growing at roughly eight times the rate of human traffic. AI-driven interactions nearly tripled over the course of 2025. The fastest-growing subcategory, agentic AI traffic, expanded by nearly 8,000 percent year over year.

Those numbers demand a different conversation.


The Traffic Is No Longer Read-Only

For years, AI traffic meant crawlers and scrapers. Crawlers collected bulk data for model training. Scrapers extracted real-time content for search and comparison tools. Both were essentially passive. They consumed information.

That is no longer the full picture. Agentic AI systems now navigate pages, fill forms, manage accounts, and complete transactions without direct human involvement. The majority of observed agentic activity still targets product and search pages, but a meaningful share now touches account management, authentication flows, and checkout. Agents are not browsing. They are buying, enrolling, and transacting.

For any organization exposing digital surfaces, this changes the threat model.


The Binary Is Broken

Security stacks have long relied on a simple question: is this traffic human or automated? That distinction no longer maps to intent. A consumer's AI shopping assistant rapidly navigating product pages and completing a purchase looks identical to a carding attack. A legitimate scraper retrieving pricing data for a comparison engine is indistinguishable from competitive intelligence theft.

The same AI operator can be running crawlers, scrapers, and transactional agents simultaneously. Three companies alone account for more than 95 percent of all AI-driven traffic. Blocking an operator means blocking all three traffic types at once. Allowing an operator means accepting the full spectrum of behavior.

Organizations that block all automation will lose revenue. Consumers arriving through AI-mediated channels convert at measurably higher rates than those from traditional search. Organizations that allow all automation unchecked will absorb fraud. And the margin is thin: measured across trillions of interactions, the behavioral signals separating helpful automation from harmful automation differ by less than one percent.

Neither posture works. The operative question is no longer whether traffic is automated but whether a specific interaction deserves trust.

Identity Claims Are Not Identity

Declared identity is unreliable. A significant portion of traffic claiming to originate from recognized AI crawlers does not actually come from those operators' infrastructure. Attackers spoof user-agent strings to exploit the trust that organizations extend to known bots, bypassing allowlists and rate-limit exemptions.

Any organization granting access based solely on user-agent strings is granting access to an unknown number of unauthorized actors. Behavioral validation is now a baseline requirement: network property analysis, browser authenticity signals, interaction pattern correlation. Not an advanced capability. A starting point.
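To make that concrete, the first validation layer can be as simple as forward-confirmed reverse DNS, the verification method the major crawler operators themselves document. Here is a minimal sketch in Python; the operator-to-domain table is illustrative, not a complete allowlist, and a failed check only means the identity claim is unverified, not that the traffic is malicious:

```python
import socket

# Illustrative mapping from a claimed crawler identity to the DNS suffixes
# its operator publishes for verification. Not a complete allowlist.
OPERATOR_DOMAINS = {
    "Googlebot": (".googlebot.com", ".google.com"),
    "Bingbot": (".search.msn.com",),
}

def verify_claimed_crawler(claimed_name: str, client_ip: str) -> bool:
    """Forward-confirmed reverse DNS: resolve the client IP to a hostname,
    check it belongs to the claimed operator, then resolve that hostname
    forward and confirm it maps back to the same IP."""
    suffixes = OPERATOR_DOMAINS.get(claimed_name)
    if not suffixes:
        return False  # unknown claim: route to behavioral scoring instead
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)    # reverse lookup
        if not hostname.endswith(suffixes):
            return False                                    # wrong domain
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward lookup
        return client_ip in forward_ips                     # must round-trip
    except (socket.herror, socket.gaierror):
        return False  # no PTR record or broken DNS: claim stays unverified
```

This check alone catches the bulk of user-agent spoofing; interaction pattern correlation and browser authenticity signals layer on top of it.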

The Convergence Problem

The digital surfaces that agentic AI is reshaping are the same surfaces that threat actors target. Account takeover attempts are shifting from login-stage credential stuffing to post-login compromise, exploiting session tokens and weak step-up controls. Carding volume has surged. Researchers have documented carding behavior mediated by verified AI agents. Fake account creation continues to accelerate. Web scraping now approaches 20 percent of all traffic for some organizations.

These threats share a common attack surface with legitimate agentic commerce. As that commerce scales, so does the exposure.



Practical Constraints

Seeing the problem clearly is the first step. Solving it is harder than it looks.

First, most fraud and chargeback tooling was built for human commerce. Device fingerprints, IP reputation, and cardholder history are the traditional signals. When traffic comes through an agent, these signals are weak or absent. Retrofitting existing stacks is not trivial.
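A toy example makes the signal gap visible. Assume a hypothetical transaction record carrying the traditional fields; when an agent mediates the purchase, most of them are simply empty, and the score says more about missing data than about fraud:

```python
# Hypothetical transaction record; field names are illustrative only.
def legacy_risk_score(txn: dict) -> float:
    """Toy scorer built on human-era signals. For an agent-mediated
    purchase, most inputs are missing, so the score reflects absent data
    rather than actual fraud risk."""
    score = 0.0
    if txn.get("device_fingerprint") is None:
        score += 0.4  # no stable device for an agent's headless session
    if not txn.get("cardholder_history"):
        score += 0.3  # no behavioral baseline for the card
    if txn.get("ip_reputation", "unknown") == "unknown":
        score += 0.2  # agent egress IPs rarely match consumer ranges
    return min(score, 1.0)

# A legitimate agent purchase scores 0.9: indistinguishable from fraud.
print(legacy_risk_score({"amount": 129.00}))
```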

Second, granular governance policies for AI traffic require cross-functional alignment between security, fraud, product, marketing, and commerce teams. Most organizations lack this coordination. The technical decisions carry direct business consequences.

Third, the industry is still converging on standards for cryptographic agent verification. HTTP Message Signatures and similar approaches show promise, but adoption is early. Organizations will need to rely on behavioral analysis while these standards mature.
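For teams that want to experiment ahead of full adoption, the core verification step reduces to checking a signature over a canonical "signature base" built from the covered message components. A heavily simplified sketch, assuming an Ed25519 key and pre-extracted component values, and omitting the component canonicalization, parameter parsing, key discovery, and expiry handling a real RFC 9421 implementation requires:

```python
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def build_signature_base(components: dict[str, str], sig_params: str) -> bytes:
    """Assemble a simplified signature base: one line per covered component,
    then the @signature-params line that binds the key id, creation time,
    and covered-component list."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {sig_params}')
    return "\n".join(lines).encode()

def verify_agent_signature(pubkey_b64: str, components: dict[str, str],
                           sig_params: str, signature_b64: str) -> bool:
    """Verify the agent's Ed25519 signature over the signature base."""
    key = Ed25519PublicKey.from_public_bytes(base64.b64decode(pubkey_b64))
    try:
        key.verify(base64.b64decode(signature_b64),
                   build_signature_base(components, sig_params))
        return True
    except InvalidSignature:
        return False
```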

The Strategic Response

There is no single tool or policy that resolves this. It requires a governance framework that treats AI traffic with the same rigor applied to any access control problem.

  • Audit AI traffic exposure now. Most organizations do not know what share of their traffic is AI-driven, which operators are present, or what those systems are doing. Visibility is the prerequisite.
  • Stop granting trust based on declared identity. Require behavioral and infrastructure validation before extending access.
  • Build granular policies, not binary rules. Define which agents can access which surfaces, what actions are permitted at each stage, and what behavioral boundaries apply. A minimal sketch of such a policy table follows this list.
  • Evaluate whether existing fraud tooling can handle agent-mediated transactions. If the answer is no, start adapting now.
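As a sketch of what "granular" can mean in practice: a default-deny policy table keyed on verified operator identity, surface, and action. The operator names, surfaces, actions, and limits below are placeholders, not a recommended baseline:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    operator: str                     # verified identity, never a UA claim
    allowed_surfaces: frozenset[str]  # e.g. product, search, checkout
    allowed_actions: frozenset[str]   # e.g. read, add_to_cart, purchase
    max_rpm: int                      # behavioral boundary, enforced at edge

# Placeholder operators, surfaces, and limits.
POLICIES = [
    AgentPolicy("operator-a", frozenset({"product", "search"}),
                frozenset({"read"}), 600),
    AgentPolicy("operator-b", frozenset({"product", "search", "checkout"}),
                frozenset({"read", "add_to_cart", "purchase"}), 120),
]

def is_permitted(operator: str, surface: str, action: str) -> bool:
    """Default-deny: allow only when an explicit policy covers the verified
    operator, the surface, and the requested action."""
    return any(p.operator == operator
               and surface in p.allowed_surfaces
               and action in p.allowed_actions
               for p in POLICIES)
```

The important property is the default: anything not explicitly granted is denied, which is how every other access control surface already works.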


Security teams that treat this as their problem alone will find themselves overruled by commerce teams who see the revenue opportunity. Commerce teams that ignore the fraud surface will absorb losses at scale. Both perspectives need to be integrated into a coherent policy before the volume forces reactive decisions.

About the author

Lucas Hendrich
CTO at Forte Group
