Proofpoint Secures AI Agents with Agent Integrity Framework
The leap from simple LLM-powered chatbots to largely autonomous artificial intelligence agents has happened quickly. Even before most people (and companies) could get accustomed to the new AI tools, agentic AI began to assert itself, promising greater autonomy while still leaving final decisions in human hands. The problem, however, is that AI security has not kept pace.
Proofpoint is stepping in with the Agent Integrity Framework, a model to protect AI usage in companies.
Controlled and Secure AI with Proofpoint's Agent Integrity Framework
Solutions to secure corporate AI systems already exist, but most of them have significant limitations. They inspect the traffic generated by AI and can evaluate access permissions, but they often cannot determine whether the AI is generating responses in line with users' intentions. This is a risk not to be underestimated: the problem is not that the AI might return irrelevant answers, but that it could be manipulated by malicious actors into disclosing confidential information.
In practice, current AI protection systems can detect attack attempts and block an AI that tries to access systems it has no permission for, but they cannot tell when it is producing out-of-context responses, which can also lead to data breaches.
Proofpoint's solution addresses exactly this gap: it analyzes the behavior of AIs and AI agents from a semantic perspective, flagging risky situations and, more generally, responses that are not aligned with the intentions of those who made the requests.
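To make the idea of intent alignment concrete, here is a deliberately naive sketch that compares the vocabulary of a request and a response. This is not Proofpoint's method (a production system would use far richer semantic models); the function name and the threshold are illustrative assumptions only.

```python
# Toy intent-alignment check: flag responses whose vocabulary drifts
# too far from the user's request. Illustrative only.

def intent_alignment(request: str, response: str) -> float:
    """Jaccard similarity between request and response word sets."""
    req, resp = set(request.lower().split()), set(response.lower().split())
    if not req or not resp:
        return 0.0
    return len(req & resp) / len(req | resp)

ALIGNMENT_THRESHOLD = 0.1  # hypothetical cutoff for flagging a response

request = "summarize the refund policy for our customers"
on_topic = "our refund policy lets customers return items within 30 days"
off_topic = "here is the full list of employee salaries and home addresses"

print(intent_alignment(request, on_topic) > ALIGNMENT_THRESHOLD)   # True
print(intent_alignment(request, off_topic) > ALIGNMENT_THRESHOLD)  # False
```

Even this crude measure separates an on-topic answer from one that leaks unrelated sensitive data; the point of semantic analysis is to do this robustly, not just lexically.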
Not only that: the Agent Integrity Framework can also detect the use of unauthorized AI tools within the company, as well as the MCP servers that AIs rely on to interface with data.
The framework defines agent integrity simply: an agent must only do what it was designed to do, with the permissions it was granted, and without deviations in its interactions, tool use, and data access. It rests on five pillars (intent alignment, identity and attribution, consistency, verifiability, and operational transparency) and ultimately aims to make AI governance tangible without forcing companies to rebuild their existing security from scratch.
"AI is now an integral part of the way we operate, and security must evolve accordingly," explains Sumit Dhawan, CEO of Proofpoint. "Humans and AI agents share similar risks: both can be manipulated and take actions that diverge from their purpose, yet traditional security was never designed to validate intent. Proofpoint uniquely positions itself as a unified cybersecurity platform, created to protect people, defend data, and govern AI agents together, providing continuous and intent-based verification that behavior aligns with policies and purpose in the agentic work environment."
"It is expected that humans operate with integrity when using corporate systems, and the same standard must be applied to AI agents," states Ryan Kalember, Executive Vice President of Cybersecurity Strategy at Proofpoint. "Using Agent Integrity means ensuring that AI agents act within the limits of their intended purpose, authorized permissions, and expected behavior in every interaction, tool call, and data access. With Proofpoint AI Security and the Agent Integrity Framework, we can provide a clear blueprint to help companies comprehensively address the entire spectrum of risks that emerge when AI agents operate autonomously within corporate systems."