At Glean, we believe the future of AI security is AI itself. 🛡️ Reasoning models are increasingly effective at staying ahead of evolving threats, detecting both attacks and sensitive data exposures. Today, we’re spotlighting our AI security models (now generally available) and sensitive content models (public beta).
AI Security Models (GA) ✔️
Glean’s AI security models add a protection layer that inspects prompts, agent steps, and outputs to block unsafe activity in real time. On leading benchmark suites, they deliver 97.8% accuracy on prompt injection detection, 93.5% on toxic content detection, and 94.3% on malicious code detection.
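Conceptually, a protection layer like this sits between users or agents and the model, classifying each prompt, intermediate step, and output before letting it through. The sketch below only illustrates that pattern and is not Glean’s implementation: the keyword heuristics stand in for trained detection models, and every function name is hypothetical.

```python
from dataclasses import dataclass

# Placeholder keyword heuristics standing in for trained detection models.
# A real protection layer would call classifier models, not string checks.
def looks_like_prompt_injection(text: str) -> bool:
    return "ignore previous instructions" in text.lower()

def looks_like_malicious_code(text: str) -> bool:
    return "rm -rf /" in text

DETECTORS = {
    "prompt_injection": looks_like_prompt_injection,
    "malicious_code": looks_like_malicious_code,
}

@dataclass
class Verdict:
    allowed: bool
    reasons: list[str]

def inspect(text: str) -> Verdict:
    """Run every detector over a prompt, agent step, or model output."""
    reasons = [name for name, detect in DETECTORS.items() if detect(text)]
    return Verdict(allowed=not reasons, reasons=reasons)

def guarded_call(prompt: str, llm) -> str:
    """Check the prompt before the model sees it, and the output before the user does."""
    pre = inspect(prompt)
    if not pre.allowed:
        raise PermissionError(f"Blocked prompt: {pre.reasons}")
    output = llm(prompt)
    post = inspect(output)
    if not post.allowed:
        raise PermissionError(f"Blocked output: {post.reasons}")
    return output
```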
Sensitive Content Models (Public Beta) ⚡️
Glean’s sensitive content models pair traditional infotype and regex classifiers with AI that understands document context and enterprise relationships. This cuts noise and highlights true exposures across unstructured data, with 80%+ accuracy.
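To illustrate that pairing, the sketch below runs a high-recall regex pass and then filters its matches through a stand-in contextual scorer. It is not Glean’s implementation; the patterns, scorer, and threshold are all hypothetical.

```python
import re

# Illustrative infotype patterns; a production catalog would be far larger.
INFOTYPE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def regex_findings(text: str) -> list[str]:
    """High-recall first pass: flag every pattern match, noisy by design."""
    return [name for name, pattern in INFOTYPE_PATTERNS.items() if pattern.search(text)]

def contextual_score(text: str, finding: str) -> float:
    """Stand-in for an AI model that weighs document context and enterprise
    signals (e.g. test fixture vs. real customer record, who can access it)."""
    return 0.9 if "customer" in text.lower() else 0.2  # hypothetical heuristic

def true_exposures(text: str, threshold: float = 0.5) -> list[str]:
    """Keep only regex hits the contextual model also rates as real exposure."""
    return [f for f in regex_findings(text) if contextual_score(text, f) >= threshold]
```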
These models help teams use AI while staying secure, keeping sensitive data protected and AI outputs trustworthy.
🎥 Watch the sensitive content models demo and the AI security demo to see Glean in action.
Join our Product office hours on Sept. 25 to ask questions or learn more 👉️
https://community.glean.com/events/55-monthly-product-updates-with-the-glean-product-team