Oktane On the Road & Auth0
I just returned from the Okta & Auth0 conference, and there's some genuinely important work happening in AI security that deserves attention.
The TL;DR: Auth0 has developed some really well-thought-through approaches to securing agentic workflows. If you or your developers are working on this—and honestly, they probably are whether they're telling you or not—it's worth diving into.
OpenFGA & Fine-Grained Authorization
Thanks to Daniel Thompson, Tech Lead of the AI software team, for introducing me to OpenFGA, an open-source tool that Okta contributes to, providing fine-grained authorization for agentic workflows. I've been complaining that nothing like this seemed to exist; not only does it, it's open source. Okta has integrated it into Auth0, and it really does seem to be the missing piece for those of us trying to apply guardrails to AI workflows.
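To make that concrete, here's a minimal sketch of what an OpenFGA guardrail check can look like from an agent's side, calling the server's HTTP check endpoint directly. The server address, store ID, and agent/document names are hypothetical placeholders, not anything from the talk.

```python
import requests

# Hypothetical OpenFGA deployment and store -- substitute your own.
FGA_URL = "http://localhost:8080"
STORE_ID = "01HEXAMPLESTOREID"

def agent_can(user: str, relation: str, obj: str) -> bool:
    """Ask OpenFGA whether `user` has `relation` on `obj`."""
    resp = requests.post(
        f"{FGA_URL}/stores/{STORE_ID}/check",
        json={"tuple_key": {"user": user, "relation": relation, "object": obj}},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("allowed", False)

# Deny-by-default guardrail: the agent only proceeds if the check passes.
if agent_can("agent:report-bot", "viewer", "document:q3-forecast"):
    print("read allowed")
else:
    print("read denied")
```

The nice property is that the agent code never encodes policy; it just asks, and the graph of relationship tuples on the server decides.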
Identity-Driven Security for High-Performance Systems
What makes this particularly compelling is that enforcing guardrails at the identity layer lets high-performance data systems keep access secure through machine identities without giving up meaningful controls. This is crucial for systems that can't afford the latency overhead of application-layer checks.
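As a rough illustration of why identity-layer enforcement is cheap at runtime: once the identity provider has minted a machine-identity token with the right claims, the data system can validate it locally, with no per-request call out to a policy service. A sketch using PyJWT, with a made-up issuer and audience:

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical issuer/audience -- in practice these come from your tenant.
ISSUER = "https://example-tenant.auth0.com/"
AUDIENCE = "https://data-api.example.com"
jwks = PyJWKClient(f"{ISSUER}.well-known/jwks.json")  # JWKS is cached

def check_machine_token(token: str, required_scope: str) -> dict:
    """Validate a machine-identity token locally: signature, issuer,
    audience, expiry, plus the scope the caller actually needs.
    No network hop per request, so latency stays flat."""
    key = jwks.get_signing_key_from_jwt(token).key
    claims = jwt.decode(token, key, algorithms=["RS256"],
                        issuer=ISSUER, audience=AUDIENCE)
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"missing scope: {required_scope}")
    return claims
```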
Auth0 also presented their Token Vault, which enables secure storage of tokens and fine-grained programmatic access to them across AI services, APIs, identity systems like Okta, and agents, effectively creating a secure token-exchange mechanism for complex AI workflows.
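I don't have the exact Token Vault API to hand, but the underlying pattern is OAuth 2.0 Token Exchange (RFC 8693): trade one token for a narrowly scoped token aimed at a specific downstream service. A hedged sketch of that standard request, with a placeholder endpoint rather than Auth0's real one:

```python
import requests

# Placeholder endpoint -- illustrating the RFC 8693 pattern a token
# vault builds on, not Auth0's actual Token Vault API.
TOKEN_ENDPOINT = "https://example-tenant.auth0.com/oauth/token"

def exchange_token(subject_token: str, audience: str) -> str:
    """Trade the agent's token for a narrowly scoped token for one
    downstream API, instead of passing long-lived credentials around."""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
    }, timeout=5)
    resp.raise_for_status()
    return resp.json()["access_token"]
```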
The RBAC Question
Adam Evans gave a compelling talk about Auth0's approach to agentic AI security. Their proposed security model uses graphs, which to my eye looks very much like RBAC, yet they're positioning it as something fundamentally different; either I haven't fully grasped their approach or there's a meaningful distinction I'm missing. I need to dig deeper here, but the key insight is how they're handling the fact that agents act on users' behalf: logs must differentiate between agent and user actions, and access must be limited to exactly what the agent needs.
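That logging requirement is easy to make concrete: every audit record needs two principals, the agent that acted and the user it acted for. A hypothetical record shape:

```python
import json
import datetime

def audit(action: str, resource: str, agent_id: str, user_id: str) -> str:
    """Hypothetical audit record: `actor` is the agent that performed the
    action, `on_behalf_of` is the human it acted for. Capturing both is
    what lets you tell agent activity apart from direct user activity."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": agent_id,         # e.g. "agent:report-bot"
        "on_behalf_of": user_id,   # e.g. "user:anne"
        "action": action,
        "resource": resource,
    })

print(audit("read", "document:q3-forecast", "agent:report-bot", "user:anne"))
```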
Human-in-the-Loop Guardrails
A practical example of effective guardrails is the "Human in the Loop" (HITL) pattern, implemented via Client-Initiated Backchannel Authentication (CIBA). This means checking with a person before an agent takes critical actions. Existing MCP and agentic approaches do this inconsistently, but Auth0 is offering a way to asynchronously enforce these guardrails across the board.
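For the curious, this is roughly what the flow looks like from the agent's side under the OpenID CIBA spec: start a backchannel request, the user gets prompted out-of-band, and the agent polls until they approve or deny. The tenant URL and endpoint path below are placeholders; real deployments advertise theirs via OIDC discovery metadata.

```python
import time
import requests

DOMAIN = "https://example-tenant.auth0.com"           # placeholder tenant
CLIENT = {"client_id": "...", "client_secret": "..."}  # elided

def ask_human(user_hint: str, message: str) -> dict:
    """Start a CIBA request: the user is notified out-of-band with a
    description of the action, and the agent polls until they decide."""
    start = requests.post(f"{DOMAIN}/bc-authorize", data={
        **CLIENT,
        "scope": "openid",
        "login_hint": user_hint,
        "binding_message": message,  # shown to the user, e.g. "Wire $50k?"
    }, timeout=5).json()
    while True:
        time.sleep(start.get("interval", 5))
        token = requests.post(f"{DOMAIN}/oauth/token", data={
            **CLIENT,
            "grant_type": "urn:openid:params:grant-type:ciba",
            "auth_req_id": start["auth_req_id"],
        }, timeout=5)
        if token.status_code == 200:
            return token.json()  # approved: tokens for the critical action
        if token.json().get("error") != "authorization_pending":
            raise PermissionError("human denied or request expired")
```

The asynchronous part is the point: the agent can park the critical action and resume only once a human has signed off.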
Cross-App Access & Upcoming Features
The afternoon sessions covered the broader Okta platform, but with a focus on AI. The really exciting development for me was the announcement of Cross App Access: Okta has partnered with major SaaS vendors (Google Drive, Zoom) to provide a shim between the vendor and your Okta Workforce Identity, enabling guardrails and fine-grained access control. Given that major LLM companies now offer Google Drive integration and similar capabilities, this can't come soon enough.
Workforce Identity will also gain Offline access and AI Agent access (similar to the Auth0 capabilities mentioned above) in Q1 next year. There's also an intriguing upcoming feature for Verifiable Digital Credentials—starting with driving licenses and planned to extend to corporate IDs—aimed at helping prevent deepfake scams.
Key Takeaways & Insights
How do you enforce guardrails at runtime? When writing agents & integrations, be explicit about what access they need to your services and data. Provide read-only access wherever possible. For anything mission-critical or handling large amounts of data, ensure human-in-the-loop oversight.
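One way to make that explicit in code is to declare each tool's access up front, default to read-only, and gate critical writes behind a human check. A toy sketch; the `tool` decorator here is hypothetical, not any framework's API:

```python
from functools import wraps

def tool(access: str = "read", critical: bool = False):
    """Hypothetical decorator for agent tools: declares the access a
    tool needs up front, defaults to read-only, and forces a
    human-in-the-loop check before anything critical runs."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if access != "read" and critical:
                if input(f"Approve {fn.__name__}{args}? [y/N] ") != "y":
                    raise PermissionError("human approval denied")
            return fn(*args, **kwargs)
        wrapper.access = access  # discoverable by the agent runtime
        return wrapper
    return decorate

@tool(access="read")
def fetch_report(name: str) -> str:
    return f"contents of {name}"

@tool(access="write", critical=True)
def delete_report(name: str) -> str:
    return f"deleted {name}"
```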
91% of organizations are reportedly using AI agents (seems optimistic to me), and 23% have no governance in place (also seems high, though for different reasons...). In a red-team exercise against a large-scale public organization, nearly 100% of attempts to subvert the agents succeeded. Sobering stuff.
One effective defense: ensure all your SaaS applications have inbound IP restrictions. If you can't lock it down to your company's IP ranges, consider geofencing instead. Simple idea, massive impact. Attackers don't break in anymore—they log in (I know I've heard this before, but it remains a pithy and accurate observation).
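The allowlist logic itself is trivial, which is part of the appeal. A sketch using Python's standard `ipaddress` module, with documentation-only example ranges standing in for your real egress CIDRs:

```python
import ipaddress

# Hypothetical corporate egress ranges -- substitute your own.
ALLOWED = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

def inbound_allowed(client_ip: str) -> bool:
    """Reject logins originating outside the company's known IP ranges."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED)

print(inbound_allowed("203.0.113.7"))  # True
print(inbound_allowed("192.0.2.10"))   # False
```

For geofencing, the same shape applies; you'd just build the allowed networks from a country-level IP database instead of your own ranges.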
I'm definitely going to be exploring OpenFGA further, and I suspect a conversation with our Auth0 account manager is in my near future. The intersection of identity-driven security and high-performance agentic systems feels like the right place to be building defenses right now.
How are YOU securing your AI systems?