10:20 - 10:45
Scaling AI Adoption
FR
Long Talk (25 min)
Governing an agent platform: prompts, tools, risks, and responsibilities
Description
When an organization starts multiplying AI agent use cases, the question is no longer merely technical: it becomes one of governance. Who creates the agents? Who validates the prompts? How do we avoid misuse, black-box effects, and security risks? I will share our approach to governing an internal agent platform: responsibility models, validation workflows, and quality and risk management.
Key points:
- Responsibility model: who owns what (platform, agents, data, operational risks).
- Governance of prompts and tools: naming conventions, versioning, review, deprecation.
- Control mechanisms: safeguards, usage limits, monitoring.
- How to bring risk and compliance functions into the loop without blocking innovation.
- Examples of incidents and near-misses that prompted the governance model to evolve.
Speaker(s)