Beyond "Is AI Accurate?" A Practical AI Risk Modeling Playbook
A Workshop by Prof. Hernan Huwyler (AI GRC Lead and Professor, IE Business School - Capgemini)
About this Workshop
Our talk is a hands-on session where we live-test a large language model on screen, surface hallucinations, inaccuracies, and bias, and immediately translate those failures into an AI threat model.
Participants get a concise taxonomy of the concrete scenarios and vulnerabilities organizations face when adopting AI, including data leakage, prompt injection, model drift, insecure integrations, and over-reliance. Each is mapped to control choices and use-case limitations you can apply when developing or procuring AI software, models, and algorithms.
We will discuss how to turn observations into actionable AI controls that keep value high and downside contained. Setting accuracy metrics and warranties only makes sense when they are anchored in quantified technical risk scenarios. We will walk through a practical way to model scenario frequency, impact, and degradation risks, then convert them into acceptance criteria, SLAs, warranties, fallback triggers, and monitoring thresholds for AI systems in production.
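As one possible illustration of this kind of quantification (not taken from the workshop materials), a scenario's frequency can be modeled as a Poisson incident count and its impact as a lognormal loss, with a Monte Carlo simulation yielding an expected annual loss and a tail percentile. The scenario numbers below (roughly two incidents per year, a median impact of 50,000 currency units, and the sigma value) are placeholder assumptions for demonstration only:

```python
import math
import random
import statistics

def sample_poisson(lam, rng):
    """Sample an incident count from a Poisson distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq_per_year, impact_median, impact_sigma,
                         trials=10_000, seed=7):
    """Monte Carlo annual loss for one risk scenario:
    Poisson frequency combined with lognormal per-incident impact."""
    rng = random.Random(seed)
    mu = math.log(impact_median)  # lognormal median = exp(mu)
    losses = []
    for _ in range(trials):
        incidents = sample_poisson(freq_per_year, rng)
        losses.append(sum(rng.lognormvariate(mu, impact_sigma)
                          for _ in range(incidents)))
    return losses

# Placeholder scenario: ~2 hallucination-driven incidents per year,
# median impact of 50,000 currency units, wide uncertainty (sigma = 1.0).
losses = simulate_annual_loss(freq_per_year=2.0,
                              impact_median=50_000,
                              impact_sigma=1.0)
eal = statistics.mean(losses)                  # expected annual loss
p95 = statistics.quantiles(losses, n=100)[94]  # 95th-percentile "bad year"
```

In this sketch, the expected annual loss would inform an acceptance criterion or warranty pricing, while the 95th-percentile figure could seed a fallback trigger or monitoring threshold.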
The session also helps you build the governance spine for AI: risk management and documentation that withstand audit without getting lost in theory. This capability is in high demand across the job market, because companies need people who can assess risks to business efficiency, automated decision-making, and the security and reliability of AI systems.
Our talk will prioritize speed to value: you will leave with a one-page AI risk taxonomy, a lightweight assessment checklist, and a reproducible risk quantification method. Come ready to challenge the model, pressure-test controls, and walk out with tools you can deploy immediately after the workshop.