This is not a course that gives you a work programme to copy. It is a course that equips you to think clearly about AI risk so you can build your own.
Most AI training for auditors falls into one of two traps: either it’s a technology briefing that leaves you knowing what AI is but not how to audit it, or it’s a series of auditing use cases that may not match your organisation.
This programme does neither. It teaches the deeper principles that allow internal auditors to form defensible judgements about any AI deployment — today’s augmented tools and tomorrow’s autonomous agents — by understanding why these systems behave the way they do, what that means for risk and control, and where traditional audit methodology needs to adapt.
Day 1 builds genuine understanding: how LLMs work in ways that matter for audit, why regulation protects the public but won’t protect your organisation, what AI risk appetite actually means when outcomes aren’t deterministic, and where the three lines model breaks down when applied to AI. Participants work through the challenge of drafting risk appetite statements and discover first-hand why this is harder than most executives and boards realise.
Day 2 converts that understanding into audit judgement. Sessions on workflow design, the real effectiveness of human-in-the-loop controls, and the limits of traditional testing methods are followed by two extended case studies — one compliance-driven, one tackling agentic AI — where participants design audit approaches, form opinions, and draft audit committee messages.
The programme closes with facilitated action-planning: what to do first when you get back.
Programme at a glance
Day 1 — Understanding what you are auditing
- Pre-work exercise and orientation.
- What has already gone wrong with AI and why.
- How LLMs work — just enough to audit intelligently.
- Why regulation won’t protect your organisation.
- Risk appetite for AI (with workshop).
- The three lines and where assurance breaks down.
- Getting AI onto the audit plan.
Day 2 — From understanding to audit judgement
- Workflow design: is AI the right answer, and is it controllable?
- HITL: genuine safeguard or comfortable fiction?
- Testing AI: what works, what doesn’t, what’s new.
- Two extended case studies (compliance/augmented AI and business risk/agentic AI).
- Case study debriefs.
- Takeaways and action-planning for your audit function.
CPE
Attending the course earns 14 CPEs for CIA, CRMA and Diplomert internrevisor.
Instructors
James Paterson is a former head of audit at AstraZeneca PLC and has spent the past 10 years consulting and training for the IIA in Belgium, Finland, Luxembourg, the Netherlands, Norway, Switzerland and the UK.
He is the author of the books “Lean Auditing” and “Root Cause Analysis and System Thinking”, and has spoken at several IIA International Conferences as well as the ECIIA conference. He contributed to the IIA global team that developed practice guidance for internal audit plans.
Full Bio: Stephen Foster
NB
The course fee for this training is charged in euros, not Norwegian kroner.