COURSE

AI Threat Modelling with OWASP Top 10 for LLMs

Master the unique security challenges of AI systems through practical threat modelling, red teaming and defence strategies.

  • 2 Days
  • Advanced
  • In-person / Online
  • £ On Request

Your team will learn...

Apply STRIDE methodology and PIPE framework to identify AI-specific attack vectors

Threat model against all OWASP Top 10 for LLM vulnerabilities through hands-on exercises

Secure RAG systems and agentic AI architectures with specialised threat modelling techniques

Integrate NIST AI RMF principles into organisational security practices

Craft and test real prompt injection payloads and adversarial attacks

Build detection and monitoring strategies for AI-specific threats

Run collaborative AI threat modelling workshops using industry frameworks

Overview

As artificial intelligence transforms modern applications, traditional security approaches fall short. AI systems introduce entirely new attack surfaces and threat vectors that require specialised knowledge to identify and mitigate. From prompt injection to model poisoning, the threats facing AI applications demand a fundamental evolution in how we approach threat modelling.

This advanced two-day workshop builds upon foundational threat modelling knowledge to tackle the unique security challenges of AI systems. Through intensive hands-on exercises, red team simulations and collaborative workshops, participants will master the OWASP Top 10 for LLM applications whilst learning to integrate NIST AI Risk Management Framework principles throughout the AI lifecycle.

By the end of this workshop, you'll understand not just the theory of AI vulnerabilities but the practical reality of how they're exploited. You'll craft real prompt injection payloads, poison training data, manipulate RAG systems and test AI defences. We emphasise learning by attacking - participants will use the same tools and techniques as threat actors, then apply that offensive knowledge to build comprehensive threat models and robust defences.

The course integrates the PIPE (Prompt Injection Primer for Engineers) framework throughout, providing a structured methodology for assessing AI security risks through the lens of untrusted inputs and impactful functionality. This immediately applicable approach enables security professionals to assess, test and secure AI applications in production environments.

This workshop is designed for security practitioners who need to secure AI systems, not theorise about them.

Outline

Foundation and AI security context

  • Extending STRIDE methodology for AI systems and ML pipelines
  • Trust boundaries in AI: data layer, model layer, inference layer and human interaction
  • NIST AI RMF integration and how threat modelling supports trustworthy AI characteristics
  • AI security landscape: real-world case studies of recent AI security incidents
  • AI application architecture patterns and supply chain complexity
  • Zero trust principles applied to AI system architectures
  • Shadow AI risks and governance challenges

OWASP Top 10 for LLM Applications - foundations and mitigations

  • Direct and indirect prompt injection attack patterns
  • Jailbreaking techniques and context manipulation
  • PIPE framework methodology for risk assessment
  • Training data extraction and model inversion attacks
  • Output leakage patterns and unintended disclosure
  • Compromised models, malicious datasets and dependency risks
  • Model marketplace and third-party AI service security
  • Transfer learning and model provenance considerations
  • Backdoor attacks and adversarial example injection
  • Clean-label attacks and detection challenges

RAG systems security

  • Why RAG systems require specialised threat modelling
  • RAG architecture components and unique attack surface
  • Retrieval poisoning and vector database vulnerabilities
  • Context window exploitation and access control bypass

OWASP Top 10 for LLM Applications - advanced threats and mitigations

  • XSS and code injection via AI-generated content
  • Cross-system injection risks
  • Autonomous actions without appropriate oversight
  • Multi-agent system coordination risks
  • Tool calling and function execution vulnerabilities
  • Context extraction and reverse engineering techniques
  • Vector database security and similarity manipulation
  • Adversarial embeddings in retrieval systems
  • Security implications of AI-generated misinformation
  • Denial of service and cost exploitation patterns
  • Context window and memory exhaustion techniques

Advanced AI security concepts

  • Agentic AI Security
  • Model Serving and Deployment Security
  • Detection, Monitoring and Incident Response
  • Adversarial Machine Learning
  • AI Model Lifecycle Security

Comprehensive AI threat modelling workshop

Participants work in teams to threat model a complex AI system using integrated OWASP Top 10, NIST RMF principles and PIPE methodologies.

Requirements

This advanced two-day workshop requires completion of our Threat Modelling course OR equivalent experience with STRIDE methodology and data flow diagrams. Participants should have 6+ months industry experience in software development, security or related fields. Basic understanding of AI/ML concepts is beneficial but not required.

Attendees must be comfortable working in collaborative workshop environments and should bring real-world AI system examples where possible to enhance practical learning outcomes.

This is an intensive, hands-on course with extensive practical exercises. Participants will work with vulnerable AI applications in safe isolated environments using real security testing tools. A laptop capable of running a web browser and connecting to cloud-based virtual machines is required.

COURSE

AI Threat Modelling with OWASP Top 10 for LLMs

Master the unique security challenges of AI systems through practical threat modelling, red teaming and defence strategies.

  • 2 Days
  • Advanced
  • In-person / Online
  • £ On Request

image/svg+xml
image/svg+xml