Overview
The threats faced by software today have never been greater. Integrating LLM tools into software brings additional challenges - from prompt injection to data leakage, from model poisoning to excessive agency. Traditional security patterns prove insufficient for AI-powered features.
One of the most fundamental best practices in helping ensure our software applications stay safe is the concept of secure by design, where software is designed in such a way that it is inherently secure. This intensive two-day workshop focuses on the patterns, idioms and practices that result in better, more secure AI-powered applications.
Through hands-on exercises and architectural workshops, you'll learn to design AI features with security as a foundational principle rather than an afterthought. We'll explore the unique attack vectors that LLM integration introduces and build defence-in-depth strategies to mitigate them.
By the end of this workshop, you'll have architectural patterns, code examples and practical techniques for building AI applications that resist common attacks whilst maintaining functionality and user experience.
Outline
Security landscape for LLM applications
- Unique security challenges of AI-powered applications
- OWASP Top 10 for LLMs
- Real-world incidents and threat landscape
- Regulatory and compliance considerations
Secure-by-design principles for AI
- Security-first thinking for AI features
- Trust boundaries and zero trust architectures
- Least privilege for AI components
- Defence-in-depth strategies
Input validation and prompt injection defence
- Understanding prompt injection attacks
- Input sanitisation strategies
- Separating user input from system instructions
- Context isolation techniques
Secure data handling
- Managing sensitive data in LLM applications
- PII and confidential information protection
- Data minimisation principles
- Preventing training data leakage
Output handling and validation
- Risks of AI-generated content
- Output sanitisation and encoding
- Content Security Policies for AI applications
- Sandboxing AI-generated content
RAG system security
- Securing document ingestion pipelines
- Vector database access controls
- Retrieval poisoning mitigation
- Source attribution and verification
Tool calling and function execution security
- Least privilege for AI tool access
- Validating tool parameters
- Approval workflows for sensitive operations
- Sandboxing tool execution
Authentication, authorisation and access control
- User authentication for AI features
- Authorising AI actions on behalf of users
- Preventing privilege escalation
- Multi-tenancy considerations
Supply chain and deployment security
- Securing model sources and dependencies
- Third-party model and API risks
- Secrets management for API keys
- Container security for AI applications
Architectural patterns for secure AI
- Microservices architectures for AI
- API gateway patterns for LLM access
- Rate limiting and resource protection
- Circuit breakers for AI services
Logging, monitoring and detection
- What to log in AI applications
- Detecting prompt injection attempts
- Monitoring for anomalous behaviour
- Privacy-preserving logging strategies
Testing for security
- Security testing strategies for AI features
- Penetration testing AI applications
- Red teaming exercises
- Integrating security tests into CI/CD
Production hardening
- Deployment security for AI applications
- Network segmentation and isolation
- Transport security
- Incident response procedures
Building secure development practices
- Secure coding standards for AI features
- Code review practices for LLM integration
- Threat modelling AI features
- Security culture building
Requirements
This intermediate two-day course is designed for software engineers, architects and security-conscious developers building AI applications. Participants should have at least 6 months of experience building software and a basic understanding of security concepts.
Familiarity with LLM applications is beneficial - ideally participants will have taken our Building GenAI Applications course or have equivalent experience building AI features. Understanding of the OWASP Top 10 (for web applications) and common security vulnerabilities is helpful but not required.
This course complements our Threat Modelling AI Applications workshop. Whilst threat modelling focuses on identifying risks, this course focuses on architectural patterns and implementation techniques to mitigate those risks. Taking both courses provides comprehensive coverage of AI security.
Participants should also have a good understanding of engineering best practices. Our Engineering Best Practices course provides valuable foundational knowledge for this workshop.
Participants must bring laptops with development environments configured. Bringing examples of AI applications from your own work significantly enhances practical learning, as you'll be able to apply security patterns to real systems.