Overview
AI assistants like ChatGPT and GitHub Copilot are rapidly changing how professionals work, offering unprecedented capabilities to accelerate a wide range of tasks, from coding to writing and analysis. However, the effectiveness of these tools depends greatly on how well users can communicate with them.
This intensive one-day workshop focuses on the art and science of crafting effective prompts and using generative AI to solve problems in new ways. You'll learn practical techniques for leveraging AI effectively across various domains, from software development to data analysis, content creation and problem solving.
The course emphasises how to enhance your work through AI augmentation whilst maintaining the critical thinking and domain expertise that remain essential in your respective fields. You'll develop the ability to recognise when AI assistance improves your work and when human judgement remains critical.
By the end of this workshop, you'll have a comprehensive toolkit of prompting techniques, reusable patterns and the practical knowledge to integrate AI effectively into your daily workflows.
Outline
Understanding LLMs: Capabilities and configuration
- What is an LLM? Capabilities, training and stochastic nature
- How LLMs process and generate text: mental models for effective use
- Context windows and token limits: practical implications for your work
- Model selection: choosing the right model for different tasks
- Understanding model parameters and their effects on outputs
- Common misconceptions about AI capabilities and limitations
- Ethical considerations: responsible use, data privacy and IP awareness
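As a rough illustration of the context-window topic above, here is a minimal sketch of a feasibility check. The ~4-characters-per-token figure is a common heuristic for English text, not a property of any particular model, and the window sizes are illustrative; a real tokeniser (such as tiktoken) gives exact counts.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the common ~4-characters-per-token
    rule of thumb for English text. Only suitable for quick
    feasibility checks, not billing or hard limits."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(prompt: str, context_window: int = 8000,
                    reserved_for_output: int = 1000) -> bool:
    """Check whether a prompt plausibly fits a model's context window,
    leaving headroom for the model's response. Both limits here are
    illustrative assumptions, not real model specifications."""
    return estimate_tokens(prompt) <= context_window - reserved_for_output
```

Checks like this help decide when a long document needs to be chunked or summarised before being sent to a model.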
Fundamental prompting techniques
- Conversational prompting: getting started with natural language
- Context engineering: providing relevant background information effectively
- Being specific: crafting clear, detailed requests that get better results
- Structured prompting: role, goal and step-by-step instructions
- System prompts vs user prompts: understanding the distinction
- Persona techniques: leveraging different perspectives and expertise
- Interactive feedback: refining outputs through conversation
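The role/goal/step-by-step structure covered above can be sketched as a small helper. The section labels ("You are…", "Your goal:", "Follow these steps:") are illustrative choices, not a required format; any clear, consistent labelling works.

```python
def build_structured_prompt(role: str, goal: str, steps: list[str],
                            context: str = "") -> str:
    """Assemble a structured prompt from a role, a goal, optional
    background context and numbered step-by-step instructions."""
    parts = [f"You are {role}.", f"Your goal: {goal}"]
    if context:
        parts.append(f"Context:\n{context}")
    steps_block = "Follow these steps:\n" + "\n".join(
        f"{i}. {step}" for i, step in enumerate(steps, 1))
    parts.append(steps_block)
    return "\n\n".join(parts)
```

For example, `build_structured_prompt("a senior code reviewer", "review this diff for defects", ["Summarise the change", "List risks", "Suggest fixes"])` produces a prompt that separates persona, objective and procedure, which tends to yield more predictable outputs than a single unstructured request.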
Advanced prompting strategies
- Chain of thought: encouraging step-by-step reasoning for complex problems
- Zero-shot vs few-shot prompting: when and how to use examples
- Self-criticism and evaluation: getting LLMs to review their own outputs
- Decomposition: breaking down complex problems into manageable parts
- Reframing and fresh starts: managing context effectively
- Multi-turn conversations and context management
- Prompt chaining for complex workflows
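The zero-shot vs few-shot distinction above can be made concrete with a small prompt builder: with an empty examples list it degrades to a zero-shot prompt, and each worked input/output pair added turns it into a few-shot prompt. The "Input:"/"Output:" labels are one common convention, not a requirement.

```python
def few_shot_prompt(instruction: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Build a few-shot prompt: an instruction, zero or more worked
    input/output examples, then the new input for the model to
    complete. Ending on 'Output:' cues the model to answer in the
    same format as the examples."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}",
                  f"Output: {example_output}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)
```

A sentiment classifier, for instance, might pass `[("Great service!", "positive"), ("Cold food, slow staff.", "negative")]` as examples so the model sees the expected labels before classifying the new input.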
Structured outputs and format control
- JSON and structured data output formats
- Markdown formatting for documentation and reports
- Code generation with specific formatting requirements
- Table and list generation with consistent structure
- XML and other structured formats
- Managing hallucinations in structured outputs
- Validation and error handling for structured data
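The validation and error-handling topic above can be sketched as a small parser. Models asked for JSON often wrap it in markdown code fences, so a tolerant extractor strips those before parsing; raising on failure gives the calling workflow a clear signal to re-prompt rather than silently accepting malformed output.

```python
import json

def extract_json(response: str) -> dict:
    """Pull a JSON object out of a model response, tolerating the
    markdown code fences models often add around structured output.
    Raises ValueError if nothing valid parses, which a workflow can
    treat as a cue to retry or re-prompt."""
    text = response.strip()
    if text.startswith("```"):
        # Drop the opening fence (and optional language tag), then the closing fence.
        text = text.split("\n", 1)[1] if "\n" in text else ""
        text = text.rsplit("```", 1)[0]
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Response was not valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("Expected a JSON object at the top level")
    return data
```

Schema validation (required keys, value types) would normally follow this step; libraries such as Pydantic or jsonschema are common choices for that.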
Domain-specific applications
Software Development
- Code generation and transformation patterns
- Documentation automation techniques
- Test case generation and quality assurance
- Debugging assistance and problem diagnosis
- Code review and refactoring support
- Architecture and design discussions
Data Analysis
- Data exploration and pattern identification
- Query generation and data transformation
- Statistical analysis guidance
- Visualisation suggestions and refinement
- Report generation and summarisation
Content Creation
- Content planning and outlining strategies
- Technical documentation and user guides
- Editing and refinement workflows
- Style and tone adaptation techniques
- Communication drafting and improvement
Problem Solving
- Decision support frameworks
- Analysis and recommendation patterns
- Structured thinking with AI assistance
- Root cause analysis techniques
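The decomposition pattern that runs through the topics above can be sketched as a loop that sends each subtask as its own prompt, feeding earlier answers forward as context. Here `ask(prompt)` stands in for a real model call; the prompt layout is an illustrative assumption.

```python
def decompose_and_solve(problem: str, subtasks: list[str], ask) -> str:
    """Solve a problem by decomposition: each subtask becomes its own
    prompt, with findings from earlier subtasks included as context.
    `ask` is any callable that takes a prompt and returns a response,
    e.g. a wrapper around a chat model API."""
    notes: list[str] = []
    for task in subtasks:
        context = "\n".join(notes) if notes else "(none yet)"
        prompt = (f"Problem: {problem}\n"
                  f"Prior findings:\n{context}\n"
                  f"Now: {task}")
        notes.append(f"{task} -> {ask(prompt)}")
    return notes[-1]
```

Breaking work up this way keeps each prompt focused and makes intermediate reasoning inspectable, at the cost of more model calls.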
Benchmarking and evaluation
- Establishing quality criteria for AI outputs
- Comparative evaluation across models and prompts
- Consistency tracking and improvement strategies
- Performance vs cost considerations
- A/B testing methods for prompts
- Building feedback loops for continuous improvement
- Measuring the effectiveness of your prompts
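A minimal sketch of the A/B testing idea above: run two prompt variants over the same inputs and compare mean scores. Both `run` (the model call) and `score` (your quality metric) are supplied by the caller and stubbed here; in practice the hard part is defining a scoring function that reflects your quality criteria.

```python
import statistics

def ab_test_prompts(prompt_a: str, prompt_b: str, inputs, run, score):
    """Compare two prompt variants over the same inputs.
    `run(prompt, item)` calls your model (stubbed in tests);
    `score(output, item)` returns a quality number you define.
    Returns mean scores so the variants can be compared like for like."""
    def mean_score(prompt):
        return statistics.mean(score(run(prompt, item), item)
                               for item in inputs)
    return {"A": mean_score(prompt_a), "B": mean_score(prompt_b)}
```

Because model outputs are stochastic, a real harness would also repeat each input several times and report variance, not just a single mean.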
Prompt libraries and reusability
- Building a personal prompt toolkit
- Sharing and standardising prompts across teams
- Parameterising prompts for flexibility
- Versioning and iterative improvement
- Managing prompt collections effectively
- Documentation strategies for prompt libraries
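The parameterisation and versioning topics above can be sketched with Python's standard-library `string.Template`, which fails loudly on a missing parameter instead of silently shipping an incomplete prompt. The template text and field names below are illustrative.

```python
from string import Template

class PromptTemplate:
    """A minimal reusable entry for a personal prompt library:
    a name and version for tracking, plus $placeholder parameters
    filled in at render time."""
    def __init__(self, name: str, version: str, text: str):
        self.name = name
        self.version = version
        self._template = Template(text)

    def render(self, **params: str) -> str:
        # Template.substitute raises KeyError if any placeholder is missing.
        return self._template.substitute(**params)

summarise = PromptTemplate(
    name="summarise-document",
    version="1.0",
    text="Summarise the following $doc_type in at most $max_words words:\n\n$content",
)
```

Keeping name and version alongside the text makes it practical to track which template revision produced which outputs as a library evolves.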
AI-augmented workflows
- Incorporating AI into existing daily workflows
- Identifying tasks where AI adds most value
- Recognising when AI helps and when it hinders
- Building resilient processes that leverage AI appropriately
- Avoiding over-reliance on AI tools
- Maintaining skill development alongside AI usage
- Balancing speed with quality
Critical evaluation and quality control
- Assessing AI output reliability and accuracy
- Identifying hallucinations and errors systematically
- Fact-checking and verification strategies
- Recognising bias in AI-generated content
- Maintaining domain expertise whilst using AI
- Where human judgement remains essential
- Building reflexes for critical evaluation
Integration and next steps
- AI tools landscape: ChatGPT, Claude, Copilot and specialised tools
- Selecting the right tool for specific tasks
- Privacy and security considerations for different tools
- Organisational policies for AI tool usage
- Continuous learning strategies and staying current
- Pathways to advanced AI skills
Where this course fits in your learning journey
Prompt Engineering is the foundation for all AI-augmented work. Whether you're using AI tools to enhance your daily work or planning to build AI-powered features, effective prompting is essential.
After this course, you might consider:
- Building GenAI Applications (2 days) - Learn to integrate LLM capabilities into software products, building RAG systems, implementing tool calling and creating AI-powered features. Requires this Prompt Engineering course as a foundation.
- AI Reliability and Critical Evaluation (1 day) - Deepen your critical thinking skills for evaluating AI outputs, complementing the evaluation foundations from this course.
- Engineering Best Practices with AI Tools (2 days) - Learn how clean code principles and best practices work alongside AI coding assistants.
- Testing in a GenAI World (2 days) - For those building AI features, learn how to test non-deterministic systems effectively.
Requirements
This is a hands-on course suitable for all levels of experience. Participants should be comfortable with technology and ideally have basic familiarity with at least one programming language, though extensive programming experience is not required.
Access to AI tools like ChatGPT, Claude or GitHub Copilot is essential. Free tier access is sufficient for most exercises, though having access to more capable models enhances the learning experience.
Participants should bring laptops with internet access and their preferred development tools. Bringing examples of actual work tasks, problems or content from your daily work significantly enhances the practical value of the course.
Notes on variations
Whilst this course is designed for technical professionals, we also offer role-specific variations for non-technical teams, including:
- Business analysts and product managers
- Marketing and communications professionals
- Executives and leadership teams
- Customer support and operations teams
Contact us to discuss tailored versions that address the specific AI augmentation needs of different roles within your organisation.