COURSE

Prompt Engineering

A hands-on workshop exploring techniques for getting the most out of LLM tools like ChatGPT and Copilot while maintaining strong development fundamentals.

  • 2 Days
  • All Levels
  • In-person / Online
  • £ On Request

Your team will learn...

How to get the most out of an LLM

To work better, faster, with skilled prompting

How to solve problems using GenAI

The latest trends and skills of an AI toolkit

Overview

AI assistants like ChatGPT and GitHub Copilot are rapidly changing how professionals work, offering unprecedented capabilities to accelerate coding tasks. However, the effectiveness of these tools depends greatly on how well users can communicate with them.

This workshop focuses on 2 aspects of prompt engineering:

  • Part 1: How to get the most out of an LLM
  • Part 2: How to solve problems using GenAI

Part 1 focuses on the art and science of crafting effective prompts, and how to get the most useful, accurate and secure outputs from large language models. Part 2 focuses on how to use generative AI products to solve problems in new ways.

The course emphasises practical techniques for leveraging AI effectively across various domains, from software development to data analysis, content creation and problem solving. Participants will learn how to enhance their work through AI augmentation while maintaining the critical thinking and domain expertise that remain essential in their respective fields.

Objectives

  • Understand how large language models work and their fundamental limitations
  • Master a comprehensive toolkit of prompting techniques from basic to advanced
  • Learn to tailor prompts for specific domains and use cases
  • Develop skills with structured outputs, RAG and tool/function calling
  • Create reusable prompt libraries and patterns for team-wide consistency
  • Integrate AI tools effectively into existing development workflows
  • Benchmark and evaluate AI outputs to ensure quality and reliability
  • Enhance decision-making through effective AI collaboration

Outline

Understanding LLMs: Capabilities and Configuration

  • What is an LLM?: Capabilities, training and stochastic nature
  • Technical parameters: Temperature, top-p, top-k and their effects
  • Context windows and token limits: Practical implications
  • Model selection: Choosing the right model for different tasks
  • Ethical considerations: Responsible use, data privacy and IP awareness
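As a taste of the material, here is a minimal sketch of what temperature and top-p actually do: temperature rescales the model's logits before sampling, and top-p (nucleus sampling) keeps only the smallest set of high-probability tokens. The logits are invented toy values, not output from any real model.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into probabilities; lower temperature sharpens
    the distribution, higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p, then renormalise the survivors."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, prob in ranked:
        kept.append((idx, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {idx: prob / total for idx, prob in kept}

logits = [2.0, 1.0, 0.5, -1.0]  # toy scores for four candidate tokens
print(softmax_with_temperature(logits, temperature=0.5))
print(top_p_filter(softmax_with_temperature(logits), p=0.9))
```

Lowering the temperature concentrates probability on the top token (more deterministic output); raising it spreads probability out (more varied output).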

Fundamental Prompting Techniques

  • Conversational prompting: Getting started with natural language
  • Context engineering: Providing relevant background information
  • Being specific: Crafting clear, detailed requests
  • Structured prompting: Role, goal and step-by-step instructions
  • System prompts: Setting the foundation for interaction
  • Persona techniques: Leveraging different perspectives and expertise
  • Interactive feedback: Refining outputs through conversation
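The structured role/goal/steps pattern above can be sketched as a small helper; the role, goal and steps shown are illustrative examples, not a prescribed template.

```python
def structured_prompt(role, goal, steps, context=""):
    """Combine a role, a goal and numbered step-by-step instructions
    into a single prompt, with optional background context."""
    lines = [f"You are {role}.", f"Your goal: {goal}"]
    if context:
        lines += ["", "Context:", context]
    lines += ["", "Follow these steps:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "\n".join(lines)

print(structured_prompt(
    role="an experienced technical writer",
    goal="summarise the release notes for non-technical stakeholders",
    steps=["Identify user-facing changes", "Group them by theme",
           "Write one plain-English sentence per theme"],
))
```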

Advanced Prompting Strategies

  • Chain of thought: Encouraging step-by-step reasoning
  • Zero-shot vs few-shot prompting: When and how to use
  • Self-criticism and evaluation: Getting LLMs to review their own outputs
  • Decomposition: Breaking down complex problems into manageable parts
  • Reframing and fresh starts: Managing context effectively
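Few-shot prompting and chain-of-thought can be combined in one prompt: worked examples teach the format, and a reasoning cue encourages step-by-step thinking. The sketch below assembles such a prompt from a hypothetical sentiment-classification task.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked
    input/output examples, then the new input, ending with a
    chain-of-thought cue before the answer slot."""
    parts = [task, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Let's think step by step.", "Output:"]
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[("Great value, would buy again.", "positive"),
              ("Stopped working after a week.", "negative")],
    query="Delivery was fast and the quality is superb.",
)
print(prompt)
```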

Structured Outputs and Format Control

  • JSON and structured data output formats
  • Markdown formatting for documentation and reports
  • Code generation with specific formatting requirements
  • Table and list generation with consistent structure
  • XML and other structured formats for system integration
  • Managing hallucinations in structured outputs
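A recurring pattern when requesting JSON from a model is to validate the reply and feed any error back as a re-prompt. A minimal sketch, assuming a hypothetical reply with `title`, `summary` and `tags` keys:

```python
import json

REQUIRED_KEYS = {"title", "summary", "tags"}  # illustrative schema

def parse_structured_reply(raw):
    """Validate a model reply that was asked to return JSON.
    Returns (data, None) on success, or (None, error message) so the
    caller can re-prompt with the error as corrective feedback."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"Reply was not valid JSON: {exc}"
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        return None, f"Missing keys: {sorted(missing)}"
    return data, None

good = '{"title": "Q3 report", "summary": "Sales up 4%.", "tags": ["finance"]}'
bad = "Sure! Here is the JSON you asked for..."
print(parse_structured_reply(good))
print(parse_structured_reply(bad))
```

Rejecting malformed replies mechanically, rather than trusting the model's formatting, is one practical defence against hallucinated or chatty output in structured pipelines.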

RAG and Knowledge Integration

  • Retrieval-Augmented Generation: Core concepts and benefits
  • Embedding and vector databases: A practical introduction
  • Document chunking strategies for effective retrieval
  • Integrating external knowledge into prompts
  • Citation and source tracking for reliability
  • Handling contradictory information
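To make the chunking idea concrete, here is a minimal character-based sketch with overlap; production systems typically chunk by tokens or sentences, but the principle is the same.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split a document into overlapping character chunks. The overlap
    keeps content that straddles a boundary retrievable from either
    neighbouring chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "word " * 100  # a 500-character stand-in document
pieces = chunk_text(doc, chunk_size=120, overlap=30)
print(len(pieces), len(pieces[0]))
```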

Tool and Function Calling

  • Understanding tool/function calling capabilities
  • Defining functions for LLMs to use
  • Designing effective function schemas
  • Chaining functions for complex workflows
  • Error handling and validation in function calls
  • Integration with existing systems and APIs
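The shape of tool calling can be sketched without any provider SDK: a schema describes the function to the model, and the application validates and dispatches the call the model emits. The tool name, parameters and stubbed lookup below are hypothetical.

```python
import json

# A tool definition in the JSON-schema style most providers accept.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Return the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city):
    # Stubbed lookup standing in for a real weather API call.
    return {"city": city, "temp_c": 18}

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call_json):
    """Validate a model-emitted tool call and run the matching function,
    returning an error dict (not an exception) so it can be fed back
    to the model."""
    call = json.loads(tool_call_json)
    fn = REGISTRY.get(call.get("name"))
    if fn is None:
        return {"error": f"unknown tool: {call.get('name')}"}
    missing = [k for k in WEATHER_TOOL["parameters"]["required"]
               if k not in call.get("arguments", {})]
    if missing:
        return {"error": f"missing arguments: {missing}"}
    return fn(**call["arguments"])

print(dispatch('{"name": "get_weather", "arguments": {"city": "Leeds"}}'))
```

Returning errors as data rather than raising lets the calling loop hand the problem back to the model for a corrected call.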

Domain-Specific Applications

  • Development: Code generation and transformation patterns; Documentation automation; Test generation and quality assurance; Debugging assistance
  • Data Analysis: Data exploration and visualisation prompting; Statistical analysis assistance; Report generation
  • Content Creation: Content planning and outlining; Editing and refinement; Style and tone adaptation
  • Problem-solving: Decision support frameworks; Analysis and recommendation patterns

Benchmarking and Evaluation

  • Establishing quality criteria for AI outputs
  • Comparative evaluation across models and prompts
  • Consistency tracking and improvement
  • Performance vs cost considerations
  • A/B testing methods for prompts
  • Building feedback loops for continuous improvement
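At its simplest, A/B testing prompts means running two variants over the same labelled test cases and comparing scores. The sketch below uses exact-match accuracy with invented data; real harnesses add rubrics, model-as-judge scoring, and latency/cost tracking.

```python
def evaluate(prompt_outputs, expected):
    """Score each prompt variant on the same test cases by exact match,
    returning per-variant accuracy."""
    scores = {}
    for variant, outputs in prompt_outputs.items():
        correct = sum(o == e for o, e in zip(outputs, expected))
        scores[variant] = correct / len(expected)
    return scores

expected = ["positive", "negative", "positive"]   # hand-labelled answers
outputs = {                                       # collected model replies
    "prompt_a": ["positive", "negative", "negative"],
    "prompt_b": ["positive", "negative", "positive"],
}
print(evaluate(outputs, expected))
```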

Prompt Libraries and Reusability

  • Building a personal prompt toolkit
  • Sharing and standardising across teams
  • Parameterising prompts for flexibility
  • Versioning and iterative improvement
  • Managing prompt collections effectively
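Parameterised, versioned prompts can be sketched with nothing more than the standard library; the prompt name, version and fields below are illustrative.

```python
import string

# A tiny versioned prompt library keyed by (name, version).
LIBRARY = {
    ("code_review", "v2"): string.Template(
        "You are a $language reviewer. Review the diff below for "
        "$focus. Respond as a bulleted list.\n\n$diff"
    ),
}

def render(name, version, **params):
    """Fill a stored template. $placeholders make required parameters
    explicit; a missing one raises KeyError instead of silently
    producing a broken prompt."""
    return LIBRARY[(name, version)].substitute(**params)

prompt = render("code_review", "v2",
                language="Python", focus="error handling",
                diff="- old line\n+ new line")
print(prompt)
```

Keeping the version in the key means an improved template can ship as `v3` while existing workflows pin `v2`, which makes iteration and rollback safe.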

Optional Add-on: Custom AI Assistants Workshop

For those looking to create specialised AI assistants for specific workflows or domains, this optional half-day workshop provides hands-on guidance for building, testing and deploying custom assistants.

  • Before you customise: Use cases and appropriate applications; Platform selection and comparison; Cost and scalability considerations
  • Creating purpose-specific AI assistants
  • Defining system instructions
  • Adding knowledge files
  • Configuring capabilities
  • Testing and refining custom assistants

This add-on is particularly valuable for teams looking to embed specialised AI capabilities into their workflows or create tailored assistants for specific professional domains.

Requirements

This is a hands-on course suitable for all levels of development experience. Participants in the 2-day course should have basic familiarity with at least one programming language and access to AI tools like ChatGPT, Claude or GitHub Copilot, though extensive experience with these is not required.
