COURSE

Building GenAI Applications

Go from prompt engineering to production-ready AI features through intensive hands-on development with modern AI frameworks and tools.

  • 2 Days
  • Intermediate
  • In-person / Online
  • £ On Request

Your team will learn to...

Build custom RAG applications from scratch, from document retrieval through to grounded generation

Implement tool calling to extend LLM capabilities with external functions

Integrate Model Context Protocol (MCP) into existing applications

Use GitHub Copilot and AI coding assistants effectively during development

Design and implement AI-powered features for production systems

Create user interfaces for AI applications with Streamlit

Apply best practices for LLM application architecture and deployment

Overview

Prompt engineering teaches you to use AI tools effectively. But what about building AI-powered features into your own products? How do you move from crafting prompts in ChatGPT to shipping production-ready AI capabilities that your users interact with?

This intensive two-day workshop bridges that gap through hands-on development. Building on prompt engineering fundamentals, you'll learn to integrate LLM capabilities into software products by constructing three increasingly sophisticated systems: a custom RAG application, a tool calling implementation and a Model Context Protocol (MCP) integration.

Through practical development using Python, modern AI frameworks and Streamlit for rapid UI creation, you'll gain the architectural understanding and implementation experience needed to ship AI features confidently. Throughout the course, you'll use GitHub Copilot and other AI coding assistants, experiencing firsthand how to leverage AI during development whilst building AI capabilities.

By the end of this workshop, you'll have working code examples, architectural patterns and the practical knowledge to implement AI features in your own applications. This is learning by building - you'll leave with repositories you can reference and adapt for your projects.

Outline

Revisiting prompt engineering for development

  • Core concepts for building vs using
  • System prompts vs user prompts in applications
  • Managing conversation context programmatically
  • Streaming responses and error handling
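
A minimal sketch of these ideas, assuming the OpenAI Python SDK (any chat completion API with a similar shape works the same way): a system prompt plus a user prompt, a streamed response printed as it arrives, and provider errors caught explicitly.

```python
import os
from openai import OpenAI, APIError

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

messages = [
    {"role": "system", "content": "You are a concise assistant for internal documentation."},
    {"role": "user", "content": "Summarise our deployment checklist."},
]

try:
    # stream=True yields partial chunks as the model generates them
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=messages,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
except APIError as exc:
    # Hand provider errors to your own logging and retry logic
    print(f"LLM request failed: {exc}")
```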

LLM APIs and integration patterns

  • Overview of major LLM APIs
  • Authentication and API key management
  • Request formatting and response parsing
  • Choosing the right model for your use case
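
As a hedged illustration of the request/response shape, the sketch below calls an OpenAI-compatible chat endpoint directly over HTTP; the endpoint URL and model name are examples, and the API key is read from an environment variable rather than hard-coded.

```python
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # keep keys out of source control

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()  # fail loudly on HTTP errors
data = response.json()
print(data["choices"][0]["message"]["content"])
```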

Introduction to RAG architecture

  • Why retrieval augmentation matters
  • RAG system components and data flow
  • Embeddings and semantic search fundamentals
  • Vector databases and similarity search
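
The core of semantic search is comparing embedding vectors. A toy sketch, with hard-coded vectors standing in for real embeddings from a model or vector database:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors come from an embedding model
document_vectors = {
    "holiday policy": np.array([0.9, 0.1, 0.3]),
    "expense claims": np.array([0.2, 0.8, 0.5]),
}
query_vector = np.array([0.85, 0.15, 0.25])

ranked = sorted(
    document_vectors.items(),
    key=lambda item: cosine_similarity(query_vector, item[1]),
    reverse=True,
)
print(ranked[0][0])  # the most semantically similar document
```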

Building a custom RAG application

  • Document ingestion and preprocessing
  • Generating and storing embeddings
  • Implementing semantic search retrieval
  • Integrating retrieval with LLM generation
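
A compressed end-to-end sketch of the pipeline you'll build, assuming the OpenAI SDK (model names and documents are illustrative): chunks are embedded, the closest chunk is retrieved for a question, and the answer is grounded in that context.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()
chunks = ["Refunds are processed within 14 days.", "Support hours are 9am-5pm GMT."]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [np.array(d.embedding) for d in resp.data]

chunk_vectors = embed(chunks)

def retrieve(question, k=1):
    q = embed([question])[0]
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in chunk_vectors]
    top = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:k]
    return [chunks[i] for i in top]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer only from this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```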

Creating user interfaces with Streamlit

  • Streamlit fundamentals for AI applications
  • Building conversational interfaces
  • Displaying retrieved sources and citations
  • Real-time streaming of LLM responses
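
A minimal conversational interface looks something like the sketch below (run with `streamlit run app.py`); it assumes the OpenAI SDK and a recent Streamlit release that provides st.write_stream.

```python
import streamlit as st
from openai import OpenAI

client = OpenAI()
st.title("Docs assistant")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask a question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=st.session_state.messages,
            stream=True,
        )
        # Stream tokens into the UI as they arrive
        reply = st.write_stream(
            chunk.choices[0].delta.content or "" for chunk in stream
        )
    st.session_state.messages.append({"role": "assistant", "content": reply})
```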

Using AI tools during development

  • GitHub Copilot for accelerating AI feature development
  • AI-assisted debugging and refactoring
  • When to trust AI-generated code and when to verify

Tool calling and function execution

  • Understanding tool calling capabilities
  • Defining function schemas and parameters
  • Implementing tool handlers
  • Multi-step tool calling patterns
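
The basic loop is: the model asks for a tool, your code runs it, and the result goes back for a final answer. A sketch with the OpenAI SDK, where get_weather and its canned reply are purely illustrative:

```python
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"It is 14°C and raining in {city}."  # stand-in for a real lookup

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Leeds?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]

# Execute the requested tool and return its output to the model
result = get_weather(**json.loads(call.function.arguments))
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})

final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```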

Model Context Protocol (MCP) integration

  • What is MCP and why it matters
  • MCP architecture and components
  • Implementing MCP servers and clients
  • Exposing application context to LLMs
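
To give a flavour of the server side, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK (assumed installed via `pip install mcp`); the order-lookup tool is illustrative.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of an order by its ID."""
    # In a real server this would query your application's own data
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, for MCP-capable clients
```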

Advanced integration patterns

  • Agentic systems and autonomous workflows
  • Multi-agent architectures
  • Memory and state persistence
  • Context window management strategies
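
One simple context window strategy, sketched below: always keep the system prompt and only as many recent turns as fit a budget. Word counts stand in for token counts here; a real implementation would use the provider's tokenizer.

```python
def trim_history(messages, max_words=2000):
    """Keep the system prompt plus the most recent turns within a word budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    kept, used = [], 0
    for msg in reversed(rest):  # walk backwards from the newest turn
        words = len(msg["content"].split())
        if used + words > max_words:
            break
        kept.insert(0, msg)
        used += words
    return system + kept
```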

Production considerations

  • Architecting LLM features for scale
  • Caching strategies for LLM responses
  • Monitoring and observability patterns
  • Security and data privacy considerations
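
A sketch of one caching strategy: key responses on a hash of the model and full message history so identical requests skip a paid API call. The in-memory dict is a stand-in for a shared store such as Redis.

```python
import hashlib
import json

_cache: dict[str, str] = {}

def cache_key(model: str, messages: list[dict]) -> str:
    payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_completion(client, model, messages):
    key = cache_key(model, messages)
    if key in _cache:
        return _cache[key]  # cache hit: no API call made
    response = client.chat.completions.create(model=model, messages=messages)
    answer = response.choices[0].message.content
    _cache[key] = answer
    return answer
```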

Testing LLM integrations

  • Unit testing LLM-powered features
  • Mocking LLM responses for reliable tests
  • Evaluation frameworks for LLM outputs
  • Regression testing for non-deterministic features
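
Mocking keeps tests deterministic and free of API costs. A sketch with pytest and unittest.mock, where summarise is a hypothetical feature under test:

```python
from unittest.mock import MagicMock

def summarise(client, text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarise: {text}"}],
    )
    return response.choices[0].message.content

def test_summarise_returns_model_output():
    fake_client = MagicMock()
    fake_client.chat.completions.create.return_value.choices = [
        MagicMock(message=MagicMock(content="A short summary."))
    ]
    assert summarise(fake_client, "Long document...") == "A short summary."
    fake_client.chat.completions.create.assert_called_once()
```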

Deployment and operations

  • Containerising AI applications
  • Environment configuration and secrets management
  • Rate limiting and quota management
  • Production debugging techniques

Building complete applications

  • Implementing conversation history and context
  • Adding source attribution and citations
  • Building user feedback mechanisms

Architectural patterns and best practices

  • Layered architecture for AI features
  • Separating prompts from code
  • Version control for prompts and configurations
  • Gradual rollout strategies
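
Separating prompts from code can be as simple as the sketch below: templates live in a version-controlled YAML file and are loaded at runtime (the file name and keys are illustrative, and PyYAML is assumed).

```python
# prompts.yaml might contain:
#   summarise:
#     system: "You are a precise technical summariser."
#     user: "Summarise the following text:\n{text}"
import yaml

def load_prompt(name: str, path: str = "prompts.yaml") -> dict:
    with open(path) as f:
        return yaml.safe_load(f)[name]

template = load_prompt("summarise")
messages = [
    {"role": "system", "content": template["system"]},
    {"role": "user", "content": template["user"].format(text="...document text...")},
]
```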

Requirements

This intermediate two-day course requires completion of our Prompt Engineering workshop OR equivalent experience using AI tools. Participants should have at least 6 months of software development experience and be comfortable writing code in Python.

Basic understanding of APIs, HTTP requests and JSON is essential. Familiarity with Python development environments, pip package management and basic command line usage is expected.

Participants must bring laptops with Python 3.10+ installed. We'll provide instructions for setting up the development environment prior to the course. Access to GitHub Copilot or similar AI coding assistants is recommended but not required.

Some exercises will require API access to LLM providers (OpenAI, Anthropic or similar). We can provide limited API credits for the course, or participants can use their own accounts.

This course is primarily delivered in Python with Streamlit for UI components. It can be re-platformed to Java or TypeScript on request - contact us to discuss custom delivery options.
