Course Outline

Day 1 – Generative AI & LLM Fundamentals
Introduction to generative AI and LLM use cases
Understanding transformer-based models (GPT, LLaMA, T5, etc.)
Tokens, tokenization, and embeddings
Working with pre-trained models via APIs (OpenAI, Claude)
Working with pre-trained models via Hugging Face
Prompting fundamentals: zero-shot and few-shot prompting
Hands-on: prompt engineering in a Python notebook
Building a simple LLM-powered application (CLI or web)
Practical limits: tokens, rate limits, and basic reliability techniques
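The few-shot prompting pattern from the Day 1 hands-on can be sketched as below. This is a minimal illustration using the OpenAI-style chat message format; the instruction, example pairs, and query are placeholder content, and a real session would send the resulting list to a model API:

```python
# Sketch: assembling a few-shot prompt as an OpenAI-style chat message list.
# The instruction, examples, and query here are illustrative placeholders.

def build_few_shot_prompt(instruction, examples, query):
    """Compose a message list: one system instruction, alternating
    user/assistant example pairs, then the actual user query."""
    messages = [{"role": "system", "content": instruction}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("Classify sentiment: 'I love this phone'", "positive"),
    ("Classify sentiment: 'The battery died in a day'", "negative"),
]
messages = build_few_shot_prompt(
    "You are a sentiment classifier. Answer with one word.",
    examples,
    "Classify sentiment: 'Setup was painless'",
)
print(len(messages))  # system + two example pairs + query = 6 messages
```

The same structure works for zero-shot prompting by passing an empty examples list, which is a useful comparison point in the notebook exercise.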

Day 2 – RAG and Vector Search
Why RAG: combining LLMs with your own data
RAG architecture: ingest, index, retrieve, generate
Preparing and chunking documents for retrieval
Generating text embeddings with APIs or Hugging Face
Introduction to vector stores (e.g. Chroma, Pinecone)
Hands-on: building a basic semantic search script
Hands-on: building a document Q&A system with RAG
Scaling ingestion and embeddings (overview of bigger-data workflows)
Design trade-offs in RAG: chunking, top-k, cost vs quality
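The retrieval step at the core of Day 2 can be sketched as cosine similarity over embedding vectors. In the course the vectors would come from an embedding API, a Hugging Face model, or a vector store such as Chroma; here they are toy 3-dimensional values purely for illustration:

```python
# Minimal semantic-search sketch: rank documents by cosine similarity
# between a query embedding and pre-computed document embeddings.
# The vectors below are toy placeholders, not real model output.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api authentication": [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # stands in for an embedded user query

# Top-k retrieval with k=1: keep the single closest document.
best = max(documents, key=lambda name: cosine_similarity(query_vec, documents[name]))
print(best)
```

In a RAG pipeline, the retrieved chunk(s) would then be inserted into the prompt for the generation step; the top-k value and chunk size are exactly the trade-off knobs listed above.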

Day 3 – Workflows, Agents, and Production
What AI agents are and when to use them
Introduction to LangGraph and graph-based LLM workflows
Hands-on: building a simple LangGraph workflow with tools
Adding memory and multi-step reasoning to workflows
Combining RAG and agents (agentic RAG)
Monitoring and evaluating LLM and RAG systems
Deployment options for LLM applications (APIs, containers, services)
Cost and performance optimization strategies
Basic safety, guardrails, and responsible usage
Capstone mini-project: end-to-end RAG/agent application demo
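The agent loop that LangGraph formalizes as a graph of nodes can be sketched in plain Python. This is an illustrative stub, not LangGraph's actual API: `fake_model` stands in for an LLM call that either requests a tool or returns a final answer, and the loop plays the role of the graph's routing:

```python
# Sketch of a tool-calling agent loop. The "model" is a stub that first
# requests a calculator tool, then answers using the tool's result;
# all names here (fake_model, run_agent) are illustrative, not a real API.

def calculator(expression: str) -> str:
    # Toy tool: evaluate a simple arithmetic expression (no builtins exposed).
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(state):
    """Stand-in for an LLM call: request a tool on the first step,
    then produce a final answer from the tool result."""
    if "tool_result" not in state:
        return {"action": "tool", "name": "calculator", "input": "6 * 7"}
    return {"action": "final", "answer": f"The result is {state['tool_result']}"}

def run_agent(max_steps=5):
    state = {}
    for _ in range(max_steps):
        decision = fake_model(state)
        if decision["action"] == "final":
            return decision["answer"]
        tool = TOOLS[decision["name"]]
        state["tool_result"] = tool(decision["input"])
    return "stopped: step limit reached"

print(run_agent())
```

The `max_steps` cap is a simple guardrail against runaway loops; in the course, the same decide/act/observe cycle is expressed as LangGraph nodes and edges, with memory carried in the graph state.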

Requirements

Solid Python programming skills and familiarity with APIs.

Target Audience

This course is intended for organizations that want to move from experimentation to real LLM-powered solutions. It is suitable for:

Software, backend, and full-stack engineers integrating LLMs into products and services
Data and machine learning engineers working with RAG, embeddings, and vector search
Solution and enterprise architects designing LLM-based architectures
Technical product owners and engineering leaders evaluating AI use cases, costs, and risks

Duration: 21 Hours
