Online or onsite, instructor-led live Large Language Models (LLMs) training courses demonstrate through interactive hands-on practice how to use Large Language Models for various natural language tasks.
LLMs training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Large Language Models (LLMs) trainings in Varna can be carried out locally on customer premises or in NobleProg corporate training centers.
NobleProg also offers bespoke Large Language Models (LLMs) consultancy services in Varna. Our consultants have helped hundreds of clients around the world get unstuck. Our clients value our highly-personalized consulting approach and find consulting to be well-suited for complex long-term projects, short-term projects requiring niche expertise, urgent problem fixing, critical knowledge transfer, and team coaching and support. To learn more about our past consultancy engagements, see consultancy case studies.
If you instead need people for ongoing projects, NobleProg can support your organisation with a full range of staffing solutions. Whether your needs are for medium-term or long-term assignments, entry-level or highly skilled expertise, a single specialist or an entire team, our interim staffing / staff augmentation solutions can provide the talent needed to complete your most challenging projects. Contact us for more information.
The "Central Point" complex offers quick access to main roads leading to the airport, the northern and southern resorts and the Varna - Sofia and Varna - Burgas highways.
This instructor-led, live training in Varna (online or onsite) is aimed at senior management professionals who wish to understand what LLMs are, explore their potential impact on business operations, and evaluate practical uses of AI tools such as ChatGPT, Microsoft Copilot, or Grok for real-world tasks like content creation, data summarization, and decision support.
By the end of this training, participants will be able to:
Understand what LLMs are and how tools like ChatGPT and Copilot function.
Use prompt techniques to get practical, reliable results from LLMs.
Evaluate real use cases such as email drafting, summarizing documents, and productivity automation.
Identify investment opportunities and strategic applications for AI adoption.
This instructor-led, live training in Varna (online or onsite) is aimed at senior management teams who wish to understand the strategic value of LLMs and enterprise AI tools. Participants will explore how to integrate these tools into high-level workflows, draft better prompts, and evaluate opportunities for increased productivity and ROI through AI adoption.
By the end of this training, participants will be able to:
Understand how LLMs function and how tools like ChatGPT and Copilot apply them.
Use prompt-based interactions to automate and accelerate tasks.
Apply AI tools to real scenarios such as email drafting, report summarization, and agreement review.
Evaluate strategic benefits, limitations, and licensing considerations for LLM adoption.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level to advanced-level AI researchers, data scientists, and developers who wish to understand, fine-tune, and implement Meta AI's Large Language Models for various NLP applications.
By the end of this training, participants will be able to:
Understand the architecture and functioning of Meta AI's Large Language Models.
Set up and fine-tune Meta AI LLMs for specific use cases.
Implement LLM-based applications such as text summarization, chatbots, and sentiment analysis.
Optimize and deploy large language models efficiently.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level AI professionals, business analysts, and technology leaders who wish to understand the principles of generative AI and the applications of LLMs in business settings. Participants will learn about transformers, prompt engineering, and ethical considerations in deploying these models for real-world solutions.
By the end of this training, participants will be able to:
Understand the underlying principles of generative AI and large language models.
Implement and fine-tune LLMs for specific business applications.
Apply prompt engineering techniques for optimal model outputs.
Recognize ethical considerations and manage risks in LLM deployment.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level AI professionals and ethicists, data scientists and engineers, and policy makers and stakeholders who wish to understand and navigate the ethical landscape of LLMs.
By the end of this training, participants will be able to:
Identify ethical issues and challenges associated with LLMs.
Apply ethical frameworks and principles to LLM deployment.
Assess the societal impact of LLMs and mitigate potential risks.
Develop strategies for responsible AI development and usage.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level NLP practitioners and data scientists, content creators and translators, and global businesses who wish to use LLMs for language translation and creating multilingual content.
By the end of this training, participants will be able to:
Understand the principles of cross-lingual learning and translation with LLMs.
Implement LLMs for translating content between various languages.
Create and manage multilingual datasets for training LLMs.
Develop strategies for maintaining consistency and quality in translation.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level financial analysts, data scientists, and investment professionals who wish to leverage LLMs for financial market analysis and prediction.
By the end of this training, participants will be able to:
Understand the application of LLMs in financial market analysis.
Use LLMs to process financial news, reports, and data for market insights.
Develop predictive models for stock prices, market trends, and economic indicators.
Integrate LLM insights into investment decision-making processes.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level environmental scientists and researchers, data analysts, and policy makers and environmental advocates who wish to use LLMs for environmental modeling and analysis.
By the end of this training, participants will be able to:
Understand the application of LLMs in environmental science.
Utilize LLMs to analyze and model environmental data.
Interpret LLM outputs for environmental impact assessments.
Communicate findings effectively to inform policy and conservation efforts.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level VR and AR developers, game designers, and AI engineers who wish to incorporate LLMs into VR and AR applications to create more engaging and responsive environments.
By the end of this training, participants will be able to:
Understand the role of LLMs in creating immersive VR and AR experiences.
Develop VR and AR applications that utilize LLMs for interactive dialogues and content creation.
Integrate LLMs with VR and AR development tools for enhanced user engagement.
Apply best practices for designing AI-driven narratives and interactions in virtual spaces.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level data scientists, machine learning engineers, and software developers who wish to apply Large Language Models (LLMs) to multimodal data for advanced AI applications.
By the end of this training, participants will be able to:
Understand the principles of multimodal learning with LLMs.
Implement LLMs to process and analyze text, image, and audio data.
Develop applications that leverage the strengths of multimodal data integration.
Evaluate the performance of multimodal LLM systems.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level cybersecurity professionals and data scientists who wish to utilize LLMs for enhancing cybersecurity measures and threat intelligence.
By the end of this training, participants will be able to:
Understand the role of LLMs in cybersecurity.
Implement LLMs for threat detection and analysis.
Utilize LLMs for security automation and response.
Integrate LLMs with existing security infrastructure.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level data scientists and business analysts who wish to utilize large language models (LLMs) to forecast trends and behaviors in various industries.
By the end of this training, participants will be able to:
Understand the fundamentals of LLMs and their role in predictive analytics.
Implement LLMs to analyze and forecast data in various industries.
Evaluate the effectiveness of predictive models using LLMs.
Integrate LLMs with existing data processing pipelines.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level data scientists who wish to gain a comprehensive understanding and practical skills in both Large Language Models (LLMs) and Reinforcement Learning (RL).
By the end of this training, participants will be able to:
Understand the components and functionality of transformer models.
Optimize and fine-tune LLMs for specific tasks and applications.
Understand the core principles and methodologies of reinforcement learning.
Learn how reinforcement learning techniques can enhance the performance of LLMs.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level content creators, marketers, and educational technologists who wish to harness the power of LLMs for generating high-quality, diverse, and engaging content across various domains.
By the end of this training, participants will be able to:
Understand the capabilities of LLMs and their application in content generation.
Set up and use LLMs for generating various types of content.
Apply best practices for prompting and fine-tuning LLMs to produce desired outputs.
Evaluate the quality of AI-generated content and refine it for specific audiences.
Explore advanced techniques for creative and multi-modal content generation with LLMs.
This instructor-led, live training in Varna (online or onsite) is aimed at educators, EdTech professionals, and researchers with varying levels of experience and expertise who wish to leverage LLMs for creating personalized educational experiences.
By the end of this training, participants will be able to:
Understand the architecture and capabilities of LLMs.
Identify opportunities for personalization in educational content using LLMs.
Design adaptive learning platforms that utilize LLMs for content personalization.
Implement LLM-driven strategies for enhancing student engagement and learning outcomes.
Evaluate the effectiveness of LLMs in educational settings and make data-driven decisions.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level ML practitioners and AI developers who wish to fine-tune and deploy open-weight models like LLaMA, Mistral, and Qwen for specific business or internal applications.
By the end of this training, participants will be able to:
Understand the ecosystem and differences between open-source LLMs.
Prepare datasets and fine-tuning configurations for models like LLaMA, Mistral, and Qwen.
Execute fine-tuning pipelines using Hugging Face Transformers and PEFT.
Evaluate, save, and deploy fine-tuned models in secure environments.
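To give a sense of the hands-on work covered, below is a minimal LoRA fine-tuning sketch using Hugging Face Transformers, PEFT, and Datasets; the base model name, dataset file, and hyperparameters are illustrative placeholders rather than course-prescribed settings.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face Transformers + PEFT.
# Model name, dataset file, and hyperparameters are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model, TaskType
from datasets import load_dataset

base = "mistralai/Mistral-7B-v0.1"                 # any open-weight causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # Mistral has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small low-rank adapters instead of updating all base weights.
lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
                  lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()                 # only adapter weights are trainable

data = load_dataset("json", data_files="train.jsonl", split="train")
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/adapter")               # saves only the small adapter weights
```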
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level software developers and data scientists who wish to implement LLMs in speech recognition and synthesis systems.
By the end of this training, participants will be able to:
Understand the role of LLMs in speech technologies.
Implement LLMs for accurate speech recognition and natural-sounding speech synthesis.
Integrate LLMs with speech recognition engines and speech synthesizers.
Evaluate and improve the performance of speech systems using LLMs.
Stay informed about current trends and future directions in speech technologies.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level customer support and IT professionals who wish to implement LLMs to create responsive and intelligent customer support chatbots.
By the end of this training, participants will be able to:
Understand the fundamentals and architecture of Large Language Models (LLMs).
Design and integrate LLMs into customer support systems.
Enhance the responsiveness and user experience of chatbots.
Address ethical considerations and ensure compliance with industry standards.
Deploy and maintain an LLM-based chatbot for real-world applications.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level data scientists and AI engineers who wish to fine-tune large language models more affordably and efficiently using methods like LoRA, Adapter Tuning, and Prefix Tuning.
By the end of this training, participants will be able to:
Understand the theory behind parameter-efficient fine-tuning approaches.
Implement LoRA, Adapter Tuning, and Prefix Tuning using Hugging Face PEFT.
Compare performance and cost trade-offs of PEFT methods vs. full fine-tuning.
Deploy and scale fine-tuned LLMs with reduced compute and storage requirements.
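As a rough illustration of how these methods differ in practice, the sketch below wraps the same base model with a LoRA configuration and a prefix-tuning configuration via Hugging Face PEFT and prints the trainable-parameter counts; the base model and settings are placeholders.

```python
# Sketch comparing parameter-efficient methods via Hugging Face PEFT:
# wrap the same base model with different configurations and compare how
# many parameters actually train. Model choice and settings are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PrefixTuningConfig, get_peft_model, TaskType

base = "gpt2"   # a small model, used here only to illustrate the APIs

configs = {
    "LoRA": LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16),
    "Prefix Tuning": PrefixTuningConfig(task_type=TaskType.CAUSAL_LM,
                                        num_virtual_tokens=20),
}

for name, cfg in configs.items():
    model = AutoModelForCausalLM.from_pretrained(base)
    peft_model = get_peft_model(model, cfg)
    print(name)
    peft_model.print_trainable_parameters()   # trainable vs. total parameter counts
```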
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level to advanced-level machine learning engineers, AI developers, and data scientists who wish to learn how to use QLoRA to efficiently fine-tune large models for specific tasks and customizations.
By the end of this training, participants will be able to:
Understand the theory behind QLoRA and quantization techniques for LLMs.
Implement QLoRA in fine-tuning large language models for domain-specific applications.
Optimize fine-tuning performance on limited computational resources using quantization.
Deploy and evaluate fine-tuned models in real-world applications efficiently.
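A compressed sketch of the QLoRA recipe discussed here, assuming the transformers, peft, and bitsandbytes libraries and a CUDA GPU; the model name and hyperparameters are placeholders. The base model is loaded in 4-bit NF4 precision and only small LoRA adapters are trained on top.

```python
# QLoRA-style setup: load the base model quantized to 4-bit, then train LoRA
# adapters on top. Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training, TaskType

base = "mistralai/Mistral-7B-v0.1"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NF4 quantization as used by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)   # make the quantized model trainable

lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32,
                  lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()               # adapters train; the 4-bit base is frozen
# From here, training proceeds with a standard Trainer or SFT loop on the PEFT model.
```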
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level data and marketing professionals who wish to apply LLMs to analyze and interpret public sentiment from various text sources such as social media posts, product reviews, and customer feedback.
By the end of this training, participants will be able to:
Understand the principles of sentiment analysis and its application using LLMs.
Preprocess and prepare datasets for sentiment analysis.
Train and fine-tune LLMs to accurately reflect sentiment in text.
Analyze sentiment in real-time from social media and other text sources.
Integrate sentiment analysis findings into business strategies and decision-making processes.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level software developers and technical writers who wish to leverage LLMs to streamline their coding workflow and create detailed, comprehensive documentation.
By the end of this training, participants will be able to:
Understand the role of LLMs in automating code generation and software documentation.
Utilize LLMs to create accurate and efficient code snippets and documentation.
Integrate LLMs into their software development lifecycle for enhanced productivity.
Maintain high-quality documentation standards using automated tools.
Address ethical considerations and best practices for using AI in software development.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level business professionals and data analysts who wish to harness the power of LLMs for extracting business insights.
By the end of this training, participants will be able to:
Understand the fundamentals and applications of LLMs in the context of business intelligence.
Apply LLMs to analyze large datasets and extract meaningful insights.
Integrate LLM-driven analytics into strategic business decision-making processes.
Evaluate the ethical considerations and best practices for using LLMs in business.
Anticipate future trends in AI and prepare for the evolving landscape of business intelligence.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level to advanced-level developers and data scientists who wish to master LlamaIndex for developing innovative LLM-powered applications.
By the end of this training, participants will be able to:
Set up and configure LlamaIndex for use with LLMs.
Index and query custom datasets using LlamaIndex to enhance LLM functionality.
Design and develop sophisticated applications that utilize LlamaIndex and LLMs.
Understand and apply best practices for working with LLMs and LlamaIndex.
Navigate the ethical considerations involved in deploying LLM-powered applications.
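As a taste of what building on LlamaIndex looks like, here is a minimal indexing-and-querying sketch; it assumes a recent llama-index release (0.10+ style imports), an LLM API key available in the environment, and a placeholder ./docs folder and question.

```python
# Minimal LlamaIndex sketch: index a folder of documents and ask a question
# over them. The ./docs folder and the query text are placeholders.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)   # embeds and indexes the documents

query_engine = index.as_query_engine()
response = query_engine.query("What are the main topics covered in these documents?")
print(response)
```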
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level AI researchers, machine learning professionals, and data scientists who wish to use LlamaIndex to enhance the capabilities of AI models, making them more accurate and reliable for various applications.
By the end of this training, participants will be able to:
Understand the principles and components of LlamaIndex.
Ingest and structure data for use with LLMs.
Implement context augmentation to improve AI model performance.
Integrate LlamaIndex into existing AI systems and workflows.
Mistral AI is a powerful family of open-source and enterprise-ready AI models for language, multimodal, and agentic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to build, deploy, and manage AI agents using Mistral’s Medium 3, Le Chat Enterprise, and Devstral models.
By the end of this training, participants will be able to:
Understand the architecture and capabilities of Mistral Medium 3, Le Chat Enterprise, and Devstral.
Design and implement AI agents leveraging Mistral models for enterprise and developer use cases.
Integrate coding systems, connectors, and enterprise data into agent workflows.
Optimize performance, cost, and compliance for Mistral-powered agents.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Mistral AI is an open AI platform that enables teams to build and integrate conversational assistants into enterprise and customer-facing workflows.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level product managers, full-stack developers, and integration engineers who wish to design, integrate, and productize conversational assistants using Mistral connectors and integrations.
By the end of this training, participants will be able to:
Integrate Mistral conversational models with enterprise and SaaS connectors.
Implement retrieval-augmented generation (RAG) for grounded responses.
Design UX patterns for internal and external chat assistants.
Deploy assistants into product workflows for real-world use cases.
Format of the Course
Interactive lecture and discussion.
Hands-on integration exercises.
Live-lab development of conversational assistants.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level professionals who wish to leverage the power of prompt engineering and few-shot learning to optimize LLM performance for real-world applications.
By the end of this training, participants will be able to:
Understand the principles of prompt engineering and few-shot learning.
Design effective prompts for various NLP tasks.
Leverage few-shot techniques to adapt LLMs with minimal data.
Optimize LLM performance for practical applications.
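A small illustration of the few-shot idea: instead of fine-tuning, a handful of labeled examples are placed directly in the prompt so the model can infer the task. The reviews below are invented, and any completion- or chat-style model could be used to complete the final line.

```python
# Few-shot prompting sketch for sentiment classification. The examples are
# invented; the prompt can be sent to any completion- or chat-style LLM API.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped working after a week and support never replied."
Sentiment: Negative

Review: "Setup took five minutes and everything just worked."
Sentiment:"""

# With only these in-prompt examples, a general-purpose LLM will typically
# complete the final line correctly -- no task-specific training required.
print(few_shot_prompt)
```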
LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and precise control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based legal solutions with the necessary compliance, traceability, and governance controls.
By the end of this training, participants will be able to:
Design legal-specific LangGraph workflows that preserve auditability and compliance.
Integrate legal ontologies and document standards into graph state and processing.
Implement guardrails, human-in-the-loop approvals, and traceable decision paths.
Deploy, monitor, and maintain LangGraph services in production with observability and cost controls.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Mistral AI is an open and enterprise-ready AI platform that provides features for secure, compliant, and responsible AI deployment.
This instructor-led, live training (online or onsite) is aimed at intermediate-level compliance leads, security architects, and legal/ops stakeholders who wish to implement responsible AI practices with Mistral by leveraging privacy, data residency, and enterprise control mechanisms.
By the end of this training, participants will be able to:
Implement privacy-preserving techniques in Mistral deployments.
Apply data residency strategies to meet regulatory requirements.
Set up enterprise-grade controls such as RBAC, SSO, and audit logs.
Evaluate vendor and deployment options for compliance alignment.
Format of the Course
Interactive lecture and discussion.
Compliance-focused case studies and exercises.
Hands-on implementation of enterprise AI controls.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and operate LangGraph-based finance solutions with proper governance, observability, and compliance.
By the end of this training, participants will be able to:
Design finance-specific LangGraph workflows aligned to regulatory and audit requirements.
Integrate financial data standards and ontologies into graph state and tooling.
Implement reliability, safety, and human-in-the-loop controls for critical processes.
Deploy, monitor, and optimize LangGraph systems for performance, cost, and SLAs.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Vertex AI provides powerful tools for building multimodal LLM workflows that integrate text, audio, and image data into a single pipeline. With long context window support and Gemini API parameters, it enables advanced applications in planning, reasoning, and cross-modal intelligence.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level practitioners who wish to design, build, and optimize multimodal AI workflows in Vertex AI.
By the end of this training, participants will be able to:
Leverage Gemini models for multimodal inputs and outputs.
Implement long-context workflows for complex reasoning.
Design pipelines that integrate text, audio, and image analysis.
Optimize Gemini API parameters for performance and cost efficiency.
Format of the Course
Interactive lecture and discussion.
Hands-on labs with multimodal workflows.
Project-based exercises for applied multimodal use cases.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Mistral models are open-source AI technologies that now extend into multimodal workflows, supporting both language and vision tasks for enterprise and research applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level ML researchers, applied engineers, and product teams who wish to build multimodal applications with Mistral models, including OCR and document understanding pipelines.
By the end of this training, participants will be able to:
Set up and configure Mistral models for multimodal tasks.
Implement OCR workflows and integrate them with NLP pipelines.
Design document understanding applications for enterprise use cases.
Develop vision-text search and assistive UI functionalities.
Format of the Course
Interactive lecture and discussion.
Hands-on coding exercises.
Live-lab implementation of multimodal pipelines.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Devstral and Mistral models are open-source AI technologies designed for flexible deployment, fine-tuning, and scalable integration.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level ML engineers, platform teams, and research engineers who wish to self-host, fine-tune, and govern Mistral and Devstral models in production environments.
By the end of this training, participants will be able to:
Set up and configure self-hosted environments for Mistral and Devstral models.
Apply fine-tuning techniques for domain-specific performance.
Implement versioning, monitoring, and lifecycle governance.
Ensure security, compliance, and responsible usage of open-source models.
Format of the Course
Interactive lecture and discussion.
Hands-on exercises in self-hosting and fine-tuning.
Live-lab implementation of governance and monitoring pipelines.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
LangGraph enables stateful, multi-actor workflows powered by LLMs with precise control over execution paths and state persistence. In healthcare, these capabilities are crucial for compliance, interoperability, and building decision-support systems that align with medical workflows.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to design, implement, and manage LangGraph-based healthcare solutions while addressing regulatory, ethical, and operational challenges.
By the end of this training, participants will be able to:
Design healthcare-specific LangGraph workflows with compliance and auditability in mind.
Integrate LangGraph applications with medical ontologies and standards (FHIR, SNOMED CT, ICD).
Apply best practices for reliability, traceability, and explainability in sensitive environments.
Deploy, monitor, and validate LangGraph applications in healthcare production settings.
Format of the Course
Interactive lecture and discussion.
Hands-on exercises with real-world case studies.
Implementation practice in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
LangGraph is a framework for building stateful, multi-actor LLM applications as composable graphs with persistent state and control over execution.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI platform engineers, DevOps for AI, and ML architects who wish to optimize, debug, monitor, and operate production-grade LangGraph systems.
By the end of this training, participants will be able to:
Design and optimize complex LangGraph topologies for speed, cost, and scalability.
Engineer reliability with retries, timeouts, idempotency, and checkpoint-based recovery.
Debug and trace graph executions, inspect state, and systematically reproduce production issues.
Instrument graphs with logs, metrics, and traces, deploy to production, and monitor SLAs and costs.
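To illustrate the checkpoint-based recovery covered in this course, the sketch below compiles a small LangGraph graph with an in-memory checkpointer so that state is persisted per thread; the node logic is stubbed, and a production system would swap in a durable checkpointer (for example a database-backed one) and real LLM calls.

```python
# Checkpoint-based recovery sketch in LangGraph: state is saved per thread_id,
# so an interrupted run can be resumed from the last completed node.
# The node logic and in-memory checkpointer are illustrative stand-ins.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    ticket: str
    summary: str

def summarize(state: State) -> dict:
    # A real node would call an LLM here; kept deterministic for the sketch.
    return {"summary": state["ticket"][:80]}

builder = StateGraph(State)
builder.add_node("summarize", summarize)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", END)

graph = builder.compile(checkpointer=MemorySaver())   # use a durable store in production
config = {"configurable": {"thread_id": "ticket-42"}}
print(graph.invoke({"ticket": "Customer reports intermittent 502 errors", "summary": ""},
                   config))
# Re-invoking with the same thread_id picks up the persisted state for that thread.
```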
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level developers who wish to learn how to use generative AI with LLMs for various tasks and domains.
By the end of this training, participants will be able to:
Explain what generative AI is and how it works.
Describe the transformer architecture that powers LLMs.
Use empirical scaling laws to optimize LLMs for different tasks and constraints.
Apply state-of-the-art tools and methods to train, fine-tune, and deploy LLMs.
Discuss the opportunities and risks of generative AI for society and business.
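As a pointer to the kind of material covered under the transformer architecture, here is a toy scaled dot-product attention function; the tensor shapes and random inputs are purely illustrative.

```python
# Toy sketch of scaled dot-product attention, the core operation of the
# transformer architecture behind LLMs. Shapes and inputs are illustrative.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # query-key similarity, scaled
    weights = F.softmax(scores, dim=-1)             # attention distribution over keys
    return weights @ v                              # weighted sum of value vectors

q = torch.randn(1, 4, 8)   # (batch, sequence length, head dimension)
k = torch.randn(1, 4, 8)
v = torch.randn(1, 4, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 4, 8])
```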
LLMs and autonomous agent frameworks like AutoGen and CrewAI are redefining how DevOps teams automate tasks such as change tracking, test generation, and alert triage by simulating human-like collaboration and decision-making.
This instructor-led, live training (online or onsite) is aimed at advanced-level engineers who wish to design and implement DevOps automation workflows powered by large language models (LLMs) and multi-agent systems.
By the end of this training, participants will be able to:
Integrate LLM-based agents into CI/CD workflows for smart automation.
Automate test generation, commit analysis, and change summaries using agents.
Coordinate multiple agents for triaging alerts, generating responses, and providing DevOps recommendations.
Build secure and maintainable agent-powered workflows using open-source frameworks.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Mistral is a high-performance family of large language models optimized for cost-effective production deployment at scale.
This instructor-led, live training (online or onsite) is aimed at advanced-level infrastructure engineers, cloud architects, and MLOps leads who wish to design, deploy, and optimize Mistral-based architectures for maximum throughput and minimum cost.
By the end of this training, participants will be able to:
Implement scalable deployment patterns for Mistral Medium 3.
Apply batching, quantization, and efficient serving strategies.
Optimize inference costs while maintaining performance.
Design production-ready serving topologies for enterprise workloads.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
Understand the core vulnerabilities of LLM-based systems.
Apply secure design principles to LLM app architecture.
Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
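To make the idea of input validation and output guardrails concrete, here is a deliberately simplified, library-free sketch; the injection patterns, PII check, and llm_call stub are placeholders, and a real deployment would rely on dedicated tooling such as Guardrails AI or LangChain validators rather than hand-rolled regexes.

```python
# Simplified illustration of guarding an LLM call with input validation,
# output filtering, and a human-in-the-loop fallback. Patterns and the
# llm_call stub are placeholders, not production-grade controls.
import re

INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"reveal the system prompt"]
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. SSN-like strings

def validate_input(user_text: str) -> str:
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_text, re.IGNORECASE):
            raise ValueError("Potential prompt injection detected")
    return user_text

def guard_output(model_text: str) -> str:
    if PII_PATTERN.search(model_text):
        # Route to human review instead of returning the raw output.
        return "[Response withheld pending human-in-the-loop review]"
    return model_text

def answer(user_text: str, llm_call) -> str:
    return guard_output(llm_call(validate_input(user_text)))

# Example with a stubbed model call:
print(answer("Summarize our refund policy.", lambda p: "Refunds within 30 days."))
```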
LangGraph is a graph-based orchestration framework that enables conditional, multi-step LLM and tool workflows, ideal for automating and personalizing content pipelines.
This instructor-led, live training (online or onsite) is aimed at intermediate-level marketers, content strategists, and automation developers who wish to implement dynamic, branching email campaigns and content generation pipelines using LangGraph.
By the end of this training, participants will be able to:
Design graph-structured content and email workflows with conditional logic.
Integrate LLMs, APIs, and data sources for automated personalization.
Manage state, memory, and context across multi-step campaigns.
Evaluate, monitor, and optimize workflow performance and delivery outcomes.
Format of the Course
Interactive lectures and group discussions.
Hands-on labs implementing email workflows and content pipelines.
Scenario-based exercises on personalization, segmentation, and branching logic.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Le Chat Enterprise is a private ChatOps solution that provides secure, customizable, and governed conversational AI capabilities for organizations, with support for RBAC, SSO, connectors, and enterprise app integrations.
This instructor-led, live training (online or onsite) is aimed at intermediate-level product managers, IT leads, solution engineers, and security/compliance teams who wish to deploy, configure, and govern Le Chat Enterprise in enterprise environments.
By the end of this training, participants will be able to:
Set up and configure Le Chat Enterprise for secure deployments.
Enable RBAC, SSO, and compliance-driven controls.
Integrate Le Chat with enterprise applications and data stores.
Design and implement governance and admin playbooks for ChatOps.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Devstral is an open-source framework designed for building and running coding agents that can interact with codebases, developer tools, and APIs to enhance engineering productivity.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level ML engineers, developer-tooling teams, and SREs who wish to design, implement, and optimize coding agents using Devstral.
By the end of this training, participants will be able to:
Set up and configure Devstral for coding agent development.
Design agentic workflows for codebase exploration and modification.
Integrate coding agents with developer tools and APIs.
Implement best practices for secure and efficient agent deployment.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level to advanced-level professionals who wish to customize pre-trained models for specific tasks and datasets.
By the end of this training, participants will be able to:
Understand the principles of fine-tuning and its applications.
Prepare datasets for fine-tuning pre-trained models.
Fine-tune large language models (LLMs) for NLP tasks.
Optimize model performance and address common challenges.
LangGraph is a framework for composing graph-structured LLM workflows that support branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at intermediate-level engineers and product teams who wish to combine LangGraph’s graph logic with LLM agent loops to build dynamic, context-aware applications such as customer support agents, decision trees, and information retrieval systems.
By the end of this training, participants will be able to:
Design graph-based workflows that coordinate LLM agents, tools, and memory.
Implement conditional routing, retries, and fallbacks for robust execution.
Integrate retrieval, APIs, and structured outputs into agent loops.
Evaluate, monitor, and harden agent behavior for reliability and safety.
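The sketch below shows the conditional-routing pattern in LangGraph: a router function inspects the state and decides whether to go through a retrieval node or answer directly; both nodes are stubbed and would call a retriever and an LLM in a real application.

```python
# Conditional routing sketch in LangGraph: a router function picks the next
# node based on the current state. Node logic is stubbed for illustration.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

def route(state: State) -> str:
    # Return the name of the next node to run.
    return "retrieve" if "policy" in state["question"].lower() else "respond"

def retrieve(state: State) -> dict:
    return {"answer": "Found in the policy documents: returns accepted within 30 days."}

def respond(state: State) -> dict:
    return {"answer": "Answering from general knowledge."}

builder = StateGraph(State)
builder.add_node("retrieve", retrieve)
builder.add_node("respond", respond)
builder.add_conditional_edges(START, route, {"retrieve": "retrieve", "respond": "respond"})
builder.add_edge("retrieve", END)
builder.add_edge("respond", END)

graph = builder.compile()
print(graph.invoke({"question": "What is the refund policy?", "answer": ""}))
```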
Format of the Course
Interactive lecture and facilitated discussion.
Guided labs and code walkthroughs in a sandbox environment.
Scenario-based design exercises and peer reviews.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
LLMs for Code Understanding, Refactoring, and Documentation is a technical course focused on applying large language models (LLMs) to improve code quality, reduce technical debt, and automate documentation tasks across software teams.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level software professionals who wish to use LLMs such as GPT to analyze, refactor, and document complex or legacy codebases more effectively.
By the end of this training, participants will be able to:
Use LLMs to explain code, dependencies, and logic in unfamiliar repositories.
Identify and refactor anti-patterns and improve code readability.
Automatically generate and maintain in-line comments, README files, and API documentation.
Integrate LLM-driven insights into existing CI/CD and review workflows.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at intermediate-level data scientists, AI developers, and AI enthusiasts who wish to use LLMs to perform various NLP tasks and create novel and diverse content for different purposes.
By the end of this training, participants will be able to:
Establish a development environment with LLMs and essential tools.
Expertly perform NLU and NLI tasks with LLMs.
Extract, infer, and utilize knowledge graphs effectively.
Generate and manage dialogues using LLMs for conversational applications.
Evaluate content quality and diversity generated by LLMs and generative AI.
Apply ethical principles, ensuring fairness and responsible use of LLMs.
LangGraph is a framework for building graph-structured LLM applications that support planning, branching, tool use, memory, and controllable execution.
This instructor-led, live training (online or onsite) is aimed at beginner-level developers, prompt engineers, and data practitioners who wish to design and build reliable, multi-step LLM workflows using LangGraph.
By the end of this training, participants will be able to:
Explain core LangGraph concepts (nodes, edges, state) and when to use them.
Build prompt chains that branch, call tools, and maintain memory.
Integrate retrieval and external APIs into graph workflows.
Test, debug, and evaluate LangGraph apps for reliability and safety.
Format of the Course
Interactive lecture and facilitated discussion.
Guided labs and code walkthroughs in a sandbox environment.
Scenario-based exercises on design, testing, and evaluation.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use Large Language Models for various natural language tasks.
By the end of this training, participants will be able to:
Set up a development environment that includes a popular LLM.
Create a basic LLM and fine-tune it on a custom dataset.
Use LLMs for different natural language tasks such as text summarization, question answering, text generation, and more.
Debug and evaluate LLMs using tools such as TensorBoard, PyTorch Lightning, and Hugging Face Datasets.
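As a starting point for the tasks listed above, the sketch below uses Hugging Face pipelines for summarization, question answering, and text generation; the default checkpoints that pipeline() downloads are placeholders, and any suitable model identifier can be passed explicitly.

```python
# Common NLP tasks with Hugging Face pipelines. The default checkpoints that
# pipeline() downloads are placeholders; pass model="..." to choose your own.
from transformers import pipeline

summarizer = pipeline("summarization")
qa = pipeline("question-answering")
generator = pipeline("text-generation")

text = ("Large language models are trained on vast text corpora and can be "
        "adapted to tasks such as summarization and question answering.")

print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
print(qa(question="What are large language models trained on?", context=text)["answer"])
print(generator("Large language models can", max_new_tokens=20)[0]["generated_text"])
```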
Mistral Medium 3 is a high-performance, multimodal large language model designed for production-grade deployment across enterprise environments.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level AI/ML engineers, platform architects, and MLOps teams who wish to deploy, optimize, and secure Mistral Medium 3 for enterprise use cases.
By the end of this training, participants will be able to:
Deploy Mistral Medium 3 using API and self-hosted options.
Optimize inference performance and costs.
Implement multimodal use cases with Mistral Medium 3.
Apply security and compliance best practices for enterprise environments.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level professionals who wish to install, configure, and use Ollama for running AI models on their local machines.
By the end of this training, participants will be able to:
Understand the fundamentals of Ollama and its capabilities.
Set up Ollama for running local AI models.
Deploy and interact with LLMs using Ollama.
Optimize performance and resource usage for AI workloads.
Explore use cases for local AI deployment in various industries.
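For a first feel of running a local model with Ollama, here is a minimal sketch against its local REST API; it assumes Ollama is installed, the server is listening on the default port 11434, and a model such as llama3 has already been pulled.

```python
# Minimal call to a locally running Ollama server over its REST API.
# Assumes `ollama pull llama3` has been run and the server is on port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3",
          "prompt": "Explain in one sentence what a large language model is.",
          "stream": False},
    timeout=120,
)
print(resp.json()["response"])   # the model's completion text
```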
Testimonials (1)
The prompt engineering part.
Michal - GE HealthCare
Course - Generative AI with Large Language Models (LLMs)