Online or onsite, instructor-led live GPU (Graphics Processing Unit) training courses demonstrate through interactive discussion and hands-on practice the fundamentals of GPUs and how to program them.
GPU training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive remote desktop. Onsite live GPU training can be carried out locally on customer premises in Varna or in NobleProg corporate training centers in Varna.
The "Central Point" complex offers quick access to main roads leading to the airport, the northern and southern resorts and the Varna - Sofia and Varna - Burgas highways.
Huawei Ascend comprises a series of AI processors engineered for high-efficiency inference and training tasks.
This instructor-led live training, available either online or at your location, targets intermediate-level AI engineers and data scientists seeking to create and optimize neural network models via Huawei’s Ascend platform and the CANN toolkit.
Upon completion of this training, participants will be capable of:
Establishing and configuring the CANN development environment.
Creating AI applications utilizing MindSpore and CloudMatrix workflows.
Enhancing performance on Ascend NPUs through custom operators and tiling techniques.
Deploying models into cloud or edge computing environments.
Course Format
Engaging lectures combined with interactive discussions.
Practical application of Huawei Ascend and the CANN toolkit within sample projects.
Supervised exercises centered on model construction, training, and deployment.
Customization Opportunities
For customized training tailored to your specific infrastructure or datasets, please reach out to us to arrange a session.
Huawei’s AI ecosystem — spanning from the low-level CANN SDK to the high-level MindSpore framework — delivers a cohesive environment for developing and deploying AI solutions, specifically optimized for Ascend hardware.
This instructor-led live training (available online or onsite) is designed for technical professionals ranging from beginner to intermediate skill levels who want to understand how CANN and MindSpore interact to facilitate AI lifecycle management and inform infrastructure choices.
Upon completion of this training, participants will be able to:
Comprehend the layered structure of Huawei’s AI compute architecture.
Recognize how CANN facilitates model optimization and hardware-level implementation.
Assess the MindSpore framework and its toolchain against industry standards.
Place Huawei's AI stack within the context of enterprise or cloud/on-premises environments.
Course Format
Interactive lectures and discussions.
Live system demonstrations and case-based walkthroughs.
Optional guided labs focusing on the model flow from MindSpore to CANN.
Course Customization Options
To arrange customized training for this course, please contact us.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenACC to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
Set up an OpenACC development environment.
Write and run a basic OpenACC program.
Annotate code with OpenACC directives and clauses.
The CANN SDK (Compute Architecture for Neural Networks) offers robust deployment and optimization tools designed for real-time AI applications in computer vision and natural language processing, particularly on Huawei Ascend hardware.
This instructor-led live training, available online or onsite, targets intermediate-level AI professionals looking to build, deploy, and optimize vision and language models using the CANN SDK for production scenarios.
Upon completion of this training, participants will be capable of:
Deploying and optimizing CV and NLP models via CANN and AscendCL.
Utilizing CANN tools to convert models and integrate them into active pipelines.
Enhancing inference performance for tasks such as detection, classification, and sentiment analysis.
Developing real-time CV/NLP pipelines suitable for edge or cloud deployment environments.
Course Format
Interactive lectures paired with live demonstrations.
Practical labs focused on model deployment and performance profiling.
Designing live pipelines using real-world CV and NLP use cases.
Customization Options
To arrange customized training for this course, please contact us.
This instructor-led live training in Varna (online or onsite) is tailored for beginner to intermediate-level developers who wish to understand the basics of GPU programming and the key frameworks and tools used for developing GPU applications.
By the end of this training, participants will be able to:
Understand the difference between CPU and GPU computing and the benefits and challenges of GPU programming.
Choose the right framework and tool for their GPU application.
Create a basic GPU program that performs vector addition using one or more of the frameworks and tools (a minimal sketch follows this list).
Use the respective APIs, languages, and libraries to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
Use the respective execution models, such as work-items, work-groups, threads, blocks, and grids, to control the parallelism.
Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
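To show the kind of program this course builds, below is a minimal sketch of GPU vector addition. It is written in Python with Numba's CUDA support purely for brevity (an assumption of this illustration; the course itself works in the C-level frameworks and tools listed above) and assumes an NVIDIA GPU with the numba package installed.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        # Each thread computes one element; cuda.grid(1) is its global index.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    # Numba copies the NumPy arrays to the device on launch and back afterwards.
    vector_add[blocks, threads_per_block](a, b, out)

    assert np.allclose(out, a + b)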
CANN TIK (Tensor Instruction Kernel) and Apache TVM provide powerful tools for the advanced optimization and customization of AI model operators tailored for Huawei Ascend hardware.
This instructor-led, live training session, available online or on-site, is designed for experienced system developers who aim to create, deploy, and refine custom operators for AI models utilizing CANN’s TIK programming model and TVM compiler integration.
Upon completing this training, participants will be capable of:
Writing and testing custom AI operators using the TIK DSL for Ascend processors.
Integrating custom operators into the CANN runtime and execution graph.
Leveraging TVM for operator scheduling, auto-tuning, and benchmarking.
Debugging and optimizing instruction-level performance for specific computation patterns.
Course Format
Interactive lectures and demonstrations.
Practical coding exercises involving operators within TIK and TVM pipelines.
Testing and tuning on Ascend hardware or in simulator environments.
Customization Options for the Course
For customized training requests for this course, please reach out to us to arrange details.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use different frameworks for GPU programming and compare their features, performance, and compatibility.
By the end of this training, participants will be able to:
Set up a development environment that includes OpenCL SDK, CUDA Toolkit, ROCm Platform, a device that supports OpenCL, CUDA, or ROCm, and Visual Studio Code.
Create a basic GPU program that performs vector addition using OpenCL, CUDA, and ROCm, and compare the syntax, structure, and execution of each framework.
Use the respective APIs to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads (a device-query sketch follows this list).
Use the respective languages to write kernels that execute on the device and manipulate data.
Use the respective built-in functions, variables, and libraries to perform common tasks and operations.
Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
Use the respective execution models to control the threads, blocks, and grids that define the parallelism.
Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
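For a concrete flavor of the device-query step mentioned above, the sketch below enumerates platforms and devices through pyopencl. This is only one convenient way to do it (CUDA exposes the same information via cudaGetDeviceProperties and HIP via hipGetDeviceProperties), and it assumes an OpenCL runtime plus the pyopencl package are installed.

    import pyopencl as cl

    # Walk every installed OpenCL platform (NVIDIA, AMD ROCm, Intel, ...) and
    # print the basic properties used when choosing launch parameters.
    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.version})")
        for device in platform.get_devices():
            print(f"  Device:          {device.name}")
            print(f"  Compute units:   {device.max_compute_units}")
            print(f"  Global memory:   {device.global_mem_size // (1024 ** 2)} MiB")
            print(f"  Max work-group:  {device.max_work_group_size}")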
CloudMatrix is Huawei’s unified AI development and deployment platform designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform with CANN and MindSpore integration.
Upon completing this training, participants will be equipped to:
Utilize CloudMatrix for model packaging, deployment, and serving.
Convert and optimize models for Ascend chipsets.
Configure pipelines for both real-time and batch inference tasks.
Monitor deployments and tune performance in production settings.
Format of the Course
Interactive lecture and discussion.
Hands-on use of CloudMatrix with real deployment scenarios.
Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
To request a customized training for this course based on your AI infrastructure or cloud environment, please contact us to arrange.
Huawei's Ascend CANN toolkit facilitates robust AI inference on edge devices like the Ascend 310. This suite offers vital tools for compiling, optimizing, and deploying models in environments where computational power and memory are limited.
This instructor-led, live training (available online or onsite) targets intermediate-level AI developers and integrators seeking to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completion of this training, participants will be capable of:
Preparing and converting AI models for the Ascend 310 using CANN tools.
Constructing lightweight inference pipelines with MindSpore Lite and AscendCL.
Enhancing model performance within constrained compute and memory settings.
Deploying and monitoring AI applications in real-world edge scenarios.
Course Format
Interactive lectures and demonstrations.
Practical lab exercises focusing on edge-specific models and scenarios.
Live deployment examples on either virtual or physical edge hardware.
Course Customization Options
For customized training options, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to install and use ROCm on Windows to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes ROCm Platform, an AMD GPU, and Visual Studio Code on Windows.
Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from the GPU memory.
Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use HIP language to write kernels that execute on the GPU and manipulate data.
Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner to intermediate developers who wish to use ROCm and HIP to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes ROCm Platform, an AMD GPU, and Visual Studio Code.
Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from the GPU memory.
Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use HIP language to write kernels that execute on the GPU and manipulate data.
Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
CANN (Compute Architecture for Neural Networks) is Huawei's AI computing toolkit designed to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led live training (available online or onsite) targets beginner-level AI developers seeking to understand the role of CANN within the model lifecycle, from training through deployment, and its integration with frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completion of this training, participants will be able to:
Comprehend the purpose and architectural design of the CANN toolkit.
Configure a development environment utilizing CANN and MindSpore.
Convert and deploy a basic AI model onto Ascend hardware.
Acquire foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
Interactive lectures and discussions.
Practical hands-on labs featuring simple model deployment.
Step-by-step guidance through the CANN toolchain and integration points.
Customization Options
To arrange customized training for this course, please contact us.
Ascend, Biren, and Cambricon stand as premier AI hardware platforms in China, providing distinct acceleration and profiling solutions tailored for large-scale AI workloads in production.
This instructor-led live training (available online or onsite) targets advanced AI infrastructure and performance engineers who aim to optimize model inference and training processes across various Chinese AI chip ecosystems.
Upon completion of this training, participants will be equipped to:
Evaluate models on Ascend, Biren, and Cambricon platforms through benchmarking.
Diagnose system bottlenecks and identify inefficiencies in memory and compute resources.
Implement optimizations at the graph, kernel, and operator levels.
Refine deployment pipelines to increase throughput and reduce latency.
Course Format
Interactive lectures and discussions.
Practical application of profiling and optimization tools specific to each platform.
Guided exercises designed around real-world tuning scenarios.
Customization Options
For customized training tailored to your specific performance environment or model requirements, please contact us to arrange.
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, empowering developers to refine and optimize the performance of neural networks deployed on Ascend AI processors.
This instructor-led training session, available online or onsite, targets advanced AI developers and system engineers eager to maximize inference performance utilizing CANN’s sophisticated toolkit. Key components include the Graph Engine, TIK, and custom operator development.
Upon completing this training, participants will be equipped to:
Comprehend CANN's runtime architecture and its performance lifecycle.
Leverage profiling tools and the Graph Engine for thorough performance analysis and optimization.
Develop and optimize custom operators utilizing TIK and TVM.
Address memory bottlenecks and enhance model throughput.
Course Format
Engaging lectures paired with interactive discussions.
Practical labs featuring real-time profiling and operator tuning.
Optimization exercises based on edge-case deployment scenarios.
Customization Options
To request a customized version of this course, please contact us to make arrangements.
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the local AI and HPC markets.
This instructor-led, live training session (available online or onsite) targets advanced-level GPU programmers and infrastructure specialists seeking to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be equipped to:
Evaluate the compatibility of existing CUDA workloads with Chinese chip alternatives.
Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
Compare performance metrics and identify optimization opportunities across different platforms.
Address practical challenges related to cross-architecture support and deployment.
Course Format
Interactive lectures and discussions.
Hands-on labs involving code translation and performance comparison.
Guided exercises focusing on multi-GPU adaptation strategies.
Customization Options
To request a customized training session tailored to your specific platform or CUDA project, please contact us to arrange it.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use CUDA to program NVIDIA GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes CUDA Toolkit, an NVIDIA GPU, and Visual Studio Code.
Create a basic CUDA program that performs vector addition on the GPU and retrieves the results from the GPU memory.
Use the CUDA API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use the CUDA C/C++ language to write kernels that execute on the GPU and manipulate data.
Use CUDA built-in functions, variables, and libraries to perform common tasks and operations.
Use CUDA memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses (see the shared-memory sketch after this list).
Use the CUDA execution model to control the threads, blocks, and grids that define the parallelism.
Debug and test CUDA programs using tools such as CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
Optimize CUDA programs using techniques such as coalescing, caching, prefetching, and profiling.
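The memory spaces and the thread/block/grid model above are taught in CUDA C/C++ during the course. Purely as an illustration of the same concepts, the sketch below uses Python with Numba (an assumption of this example, not the course toolchain) to stage data in shared memory and sum each block's elements with a tree reduction.

    import numpy as np
    from numba import cuda, float32

    THREADS = 256  # threads per block; also the shared-memory tile size

    @cuda.jit
    def block_sum(x, partial):
        # __shared__-equivalent: one tile per block, visible to all its threads.
        tile = cuda.shared.array(THREADS, float32)
        tid = cuda.threadIdx.x
        gid = cuda.blockIdx.x * cuda.blockDim.x + tid
        tile[tid] = x[gid] if gid < x.size else 0.0
        cuda.syncthreads()
        # Tree reduction within the block, halving the active threads each step.
        stride = THREADS // 2
        while stride > 0:
            if tid < stride:
                tile[tid] += tile[tid + stride]
            cuda.syncthreads()
            stride //= 2
        if tid == 0:
            partial[cuda.blockIdx.x] = tile[0]

    x = np.random.rand(1_000_000).astype(np.float32)
    blocks = (x.size + THREADS - 1) // THREADS
    partial = np.zeros(blocks, dtype=np.float32)
    block_sum[blocks, THREADS](x, partial)
    print(partial.sum(), x.sum())  # the two sums should agree closely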
CANN (Compute Architecture for Neural Networks) represents Huawei's AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led live training, available online or on-site, targets intermediate-level AI developers and engineers seeking to efficiently deploy trained AI models to Huawei Ascend hardware. The course focuses on utilizing the CANN toolkit alongside tools such as MindSpore, TensorFlow, or PyTorch.
Upon completion, participants will be able to:
Grasp the CANN architecture and its function within the AI deployment pipeline.
Convert and adapt models from popular frameworks into Ascend-compatible formats.
Utilize tools like ATC, OM model conversion, and MindSpore for both cloud and edge inference.
Troubleshoot deployment issues and optimize performance on Ascend hardware.
Course Format
Interactive lectures and demonstrations.
Hands-on lab exercises using CANN tools and Ascend simulators or devices.
Practical deployment scenarios based on real-world AI models.
Customization Options
For a customized training version of this course, please contact us to make arrangements.
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
Understand Biren GPU architecture and memory hierarchy.
Set up the development environment and use Biren’s programming model.
Translate and optimize CUDA-style code for Biren platforms.
Apply performance tuning and debugging techniques.
Format of the Course
Interactive lecture and discussion.
Hands-on use of Biren SDK in sample GPU workloads.
Guided exercises focused on porting and performance tuning.
Course Customization Options
To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLUs (Machine Learning Units) are specialized AI chips designed to optimize inference and training tasks in both edge computing and data center environments.
This instructor-led live training, available online or on-site, is designed for intermediate-level developers looking to build and deploy AI models leveraging the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completing this training, participants will be able to:
Configure and set up the development environments for BANGPy and Neuware.
Develop and optimize Python- and C++-based models for Cambricon MLUs.
Deploy models to edge devices and data centers running the Neuware runtime.
Integrate ML workflows with acceleration features specific to MLU hardware.
Course Format
Interactive lectures and discussions.
Practical, hands-on exercises using BANGPy and Neuware for development and deployment.
Guided labs focusing on optimization, integration, and testing.
Customization Options
For a customized training session tailored to your specific Cambricon device model or use case, please contact us to arrange.
This instructor-led, live training in Varna (online or onsite) is designed for beginner-level system administrators and IT professionals who want to learn how to install, configure, manage, and troubleshoot CUDA environments.
By the end of this training, participants will be able to:
Understand the architecture, components, and capabilities of CUDA.
This instructor-led, live training in Varna (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenCL to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes OpenCL SDK, a device that supports OpenCL, and Visual Studio Code.
Create a basic OpenCL program that performs vector addition on the device and retrieves the results from the device memory (a minimal sketch follows this list).
Use OpenCL API to query device information, create contexts, command queues, buffers, kernels, and events.
Use OpenCL C language to write kernels that execute on the device and manipulate data.
Use OpenCL built-in functions, extensions, and libraries to perform common tasks and operations.
Use OpenCL host and device memory models to optimize data transfers and memory accesses.
Use OpenCL execution model to control the work-items, work-groups, and ND-ranges.
Debug and test OpenCL programs using tools such as CodeXL, Intel VTune, and NVIDIA Nsight.
Optimize OpenCL programs using techniques such as vectorization, loop unrolling, local memory, and profiling.
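As a reference point for the vector-addition exercise above, here is a compact sketch that keeps the OpenCL C kernel, buffers, command queue, and copy-back in one listing. pyopencl is used purely for brevity (an assumption of this illustration); the course can equally work through the C API.

    import numpy as np
    import pyopencl as cl

    KERNEL = """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """

    a = np.random.rand(4096).astype(np.float32)
    b = np.random.rand(4096).astype(np.float32)
    out = np.empty_like(a)

    ctx = cl.create_some_context()          # pick a platform/device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    program = cl.Program(ctx, KERNEL).build()
    program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # global size = 4096
    cl.enqueue_copy(queue, out, out_buf)    # blocks until the copy finishes
    assert np.allclose(out, a + b)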
This instructor-led, live training in Varna (online or onsite) is designed for intermediate-level developers who want to utilize CUDA to construct Python applications that execute in parallel on NVIDIA GPUs.
Upon completion of this training, participants will be capable of:
Leveraging the Numba compiler to enhance the performance of Python applications running on NVIDIA GPUs.
Creating, compiling, and launching custom CUDA kernels (see the sketch after this list).
Handling GPU memory resources.
Transforming a CPU-based application into one accelerated by the GPU.
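A minimal sketch of this workflow is shown below: a custom kernel compiled with @cuda.jit plus explicit device-memory management. The SAXPY kernel and the array sizes are illustrative assumptions only.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(alpha, x, y, out):
        # One thread per element; cuda.grid(1) is the global thread index.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = alpha * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    # Explicit memory management: copy inputs once, allocate the output on the GPU.
    d_x = cuda.to_device(x)
    d_y = cuda.to_device(y)
    d_out = cuda.device_array_like(d_x)

    threads = 128
    blocks = (n + threads - 1) // threads
    saxpy[blocks, threads](2.0, d_x, d_y, d_out)

    result = d_out.copy_to_host()
    assert np.allclose(result, 2.0 * x + y)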
This instructor-led, live training course in Varna covers how to program GPUs for parallel computing, how to use various platforms, how to work with the CUDA platform and its features, and how to perform various optimization techniques using CUDA. Some of the applications include deep learning, analytics, image processing and engineering applications.
Testimonials (2)
Very interactive with various examples, with a good progression in complexity between the start and the end of the training.
Jenny - Andheo
Course - GPU Programming with CUDA and Python
Trainer's energy and humor.
Tadeusz Kaluba - Nokia Solutions and Networks Sp. z o.o.