Artificial Intelligence (AI) for Mechatronics Training Course
Mechatronics (also known as mechatronic engineering) integrates mechanical engineering, electronics, and computer science.
This instructor-led live training, available either online or onsite, is designed for engineers who want to explore how artificial intelligence can be applied to mechatronic systems.
Upon completion of this training, participants will be able to:
- Gain a comprehensive overview of artificial intelligence, machine learning, and computational intelligence.
- Grasp the concepts of neural networks and various learning methodologies.
- Select the most appropriate artificial intelligence approaches for solving real-world problems.
- Implement AI applications within the field of mechatronic engineering.
Format of the Course
- Interactive lectures and discussions.
- Extensive exercises and practice sessions.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to make arrangements.
Course Outline
Introduction
Overview of Artificial Intelligence (AI)
- Machine learning
- Computational intelligence
Understanding the Concepts of Neural Networks
- Generative networks
- Deep neural networks
- Convolutional neural networks
Understanding Various Learning Methods
- Supervised learning
- Unsupervised learning
- Reinforcement learning
- Semi-supervised learning
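To make the reinforcement learning paradigm above concrete, here is a minimal sketch of tabular Q-learning in a toy corridor world. All details (the five-state corridor, the hyperparameters) are illustrative assumptions, not course material:

```python
import random

# Toy corridor world: states 0..4, goal at state 4.
# Actions: 0 = step left, 1 = step right (clipped to the corridor).
random.seed(0)
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(state, action):
    """Apply an action; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(q_row):
    """Pick the best action, breaking ties randomly."""
    if q_row[0] == q_row[1]:
        return random.randint(0, 1)
    return 0 if q_row[0] > q_row[1] else 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(300):                       # training episodes
    s = 0
    for _ in range(100):                   # step cap per episode
        a = random.randint(0, 1) if random.random() < EPSILON else greedy(Q[s])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward reward + discounted best next value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# The learned greedy policy should point right in every non-goal state.
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(N_STATES - 1)]
print(policy)
```

Supervised and unsupervised learning differ only in what the update targets come from; here the target is the environment's reward signal rather than labeled data.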
Other Computational Intelligence Algorithms
- Fuzzy systems
- Evolutionary algorithms
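As a taste of the evolutionary algorithms listed above, the following is a minimal (1+1) evolution strategy minimizing a toy objective. The objective function, step size, and decay schedule are illustrative assumptions chosen for the sketch:

```python
import random

def sphere(x):
    """Objective to minimize: sum of squares, optimum at the origin."""
    return sum(v * v for v in x)

def one_plus_one_es(dim=3, iters=2000, seed=0):
    """Minimal (1+1) evolution strategy: mutate the parent with Gaussian
    noise and keep the child whenever it is at least as good."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sphere(parent)
    sigma = 0.5                      # mutation step size
    for _ in range(iters):
        child = [v + rng.gauss(0, sigma) for v in parent]
        f = sphere(child)
        if f <= best:                # greedy (1+1) selection
            parent, best = child, f
        sigma *= 0.999               # gradually shrink the step size
    return parent, best

solution, fitness = one_plus_one_es()
print(fitness)
```

Population-based variants (genetic algorithms, CMA-ES) follow the same mutate-evaluate-select loop with more elaborate selection and adaptation rules.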
Exploring Artificial Intelligence Approaches to Optimization
- Choosing AI Approaches Effectively
Learning about Stochastic Dynamic Programming
- Relationship with AI
Implementing Mechatronic Applications with AI
- Medicine
- Rescue
- Defense
- Industry-agnostic trends
Case Study: The Intelligent Robotic Car
Programming the Major Systems of a Robot
- Planning the Project
Implementing AI Capabilities
- Searching and Motion Control
- Localization and Mapping
- Tracking and Controlling
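Localization and tracking in the robotic car case study typically rest on recursive filtering. As an illustration (the readings and noise variances below are made up), a scalar Kalman filter can smooth noisy range measurements of a near-stationary target:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter for a (nearly) stationary target.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0      # initialize from the first reading
    estimates = [x]
    for z in measurements[1:]:
        p += q                       # predict: uncertainty grows over time
        k = p / (p + r)              # Kalman gain: weight on the new reading
        x += k * (z - x)             # correct the estimate toward the reading
        p *= 1 - k                   # the corrected estimate is more certain
        estimates.append(x)
    return estimates

# Noisy range readings of a target sitting near 5.0 (synthetic data)
readings = [4.8, 5.3, 4.9, 5.1, 5.2, 4.7, 5.0, 5.1]
estimates = kalman_1d(readings)
print(round(estimates[-1], 2))
```

The full course extends this idea to multi-dimensional state (position and velocity) and to particle filters for non-Gaussian cases.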
Summary and Next Steps
Requirements
- Basic understanding of computer science and engineering
Audience
- Engineers
Open Training Courses require 5+ participants.
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core. Why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
The intersection of Artificial Intelligence (AI) and Robotics leverages machine learning, control systems, and sensor fusion to build intelligent machines capable of autonomous perception, reasoning, and action. By utilizing contemporary tools such as ROS 2, TensorFlow, and OpenCV, engineers can design robots that intelligently navigate, plan routes, and interact with physical environments.
This instructor-led live training, available either online or on-site, targets intermediate-level engineers looking to develop, train, and deploy AI-powered robotic systems using modern open-source technologies and frameworks.
Upon completion of this training, participants will be able to:
- Utilize Python and ROS 2 to construct and simulate robotic behaviors.
- Deploy Kalman and Particle Filters for precise localization and tracking.
- Apply computer vision techniques via OpenCV for perception and object detection.
- Employ TensorFlow for motion prediction and learning-based control mechanisms.
- Integrate SLAM (Simultaneous Localization and Mapping) to enable autonomous navigation.
- Create reinforcement learning models to enhance robotic decision-making capabilities.
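To give a flavor of how a particle filter localizes a robot, here is a minimal one-dimensional sketch. The map size, sensor noise, and particle count are illustrative assumptions rather than the course's actual lab code:

```python
import math
import random

random.seed(0)
TRUE_POS, SENSOR_STD, N = 7.0, 0.5, 1000

# Start with no idea where the robot is: particles spread over the map.
particles = [random.uniform(0.0, 10.0) for _ in range(N)]

for _ in range(30):                          # 30 sensor updates
    z = random.gauss(TRUE_POS, SENSOR_STD)   # simulated noisy range reading
    # Weight each particle by how well it explains the measurement
    # (Gaussian likelihood).
    weights = [math.exp(-((p - z) ** 2) / (2 * SENSOR_STD ** 2))
               for p in particles]
    # Resample in proportion to weight, then jitter (motion/process noise).
    particles = random.choices(particles, weights=weights, k=N)
    particles = [p + random.gauss(0, 0.05) for p in particles]

estimate = sum(particles) / N
print(round(estimate, 1))
```

Unlike the Kalman filter, this approach makes no Gaussian assumption about the posterior, which is why it handles multi-modal localization (e.g. symmetric corridors) gracefully.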
Course Format
- Interactive lectures and discussions.
- Practical implementation exercises using ROS 2 and Python.
- Hands-on practice with both simulated and real-world robotic environments.
Customization Options
For information on arranging a customized training session for this course, please contact us directly.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led, live training held in Bulgaria (online or onsite), participants will explore the technologies, frameworks, and techniques necessary for programming various robots for use in nuclear technology and environmental systems.
The course spans six weeks, meeting five days a week. Each day involves four hours of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete real-world projects applicable to their work to practice their acquired knowledge.
The course targets hardware simulated in 3D via simulation software. Programming the robots will utilize the ROS (Robot Operating System) open-source framework, C++, and Python.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
AI and Robotics for Nuclear
80 Hours
This instructor-led live training, delivered in Bulgaria (online or onsite), teaches participants the technologies, frameworks, and techniques for programming various robotic systems used in nuclear technology and environmental systems.
The course spans four weeks, held five days a week. Each four-hour session includes lectures, discussions, and hands-on robot development in a live lab. Participants will complete real-world projects applicable to their work to practice their acquired knowledge.
The target hardware for this course is simulated in 3D using simulation software. The code is then loaded onto physical hardware (such as Arduino) for final deployment testing. Programming is performed using the ROS (Robot Operating System) open-source framework, C++, and Python.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
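As an illustration of the PID control objective above, the following sketch regulates a simulated first-order velocity plant. The gains, time constant, and plant model are toy assumptions, not the course's lab setup:

```python
class PID:
    """Textbook PID controller with a simple accumulated integral term."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Regulate a first-order velocity plant (time constant 0.5 s) to 1.0 m/s.
pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.1)
velocity, setpoint, tau = 0.0, 1.0, 0.5
for _ in range(200):                                 # 20 s of simulation
    effort = pid.update(setpoint, velocity)
    velocity += (effort - velocity) * pid.dt / tau   # plant dynamics
print(round(velocity, 2))
```

The integral term is what drives the steady-state error to zero; on real hardware it would normally be clamped (anti-windup) to survive actuator saturation.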
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localization.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service unites the strengths of the Microsoft Bot Framework and Azure Functions, offering a robust platform for rapidly constructing intelligent chatbots.
During this instructor-led live training, attendees will learn how to effectively create smart bots using Microsoft Azure.
Upon completing the training, participants will be capable of:
- Grasping the fundamental concepts behind intelligent bots.
- Developing intelligent bots through cloud-based applications.
- Acquiring practical expertise in the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Implementing established bot design patterns in real-world scenarios.
- Creating and deploying their first intelligent bot using Microsoft Azure.
Target Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals with an interest in bot development.
Course Format
The training blends lectures and discussions with exercises, placing a strong emphasis on hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV serves as an open-source library for computer vision, facilitating real-time image processing, while deep learning frameworks like TensorFlow supply the necessary tools for intelligent perception and decision-making within robotic systems.
This instructor-led, live training (available online or onsite) targets intermediate-level robotics engineers, computer vision specialists, and machine learning engineers aiming to apply computer vision and deep learning methodologies to enhance robotic perception and autonomy.
Upon completion of this training, participants will be equipped to:
- Build computer vision pipelines utilizing OpenCV.
- Integrate deep learning models for object detection and recognition tasks.
- Leverage vision-based data to control and navigate robots.
- Merge classical vision algorithms with deep neural networks.
- Deploy computer vision solutions on embedded and robotic platforms.
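The course builds its pipelines with OpenCV and TensorFlow. As a dependency-free illustration of the convolution step at the heart of such a pipeline, the sketch below slides a horizontal Sobel kernel over a tiny synthetic image to highlight a vertical edge (OpenCV's `cv2.filter2D` performs the same cross-correlation, vectorized):

```python
# Horizontal Sobel kernel: responds to left-to-right intensity changes.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation (no padding), as cv2.filter2D computes."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(kernel[u][v] * image[i + u][j + v]
                      for u in range(kh) for v in range(kw))
            row.append(acc)
        out.append(row)
    return out

# 5x5 image: dark left half (0), bright right half (9) -> one vertical edge
img = [[0, 0, 9, 9, 9] for _ in range(5)]
edges = convolve2d(img, SOBEL_X)
print(edges[0])
```

A deep convolutional network is, at its lowest layers, a stack of such filters whose kernel values are learned from data instead of hand-chosen.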
Format of the Course
- Interactive lectures and discussions.
- Practical exercises using OpenCV and TensorFlow.
- Live-lab implementation on either simulated or physical robotic systems.
Course Customization Options
- To request a customized training session for this course, please reach out to us to make arrangements.
Developing a Bot
14 Hours
A bot, or chatbot, functions as a virtual assistant designed to automate user interactions across various messaging platforms. This allows tasks to be completed more quickly without requiring direct communication with a human agent.
During this instructor-led live training, participants will learn how to begin developing a bot by stepping through the creation of sample chatbots using specific development tools and frameworks.
By the conclusion of this training, participants will be able to:
- Comprehend the various uses and applications of bots
- Understand the complete bot development lifecycle
- Explore the different tools and platforms utilized in building bots
- Construct a sample chatbot for Facebook Messenger
- Construct a sample chatbot using the Microsoft Bot Framework
Target Audience
- Developers interested in building their own bot
Course Format
- A combination of lectures, discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI allows artificial intelligence models to operate directly on embedded or resource-limited devices, which reduces latency and power usage while enhancing autonomy and privacy in robotic systems.
This instructor-led, live training (available online or onsite) targets intermediate-level embedded developers and robotics engineers aiming to implement machine learning inference and optimization techniques directly on robotic hardware using TinyML and edge AI frameworks.
Upon completing this training, participants will be able to:
- Grasp the fundamentals of TinyML and edge AI in robotics.
- Convert and deploy AI models for on-device inference.
- Optimize models for speed, size, and energy efficiency.
- Integrate edge AI systems into robotic control architectures.
- Evaluate performance and accuracy in real-world scenarios.
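Model optimization for edge deployment often starts with post-training quantization: storing weights as 8-bit integers (4x smaller than float32) and dequantizing on the fly. The following is a minimal sketch of affine scale/zero-point quantization; the weight values are made up, and production toolchains such as TensorFlow Lite automate this:

```python
def quantize(weights):
    """Map floats to the 0..255 range with an affine scale/zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # guard against a flat range
    zero_point = round(-lo / scale)           # integer that represents 0.0
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(v - zero_point) * scale for v in q]

weights = [-0.8, -0.1, 0.0, 0.3, 0.9]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err < scale)  # reconstruction error stays within one quantization step
```

The accuracy/size trade-off is exactly the per-tensor rounding error visible here; finer-grained (per-channel) scales reduce it further at a small metadata cost.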
Course Format
- Interactive lectures and discussions.
- Hands-on practice with TinyML and edge AI toolchains.
- Practical exercises on embedded and robotic hardware platforms.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led live training in Bulgaria (online or onsite) targets intermediate-level participants interested in exploring the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
Upon completing this training, participants will be capable of:
- Grasping the principles of Human-Centric Physical AI and its practical applications.
- Exploring how collaborative robots contribute to enhanced workplace productivity.
- Recognizing and resolving challenges related to human-machine interactions.
- Developing workflows that maximize collaboration between humans and AI-driven systems.
- Fostering a culture of innovation and adaptability within AI-integrated workplaces.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical course aimed at introducing participants to the design and development of intuitive interfaces for human–robot communication. The training integrates theory, design principles, and programming practice to create natural and responsive interaction systems utilizing speech, gesture, and shared control methods. Participants will learn to integrate perception modules, develop multimodal input systems, and design robots that collaborate safely with humans.
This instructor-led, live training (available online or onsite) targets beginner to intermediate-level participants seeking to design and implement human–robot interaction systems that improve usability, safety, and user experience.
Upon completion of this training, participants will be able to:
- Grasp the foundations and design principles of human–robot interaction.
- Develop voice-based control and response mechanisms for robots.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems for safe and shared autonomy.
- Evaluate HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments in simulation or real robotic environments.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: Integrating ROS with PLCs and Digital Twins is a practical, hands-on course designed to bridge the gap between industrial automation and contemporary robotics frameworks. Participants will learn how to seamlessly integrate ROS-based robotic systems with PLCs to achieve synchronized operations. The course also explores digital twin environments, enabling learners to simulate, monitor, and optimize production processes. A strong emphasis is placed on interoperability, real-time control, and predictive analysis utilizing digital replicas of physical systems.
This instructor-led live training (available online or onsite) targets intermediate-level professionals seeking to develop practical skills in connecting ROS-controlled robots with PLC environments and implementing digital twins to enhance automation and manufacturing efficiency.
Upon completion of this training, participants will be able to:
- Grasp the communication protocols used between ROS and PLC systems.
- Implement real-time data exchange mechanisms between robots and industrial controllers.
- Create digital twins for monitoring, testing, and simulating processes.
- Integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Course Format
- Interactive lectures and architectural walkthroughs.
- Hands-on exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Customization Options
- For a tailored training experience, please contact us to arrange your customized course.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training program designed to delve into the architecture, coordination, and management of robotic teams, drawing inspiration from biological swarm dynamics. Participants will acquire skills in modeling interactions, executing distributed decision-making processes, and optimizing collaborative efforts across various agents. This course integrates theoretical foundations with practical simulation exercises to equip learners with the expertise needed for applications in logistics, defense, search and rescue operations, and autonomous exploration.
Delivered by an instructor, this live training is available both online and on-site, targeting advanced professionals who aim to design, simulate, and deploy multi-robot and swarm-based systems utilizing open-source frameworks and algorithms.
Upon completion of this training, participants will be capable of:
- Grasping the core principles and dynamics governing swarm intelligence and cooperative robotics.
- Developing communication and coordination strategies tailored for multi-robot environments.
- Executing distributed decision-making processes and consensus algorithms.
- Simulating collective behaviors including formation control, flocking, and coverage tasks.
- Applying swarm-based methodologies to real-world challenges and optimization problems.
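Distributed consensus, one of the coordination primitives named above, can be sketched in a few lines. In this toy example (the ring topology and values are illustrative assumptions), each robot repeatedly averages its value with its two neighbors', and every robot converges to the global mean without any central coordinator:

```python
# Distributed average consensus on a ring of 6 robots: each robot only
# talks to its two neighbors, yet all values converge to the global mean.
values = [0.0, 10.0, 2.0, 8.0, 4.0, 6.0]   # e.g. local sensor readings
target = sum(values) / len(values)          # the consensus value (5.0)

for _ in range(200):                        # consensus iterations
    n = len(values)
    values = [(values[i - 1] + values[i] + values[(i + 1) % n]) / 3.0
              for i in range(n)]

print([round(v, 3) for v in values])
```

Because each robot's update weights sum to one and the weighting is symmetric, the global average is preserved at every step, which is why the swarm agrees on exactly the mean.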
Format of the Course
- Advanced lectures featuring deep dives into algorithms.
- Practical coding and simulation exercises using ROS 2 and Gazebo.
- A collaborative project focused on applying swarm intelligence principles.
Course Customization Options
- To arrange a customized training session for this course, please contact us.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in Bulgaria (online or onsite) is designed for advanced robotics engineers and AI researchers aiming to utilize Multimodal AI. The objective is to integrate various sensory inputs to develop highly autonomous and efficient robots capable of seeing, hearing, and touching.
By the end of this training, participants will be able to:
- Implement multimodal sensing in robotic systems.
- Develop AI algorithms for sensor fusion and decision-making.
- Create robots that can perform complex tasks in dynamic environments.
- Address challenges in real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A Smart Robot is an Artificial Intelligence (AI) system capable of learning from its environment and experiences to expand its capabilities based on that acquired knowledge. These robots can collaborate with humans, working alongside them and learning from their behavior. They are equipped not only for manual labor but also for cognitive tasks. In addition to physical robots, Smart Robots can be entirely software-based, residing in a computer as an application without moving parts or physical interaction with the real world.
In this instructor-led live training, participants will explore the various technologies, frameworks, and techniques required to program different types of mechanical Smart Robots, then apply this knowledge to complete their own Smart Robot projects.
The course is divided into 4 sections, each comprising three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section concludes with a practical hands-on project, allowing participants to practice and demonstrate their newly acquired knowledge.
The target hardware for this course will be simulated in 3D using simulation software. The ROS (Robot Operating System) open-source framework, along with C++ and Python, will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies
- Understand and manage the interaction between software and hardware in a robotic system
- Understand and implement the software components that underpin Smart Robots
- Build and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans through voice
- Extend a Smart Robot's ability to perform complex tasks through Deep Learning
- Test and troubleshoot a Smart Robot in realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- Part lecture, part discussion, exercises, and heavy hands-on practice
Note
- To customize any part of this course (programming language, robot model, etc.), please contact us to make arrangements.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves incorporating artificial intelligence into robotic systems to enhance perception, decision-making, and autonomous control.
This instructor-led, live training (online or onsite) is aimed at advanced-level robotics engineers, systems integrators, and automation leads who wish to implement AI-driven perception, planning, and control in smart manufacturing environments.
By the end of this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.