Hadoop Training Courses

Online or onsite, instructor-led live Apache Hadoop training courses demonstrate through interactive hands-on practice the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. Hadoop training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live training can be carried out locally on customer premises in Bulgaria or in NobleProg corporate training centers in Bulgaria. NobleProg -- Your Local Training Provider

Hadoop Course Outlines

Course Name
Duration
Overview
21 hours
Python is a scalable, flexible, and widely used programming language for computer science and machine learning. Spark is a data processing engine used for querying, analyzing, and transforming big data, while Hadoop is a software library framework for large-scale data storage and processing. This instructor-led, live training is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets. By the end of this training, participants will be able to:
  • Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
  • Understand the features, core components, and architecture of Spark and Hadoop.
  • Learn how to integrate Spark, Hadoop, and Python for big data processing.
  • Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
  • Build collaborative filtering systems similar to those of Netflix, YouTube, Amazon, Spotify, and Google.
  • Use Apache Mahout to scale machine learning algorithms.
Format of the Course
  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.
Course Customization Options
  • To request a customized training for this course, please contact us to arrange.
7 hours
This course covers how to use the Hive SQL language (a.k.a. Hive HQL, SQL on Hive, HiveQL) for people who extract data from Hive.
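HiveQL's query syntax largely mirrors standard SQL, so the SELECT / GROUP BY patterns a data-extraction course covers transfer directly. A minimal sketch, run here against Python's stdlib sqlite3 purely for illustration (the statement shown is also valid HiveQL against a Hive table of the same shape; Hive-specific features such as partitions and SerDes are not shown, and the table and columns are invented):

```python
import sqlite3

# Toy stand-in table; in Hive this would be a table over files in HDFS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user_id TEXT, page TEXT)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?)",
    [("u1", "/home"), ("u2", "/home"), ("u1", "/docs")],
)

# A GROUP BY aggregation of the kind written daily in HiveQL:
# count views per page, most-viewed first.
query = """
    SELECT page, COUNT(*) AS views
    FROM page_views
    GROUP BY page
    ORDER BY views DESC
"""
for page, views in conn.execute(query):
    print(page, views)
```

The same query text could be submitted to Hive via `beeline` or a JDBC/ODBC client; only the execution engine differs.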
14 hours
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion. In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources. By the end of this training, participants will be able to:
  • Create, curate, and interactively explore an enterprise data lake
  • Access business intelligence data warehouses, transactional databases and other analytic stores
  • Use a spreadsheet user-interface to design end-to-end data processing pipelines
  • Access pre-built functions to explore complex data relationships
  • Use drag-and-drop wizards to visualize data and create dashboards
  • Use tables, charts, graphs, and maps to analyze query results
Audience
  • Data analysts
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
7 hours
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu, and Alibaba. In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and to efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio. By the end of this training, participants will be able to:
  • Develop an application with Alluxio
  • Connect big data systems and applications while preserving one namespace
  • Efficiently extract value from big data in any storage format
  • Improve workload performance
  • Deploy and manage Alluxio standalone or clustered
Audience
  • Data scientists
  • Developers
  • System administrators
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
35 hours
Audience: The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment. Goal: Deep knowledge of Hadoop cluster administration.
21 hours

This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis. The major focus of the course is data manipulation and transformation. Among the tools in the Hadoop ecosystem, this course includes the use of Pig and Hive, both of which are heavily used for data transformation and manipulation. This training also addresses performance metrics and performance optimisation. The course is entirely hands-on and is punctuated by presentations of the theoretical aspects.
21 hours
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights. The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment. In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises. By the end of this training, participants will be able to:
  • Install and configure big data analytics tools such as Hadoop MapReduce and Spark
  • Understand the characteristics of medical data
  • Apply big data techniques to deal with medical data
  • Study big data systems and algorithms in the context of health applications
Audience
  • Developers
  • Data Scientists
Format of the Course
  • Part lecture, part discussion, exercises and heavy hands-on practice.
Note
  • To request a customized training for this course, please contact us to arrange.
21 hours
The course is dedicated to IT specialists that are looking for a solution to store and process large data sets in a distributed system environment. Course goal: Gaining knowledge of Hadoop cluster administration.
21 hours
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three (optionally, four) days course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data load, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes off with discussion of securing cluster with Kerberos. “…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
  • Hadoop administrators
Format
  • Lectures and hands-on labs, approximate balance: 60% lectures, 40% labs
21 hours
Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capability, and it is making inroads into the traditional BI analytics world. This course will introduce an analyst to the core components of the Hadoop ecosystem and its analytics capabilities. Audience: Business Analysts. Duration: three days. Format: Lectures and hands-on labs.
28 hours
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce a developer to the various components (HDFS, MapReduce, Pig, Hive and HBase) of the Hadoop ecosystem.
21 hours
Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase.  These advanced programming techniques will be beneficial to experienced Hadoop developers. Audience: developers Duration: three days Format: lectures (50%) and hands-on labs (50%).  
21 hours
Hadoop is the most popular Big Data processing framework.
14 hours
As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, Project Managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities. This course introduces Project Managers to the most popular Big Data processing framework: Hadoop. In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. By learning these foundations, participants will improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve. Audience
  • Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
  • Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
28 hours
Audience: This course is intended to demystify Big Data / Hadoop technology and to show that it is not difficult to understand.
28 hours
Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability. In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases. By the end of this training, participants will be able to:
  • Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
  • Use Python with the Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
  • Use Snakebite to programmatically access HDFS within Python
  • Use mrjob to write MapReduce jobs in Python
  • Write Spark programs with Python
  • Extend the functionality of Pig using Python UDFs
  • Manage MapReduce jobs and Pig scripts using Luigi
Audience
  • Developers
  • IT professionals
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
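The MapReduce pattern that tools like mrjob wrap can be shown in plain Python. Below is a toy, single-process sketch of the mapper → shuffle → reducer flow for a word count, the canonical example (this is not the mrjob API itself and no Hadoop is involved; it only illustrates the programming model):

```python
from collections import defaultdict

def mapper(line):
    """Emit (word, 1) for every word in a line, as a Hadoop mapper would."""
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    """Sum the counts collected for one key, as a Hadoop reducer would."""
    return word, sum(counts)

def run_job(lines):
    # Shuffle phase: group all mapper outputs by key.
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    # Reduce phase: one reducer call per key.
    return dict(reducer(k, v) for k, v in sorted(groups.items()))

print(run_job(["big data big", "data pipelines"]))
# → {'big': 2, 'data': 2, 'pipelines': 1}
```

With mrjob, the same mapper/reducer pair becomes methods on an MRJob subclass, which can then run unchanged either locally or on a Hadoop cluster.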
35 hours
Apache Hadoop is a popular data processing framework for processing large data sets across many computers. This instructor-led, live training (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy, and manage Hadoop clusters within their organization. By the end of this training, participants will be able to:
  • Install and configure Apache Hadoop.
  • Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
  • Use the Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
  • Set up HDFS to operate as the storage engine for Spark deployments.
  • Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
  • Carry out administrative tasks such as provisioning, managing, monitoring, and securing an Apache Hadoop cluster.
Format of the Course
  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.
Course Customization Options
  • To request a customized training for this course, please contact us to arrange.
21 hours
This course introduces HBase – a NoSQL store on top of Hadoop.  The course is intended for developers who will be using HBase to develop applications,  and administrators who will manage HBase clusters. We will walk a developer through HBase architecture and data modelling and application development on HBase. It will also discuss using MapReduce with HBase, and some administration topics, related to performance optimization. The course  is very  hands-on with lots of lab exercises.
Duration: 3 days. Audience: Developers & Administrators
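The data-modelling part of the course centres on HBase's storage model: a table maps a row key to column families, each holding qualifier→value cells versioned by timestamp. A toy in-memory sketch of that layout (illustration only, not the HBase client API; the table, family, and qualifier names are invented):

```python
from collections import defaultdict

# table[row_key][column_family][qualifier] = list of (timestamp, value),
# newest first -- mirroring how HBase keeps multiple versions of a cell.
table = defaultdict(lambda: defaultdict(dict))

def put(row, family, qualifier, value, ts):
    versions = table[row][family].setdefault(qualifier, [])
    versions.append((ts, value))
    versions.sort(reverse=True)  # newest version first, as HBase returns them

def get(row, family, qualifier):
    """Return the latest version of one cell, like a default HBase Get."""
    versions = table[row][family].get(qualifier, [])
    return versions[0][1] if versions else None

put("user#42", "info", "name", "Ada", ts=1)
put("user#42", "info", "name", "Ada L.", ts=2)
print(get("user#42", "info", "name"))  # → Ada L.
```

Designing good row keys (here the hypothetical "user#42") is the central modelling exercise, since HBase stores and scans rows in key order.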
14 hours
Audience
  • Developers
Format of the Course
  • Lectures, hands-on practice, small tests along the way to gauge understanding
21 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time. In this instructor-led, live training (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment. By the end of this training, participants will be able to:
  • Install and configure Apache NiFi.
  • Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
  • Automate dataflows.
  • Enable streaming analytics.
  • Apply various approaches for data ingestion.
  • Transform Big Data into business insights.
Format of the Course
  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.
Course Customization Options
  • To request a customized training for this course, please contact us to arrange.
7 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time. In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi. By the end of this training, participants will be able to:
  • Understand NiFi's architecture and dataflow concepts.
  • Develop extensions using NiFi and third-party APIs.
  • Custom-develop their own Apache NiFi processors.
  • Ingest and process real-time data from disparate and uncommon file formats and data sources.
Format of the Course
  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.
Course Customization Options
  • To request a customized training for this course, please contact us to arrange.
14 hours
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing.  It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management. This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution. By the end of this training, participants will be able to:
  • Use Samza to simplify the code needed to produce and consume messages.
  • Decouple the handling of messages from an application.
  • Use Samza to implement near-realtime asynchronous computation.
  • Use stream processing to provide a higher level of abstraction over messaging systems.
Audience
  • Developers
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
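The decoupling idea in the bullets above can be seen in miniature with a plain in-memory queue: producers append messages and an independent consumer processes them later, never calling the producer directly. This is a single-process stand-in for what Samza does on top of Kafka topics, not Samza code:

```python
from collections import deque

# In-memory stand-in for a Kafka topic that a Samza job would consume from.
topic = deque()

def produce(message):
    # The producer knows nothing about who consumes, or when.
    topic.append(message)

def consume(process):
    """Drain the topic, applying a Samza-style per-message handler."""
    results = []
    while topic:
        results.append(process(topic.popleft()))
    return results

produce("click:/home")
produce("click:/docs")
print(consume(lambda m: m.upper()))  # → ['CLICK:/HOME', 'CLICK:/DOCS']
```

In real Samza the "handler" is a StreamTask processing one message at a time, and Kafka's durable, partitioned log replaces the in-memory deque.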
14 hours
Sqoop is an open source software tool for transferring data between Hadoop and relational databases or mainframes. It can be used to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or from a mainframe, into the Hadoop Distributed File System (HDFS). Thereafter, the data can be transformed in Hadoop MapReduce, and then re-exported back into an RDBMS. In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database to Hadoop storage such as HDFS or Hive and vice versa. By the end of this training, participants will be able to:
  • Install and configure Sqoop
  • Import data from MySQL to HDFS and Hive
  • Import data from HDFS and Hive to MySQL
Audience
  • System administrators
  • Data engineers
Format of the Course
  • Part lecture, part discussion, exercises and heavy hands-on practice
Note
  • To request a customized training for this course, please contact us to arrange.
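Conceptually, a Sqoop import is "run a SELECT against the RDBMS and write the rows out as delimited files in HDFS". A toy stdlib sketch of that flow, with sqlite3 standing in for MySQL and a local CSV standing in for HDFS files (the commented `sqoop` invocation shows the real-world shape; its host, database, and paths are hypothetical):

```python
import csv
import sqlite3

# Roughly what `sqoop import` automates at scale, e.g. (hypothetical details):
#   sqoop import --connect jdbc:mysql://db.example.com/shop \
#                --table orders --target-dir /user/hadoop/orders

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

# "Import": dump the table as comma-delimited text, Sqoop's default format.
with open("orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in conn.execute("SELECT id, amount FROM orders ORDER BY id"):
        writer.writerow(row)

print(open("orders.csv").read())
```

Sqoop parallelizes this by splitting the table on a key column and running one map task per split, which is why imports land as multiple part-files in the target directory.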
14 hours
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users. This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application. By the end of this training, participants will be able to:
  • Create powerful, stream processing applications for handling large volumes of data
  • Process stream sources such as Twitter and Webserver Logs
  • Use Tigon for rapid joining, filtering, and aggregating of streams
Audience
  • Developers
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Cloudera Impala is an open-source massively parallel processing (MPP) SQL query engine for Apache Hadoop clusters. Impala enables users to issue low-latency SQL queries to data stored in the Hadoop Distributed File System and Apache HBase without requiring data movement or transformation.
Audience
This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.
After this course delegates will be able to:
  • Extract meaningful information from Hadoop clusters with Impala.
  • Write specific programs to facilitate Business Intelligence in the Impala SQL Dialect.
  • Troubleshoot Impala.
21 hours
Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters. In this instructor-led live training participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters. By the end of this training, participants will be able to:
  • Set up a live Big Data cluster using Ambari
  • Apply Ambari's advanced features and functionalities to various use cases
  • Seamlessly add and remove nodes as needed
  • Improve a Hadoop cluster's performance through tuning and tweaking
Audience
  • DevOps
  • System Administrators
  • DBAs
  • Hadoop testing professionals
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Hortonworks Data Platform (HDP) is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem. This instructor-led, live training (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of Spark + Hadoop solution. By the end of this training, participants will be able to:
  • Use Hortonworks to reliably run Hadoop at a large scale.
  • Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
  • Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
  • Process different types of data, including structured, unstructured, in-motion, and at-rest.
Format of the Course
  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.
Course Customization Options
  • To request a customized training for this course, please contact us to arrange.



NobleProg is growing fast!

We are looking for a good mixture of IT and soft skills in Bulgaria!

As a NobleProg Trainer you will be responsible for:

  • delivering training and consultancy worldwide
  • preparing training materials
  • creating new course outlines
  • delivering consultancy
  • quality management

At the moment we are focusing on the following areas:

  • Statistics, Forecasting, Big Data Analysis, Data Mining, Evolutionary Algorithms, Natural Language Processing, Machine Learning (recommender systems, neural networks, etc.)
  • SOA, BPM, BPMN
  • Hibernate/Spring, Scala, Spark, jBPM, Drools
  • R, Python
  • Mobile Development (iOS, Android)
  • LAMP, Drupal, Mediawiki, Symfony, MEAN, jQuery
  • You need to have patience and the ability to explain concepts to non-technical people

To apply, please create your trainer-profile by going to the link below:

Apply now!