Almost every large company you might want to work at uses Hadoop in some way, including Amazon, eBay, Facebook, Google, LinkedIn, IBM, Spotify, Twitter, and Yahoo! And it’s not just technology companies that need Hadoop; even the New York Times uses Hadoop for processing images.
The Udemy course “The Ultimate Hands-On Hadoop – Tame your Big Data!” is comprehensive, covering more than 25 technologies in over 14 hours of video lectures. It’s filled with hands-on activities and exercises, so you get real experience using Hadoop – it’s not just theory.
You’ll find a range of activities in this course for people at every level. If you’re a project manager who just wants to learn the buzzwords, many of the activities in the course use web UIs and require no programming knowledge. If you’re comfortable with command lines, I’ll show you how to work with those too. And if you’re a programmer, I’ll challenge you to write real scripts on a Hadoop system using Scala, Pig Latin, and Python.
You’ll walk away from this course with a real, deep understanding of Hadoop and its associated distributed systems, and you’ll be able to apply them to real-world problems. Plus, a valuable completion certificate is waiting for you at the end!
The Ultimate Hands-On Hadoop – Tame your Big Data!
Learn and master the most popular big data technologies in this comprehensive course, taught by a former engineer and senior manager from Amazon and IMDb. You will:
- Install and work with a real Hadoop installation right on your desktop with Hortonworks (now part of Cloudera) and the Ambari UI.
- Manage big data on a cluster with HDFS and MapReduce.
- Write programs to analyze data on Hadoop with Pig and Spark.
- Store and query your data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
- Design real-world systems using the Hadoop ecosystem.
- Learn how your cluster is managed with YARN, Mesos, Zookeeper, Oozie, Zeppelin, and Hue.
- Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
The course comprises 101 lectures organized into the following sections:
- Learn all the buzzwords! And install the Hortonworks Data Platform Sandbox.
- Using Hadoop’s Core: HDFS and MapReduce
- Programming Hadoop with Pig
- Programming Hadoop with Spark
- Using relational data stores with Hadoop
- Using non-relational data stores with Hadoop
- Querying your Data Interactively
- Managing your Cluster
- Feeding Data to your Cluster
- Analyzing Streams of Data
- Designing Real-World Systems
- Learning More
Who this course is for:
- Software engineers and programmers who want to understand the larger Hadoop ecosystem, and use it to store, analyze, and vend “big data” at scale.
- Project, program, or product managers who want to understand the lingo and high-level architecture of Hadoop.
- Data analysts and database administrators who are curious about Hadoop and how it relates to their work.
- System architects who need to understand the components available in the Hadoop ecosystem, and how they fit together.
Requirements:
- You will need access to a PC running 64-bit Windows, macOS, or Linux with an Internet connection if you want to participate in the hands-on activities and exercises. Your system must have at least 8GB of free RAM; 10GB or more is recommended. If your PC does not meet these requirements, you can still follow along in the course without doing the hands-on activities.
- Some activities will require some prior programming experience, preferably in Python or Scala.
- A basic familiarity with the Linux command line will be very helpful.
Summary of the Course’s Main Features
- Sundog Education by Frank Kane – Training the World in Big Data and Machine Learning
- Frank Kane – Founder, Sundog Education
- Lectures: 101
- On-demand video: 14.5 hours
- Articles: 5
- Downloadable resources: 2
- Full lifetime access
- 30-Day Money-Back Guarantee
- Access on mobile and TV
- Certificate of Completion
Click the link below for details of any special offers on the price of this course.