S P L A S H

Apache Kafka & Zookeeper Training in Pallikaranai, Chennai


What is the use of Kafka in Hadoop Big Data?
  • Kafka is used in Big Data platforms to build pipelines for real-time streaming data. Data in a Kafka cluster may be ingested into Hadoop HDFS for further analysis.
What is Kafka?
  • Kafka is mainly used to collect streaming events from sensors, application logs, network systems, social media feeds, and live tweets from Twitter. Producer - publishes the data
  • Real-time data is collected and stored persistently across the Kafka cluster brokers
  • Data in a Kafka cluster can be ingested into any number of end systems (Kafka to NoSQL MongoDB, Kafka to Hadoop HDFS, Kafka to MySQL) for processing. Consumer - subscribes to the data
  • Data in a Kafka cluster can also be processed in place, using Kafka Streams and KSQL to filter the stream
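The producer/broker/consumer flow above can be sketched as a toy in-memory model (this `MiniBroker` class is purely illustrative, not the Kafka API): producers append events to a topic's log on a broker, and each consumer group reads the same log independently by tracking its own offset.

```python
from collections import defaultdict

class MiniBroker:
    """A toy in-memory stand-in for a Kafka broker: each topic is an
    append-only log, and each consumer group keeps its own read offset."""
    def __init__(self):
        self.logs = defaultdict(list)    # topic -> list of messages
        self.offsets = defaultdict(int)  # (group, topic) -> next offset to read

    def publish(self, topic, message):
        """Producer side: append the event to the topic's log."""
        self.logs[topic].append(message)

    def consume(self, group, topic):
        """Consumer side: return all messages after this group's last offset."""
        key = (group, topic)
        batch = self.logs[topic][self.offsets[key]:]
        self.offsets[key] = len(self.logs[topic])
        return batch

broker = MiniBroker()
broker.publish("sensor-events", {"device": "s1", "temp": 31})
broker.publish("sensor-events", {"device": "s2", "temp": 28})

# Two independent consumer groups each receive the full stream,
# mirroring how HDFS, MongoDB, and MySQL sinks can all subscribe.
print(broker.consume("hdfs-sink", "sensor-events"))
print(broker.consume("mongo-sink", "sensor-events"))
```

Because each group has its own offset, adding a new downstream system never disturbs the others - the same property that lets one Kafka cluster feed many end systems.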
Kafka vs Hadoop - How Kafka Is Similar to Hadoop HDFS
  • Kafka has a storage abstraction that is similar to HDFS.
  • Kafka has replication, which is similar to HDFS replication
  • Kafka has partitions, which are similar to blocks in Hadoop HDFS
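The partition/block analogy can be made concrete: a Kafka producer maps each keyed record to one partition by hashing the key, so all records with the same key land on the same partition and keep their order. Kafka's default partitioner uses a murmur2 hash; the md5 below is only an illustrative stand-in.

```python
import hashlib

def pick_partition(key: str, num_partitions: int) -> int:
    """Map a record key onto a partition, analogous to Kafka's default
    partitioner for keyed records (Kafka uses murmur2; md5 here is
    just a deterministic stand-in for illustration)."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always hashes to the same partition,
# which is what preserves per-key ordering in Kafka.
for key in ("device-1", "device-2", "device-1"):
    print(key, "->", pick_partition(key, 3))
```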
Kafka vs Hadoop - What Are the Differences?
  • Kafka : Any streaming data source (publisher) can move data into the Kafka cluster - any number of consumers can consume data from the Kafka cluster.
  • Hadoop : Mostly used for batch data rather than streaming. Apache Flume provides native support for streaming data in the Hadoop ecosystem.

  • Kafka : Requires ZooKeeper to coordinate and maintain the state of the Kafka brokers (cluster nodes).
  • Hadoop : Its cluster is overseen by YARN.

  • Kafka : Kafka Streams and KSQL are the streaming engines for processing real-time data stored in Apache Kafka
  • Hadoop : Ecosystem tools (Pig, Hive, Spark, MapReduce) are used to solve big data problems.
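As a rough analogy for what a KSQL query such as `SELECT * FROM readings WHERE temp > 30` does continuously over a topic, here is a plain Python generator (not the Kafka Streams API) that passes through only the events matching a predicate.

```python
def filter_stream(events, predicate):
    """A toy continuous filter: yield only the events that satisfy the
    predicate, the way a KSQL WHERE clause filters a stream. This is a
    Python analogy only, not the Kafka Streams or KSQL API."""
    for event in events:
        if predicate(event):
            yield event

readings = [{"device": "s1", "temp": 31}, {"device": "s2", "temp": 24}]
hot = list(filter_stream(readings, lambda e: e["temp"] > 30))
print(hot)  # only the reading above 30 degrees passes the filter
```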

Apache Kafka & ZooKeeper - Training Syllabus

 

Module 1 – Kafka Overview

  • What is Apache Kafka
  • Kafka features and terminology
  • High-level Kafka architecture
  • Real life Kafka Case Studies
  • Hadoop VS Kafka
   

Module 2 – Introduction to Zookeeper

  • Use of Zookeeper in Kafka
  • Zookeeper Architecture
  • Leader Election by Zookeeper
  • Applications using Zookeeper
  • Zookeeper configuration and installation

Module 3 – Kafka Internals

  • Understanding Broker
  • Understanding Producer
  • Understanding Consumer
  • Understanding Streaming

Module 4 – Setting up Kafka Brokers

  • Kafka Broker Cluster Configuration
  • Kafka Topic Replication
  • Working with the Cluster from ZooKeeper
  • Kafka Commands - Overview
  • Kafka Cluster Administration

Module 5 - Working with Kafka - Real-Time Use Case with Hands-On

  • Collecting streaming logs to the Kafka cluster
  • Commands - publishing data to a Topic
  • Working with multiple consumers for one Topic
  • Reading data from Kafka
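The "multiple consumers for one topic" idea rests on Kafka's group coordinator dividing a topic's partitions among the consumers of one group, so the group shares the work while each partition is read by exactly one member. A simplified round-robin assignment can be sketched as follows (real Kafka supports several assignor strategies; this is only a model):

```python
def assign_partitions(partitions, consumers):
    """Round-robin assignment of topic partitions to the consumers in
    one group, a simplified model of what Kafka's group coordinator
    does: each partition goes to exactly one consumer."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# A 4-partition topic split across a 2-consumer group:
print(assign_partitions([0, 1, 2, 3], ["consumer-a", "consumer-b"]))
# -> {'consumer-a': [0, 2], 'consumer-b': [1, 3]}
```

Note the consequence: a group can have at most as many active consumers as the topic has partitions, so partition count sets the ceiling on parallelism.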

Module 6 - Kafka Integration with Other Frameworks/Tools

  • Detailing how to integrate Flume with Kafka
  • Kafka connector for the MongoDB NoSQL database
  • Kafka connector for MySQL
  • Kafka Operations
  • Set Parameters for Performance tuning

Corporates are moving towards building Big Data lakes for their operational efficiency.

Collections of varied data sources move into Hadoop HDFS distributed storage, with processing done using Apache Hive or other distributed frameworks like Spark and Storm.

There is huge demand in the job market for Big Data lake experts. Message queues (RabbitMQ, ZeroMQ, ActiveMQ) play a vital role in a big data lake. Kafka and Flume stand at the top when dealing with large volumes of data and readiness to integrate with the Big Data environment. Among these, Kafka is the top choice of many experts as it has a rich set of features like distributed persistent data storage, replication, and partitioning.

SPLASH - A Data Training Institute. Experts in the field of DATA. We have 12+ years of experience in RDBMS and 4+ years of experience in Big Data, and have implemented more than 5 Big Data projects. Our training course and syllabus are customized towards the job market.