Free Apache Kafka Tutorial

Apache Kafka is a distributed streaming platform that allows applications to publish and subscribe to streams of records, stored in a fault-tolerant, durable way. Kafka was originally created at LinkedIn and later open-sourced under the Apache Software Foundation; today it is used by a variety of companies across many industries.

Kafka is built on a distributed log system. Logs are sequences of records, and Kafka stores these records in topics. Each topic is made up of one or more partitions, which are the actual data stores. Each partition is an ordered, immutable sequence of records. Kafka is based on a publish/subscribe model. Applications can publish messages to topics, and other applications can subscribe to these topics and consume the messages. Kafka is often used as a message bus, and the topics can be used to represent different types of messages.


Audience

The intended audience of this Apache Kafka Tutorial is software engineers, data scientists, and data engineers who are interested in understanding the fundamentals of Apache Kafka, as well as its core components, architecture, and capabilities. It is also suitable for experienced Apache Kafka users who are looking to gain a deeper understanding of the platform and its features.

Prerequisites

1. Java: Apache Kafka requires a Java Runtime Environment (JRE) version 8 or higher.

2. Zookeeper: Apache Kafka uses ZooKeeper for cluster coordination and for storing cluster metadata; older Kafka versions also stored the consumer offsets for each topic and partition in ZooKeeper.

3. Apache Kafka Connect: Apache Kafka Connect is a framework that helps to move large amounts of data between databases, applications, and other data sources.

4. Apache Kafka Streams: Apache Kafka Streams is a library that enables stream processing of data within a Kafka cluster.

5. Brokers: Apache Kafka requires a cluster of brokers in order to function. A broker is a server that receives messages from producers, stores them on disk, and serves them to consumers.


Apache Kafka – Introduction

Apache Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications. It can be used to process, store, and forward high volumes of data from various sources, including websites, applications, and sensors. Kafka’s features include a scalable, fault-tolerant, durable, and real-time message-oriented middleware, which makes it an ideal choice for large-scale distributed systems. Kafka also supports the concept of publish-subscribe messaging, where producers publish messages to topics and consumers subscribe to topics to receive messages. Kafka is written in Java and Scala and is open-source software.

What is a Messaging System?

A messaging system in Apache Kafka is a distributed streaming platform that allows applications to exchange messages in a fault-tolerant, highly available, and highly scalable way. It is designed for building real-time streaming data pipelines and applications. It enables organizations to move data from any source system to any target system, in a reliable and scalable way.

Point to Point Messaging System

In a point-to-point messaging system, messages are placed in a queue and each message is consumed by exactly one receiver; once a message has been read and acknowledged, it is not delivered to any other receiver. Kafka supports this style of messaging through consumer groups: when all consumers of a topic share the same group, each message is processed by only one consumer in that group, just as it would be with a traditional queue.

Kafka’s queue-style (point-to-point) delivery makes it a great choice for applications that require reliable and fault-tolerant work distribution. It is well-suited for use cases such as feeding a pool of workers from a stream of jobs, processing streaming data in real time, and powering a distributed commit log for applications.

Publish-Subscribe Messaging System

Publish-subscribe messaging is a messaging pattern where senders of messages, called publishers, do not send messages directly to specific receivers but instead categorize messages into topics. Subscribers can then subscribe to those topics and receive the messages that have been published to them. It is a form of asynchronous messaging. Publishers are decoupled from subscribers, which allows multiple subscribers to receive the same message without the publisher being aware of the number of subscribers or their identities.

What is Kafka?

Kafka is an open source distributed streaming platform that enables real-time ingest and processing of streaming data. It provides a high throughput, low latency platform for handling real-time data feeds. Kafka is commonly used for building real-time streaming data pipelines and applications that transform or react to the streams of data. It is also used for messaging, website activity tracking, metrics collection, and log aggregation.

Benefits of Apache Kafka

1. High throughput: Apache Kafka is capable of handling high throughputs, which makes it suitable for applications that require real-time processing and high-volume data streaming.

2. Scalability: Apache Kafka is highly scalable and can be used to stream data across multiple clusters and cloud environments. It is also easy to scale up or down as needed.

3. Durability: Apache Kafka stores messages in a distributed and fault-tolerant manner, which ensures data is never lost even in the event of server failure.

4. Low latency: Apache Kafka is designed to process data in a low-latency manner, making it ideal for real-time applications that require quick responses.

5. Ease of use: Apache Kafka is easy to use and offers a wide range of features that make streaming data easier. It also integrates easily with other systems.

6. Security: Apache Kafka comes with a range of security features, such as authentication and authorization, that help protect the data from unauthorized access.

Use Cases

1. Log Aggregation: Kafka can be used for log aggregation and centralized log management. This is useful for collecting, analyzing and monitoring all kinds of log data from different sources, such as web servers, applications, databases, etc.

2. Metrics and Monitoring: Kafka can also be used to collect and store metrics and monitoring data from different sources in order to track and analyze system health.

3. Stream Processing: Kafka can be used for stream processing and real-time analytics. This is useful for processing and analyzing large volumes of data in real-time, such as clickstream data, financial transactions and IoT data.

4. Message Brokering: Kafka can be used for message brokering, which is useful for communication between different applications and services. It can be used as a middleware to provide reliable message delivery and buffering.

Need for Kafka

Kafka is an increasingly popular open-source streaming platform that is used for building real-time data pipelines and streaming applications. It is often used in scenarios where data needs to be consumed and processed in near real time. Kafka streams data in near real time between applications, services, and data stores. It can be used as a messaging system, an event streaming platform, and a durable commit log for event data. Kafka is highly scalable and can handle millions of messages per second. It is also fault-tolerant and can be configured for high availability. Kafka can be used to ingest data from various sources, process data streams, and publish the data downstream to other applications and data stores.


Apache Kafka – Fundamentals

Apache Kafka is an open-source distributed streaming platform that enables applications to process and exchange data in real-time. Kafka was originally developed at LinkedIn and later became part of the Apache Software Foundation. Kafka is designed for building real-time data pipelines, streaming applications, and message brokers. It is a distributed and scalable publish-subscribe messaging system that allows for the exchange of data in a highly fault-tolerant and distributed environment.

Kafka is composed of three key components: brokers, topics, and consumers. Brokers are the servers that store and manage the data within Kafka topics. Topics are logical containers for messages, which can be organized into partitions. Consumers are applications that read the messages from the topics and process them.

Kafka is highly scalable and fault-tolerant and can be used for a wide range of applications, such as building real-time data pipelines, streaming applications, and message brokers. It can also be used for event sourcing, data integration, and microservices. Kafka also provides APIs for integrating with other applications and services, such as Apache Storm and Apache Hadoop.

Components and Description

1. Apache Kafka Brokers: The Apache Kafka brokers are the backbone of the cluster, providing the distributed streaming platform that Kafka is known for. They are responsible for maintaining the state of the Kafka topics, accepting incoming messages, and serving them up to consumers.

2. Apache Kafka Topics: Topics are the central concept in Kafka and they are used to group related messages together. Each topic consists of one or more partitions that store messages within a topic.

3. Apache Kafka Producers: Producers are responsible for sending messages to the Kafka cluster. All messages sent to the cluster must go through a producer.

4. Apache Kafka Consumers: Consumers are responsible for consuming messages from the Kafka cluster. They can subscribe to one or more topics and will receive messages from those topics as they arrive in the cluster.

5. Apache Kafka Connectors: Connectors are components that can be used to connect external systems to the Kafka cluster. They can be used to import data from external sources into Kafka, or to export messages from Kafka to external systems.

6. Apache Kafka Streams: Streams is a library that allows you to process and analyze data stored in Kafka topics. It provides a way to transform and aggregate data from multiple topics into a single stream of output.
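
For example, a minimal Kafka Streams sketch (assuming the kafka-streams dependency is on the classpath; the topic names "sentences" and "word-counts" are placeholders) that counts the words flowing through a topic could look like this:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {
   public static void main(String[] args) {
      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

      StreamsBuilder builder = new StreamsBuilder();
      KStream<String, String> lines = builder.stream("sentences");

      // Split each sentence into words, group by word, count, and write the counts out.
      lines.flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
           .groupBy((key, word) -> word)
           .count()
           .toStream()
           .to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

      KafkaStreams streams = new KafkaStreams(builder.build(), props);
      streams.start();
   }
}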


Apache Kafka – Cluster Architecture

Apache Kafka is an open-source streaming platform that is used for building real-time streaming applications. It is designed to provide high throughput, low latency, and scalability. Kafka is based on a distributed, partitioned, and replicated commit log service.

At the heart of Apache Kafka is a cluster of brokers. The brokers are responsible for maintaining a distributed log of records and providing access to the log. Each broker can handle hundreds of thousands of reads and writes per second and can handle massive amounts of data.

The Kafka cluster is composed of multiple brokers. Each broker is responsible for one or more partitions of topics. A topic is a logical stream of records, and a partition is a physical log of the records. The cluster is composed of one or more brokers, each running one or more partitions. Each partition is replicated across multiple brokers for fault-tolerance.

Kafka has a leader-follower architecture. The leader of a partition is responsible for all read and write requests for that partition. The followers replicate the leader’s state and can take over if the leader fails.  Kafka also provides fault tolerance and scalability through consumer groups. Each consumer group maintains its own view of the partitions assigned to it, and the consumers in the group can read from any of the partitions. This allows for horizontal scaling of consumers and allows for greater throughput.

In summary, Apache Kafka is a powerful streaming platform that provides scalability, fault tolerance, and low latency. Its cluster architecture is composed of brokers, partitions, and consumer groups, and it supports replication and fault-tolerance through its replication protocol.
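
As an illustration of this leader/replica layout, the AdminClient API can be used to inspect which broker leads each partition and where its replicas live (a minimal sketch; the topic name "my-topic" is a placeholder):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeTopicSketch {
   public static void main(String[] args) throws Exception {
      Properties props = new Properties();
      props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

      try (AdminClient admin = AdminClient.create(props)) {
         TopicDescription description = admin.describeTopics(Collections.singleton("my-topic"))
               .all().get().get("my-topic");

         // Each partition reports its current leader and the full replica set.
         description.partitions().forEach(p ->
               System.out.println("partition " + p.partition()
                     + " leader=" + p.leader().id()
                     + " replicas=" + p.replicas()));
      }
   }
}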

Components and Description in Apache Kafka – Cluster Architecture

1. Apache Zookeeper: Apache Zookeeper is a distributed coordination service used to manage large clusters of nodes. It is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.

2. Apache Kafka Broker: Apache Kafka Broker is a server that stores and forwards messages in a distributed manner. A Kafka cluster consists of multiple brokers that coordinate to manage the storage and replication of messages.

3. Apache Kafka Producers: Apache Kafka Producers are used to publish messages to Kafka topics. Producers can be written in any language and can be integrated with external systems.

4. Apache Kafka Consumers: Apache Kafka Consumers are used to consume messages from Kafka topics. Consumers can be written in any language and can be integrated with external systems.


Apache Kafka – Workflow

Apache Kafka is an open-source stream-processing software platform developed by the Apache Software Foundation. It is used for building real-time data pipelines and streaming applications. It is horizontally scalable, fault-tolerant, and extremely fast.

The typical Apache Kafka workflow consists of three main components: producers, brokers, and consumers. Producers are applications that generate data, such as web servers. Brokers are Apache Kafka clusters that store and forward data. Consumers are applications that process the data, such as mobile applications. Producers send data to brokers, which store it on disk and forward it to consumers. Consumers then process the data and generate results.

Workflow of Pub-Sub Messaging

In Apache Kafka, pub-sub messaging works through two main roles, the producer and the consumer. The producer is responsible for sending messages to the Kafka cluster, which can then be consumed by the consumers. A consumer subscribes to one or more topics and receives all messages published to those topics as they arrive. The messages are stored in a distributed commit log, and Kafka provides message retention and replication for added reliability. Because messages are carried as byte arrays, any format can be used, including plain text, binary encodings, and JSON. The pub-sub messaging system in Apache Kafka is highly scalable, reliable, and well suited to real-time streaming applications.

1. Producer: A producer is an application or service that sends messages to a Kafka topic.

2. Broker: A Kafka broker is a server that runs Kafka. It stores the messages in topics and replicates them across multiple nodes.

3. Consumer: Consumers are applications that read and process messages from topics.

4. Zookeeper: Zookeeper is used to manage the Kafka cluster. It ensures that all the nodes in the cluster are in sync and can communicate with each other.

5. Topic: Topics are named streams of data within Kafka. They are used to store messages sent by producers and consumed by consumers.

6. Partition: A partition is a physical storage unit for a topic. It is used to store messages in order and can be replicated across multiple nodes for redundancy.

7. Offset: An offset is a pointer to a specific message in a partition. Consumers use offsets to keep track of which messages have already been read.
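
A short sketch of how offsets appear in the consumer API (the topic name "test" and broker address are placeholders): the consumer can be positioned explicitly and asked for the next offset it will read.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetSketch {
   public static void main(String[] args) {
      Properties props = new Properties();
      props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-sketch");
      props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
      props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");

      try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
         TopicPartition partition = new TopicPartition("test", 0);
         consumer.assign(Collections.singleton(partition));
         consumer.seekToBeginning(Collections.singleton(partition)); // rewind to the earliest offset

         ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
         for (ConsumerRecord<String, String> record : records) {
            System.out.println("offset=" + record.offset() + " value=" + record.value());
         }
         System.out.println("next offset to read: " + consumer.position(partition));
      }
   }
}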

Stepwise workflow

Step 1: The publisher sends a message to a topic.

Step 2: The broker appends the message to the topic’s partition and delivers it to every consumer that has subscribed to the topic.

Step 3: The subscribers receive the message and process it accordingly.

Step 4: The broker acknowledges the write to the publisher (according to the producer’s acks setting); the publisher does not need to know who the subscribers are.

Step 5: Each subscriber commits the offset of the messages it has processed, so that it can resume from the right position after a restart.

Workflow of Queue Messaging / Consumer Group

1. Producer: A producer is a Kafka client that publishes records to a Kafka topic. Producers can send data to multiple topics and can use either a synchronous or asynchronous API.

2. Queue Messaging: Queue messaging is a messaging system that allows messages to be sent and received in a queue-based manner. Messages are buffered in the queue and are processed by one or more consumers in the order they are received.

3. Consumer Group: A consumer group is a group of consumers that share the same group ID and consume data from the same topic. The partitions of the topic are divided among the consumers in the group, so each message is delivered to exactly one consumer in the group. The consumer group ensures that each message is processed only once within the group, even though multiple consumers are reading from the same topic.

4. Apache Kafka: Apache Kafka is a distributed streaming platform that is used to send and receive data in real time. Kafka is designed to be fault tolerant, scalable, and distributed. It is used to process stream data from sources like websites, applications, sensors, and mobile devices.

5. Topic Partitions: Topics are divided into multiple partitions, which are used to store and replicate data. Each partition is assigned a unique identifier and the messages are stored in order in the partition.

6. Consumer Offsets: Consumer offsets are used to track the position of the consumer in the topic partition. The offset advances each time the consumer reads a message from the partition, and after a restart the consumer resumes from the last committed offset.

7. Message Queue: Messages are stored in a message queue and are processed by the consumer in the order they are received. The consumer can read messages from the queue and process them as needed.

8. Message Processing: The consumer processes the message based on the data contained in it. The consumer can perform different operations on the message, such as filtering, transforming, or enriching the data.

9. Message Acknowledgement: The consumer acknowledges the messages it has processed by committing their offsets back to Kafka. This ensures that the same messages are not reprocessed if the consumer restarts or its partitions are reassigned, as shown in the sketch below.
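
A minimal consumer-group sketch tying these pieces together (the group id "order-processors" and topic "orders" are placeholder names): every consumer started with the same group.id joins the same group and shares the partitions, and committing offsets plays the role of the acknowledgement described above.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumerSketch {
   public static void main(String[] args) {
      Properties props = new Properties();
      props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors"); // same group.id => same consumer group
      props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");  // commit manually after processing
      props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
      props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");

      try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
         consumer.subscribe(Collections.singleton("orders"));
         while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
               // process the message (filter, transform, enrich, ...)
               System.out.println(record.value());
            }
            consumer.commitSync(); // record progress so the messages are not reprocessed after a restart
         }
      }
   }
}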

Stepwise workflow

1. A producer sends a message to a topic in Apache Kafka.

2. The message is stored in the topic in a distributed fashion.

3. A consumer group is created to consume the messages from the topic.

4. The consumer group consists of multiple consumers.

5. Each consumer in the group subscribes to the topic.

6. The partitions of the topic are then divided among the consumers in the group, so the load is shared across them.

7. Each consumer can then process the message from the topic.

8. After the message is processed, the consumer commits its offset back to Kafka.

9. This ensures that the message is not delivered to the group again after a restart or rebalance.

10. The messages themselves remain in the topic until the configured retention period or size limit is reached; they are not deleted just because they have been consumed.

Role of ZooKeeper

Apache Kafka is a distributed streaming platform that is used for building real-time streaming applications. ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.

ZooKeeper is used by Kafka for coordinating the Kafka cluster. It maintains the list of brokers, tracks which broker acts as the controller, stores topic and partition metadata, and supports leader election for partitions. In older Kafka versions it was also used to store the offsets of the messages consumed by each consumer group (newer versions store these in an internal Kafka topic). ZooKeeper provides an interface to synchronize processes and maintain configuration information. It also helps ensure that the Kafka cluster is operating properly by monitoring the health of the nodes in the cluster and notifying the brokers when nodes join or leave, which keeps the cluster configuration consistent.


Apache Kafka – Installation Steps

1. Install Java: Apache Kafka requires Java 8 or higher version to be installed in the system.

2. Download Kafka: Download the latest version of Kafka from its official site.

3. Extract the files: Unzip the downloaded Kafka files.

4. Configure Zookeeper: Configure the Zookeeper instance by editing the configuration file.

5. Start Zookeeper: Start the Zookeeper instance.

6. Start Kafka Server: Start the Kafka server by using the command line.

7. Create Topics: Create topics using the create command.

8. Run the Producer: Run the Kafka producer to publish messages to the Kafka cluster.

9. Run the Consumer: Run the Kafka consumer to consume messages from the Kafka cluster.

Apache Kafka – Basic Operations

Apache Kafka is an open-source distributed streaming platform that enables you to publish and subscribe to streams of records. It is possible to perform basic operations with Apache Kafka, such as creating topics, sending messages, and consuming messages.

1. Creating a Topic:

A topic is a category or feed name to which records are published. To create a topic, use the Kafka command-line tool, kafka-topics.sh, with the create option.

For example:

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

2. Sending Messages:

To send messages to a topic, use the Kafka command-line tool, kafka-console-producer.sh, with the topic option.

For example:

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

3. Consuming Messages:

To consume messages from a topic, use the Kafka command-line tool, kafka-console-consumer.sh, with the topic option.

For example:

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test

Start ZooKeeper

To start ZooKeeper, run the bin/zkServer.sh script located in the ZooKeeper installation directory, passing the start command and, optionally, a configuration file. The configuration file can be used to specify various settings, such as the client port, the data directory and the tick time. The command to start ZooKeeper looks like this:

bin/zkServer.sh start <path_to_configuration_file>

If you are using the ZooKeeper bundled with Kafka, you can instead run bin/zookeeper-server-start.sh config/zookeeper.properties from the Kafka installation directory.


Single Node-Single Broker Configuration

In a single node-single broker configuration, a single server hosts both the ZooKeeper instance and a single Kafka broker. This configuration is suitable for development, testing, and small-scale deployments, as it is easy to deploy and manage and offers a cost-effective starting point. Its main advantage is simplicity: everything runs on one machine and can be set up in minutes. However, it is not suitable for large-scale deployments, as it does not provide the throughput, flexibility, or scalability of a multi-broker cluster. Additionally, it does not provide redundancy in the event of a hardware or software failure.

List of Topics

To get a list of topics in a Kafka server, you can use the --list option of the kafka-topics.sh command-line tool. The command is as follows:

bin/kafka-topics.sh --list --zookeeper <zookeeper-hostname>:<port>

This command will list all the topics available in the Kafka cluster registered with the given ZooKeeper host.

If a Kafka REST Proxy (such as the Confluent REST Proxy) is running in front of the cluster, you can also list topics over HTTP. The proxy exposes a /topics endpoint, and you can use a REST client like curl to query it:

curl -X GET http://<rest-proxy-hostname>:<port>/topics

This command will return the list of topics in JSON format. Note that this endpoint is provided by the REST Proxy, not by the Kafka brokers themselves.

Start Producer to Send Messages

Producer code to send messages to a Kafka topic can be written in any language that has a Kafka client driver. For example, in Java, the code to send messages to a Kafka topic may look like this:

// Create the producer properties
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

// Create the producer
KafkaProducer<String, String> producer = new KafkaProducer<>(props);

// Send messages to the topic
String topicName = "test-topic";
for (int i = 0; i < 10; i++) {
   ProducerRecord<String, String> record = new ProducerRecord<>(topicName, "Message " + i);
   producer.send(record);
}

// Close the producer
producer.close();

In this code, the producer creates the producer properties, creates the producer, sends the messages to the topic, and then closes the producer.

1. If you are running Apache Kafka on your local machine, open a new command prompt and type this command to start the producer:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic <topic name>

2. If you are running Apache Kafka on a remote server, use the following command:

./bin/kafka-console-producer.sh --broker-list <ip address>:9092 --topic <topic name>

3. Once the producer is started, you can type in the messages you want to send to the topic and press enter. The messages will be sent to the topic and any consumer listening to the topic will receive it.

Start Consumer to Receive Messages in Apache Kafka

The consumer must be configured to connect to the Kafka cluster and subscribe to the relevant topics before it can begin consuming messages. The consumer can then be started, which will begin consuming messages from the topics it is subscribed to.

1. Start your Apache Kafka server:

Open a terminal window and run the following command to start your Apache Kafka server:

$ bin/kafka-server-start.sh config/server.properties

2. Create a topic:

To create a topic in Apache Kafka, run the following command:

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic <topic-name>

3. Start the Consumer:

To start the consumer, run the following command:

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic <topic-name> --from-beginning


Single Node-Multiple Brokers Configuration

A single node-multiple brokers configuration in Kafka is a setup in which multiple Kafka brokers run on a single node or server. This type of configuration is often used for development or testing purposes, or for small-scale production deployments. In a single node-multiple brokers configuration, each broker is configured with its own unique broker ID, its own port, and its own log directory, while all brokers register with the same ZooKeeper ensemble and therefore belong to the same cluster. This allows the brokers to coordinate with each other and handle more load than a single broker alone, and the use of multiple brokers may improve performance, as each broker takes on a portion of the workload.

config/server-one.properties

In a single node-multiple brokers configuration, the server-one.properties file contains the configuration for the first broker. It can include the broker's unique identifier, port number, log directories, and other settings.

For example, the following configuration parameters can be set in the server-one.properties file:

# The id of the broker.

broker.id=1

# The port the broker will listen on.

port=9092

# The log directory for the broker’s log files.

log.dir=/var/log/kafka/broker-1

# The Zookeeper connection string.

zookeeper.connect=localhost:2181

# The number of threads the broker will use for various tasks.

num.network.threads=3

num.io.threads=8

# Log retention and segment settings (time- and size-based).

log.retention.hours=168

log.retention.bytes=1073741824

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

config/server-two.properties

In a single node-multiple brokers setup, server-two.properties might contain the following configuration options:

# Broker ID

broker.id=2

# Listeners

listeners=PLAINTEXT://:9091

# Log directories

log.dirs=/tmp/kafka-logs-2

# ZooKeeper connection string

zookeeper.connect=localhost:2181

Creating a Topic in Kafka

1. Open a terminal in the Kafka installation directory, where the command-line tools are located.

2. Create a new topic, specifying the name and the number of partitions.

Example:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic my-topic
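
Topics can also be created programmatically. A minimal AdminClient sketch (using the same topic name, partition count, and replication factor as the command above):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
   public static void main(String[] args) throws Exception {
      Properties props = new Properties();
      props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

      try (AdminClient admin = AdminClient.create(props)) {
         // 3 partitions, replication factor 1, matching the command-line example above.
         NewTopic topic = new NewTopic("my-topic", 3, (short) 1);
         admin.createTopics(Collections.singleton(topic)).all().get();
      }
   }
}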

Start Producer to Send Messages

Producer syntax in Kafka:

The syntax for creating a producer in Kafka is as follows:

KafkaProducer<String, String> producer = new KafkaProducer<>(<properties>);

Example:

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);

Once the producer is created, you can use the following syntax to send messages:

ProducerRecord<String, String> record = new ProducerRecord<String, String>(<topicName>, <key>, <value>);

producer.send(record);

Example:

ProducerRecord<String, String> record = new ProducerRecord<String, String>("my-topic", "key1", "Hello World!");

producer.send(record);

Modifying a Topic

Modifying a topic in Kafka involves changing topic settings such as the number of partitions or per-topic configuration overrides. You do not need to restart the brokers or ZooKeeper for this; the changes are applied to the live cluster using the command-line tools. For example, to increase the number of partitions of a topic, you can use the following command:

bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic <topicName> --partitions <numPartitions>

Once the change has been made, you should be able to see it reflected in the Kafka cluster. Note that the number of partitions can only be increased, never decreased.
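
The same change can also be made programmatically with the AdminClient; a minimal sketch (the topic name "my-topic" and the target count of 6 partitions are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class AlterTopicSketch {
   public static void main(String[] args) throws Exception {
      Properties props = new Properties();
      props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

      try (AdminClient admin = AdminClient.create(props)) {
         // Grow "my-topic" to 6 partitions in total.
         admin.createPartitions(
               Collections.singletonMap("my-topic", NewPartitions.increaseTo(6)))
              .all().get();
      }
   }
}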

Deleting a Topic

To delete a topic in Kafka, you must use the delete topic command, which is part of the Kafka Admin tools. You can access this command by entering the following into your terminal:

bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic <topic_name>

Make sure to replace "<topic_name>" with the name of the topic you want to delete. This command will delete the topic from the Kafka cluster and remove its metadata from ZooKeeper. Note that topic deletion must be enabled on the brokers (delete.topic.enable=true) for the command to take effect.


Apache Kafka – Simple Producer Example

This example demonstrates how to use Apache Kafka to produce messages to a Kafka topic.

Prerequisites

In order to run this example, you will need the following:

• Apache Kafka installed and running

• A Kafka topic created

• Java 8

Step 1: Create a Producer

The first step is to create a KafkaProducer object. This object will be used to send messages to the Kafka topic. You will need to provide the Kafka server address and the serializers for the message keys and values; the topic name is supplied later, on each record.

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);

Step 2: Send Messages

Now that you have a producer, you can send messages to the Kafka topic. Each message is represented by a ProducerRecord object.

String message = "Hello Kafka!";
ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", message);

producer.send(record);

Step 3: Close the Producer

Once you are done sending messages, you should close the producer.

producer.close();

Producer API

The Kafka Producer API allows applications to send streams of data to topics in the Apache Kafka cluster. It provides the functionality of a messaging system, but with a unique design. It enables applications to connect to and communicate with the Apache Kafka cluster in a fault-tolerant and fast way, and it offers an abstraction layer so developers can write applications without worrying about the details of the implementation. It supports both synchronous and asynchronous sending: an application can block until the broker acknowledges a record, or continue immediately and handle the result in a callback. The Producer API also provides a variety of configuration options that allow applications to tune their performance and throughput.
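
To illustrate the synchronous and asynchronous styles mentioned above, here is a minimal sketch (broker address and topic name "my-topic" are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendStylesSketch {
   public static void main(String[] args) throws Exception {
      Properties props = new Properties();
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
      props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");

      try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
         ProducerRecord<String, String> record =
               new ProducerRecord<>("my-topic", "key", "value");

         // Synchronous: block until the broker acknowledges the write.
         RecordMetadata metadata = producer.send(record).get();
         System.out.println("written to partition " + metadata.partition()
               + " at offset " + metadata.offset());

         // Asynchronous: register a callback and keep going.
         producer.send(record, (meta, exception) -> {
            if (exception != null) {
               exception.printStackTrace();
            }
         });
      }
   }
}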

ProducerRecord API

The ProducerRecord API is a class used to construct and send messages to Apache Kafka topics. It provides a way to specify the topic, the message’s key, the message’s value, and optional fields such as the message’s partition and timestamp. It is used by developers to create and send messages to Kafka topics for various purposes including processing data, streaming data, and logging.

ProducerRecord is an API in Apache Kafka used to send data to Kafka topics. Its fullest constructor takes four arguments:

ProducerRecord(<topic>, <partition>, <key>, <value>)

<topic> is a string which represents the topic name to which the message is sent.

<partition> is an integer which represents the partition number in the topic to which the message is sent.

<key> is an optional object that represents the key for the message.

<value> is an optional object that represents the value for the message.

ProducerRecord class constructor is used to create a record to be sent to a Kafka topic. The constructor takes the following parameters:

1. Topic – The topic to which the record will be produced

2. Key – The key to be included in the record

3. Value – The value to be included in the record
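
A few concrete examples of the constructors described above (the topic name "my-topic" is a placeholder; when the partition is omitted, Kafka chooses one based on the key, or spreads records across partitions when the key is null):

import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerRecordSketch {
   public static void main(String[] args) {
      // Topic and value only: no key, partition chosen by the producer.
      ProducerRecord<String, String> valueOnly =
            new ProducerRecord<>("my-topic", "hello");

      // Topic, key and value: records with the same key land in the same partition.
      ProducerRecord<String, String> keyed =
            new ProducerRecord<>("my-topic", "user-42", "hello");

      // Topic, partition, key and value: the partition is fixed explicitly.
      ProducerRecord<String, String> pinned =
            new ProducerRecord<>("my-topic", 0, "user-42", "hello");

      System.out.println(valueOnly + "\n" + keyed + "\n" + pinned);
   }
}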

SimpleProducer application

This application is a sample command-line script that uses the kafka-python client library to publish messages to a Kafka broker.

Usage:

producer.py <broker_list> <topic>

<broker_list> is a list of one or more Kafka brokers

<topic> is the Kafka topic to publish to

from kafka import KafkaProducer

import sys
import time

# Producer configuration
# See https://kafka-python.readthedocs.org/en/stable/api.html

broker_list = sys.argv[1]
topic = sys.argv[2]

producer = KafkaProducer(bootstrap_servers=broker_list.split(','))

try:
    # Send a message every second until done or interrupted
    for i in range(10):
        producer.send(topic, ('Message %d' % i).encode('utf-8'))
        time.sleep(1)

except KeyboardInterrupt:
    sys.stderr.write('%% Aborted by user\n')

finally:
    # Flush and close the producer to make sure all messages are delivered.
    producer.flush()
    producer.close()

ConsumerRecord API

The ConsumerRecord API is an interface in Apache Kafka that provides information about a single record that has been read from a Kafka topic. It provides access to the key and value of the record, as well as the partition, offset, and topic that the record was read from. It also provides access to the timestamp of the message and the serialized format of the message. The API is mainly used to read data from Kafka topics and process it in some way.

The related ConsumerRecords class is a container for the records returned by a single call to poll(). One of its constructors is:

public ConsumerRecords(Map<TopicPartition, List<ConsumerRecord<K,V>>> records)

This constructor creates a new instance of ConsumerRecords from the given map of topic/partition to records.

Methods and Description

ConsumerRecords is the container returned by the consumer’s poll() method. It holds all the records fetched in that poll, grouped by topic partition, and exposes a small set of methods for working with them:

• count(): Returns the total number of records in the batch, across all topics and partitions.

• isEmpty(): Returns true if the batch contains no records.

• partitions(): Returns the set of TopicPartition objects that have records in the batch.

• records(TopicPartition partition): Returns the list of records fetched for the given partition.

• records(String topic): Returns an iterable over the records fetched for the given topic.

• iterator(): Returns an iterator over all records in the batch, which is what allows a ConsumerRecords instance to be used directly in a for-each loop.
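
A short sketch of how these methods are typically used after a call to poll() (the broker address, group id, and topic name are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ConsumerRecordsSketch {
   public static void main(String[] args) {
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9092");
      props.put("group.id", "records-sketch");
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

      try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
         consumer.subscribe(Collections.singleton("my-topic"));
         ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));

         System.out.println("fetched " + records.count() + " records");
         for (TopicPartition partition : records.partitions()) {
            // Per-partition view of the batch.
            for (ConsumerRecord<String, String> record : records.records(partition)) {
               System.out.println(partition + " offset=" + record.offset()
                     + " key=" + record.key() + " value=" + record.value());
            }
         }
      }
   }
}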

Configuration Settings of API

The most important configuration settings for the consumer client API include: bootstrap.servers, the list of brokers used to establish the initial connection to the cluster; key.deserializer and value.deserializer, the classes used to turn the raw bytes of each record back into objects; group.id, the consumer group the client belongs to; enable.auto.commit and auto.commit.interval.ms, which control whether and how often offsets are committed automatically; auto.offset.reset, which decides where to start reading when no committed offset exists (earliest or latest); and session.timeout.ms and heartbeat.interval.ms, which govern how quickly a failed consumer is detected and its partitions are rebalanced.

SimpleConsumer Application

SimpleConsumer was the low-level consumer API in early versions of Apache Kafka, before the modern KafkaConsumer client. It gives the application full control over which broker, partition, and offset to read from, at the cost of handling leader discovery and failover itself. Kafka also ships a command-line consumer, kafka-console-consumer.sh, which reads messages from a given topic and prints them to standard output.

For most applications, the modern high-level consumer (KafkaConsumer) is the recommended way to read and process messages from Kafka topics. It is easy to use, handles group membership, partition assignment, and offset management automatically, and provides a reliable way to consume data from one or more topics.


Apache Kafka – Consumer Group Example

Apache Kafka consumer groups allow the partitions of one or more topics to be processed by several consumers in parallel. This is a powerful feature that can be used for various types of applications.

For example, consider a web application that needs to process requests from different sources. Instead of having a single consumer read everything, the application can run several consumers: consumers in different groups can each subscribe to the topic for their own source, while consumers within the same group share the partitions of that topic. This allows the application to process requests from different sources in parallel, resulting in faster response times.

Another example is a data processing application that reads from multiple topics and writes the results to a single output topic. This can be done using a single consumer group whose consumers subscribe to the input topics and write to the same output topic. Because the partitions are divided among the members of the group, this allows for better scalability and more efficient resource utilization.

Finally, consumer groups can also be used for load balancing. If a single topic is receiving a large amount of traffic, it can be created with (or expanded to) more partitions, and more consumers can be added to the group so that each consumer handles only a share of the partitions. This allows for better resource utilization and improved response times.

Re-balancing of a Consumer

Re-balancing of a consumer in Apache Kafka is a process that allows for the redistribution of partitions that are being consumed by a particular consumer instance to other consumer instances in the consumer group. This is necessary for ensuring that all the consumer instances in a consumer group are able to evenly receive their share of the load from the topic partitions. This is particularly useful when there are a large number of consumer instances in a consumer group and one of them fails or is taken offline, as the partitions that were being consumed by that consumer instance can be redistributed to the remaining consumer instances. Re-balancing also ensures that consumer instances are able to maintain their throughput in the event that new consumer instances are added to the consumer group.
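
A sketch of how an application can observe re-balancing through the consumer API (the group id, topic name, and broker address are placeholders): registering a ConsumerRebalanceListener shows which partitions are revoked and re-assigned as consumers join or leave the group.

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceSketch {
   public static void main(String[] args) {
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9092");
      props.put("group.id", "rebalance-sketch");
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

      try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
         consumer.subscribe(Collections.singleton("my-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
               System.out.println("revoked: " + partitions); // commit work in progress here
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
               System.out.println("assigned: " + partitions);
            }
         });

         while (true) {
            consumer.poll(Duration.ofMillis(500)); // polling keeps the consumer in the group
         }
      }
   }
}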


Apache Kafka – Integration With Storm

Apache Storm and Apache Kafka can be integrated to achieve real-time streaming analytics. Kafka acts as a buffer between Storm and the real-time sources of data like social media streams, sensors and so on. Storm can then do further processing on the data and store the result in a database or send it to another streaming system. Storm and Kafka integration can be used for various applications like log processing, event processing, real-time analytics, click stream analytics and online machine learning. The integration of these two technologies can help to build a high-performance and low latency streaming data pipeline.

Storm

 Storm is an open-source, distributed, real-time processing system that can be used to build and manage streaming applications that process large volumes of data. It can be used with Apache Kafka to build complex, event-driven applications that can process data in real-time. Storm allows users to process data in a fault-tolerant manner, and can be used to build applications that can process, analyze, and act on data as it arrives in near real-time. Storm also provides a scalable way to process data from multiple sources, and can quickly scale up or down as needed. Storm’s parallel processing capabilities make it ideal for managing large-scale streaming applications.

Integration with Storm

Apache Kafka is a distributed streaming platform that enables applications to publish and subscribe to streams of records. Storm is a distributed real-time computation system that enables applications to process streaming data.

The integration of Apache Kafka and Storm enables applications to take advantage of the scalability and reliability of the Kafka platform to stream data into Storm, process it in real-time, and then push the results out to other systems or downstream applications. This type of integration allows for continuous data processing and enrichment, which is ideal for a variety of use cases such as fraud detection, customer segmentation, and predictive analytics. Additionally, the integration of Kafka and Storm allows for a unified platform for both streaming and batch processing. This is useful for applications that need to ingest a large volume of data and process it rapidly.

BrokerHosts – ZkHosts & StaticHosts

ZkHosts is a type of Hosts which is used to connect to a ZooKeeper server. This type of Hosts makes use of a ZooKeeper cluster for discovering and managing brokers for a Kafka cluster.

StaticHosts is a type of Hosts which is used to connect to a Kafka Broker directly. This type of Hosts requires direct hostname or IP Address information to connect to the Kafka Broker.

KafkaConfig API

In the storm-kafka integration, the KafkaConfig API is the base configuration class for reading data from Kafka. It holds the information the spout needs to connect to the cluster, such as the broker hosts (a ZkHosts or StaticHosts instance) and the name of the topic to read from, along with tuning options such as fetch and buffer sizes. Concrete spout configurations, such as SpoutConfig, extend this class.

SpoutConfig API

The SpoutConfig API extends KafkaConfig and adds the ZooKeeper information the Kafka spout uses to track its progress. In addition to the broker hosts and topic name, it takes a ZooKeeper root path and a consumer (spout) ID under which the spout stores the offsets it has already read, so that a restarted topology can resume where it left off instead of replaying everything. It also exposes fields that control how the raw Kafka messages are deserialized, such as the scheme described below.

SchemeAsMultiScheme

SchemeAsMultiScheme is a Storm class that adapts a single-output Scheme (such as StringScheme, which deserializes each Kafka message into a string) into the MultiScheme interface that the Kafka spout expects. It controls how the bytes consumed from Kafka are converted into Storm tuples; in the sample code below it is used to emit each Kafka message as a single string field.

KafkaSpout API

KafkaSpout is a type of Spout used in Apache Storm. It reads data from Apache Kafka topics and streams it into the topology. It is a reliable, fault tolerant, and scalable streaming solution that can be used to ingest data from multiple sources and process it in real-time. KafkaSpout has configurable parameters to control the number of messages it reads and how often it polls for new messages. It also provides support for partitioning, rebalancing, and fault tolerance.

Sample code to create a simple Kafka spout:

import org.apache.storm.kafka.SpoutConfig;

import org.apache.storm.kafka.StringScheme;

import org.apache.storm.kafka.ZkHosts;

import org.apache.storm.spout.SchemeAsMultiScheme;

//Configure Kafka Spout

ZkHosts zkHosts = new ZkHosts("192.168.1.1:2181");

String topicName = "test";

String zkRoot = "/kafka-spout";

String consumerGroupId = "myGroupId";

SpoutConfig kafkaSpoutConfig = new SpoutConfig(zkHosts, topicName, zkRoot, consumerGroupId);

kafkaSpoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

//Create Kafka Spout

KafkaSpout kafkaSpout = new KafkaSpout(kafkaSpoutConfig);

Bolt Creation

Creating a bolt in Storm requires a few steps. First, you decide what the bolt should do with each incoming tuple and which fields it should emit. You then write a Java class that implements the IRichBolt interface (or extends a convenience base class such as BaseRichBolt), putting the per-tuple logic in the execute() method and emitting any output through the OutputCollector. Finally, you declare the bolt's output fields and wire the bolt into the topology with a grouping that determines how tuples are routed to it.

The IRichBolt interface defines the following methods:

1. prepare(): This method is called when a new topology is submitted and is used to initialize the bolt.

2. execute(): This method is called when a tuple is received and is used to process the tuple and possibly emit new tuples.

3. cleanup(): This method is called when the topology is killed and is used to close any resources used.

4. declareOutputFields(): This method is used to declare the output fields for the bolt.

5. getComponentConfiguration(): This method is used to get the configuration for the bolt.

Next, create SplitBolt.java, which implements the logic to split a sentence into words, and CountBolt.java, which implements the logic to identify unique words and count their occurrences.

SplitBolt.java

import java.util.Map;

import org.apache.storm.task.OutputCollector;

import org.apache.storm.task.TopologyContext;

import org.apache.storm.topology.OutputFieldsDeclarer;

import org.apache.storm.topology.base.BaseRichBolt;

import org.apache.storm.tuple.Fields;

import org.apache.storm.tuple.Tuple;

import org.apache.storm.tuple.Values;

public class SplitBolt extends BaseRichBolt {

   private OutputCollector collector;

   @Override
   public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
      this.collector = collector;
   }

   @Override
   public void execute(Tuple tuple) {
      String sentence = tuple.getString(0);
      String[] words = sentence.split(" ");

      for (String word : words) {
         word = word.trim();

         if (!word.isEmpty()) {
            word = word.toLowerCase();
            collector.emit(new Values(word));
         }
      }
   }

   @Override
   public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word"));
   }
}

CountBolt.java

import java.util.HashMap;

import java.util.Map;

import org.apache.storm.task.OutputCollector;

import org.apache.storm.task.TopologyContext;

import org.apache.storm.topology.OutputFieldsDeclarer;

import org.apache.storm.topology.base.BaseRichBolt;

import org.apache.storm.tuple.Fields;

import org.apache.storm.tuple.Tuple;

import org.apache.storm.tuple.Values;

public class CountBolt extends BaseRichBolt {

   Integer id;
   String name;
   Map<String, Integer> counters;
   private OutputCollector collector;

   @Override
   public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
      this.counters = new HashMap<String, Integer>();
      this.collector = collector;
      this.name = context.getThisComponentId();
      this.id = context.getThisTaskId();
   }

   @Override
   public void execute(Tuple tuple) {
      String str = tuple.getString(0);

      if (!counters.containsKey(str)) {
         counters.put(str, 1);
      } else {
         Integer c = counters.get(str) + 1;
         counters.put(str, c);
      }

      // emit the word and its running count
      collector.emit(new Values(str, counters.get(str)));
   }

   @Override
   public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word", "count"));
   }
}


Submitting Topology

To submit a topology that consumes from Kafka, you build a Storm topology whose spout is a KafkaSpout. The spout reads messages from the Kafka topic and emits them into the topology, the bolts process the messages and send their results to the appropriate output streams, and the topology is then submitted to the cluster with StormSubmitter (or run locally with LocalCluster). Once the topology is successfully submitted, you can monitor its progress from the Storm UI.

Local Cluster – In Storm, a LocalCluster simulates a Storm cluster inside a single JVM process. It is used to run and debug topologies on a developer machine without deploying them to a real cluster; the topology behaves as it would in production, but all workers run as threads in the local process.

KafkaStormSample.java

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;

public class KafkaStormSample {

   public static void main(String[] args) throws AlreadyAliveException,
         InvalidTopologyException, AuthorizationException, InterruptedException {

      // Kafka properties
      String topic = "test1";
      String zkRoot = "/kafka-storm";
      String spoutId = "kafkaSpout";
      BrokerHosts brokerHosts = new ZkHosts("localhost:2181");

      // Create Kafka spout config; each message is deserialized as a plain string
      SpoutConfig kafkaSpoutConfig = new SpoutConfig(brokerHosts, topic, zkRoot, spoutId);
      kafkaSpoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

      // Create Kafka spout
      KafkaSpout kafkaSpout = new KafkaSpout(kafkaSpoutConfig);

      // Create topology: spout -> splitter -> counter
      TopologyBuilder builder = new TopologyBuilder();
      builder.setSpout("kafkaSpout", kafkaSpout);
      builder.setBolt("word-splitter", new SplitBolt()).shuffleGrouping("kafkaSpout");
      builder.setBolt("word-counter", new CountBolt()).fieldsGrouping("word-splitter", new Fields("word"));

      // Create Config instance for cluster configuration
      Config config = new Config();
      config.setDebug(true);

      if (args != null && args.length > 0) {
         // Run it in a live cluster
         config.setNumWorkers(3);
         StormSubmitter.submitTopology(args[0], config, builder.createTopology());
      } else {
         // Run it in a simulated local cluster
         LocalCluster cluster = new LocalCluster();
         cluster.submitTopology("KafkaStormSample", config, builder.createTopology());
         Thread.sleep(60000);
         cluster.shutdown();
      }
   }
}


Apache Kafka – Integration With Spark

Apache Kafka and Spark are two of the most popular tools in today’s big data landscape. Apache Kafka is a distributed, fault-tolerant, high-throughput, and scalable messaging platform that enables real-time data processing and analytics. Apache Spark is an open-source distributed analytics engine that helps to process large datasets quickly and efficiently.

Kafka and Spark can be integrated together to form a powerful data processing pipeline. Data can be ingested into Kafka from sources such as log files, databases, and applications, and then processed and analyzed using Spark. The integrated system can be used to create real-time streaming applications that can process and analyze data in near real-time.

The integration of Kafka and Spark can be done using the Spark Streaming library, which provides a high-level API for integrating Kafka and Spark. With Spark Streaming, you can read data from a Kafka topic and process it in real-time. The data can then be written back to Kafka for further consumption or used for other purposes such as analytics or machine learning.

Using Kafka and Spark together can help to create powerful real-time applications that can process large amounts of data quickly and efficiently. With the help of these technologies, organizations can gain valuable insights from their data in near real-time.

About Spark

Apache Kafka is an open-source distributed streaming platform used for building real-time streaming data pipelines and applications. Apache Spark is a distributed processing engine for large-scale data processing. Spark is often used in conjunction with Apache Kafka to process and analyze streaming data. Kafka provides a highly scalable messaging system that allows developers to send and receive messages in real-time. Spark can be used to process the incoming data from Kafka and provide real-time insights. Spark can also be used to store the data in databases or data warehouses. In addition, Spark can be used to create machine learning models and applications.

Integration with Spark

Apache Kafka and Apache Spark have become two of the most popular open-source big data technologies. They are both powerful tools for real-time data processing. Apache Kafka provides a high-throughput distributed messaging system that is designed to be fast, scalable, and durable. Apache Spark is a fast, in-memory data processing engine that provides a unified platform for batch, streaming, and interactive analytics. The combination of these two technologies creates a powerful platform for streaming data processing. Apache Kafka acts as a real-time messaging system that can ingest high volumes of data and Spark can be used to process the data and generate insights. The integration of Apache Kafka and Apache Spark provides a powerful platform for stream processing and real-time analytics. This integration enables organizations to quickly and efficiently process large amounts of data in real-time and generate actionable insights from the data.

SparkConf API

The SparkConf API allows developers to configure a Spark application. It is used to set Spark parameters as key-value pairs, such as the application name, the master URL, the amount of memory and number of cores to use, and any custom configuration options. The SparkConf API is essential for those who want to tune their Spark application and make it run as efficiently as possible.
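
A minimal example (the application name and master URL are placeholders):

import org.apache.spark.SparkConf;

public class SparkConfSketch {
   public static void main(String[] args) {
      // Name the application and run it locally with two worker threads.
      SparkConf conf = new SparkConf()
            .setAppName("KafkaSparkSample")
            .setMaster("local[2]");
      System.out.println(conf.toDebugString());
   }
}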

StreamingContext API

The StreamingContext API is the main entry point for Spark Streaming functionality. A StreamingContext (JavaStreamingContext in Java) is created from a SparkConf and a batch interval, and it is used to create input streams (DStreams) from sources such as Kafka, sockets, or files, to define the transformations applied to those streams, and to start and stop the streaming computation. It provides an abstraction on top of the Spark engine, allowing developers to write streaming applications with minimal code, and it exposes hooks for checkpointing and monitoring the running application.

KafkaUtils API

The KafkaUtils API is part of Spark's Kafka integration (the spark-streaming-kafka connector). It provides factory methods, most notably createDirectStream, for creating input DStreams that consume records from Kafka topics within a Spark Streaming application, together with options for supplying the Kafka consumer configuration, choosing which topics and partitions to read, and managing consumer offsets. The code below shows the plain Kafka consumer API for comparison; a KafkaUtils-based sketch follows it.

// Plain Kafka consumer API example (Java client)
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

// Kafka consumer configuration
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

// Subscribe to topics
consumer.subscribe(Arrays.asList("topic1", "topic2"));

// Poll for new messages until the application is stopped
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            // process message
        }
    }
} finally {
    // Close the consumer and release its resources
    consumer.close();
}
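For the Spark Streaming integration itself, here is a hedged Java sketch using KafkaUtils.createDirectStream from the spark-streaming-kafka-0-10 connector. It assumes the jssc streaming context created in the StreamingContext sketch above; the broker address, group id, and topic names are placeholders.

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

// Kafka consumer configuration for the direct stream (placeholder values)
Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "localhost:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "spark-streaming-group");
kafkaParams.put("auto.offset.reset", "latest");
kafkaParams.put("enable.auto.commit", false);

Collection<String> topics = Arrays.asList("topic1", "topic2");

// Create a direct stream that consumes the Kafka topics inside Spark Streaming
JavaInputDStream<ConsumerRecord<String, String>> stream =
        KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

// Extract the message values and print a sample from each batch
stream.map(record -> record.value()).print();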

Build Script

A Build Script is a set of instructions that tells a computer how to compile, package, and deploy a Spark and Kafka application. It specifies which libraries and dependencies to use, which files to compile, and how to package the application (typically as a JAR) for deployment. For Spark and Kafka applications it is usually written for a build tool such as sbt, Maven, or Gradle, sometimes wrapped in a shell script. With a build script, developers can automate the building, deployment, and maintenance of their applications.

Compilation / Packaging in Spark

Spark is a distributed computing system that uses a unified engine to process data in parallel, and it provides an easy-to-use programming interface for writing distributed applications. Before submission, a Spark application is compiled and packaged into a JAR (for example with sbt or Maven), together with any dependencies that are not already available on the cluster, such as the Kafka connector.

Submitting to Spark

Once your Spark code is ready, you can submit it to the cluster. To submit your code, you will need to use the spark-submit command. This command allows you to specify the application to be run, the resources to use, and other options you may need. You can use the following command to submit your Spark code to the cluster:

spark-submit --class YourMainClassName --master yarn --deploy-mode cluster YourApp.jar

The --class parameter specifies the main class of your application. The --master parameter specifies the cluster manager to use (in this case, YARN). The --deploy-mode parameter specifies how the application should be deployed to the cluster (in this case, cluster mode). Finally, the last argument, YourApp.jar, is the path to the application JAR.

Once you submit your Spark code to the cluster, it will be executed by the cluster manager. The cluster manager will allocate resources to your application, execute the application, and then return the results.


Real-Time Application Examples of Kafka

Kafka is an open-source distributed streaming platform used to publish and subscribe to streams of records and to process streaming data in real time. It is used in many applications for real-time streaming data processing; the examples below illustrate a few common scenarios.

For example, in a retail store, Kafka can be used to store customer purchase data in real-time. This data can then be used to generate business insights, such as customer buying trends and customer segmentation.

Another example is in the healthcare industry, where Kafka can be used to stream patient data from medical devices in real-time. This data can then be used to monitor patient health and detect any anomalies.

Kafka is also used in the financial industry to process transactional data in real-time. This data can then be used to detect fraud, detect money laundering, and monitor risk exposure.

Overall, Kafka is a powerful platform for processing streaming data in real time. It is used across many applications and industries and can help organizations gain valuable insights from their data.


Apache Kafka – Tools

1. Kafka Management Tools: Kafka Management tools are designed to help manage, monitor and analyze data movement in Kafka clusters. Examples of such tools include Kafka Manager, Kafka Monitor, and Kafka Toolkit.

2. Kafka Monitoring Tools: Kafka monitoring tools provide real-time visibility into the performance, health, and usage of your Kafka clusters. Examples of such tools include Confluent Control Center, Datadog, and Prometheus.

3. Kafka Migration Tools: Kafka migration tools provide support for migrating data between Kafka clusters, including data transformation and replication. Examples of such tools include Kafka Connect and Stream Reactor.

4. Kafka Analytics Tools: Kafka analytics tools provide support for analyzing data in Kafka topics, including extracting insights from data streams. Examples of such tools include Apache Flink and Apache Spark.

5. Kafka Management APIs: Kafka management APIs provide programmatic access to Kafka clusters for managing topics, consumers, and other components. Examples include Kafka REST and the Kafka Admin API; a short AdminClient sketch follows this list.
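As a brief sketch of the Admin API mentioned in item 5, the Java snippet below creates a topic and lists the topics in a cluster; the broker address, topic name, partition count, and replication factor are placeholder values, and the enclosing method is assumed to declare the checked exceptions thrown by get().

import java.util.Collections;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

// Connect to the cluster (placeholder broker address)
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");

try (AdminClient admin = AdminClient.create(props)) {
    // Create a topic with 3 partitions and a replication factor of 1 (illustrative values)
    admin.createTopics(Collections.singleton(new NewTopic("example-topic", 3, (short) 1)))
         .all().get();

    // List the topics currently in the cluster
    Set<String> topics = admin.listTopics().names().get();
    System.out.println(topics);
}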

Replication Tool

Kafka ships with tooling for replication. Within a single cluster, the kafka-reassign-partitions.sh tool lets administrators move partition replicas between brokers and change a topic's replication factor. For replicating topics between clusters, Kafka provides MirrorMaker (and, in newer releases, MirrorMaker 2, built on Kafka Connect), which continuously copies records from topics in a source cluster to a target cluster. These tools give administrators an efficient way to replicate data while maintaining data integrity and minimizing downtime, and their configuration controls which topics are replicated and how.


Apache Kafka – Applications

Kafka supports many of today's common industrial applications, including:

1. Event Streaming: Kafka can be used to build real-time streaming data pipelines that reliably get data between systems or applications.

2. Messaging: Kafka provides a high-throughput, low-latency platform for handling real-time data feeds (a minimal producer sketch follows this list).

3. Metrics: Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

4. Log Aggregation: Kafka can be used to collect logs from multiple services and make them available in a standard format to multiple consumers.

5. Commit Log: Kafka can be used as a commit log for a distributed system.

6. Stream Processing: Kafka Streams is a client library for building applications and microservices, where the input and output data are stored in a Kafka cluster.

7. Caching: Kafka can act as a buffer between a data source and the system consuming the data, allowing the consumer to read data at its own pace.
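To make the messaging use case in item 2 concrete, here is a minimal Java producer sketch that publishes a record to a topic; the broker address, topic name, key, and value are placeholders.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Producer configuration (placeholder broker address)
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // Publish a single record to the (hypothetical) "events" topic
    producer.send(new ProducerRecord<>("events", "key-1", "hello, kafka"));
    producer.flush();
}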
