
Overview

Apache Kafka is an open-source, distributed publish-subscribe messaging system, rethought as a distributed commit log.

Kafka stores messages in topics that are partitioned and replicated across multiple brokers in a cluster. Producers send messages to topics from which consumers read.


Kafka was created at LinkedIn and is now an open-source project maintained by the Apache Software Foundation, with a commercial distribution and tooling provided by Confluent.


Kafka Use Cases

Some common use cases for Kafka:

  • Messaging system
  • Activity tracking
  • Gathering metrics from many different sources
  • Application log aggregation
  • Stream processing (with the Kafka Streams API or Spark, for example)
  • De-coupling of system dependencies
  • Integration with Spark, Flink, Storm, Hadoop and many other Big Data technologies


Architecture



  1. Source Connectors pull data from external sources
  2. Data is written to topics in the Kafka cluster
  3. Topic data can be transformed into other topics with Kafka Streams (see the sketch after this list)
  4. Sink Connectors in the Connect cluster pull data from Kafka
  5. Sink Connectors push data to external sinks
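
A minimal Kafka Streams sketch of step 3 (transforming one topic into another), assuming String messages, a broker at localhost:9092, and hypothetical topic names input-topic and output-topic:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import java.util.Properties;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");    // also used as the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from one topic, transform each value, and write the result to another topic.
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}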


Kafka 

Topics and Partitions


Topics: a particular stream of data

  • similar to a table in a database (without constraints)
  • you can have as many topics as you want
  • a topic is identified by its name


Topics are split into partitions

  • each partition is ordered
  • each message within a partition gets an incremental id, called its offset
  • offsets are only relevant for a particular partition
  • order is guaranteed only in a partition (not across partitions)
  • data is spread across partitions unless a key is provided
  • you can have as many partitions per topic as you want
  • specifying a key ensures that all messages with that key are written to the same partition, which preserves their order (see the producer sketch after this list)

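
A minimal producer sketch illustrating the last point, assuming a broker at localhost:9092 and a hypothetical topic vehicle-positions: every record with the key "truck-42" is hashed to the same partition, so consumers see those messages in the order they were sent.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All messages with the key "truck-42" go to the same partition,
            // so their relative order is preserved for consumers.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("vehicle-positions", "truck-42", "position-" + i));
            }
        }
    }
}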

Kafka Brokers

Consumers (still relevant? - moved to Kafka Connect?) 
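
Even with Kafka Connect in place, applications can still read topics directly with the consumer API. A minimal sketch, assuming String messages, a broker at localhost:9092, and hypothetical topic and consumer group names:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "position-readers");        // hypothetical consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");       // start from the beginning of each partition

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("vehicle-positions"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // The offset identifies the record's position within its partition.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}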

Data Replication

  • topics should have a replication factor greater than 1 (usually 2 or 3)
  • this ensures that if a broker goes down, another broker can serve the data
  • ISR - in-sync replica
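
A sketch of creating a topic with replication factor 3 through the AdminClient, assuming a broker at localhost:9092 and a hypothetical topic name "orders"; each of its 3 partitions then has one leader replica and two followers on other brokers.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: if one broker goes down,
            // another broker holding a replica can keep serving the data.
            NewTopic topic = new NewTopic("orders", 3, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}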

Kafka Connect

Overview

  • Source connectors to get data from common data sources into Kafka (see the example configuration after this list)
  • Sink connectors to publish data from Kafka to common data stores
  • Makes it easy for inexperienced developers to get their data into Kafka quickly and reliably
  • Re-usable code!
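
As an illustration of this configuration-only approach, the file source connector that ships with Kafka can be run in standalone Connect mode with a properties file like the sketch below; the file path and topic name are placeholders:

# Example standalone Connect source config (file path and topic are placeholders)
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
# Read lines from this file and write each line as a message to the topic below
file=/tmp/input.txt
topic=file-lines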


Installation on Kubernetes

See https://github.com/confluentinc/cp-helm-charts


# more to come

References

