Add KRaft (Kafka without Zookeeper) implementation + Document use (#235)

* README update to document using Kafka without Zookeeper (KRaft)

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Do not auto-allocate broker id when in KRaft Mode

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>
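The broker-id change above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Bitnami `libkafka.sh` code: the fallback to an auto-allocated ID only applies when KRaft is disabled, since KRaft nodes need an explicit, stable node ID.

```shell
# Hypothetical sketch (not the real Bitnami implementation): skip the
# broker-id auto-allocation fallback when running in KRaft mode.
kafka_default_broker_id() {
    if [[ -z "${KAFKA_BROKER_ID:-}" && "${KAFKA_ENABLE_KRAFT:-no}" != "yes" ]]; then
        # broker.id=-1 asks Kafka to auto-allocate an ID via Zookeeper
        KAFKA_BROKER_ID=-1
    fi
}
```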

* Add warnings and errors for KRaft config in validate

- Other settings already get good warnings/errors from Kafka itself

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>
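The kind of validation described above might look like the following sketch. The function name and the specific checks are assumptions for illustration, not the actual Bitnami validate code: error on required KRaft settings that are missing, and warn on Zookeeper settings that KRaft ignores.

```shell
# Hypothetical sketch (not the real Bitnami validate code): fail fast on
# incomplete KRaft configuration, warn on settings KRaft will ignore.
kafka_validate_kraft() {
    local error_code=0
    if [[ -z "${KAFKA_CFG_PROCESS_ROLES:-}" ]]; then
        echo "ERROR ==> KAFKA_CFG_PROCESS_ROLES must be set when KRaft is enabled" >&2
        error_code=1
    fi
    if [[ -n "${KAFKA_CFG_ZOOKEEPER_CONNECT:-}" ]]; then
        echo "WARN  ==> KAFKA_CFG_ZOOKEEPER_CONNECT is ignored in KRaft mode" >&2
    fi
    return "$error_code"
}
```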

* Add a KRaft initialize step when enabled

- Formats the storage directories to add metadata
- Generates a cluster id if not already set

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>
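The initialize step described above can be sketched as below. The function and default paths are hypothetical, but `kafka-storage.sh` and its `random-uuid`, `format`, `--config`, `--cluster-id`, and `--ignore-formatted` options ship with Kafka 3.x.

```shell
# Hypothetical sketch (not the real Bitnami implementation): generate a
# cluster ID when one is not provided, log it at INFO level (operators need
# it to join further brokers), then format the storage directories so they
# carry the cluster metadata.
kafka_kraft_initialize() {
    local storage_tool="${1:-/opt/bitnami/kafka/bin/kafka-storage.sh}"
    local config_file="${2:-/opt/bitnami/kafka/config/server.properties}"
    if [[ -z "${KAFKA_KRAFT_CLUSTER_ID:-}" ]]; then
        KAFKA_KRAFT_CLUSTER_ID="$("$storage_tool" random-uuid)"
        echo "INFO  ==> Generated Kafka cluster ID '${KAFKA_KRAFT_CLUSTER_ID}'"
    fi
    "$storage_tool" format --config "$config_file" \
        --cluster-id "$KAFKA_KRAFT_CLUSTER_ID" --ignore-formatted
}
```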

* Edit KRaft setup to match implementation

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Add KAFKA_ENABLE_KRAFT to configuration options

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Move KRaft changes into new 3.1 setup

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Remove command now built into enabling KRaft

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Apply suggestions from code review

All look good, though they still require changes to kafka-env.sh

Co-authored-by: Marcos Bjoerkelund <marcosbjorkelund@gmail.com>
Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Fix the indentation issues and apply original suggestions to all versions

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Put in the required envs as suggested

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

* Put back cluster id output to info

- If it's being generated, we really should show it in the logs, as it will
  be required if anyone wants to join brokers to this cluster

Signed-off-by: Jesse Whitham <jesse.whitham@gmail.com>

Co-authored-by: Marcos Bjoerkelund <marcosbjorkelund@gmail.com>
This commit is contained in: Jesse, 2022-04-25 22:05:57 +12:00, committed by Bitnami Containers
parent bfaedc8586
commit 3a19297da6
10 changed files with 222 additions and 6 deletions

@@ -188,6 +188,7 @@ The configuration can easily be setup with the Bitnami Apache Kafka Docker image
* `KAFKA_INTER_BROKER_PASSWORD`: Apache Kafka inter broker communication password. Default: **bitnami**.
* `KAFKA_CERTIFICATE_PASSWORD`: Password for certificates. No defaults.
* `KAFKA_HEAP_OPTS`: Apache Kafka's Java Heap size. Default: **-Xmx1024m -Xms1024m**.
* `KAFKA_ENABLE_KRAFT`: Enable KRaft (Kafka without Zookeeper). Default: **no**.
* `KAFKA_ZOOKEEPER_PROTOCOL`: Authentication protocol for Zookeeper connections. Allowed protocols: **PLAINTEXT**, **SASL**, **SSL**, and **SASL_SSL**. Defaults: **PLAINTEXT**.
* `KAFKA_ZOOKEEPER_USER`: Apache Kafka Zookeeper user for SASL authentication. No defaults.
* `KAFKA_ZOOKEEPER_PASSWORD`: Apache Kafka Zookeeper user password for SASL authentication. No defaults.
@@ -253,6 +254,47 @@ To deploy it, run the following command in the directory where the `docker-compo
docker-compose up -d
```
### Kafka without Zookeeper (KRaft)
Apache Kafka Raft (KRaft) introduces a quorum controller service in Kafka that replaces the previous controller and uses an event-based variant of the Raft consensus protocol.
This greatly simplifies Kafka's architecture by consolidating responsibility for metadata into Kafka itself, rather than splitting it between two different systems: ZooKeeper and Kafka.
More info can be found here: https://developer.confluent.io/learn/kraft/
***Note: KRaft is in early access and should be used in development only. It is not suitable for production.***
The configuration here has been crafted from the examples in the [KRaft config directory](https://github.com/apache/kafka/tree/trunk/config/kraft) of the Apache Kafka repository.
```diff
version: "3"
services:
- zookeeper:
- image: 'bitnami/zookeeper:latest'
- ports:
- - '2181:2181'
- environment:
- - ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
ports:
- '9092:9092'
environment:
+ - KAFKA_ENABLE_KRAFT=yes
+ - KAFKA_CFG_PROCESS_ROLES=broker,controller
+ - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
+ - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
+ - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
- KAFKA_BROKER_ID=1
+ - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
- - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- depends_on:
- - zookeeper
```
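For reference, applying the removals (`-`) and additions (`+`) above yields this complete single-node KRaft compose file:

```yaml
version: "3"
services:
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
```

Note that the single node acts as both broker and controller (`KAFKA_CFG_PROCESS_ROLES`), and votes for itself in the controller quorum (`KAFKA_CFG_CONTROLLER_QUORUM_VOTERS`).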
### Accessing Apache Kafka with internal and external clients
In order to use internal and external clients to access Apache Kafka brokers you need to configure one listener for each kind of clients.