[bitnami/README] Fixed typos and update tutorials for Kafka image (#16897)

* Fixed some typos (a Apache -> an Apache)
* Update `--broker-list` to `--bootstrap-server`
* The option `--broker-list` is deprecated; we should use `--bootstrap-server` instead.
* Refactor all `--broker-list` to `--bootstrap-server`

Signed-off-by: CHEN Zhongpu <chenloveit@gmail.com>
zhongpu
2022-12-19 22:36:49 +08:00
committed by GitHub
parent 466cb012ab
commit 2ecf7a06af
4 changed files with 23 additions and 23 deletions


@@ -97,13 +97,13 @@ cassandra:
## Connecting to other containers
-Using [Docker container networking](https://docs.docker.com/engine/userguide/networking/), a Apache Cassandra server running inside a container can easily be accessed by your application containers.
+Using [Docker container networking](https://docs.docker.com/engine/userguide/networking/), an Apache Cassandra server running inside a container can easily be accessed by your application containers.
Containers attached to the same network can communicate with each other using the container name as the hostname.
### Using the Command Line
-In this example, we will create a Apache Cassandra client instance that will connect to the server instance that is running on the same docker network as the client.
+In this example, we will create an Apache Cassandra client instance that will connect to the server instance that is running on the same docker network as the client.
#### Step 1: Create a network


@@ -94,13 +94,13 @@ kafka:
## Connecting to other containers
-Using [Docker container networking](https://docs.docker.com/engine/userguide/networking/), a Apache Kafka server running inside a container can easily be accessed by your application containers.
+Using [Docker container networking](https://docs.docker.com/engine/userguide/networking/), an Apache Kafka server running inside a container can easily be accessed by your application containers.
Containers attached to the same network can communicate with each other using the container name as the hostname.
### Using the Command Line
-In this example, we will create a Apache Kafka client instance that will connect to the server instance that is running on the same docker network as the client.
+In this example, we will create an Apache Kafka client instance that will connect to the server instance that is running on the same docker network as the client.
#### Step 1: Create a network
@@ -329,7 +329,7 @@ And expose the external port:
These clients, from the same host, will use `localhost` to connect to Apache Kafka.
```console
-kafka-console-producer.sh --broker-list 127.0.0.1:9093 --topic test
+kafka-console-producer.sh --bootstrap-server 127.0.0.1:9093 --topic test
kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9093 --topic test --from-beginning
```
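The producer/consumer pair above assumes the `test` topic already exists. As a hedged sketch (port 9093 taken from the example above; partition/replication values are illustrative), the topic can be created first with the same non-deprecated flag:

```shell
# Create the topic before producing/consuming (single-broker values are illustrative)
kafka-topics.sh --create --bootstrap-server 127.0.0.1:9093 --topic test \
    --partitions 1 --replication-factor 1
```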
@@ -340,7 +340,7 @@ If running these commands from another machine, change the address accordingly.
These clients, from other containers on the same Docker network, will use the kafka container service hostname to connect to Apache Kafka.
```console
-kafka-console-producer.sh --broker-list kafka:9092 --topic test
+kafka-console-producer.sh --bootstrap-server kafka:9092 --topic test
kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic test --from-beginning
```
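For clients in other containers, a disposable client container can be attached to the same Docker network. This is a sketch only: the network name `app-tier` is assumed from the README's network-creation step, and `kafka` is the assumed container/service hostname.

```shell
# Run a throwaway client container on the shared network (network name assumed)
docker run -it --rm \
    --network app-tier \
    bitnami/kafka:latest \
    kafka-console-producer.sh --bootstrap-server kafka:9092 --topic test
```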
@@ -363,7 +363,7 @@ In order to configure authentication, you must configure the Apache Kafka listen
Let's see an example to configure Apache Kafka with `SASL_SSL` authentication for communications with clients, and `SSL` authentication for inter-broker communication.
-The environment variables below should be define to configure the listeners, and the SASL credentials for client communications:
+The environment variables below should be defined to configure the listeners, and the SASL credentials for client communications:
```console
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SSL,CLIENT:SASL_SSL
@@ -389,7 +389,7 @@ Keep in mind the following notes:
* When prompted to enter a password, use the same one for all.
* Set the Common Name or FQDN values to your Apache Kafka container hostname, e.g. `kafka.example.com`. After entering this value, when prompted "What is your first and last name?", enter this value as well.
* As an alternative, you can disable host name verification setting the environment variable `KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM` to an empty string.
-* When setting up a Apache Kafka Cluster (check [this section](#setting-up-a-kafka-cluster) for more information), each Apache Kafka broker and logical client needs its own keystore. You will have to repeat the process for each of the brokers in the cluster.
+* When setting up an Apache Kafka Cluster (check [this section](#setting-up-a-kafka-cluster) for more information), each Apache Kafka broker and logical client needs its own keystore. You will have to repeat the process for each of the brokers in the cluster.
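As a hedged sketch of the per-broker keystore step described in the notes above (the alias, validity, and CN are illustrative; the password matches the `certificatePassword123` example used below):

```shell
# Generate one keystore per broker; set CN to the broker's hostname (values illustrative)
keytool -genkeypair -alias kafka -keyalg RSA -keysize 2048 \
    -keystore kafka.keystore.jks -validity 365 \
    -storepass certificatePassword123 \
    -dname "CN=kafka.example.com"
```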
The following docker-compose file is an example showing how to mount your JKS certificates protected by the password `certificatePassword123`. Additionally it is specifying the Apache Kafka container hostname and the credentials for the client and zookeeper users.
@@ -438,7 +438,7 @@ Use this to generate messages using a secure setup:
```console
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/bitnami/kafka/conf/kafka_jaas.conf"
-kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test --producer.config /opt/bitnami/kafka/conf/producer.properties
+kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic test --producer.config /opt/bitnami/kafka/conf/producer.properties
```
Use this to consume messages using a secure setup
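The consumer command itself is truncated in this excerpt; a sketch mirroring the producer example above (the `consumer.properties` path is assumed by analogy with the producer config) would be:

```shell
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/bitnami/kafka/conf/kafka_jaas.conf"
# Path assumed to mirror the producer.properties location used above
kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test \
    --from-beginning --consumer.config /opt/bitnami/kafka/conf/consumer.properties
```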
@@ -459,7 +459,7 @@ When configuring your broker to use `SASL` or `SASL_SSL` for inter-broker commun
#### Apache Kafka client configuration
-When configuring Apache Kafka with `SASL` or `SASL_SSL` for communications with clients, you can provide your the SASL credentials using this environment variables:
+When configuring Apache Kafka with `SASL` or `SASL_SSL` for communications with clients, you can provide the SASL credentials using this environment variables:
* `KAFKA_CLIENT_USERS`: Apache Kafka client user. Default: **user**
* `KAFKA_CLIENT_PASSWORDS`: Apache Kafka client user password. Default: **bitnami**
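The two variables above can be overridden at container start. This is a sketch, not the full secure setup: the user/password values are illustrative placeholders, and the other SASL/SSL variables from the preceding section would still be required.

```shell
# Override the default client credentials (values illustrative)
docker run --name kafka \
    -e KAFKA_CLIENT_USERS=myuser \
    -e KAFKA_CLIENT_PASSWORDS=mypassword \
    bitnami/kafka:latest
```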
@@ -498,9 +498,9 @@ In order to authenticate Apache Kafka against a Zookeeper server with `SASL_SSL`
> Note: You **must** also use your own certificates for SSL. You can mount your Java Key Stores (`zookeeper.keystore.jks` and `zookeeper.truststore.jks`) or PEM files (`zookeeper.keystore.pem`, `zookeeper.keystore.key` and `zookeeper.truststore.pem`) into `/opt/bitnami/kafka/conf/certs`. If client authentication is `none` or `want` in Zookeeper, the cert files are optional.
-### Setting up a Apache Kafka Cluster
+### Setting up an Apache Kafka Cluster
-A Apache Kafka cluster can easily be setup with the Bitnami Apache Kafka Docker image using the following environment variables:
+An Apache Kafka cluster can easily be setup with the Bitnami Apache Kafka Docker image using the following environment variables:
- `KAFKA_CFG_ZOOKEEPER_CONNECT`: Comma separated host:port pairs, each corresponding to a Zookeeper Server.
@@ -561,7 +561,7 @@ $ docker run --name kafka3 \
bitnami/kafka:latest
```
-You now have a Apache Kafka cluster up and running. You can scale the cluster by adding/removing slaves without incurring any downtime.
+You now have an Apache Kafka cluster up and running. You can scale the cluster by adding/removing slaves without incurring any downtime.
With Docker Compose, topic replication can be setup using:


@@ -149,11 +149,11 @@ Additionally, SSL configuration can be easily activated following the next steps
2. You need to mount your spark keystore and truststore files to `/opt/bitnami/spark/conf/certs`. Please note they should be called `spark-keystore.jks` and `spark-truststore.jks` and they should be in JKS format.
-### Setting up a Apache Spark Cluster
+### Setting up an Apache Spark Cluster
-A Apache Spark cluster can easily be setup with the default docker-compose.yml file from the root of this repo. The docker-compose includes two different services, `spark-master` and `spark-worker.`
+An Apache Spark cluster can easily be setup with the default docker-compose.yml file from the root of this repo. The docker-compose includes two different services, `spark-master` and `spark-worker.`
-By default, when you deploy the docker-compose file you will get a Apache Spark cluster with 1 master and 1 worker.
+By default, when you deploy the docker-compose file you will get an Apache Spark cluster with 1 master and 1 worker.
If you want N workers, all you need to do is start the docker-compose deployment with the following command:
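The command itself is cut off in this excerpt; assuming the `spark-worker` service name from the compose file mentioned above, scaling follows the standard Docker Compose pattern:

```shell
# Start the deployment with N workers, e.g. 3 (service name assumed from the compose file)
docker-compose up --scale spark-worker=3
```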


@@ -104,13 +104,13 @@ services:
## Connecting to other containers
-Using [Docker container networking](https://docs.docker.com/engine/userguide/networking/), a Apache ZooKeeper server running inside a container can easily be accessed by your application containers.
+Using [Docker container networking](https://docs.docker.com/engine/userguide/networking/), an Apache ZooKeeper server running inside a container can easily be accessed by your application containers.
Containers attached to the same network can communicate with each other using the container name as the hostname.
### Using the Command Line
-In this example, we will create a Apache ZooKeeper client instance that will connect to the server instance that is running on the same docker network as the client.
+In this example, we will create an Apache ZooKeeper client instance that will connect to the server instance that is running on the same docker network as the client.
#### Step 1: Create a network
@@ -182,7 +182,7 @@ The configuration can easily be setup with the Bitnami Apache ZooKeeper Docker i
- `ZOO_SNAPCOUNT`: The number of transactions recorded in the transaction log before a snapshot can be taken (and the transaction log rolled). Default: **100000**
- `ZOO_INIT_LIMIT`: Apache ZooKeeper uses to limit the length of time the Apache ZooKeeper servers in quorum have to connect to a leader. Default: **10**
- `ZOO_SYNC_LIMIT`: How far out of date a server can be from a leader. Default: **5**
-- `ZOO_MAX_CNXNS`: Limits the total number of concurrent connections that can be made to a Apache ZooKeeper server. Setting it to 0 entirely removes the limit. Default: **0**
+- `ZOO_MAX_CNXNS`: Limits the total number of concurrent connections that can be made to an Apache ZooKeeper server. Setting it to 0 entirely removes the limit. Default: **0**
- `ZOO_MAX_CLIENT_CNXNS`: Limits the number of concurrent connections that a single client may make to a single member of the Apache ZooKeeper ensemble. Default: **60**
- `ZOO_4LW_COMMANDS_WHITELIST`: List of whitelisted [4LW](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_4lw) commands. Default: **srvr, mntr**
- `ZOO_SERVERS`: Comma, space or semi-colon separated list of servers. Example: zoo1:2888:3888,zoo2:2888:3888 or if specifying server IDs zoo1:2888:3888::1,zoo2:2888:3888::2. Default: No defaults.
@@ -322,11 +322,11 @@ services:
...
```
-### Setting up a Apache ZooKeeper ensemble
+### Setting up an Apache ZooKeeper ensemble
-A Apache ZooKeeper (https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html) cluster can easily be setup with the Bitnami Apache ZooKeeper Docker image using the following environment variables:
+An Apache ZooKeeper (https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html) cluster can easily be setup with the Bitnami Apache ZooKeeper Docker image using the following environment variables:
-- `ZOO_SERVERS`: Comma, space or semi-colon separated list of servers.This can be done with or without specifying the ID of the server in the ensemble. No defaults. Examples:
+- `ZOO_SERVERS`: Comma, space or semi-colon separated list of servers. This can be done with or without specifying the ID of the server in the ensemble. No defaults. Examples:
- without Server ID - zoo1:2888:3888,zoo2:2888:3888
- with Server ID - zoo1:2888:3888::1,zoo2:2888:3888::2
- without Server ID and Observers - zoo1:2888:3888,zoo2:2888:3888:observer
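As a hedged sketch of how the `ZOO_SERVERS` formats above are used when starting an ensemble member (the container name, network, and the `ZOO_SERVER_ID` pairing are illustrative assumptions):

```shell
# Start ensemble member 1; ZOO_SERVER_ID must match this node's entry in ZOO_SERVERS (values illustrative)
docker run --name zoo1 \
    --network app-tier \
    -e ZOO_SERVER_ID=1 \
    -e ZOO_SERVERS=zoo1:2888:3888::1,zoo2:2888:3888::2 \
    bitnami/zookeeper:latest
```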
@@ -346,7 +346,7 @@ $ docker network create app-tier --driver bridge
#### Step 1: Create the first node
-The first step is to create one Apache ZooKeeper instance. 
+The first step is to create one Apache ZooKeeper instance.
```console
$ docker run --name zookeeper1 \