[bitnami/kafka] Add support for deploying extra resources (#3101)

Signed-off-by: juan131 <juanariza@vmware.com>
Juan Ariza Toledano
2020-07-13 13:17:12 +02:00
committed by GitHub
parent d2b35dc934
commit 07a0dad896
6 changed files with 94 additions and 1 deletion


@@ -1,6 +1,6 @@
apiVersion: v1
name: kafka
-version: 11.3.4
+version: 11.4.0
appVersion: 2.5.0
description: Apache Kafka is a distributed streaming platform.
keywords:


@@ -63,6 +63,7 @@ The following table lists the configurable parameters of the Kafka chart and their default values
| `nameOverride` | String to partially override kafka.fullname | `nil` |
| `fullnameOverride` | String to fully override kafka.fullname | `nil` |
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
| `extraDeploy` | Array of extra objects to deploy with the release | `nil` (evaluated as a template) |

### Kafka parameters
@@ -530,6 +531,83 @@ sidecars:
  containerPort: 1234
```
### Deploying extra resources
There are cases where you may want to deploy extra objects, such as Kafka Connect. To cover this use case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter. The following example creates a Kafka Connect deployment (together with its ConfigMap and Service) so you can connect Kafka with MongoDB:
```yaml
## Extra objects to deploy (value evaluated as a template)
##
extraDeploy: |-
- apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "kafka.labels" . | nindent 6 }}
app.kubernetes.io/component: connector
spec:
replicas: 1
selector:
matchLabels: {{- include "kafka.matchLabels" . | nindent 8 }}
app.kubernetes.io/component: connector
template:
metadata:
labels: {{- include "kafka.labels" . | nindent 10 }}
app.kubernetes.io/component: connector
spec:
containers:
- name: connect
image: KAFKA-CONNECT-IMAGE
imagePullPolicy: IfNotPresent
ports:
- name: connector
containerPort: 8083
volumeMounts:
- name: configuration
mountPath: /opt/bitnami/kafka/config
volumes:
- name: configuration
configMap:
name: {{ include "kafka.fullname" . }}-connect
- apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "kafka.labels" . | nindent 6 }}
app.kubernetes.io/component: connector
data:
connect-standalone.properties: |-
bootstrap.servers = {{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
...
mongodb.properties: |-
connection.uri=mongodb://root:password@mongodb-hostname:27017
...
- apiVersion: v1
kind: Service
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "kafka.labels" . | nindent 6 }}
app.kubernetes.io/component: connector
spec:
ports:
- protocol: TCP
port: 8083
targetPort: connector
selector: {{- include "kafka.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: connector
```
You can create the Kafka Connect image using the Dockerfile below:
```Dockerfile
FROM bitnami/kafka:latest
# Download MongoDB Connector for Apache Kafka https://www.confluent.io/hub/mongodb/kafka-connect-mongodb
RUN mkdir -p /opt/bitnami/kafka/plugins && \
cd /opt/bitnami/kafka/plugins && \
curl --remote-name --location --silent "https://search.maven.org/remotecontent?filepath=org/mongodb/kafka/mongo-kafka-connect/1.2.0/mongo-kafka-connect-1.2.0-all.jar"
CMD /opt/bitnami/kafka/bin/connect-standalone.sh /opt/bitnami/kafka/config/connect-standalone.properties /opt/bitnami/kafka/config/mongodb.properties
```
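Note that a standalone Connect worker only discovers the downloaded connector JAR if its properties file points at the plugin directory. A minimal sketch of what the worker configuration behind the `...` placeholders might contain (illustrative values only; adapt to your deployment):

```yaml
## Illustrative fragment of the ConfigMap above: a minimal standalone worker
## config. plugin.path must include the directory created in the Dockerfile,
## and offset.storage.file.filename is required in standalone mode.
## Replace localhost with the chart's headless service address as shown in
## the bootstrap.servers line of the earlier example.
connect-standalone.properties: |-
  bootstrap.servers=localhost:9092
  plugin.path=/opt/bitnami/kafka/plugins
  key.converter=org.apache.kafka.connect.json.JsonConverter
  value.converter=org.apache.kafka.connect.json.JsonConverter
  offset.storage.file.filename=/tmp/connect.offsets
```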
## Persistence

The [Bitnami Kafka](https://github.com/bitnami/bitnami-docker-kafka) image stores the Kafka data at the `/bitnami/kafka` path of the container.


@@ -0,0 +1,5 @@
{{- if .Values.extraDeploy }}
apiVersion: v1
kind: List
items: {{- include "kafka.tplValue" (dict "value" .Values.extraDeploy "context" $) | nindent 2 }}
{{- end }}
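For illustration (hypothetical object names), a value such as the following would be rendered through the `kafka.tplValue` helper and emitted as an item of the `v1` `List` defined by this template:

```yaml
## Hypothetical example: a single ConfigMap defined under extraDeploy.
## Template expressions inside the string are resolved because the value
## is evaluated as a template by the kafka.tplValue helper.
extraDeploy: |-
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ include "kafka.fullname" . }}-extra
      labels: {{- include "kafka.labels" . | nindent 6 }}
    data:
      my-key: "my-value"
```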


@@ -189,6 +189,8 @@ spec:
key: client-password
{{- end }}
{{- if .Values.auth.jaas.zookeeperUser }}
- name: KAFKA_ZOOKEEPER_PROTOCOL
  value: "SASL"
- name: KAFKA_ZOOKEEPER_USER
  value: {{ .Values.auth.jaas.zookeeperUser | quote }}
- name: KAFKA_ZOOKEEPER_PASSWORD
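For context, the new `KAFKA_ZOOKEEPER_PROTOCOL` variable is only rendered when a ZooKeeper JAAS user is configured, e.g. with values such as the following (illustrative credentials; the password key name is an assumption based on the chart's `auth.jaas` section):

```yaml
auth:
  jaas:
    ## Illustrative credentials: when zookeeperUser is set, the statefulset
    ## now also exports KAFKA_ZOOKEEPER_PROTOCOL=SASL so brokers authenticate
    ## against ZooKeeper.
    zookeeperUser: kafka
    zookeeperPassword: kafka-password
```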


@@ -209,6 +209,10 @@ extraEnvVars: []
extraVolumes: []
extraVolumeMounts: []

## Extra objects to deploy (value evaluated as a template)
##
extraDeploy: []

## Authentication parameters
## https://github.com/bitnami/bitnami-docker-kafka#security
##


@@ -209,6 +209,10 @@ extraEnvVars: []
extraVolumes: []
extraVolumeMounts: []

## Extra objects to deploy (value evaluated as a template)
##
extraDeploy: []

## Authentication parameters
## https://github.com/bitnami/bitnami-docker-kafka#security
##