[bitnami/kafka] Restore value brokerRackAssignment (#26296)

* [bitnami/kafka] Restore value brokerRackAssignment

Signed-off-by: Miguel Ruiz <miruiz@vmware.com>

* Update CHANGELOG.md

Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>

* Update README.md with readme-generator-for-helm

Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>

* Update CHANGELOG.md

Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>

---------

Signed-off-by: Miguel Ruiz <miruiz@vmware.com>
Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>
Co-authored-by: Bitnami Containers <bitnami-bot@vmware.com>
Miguel Ruiz authored on 2024-05-27 11:47:36 +02:00; committed by GitHub
parent 1edae697a3
commit 62968c1bcf
5 changed files with 23 additions and 6 deletions

View File

@@ -1,8 +1,14 @@
 # Changelog
-## 29.0.3 (2024-05-24)
+## 29.1.0 (2024-05-27)
+* [bitnami/kafka] Fix linter rules after deprecating Kafka Exporter ([#26411](https://github.com/bitnami/charts/pull/26411))
+* [bitnami/kafka] Restore value brokerRackAssignment ([#26296](https://github.com/bitnami/charts/pull/26296))
+## <small>29.0.3 (2024-05-24)</small>
 * [bitnami/kafka] Deprecate Kafka Exporter (#26395) ([bf9a653](https://github.com/bitnami/charts/commit/bf9a6535fabdd4c0ad3210920cdd6c4963c5511c)), closes [#26395](https://github.com/bitnami/charts/issues/26395)
 * [bitnami/kafka] Fix linter rules after deprecating Kafka Exporter (#26411) ([69856e9](https://github.com/bitnami/charts/commit/69856e985f1325b3e72cd126b6990647d35f1cbb)), closes [#26411](https://github.com/bitnami/charts/issues/26411)
 * [bitnami/kafka] Release 28.3.1 (#26403) ([0428ec7](https://github.com/bitnami/charts/commit/0428ec724a1e6b139b12e8c3a6ab489a6459660c)), closes [#26403](https://github.com/bitnami/charts/issues/26403)
 ## 28.3.0 (2024-05-21)

View File

@@ -40,4 +40,4 @@ maintainers:
 name: kafka
 sources:
 - https://github.com/bitnami/charts/tree/main/bitnami/kafka
-version: 29.0.3
+version: 29.1.0

View File

@@ -452,6 +452,7 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `log4j` | An optional log4j.properties file to overwrite the default of the Kafka brokers | `""` |
 | `existingLog4jConfigMap` | The name of an existing ConfigMap containing a log4j.properties file | `""` |
 | `heapOpts` | Kafka Java Heap size | `-Xmx1024m -Xms1024m` |
+| `brokerRackAssignment` | Set Broker Assignment for multi tenant environment Allowed values: `aws-az` | `""` |
 | `interBrokerProtocolVersion` | Override the setting 'inter.broker.protocol.version' during the ZK migration. | `""` |
 | `listeners.client.name` | Name for the Kafka client listener | `CLIENT` |
 | `listeners.client.containerPort` | Port for the Kafka client listener | `9092` |
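The restored `brokerRackAssignment` parameter documented above can be set at install time. A usage sketch (the release name `my-kafka` is a placeholder, and this assumes a cluster whose broker pods can reach the EC2 instance metadata endpoint — neither is part of this commit):

```
# Hypothetical install enabling AWS AZ-aware rack assignment.
# 'my-kafka' is an assumed release name, not from this commit.
helm install my-kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --set brokerRackAssignment=aws-az
```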

View File

@@ -379,6 +379,12 @@ data:
{{- if and .Values.tls.zookeeper.enabled .Values.tls.zookeeper.existingSecret }}
configure_zookeeper_tls
{{- end }}
{{- if eq .Values.brokerRackAssignment "aws-az" }}
# Broker rack awareness
echo "Obtaining broker.rack for aws-az rack assignment"
export BROKER_RACK=$(curl "http://169.254.169.254/latest/meta-data/placement/availability-zone-id")
kafka_conf_set "$KAFKA_CONFIG_FILE" "broker.rack" "$BROKER_RACK"
{{- end }}
if [ -f /secret-config/server-secret.properties ]; then
append_file_to_kafka_conf /secret-config/server-secret.properties $KAFKA_CONFIG_FILE
fi

View File

@@ -140,6 +140,10 @@ existingLog4jConfigMap: ""
## @param heapOpts Kafka Java Heap size
##
heapOpts: -Xmx1024m -Xms1024m
## @param brokerRackAssignment Set Broker Assignment for multi tenant environment Allowed values: `aws-az`
## ref: https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica
##
brokerRackAssignment: ""
## @param interBrokerProtocolVersion Override the setting 'inter.broker.protocol.version' during the ZK migration.
## Ref. https://docs.confluent.io/platform/current/installation/migrate-zk-kraft.html
##
@@ -265,9 +269,9 @@ sasl:
 ## @param sasl.existingSecret Name of the existing secret containing credentials for clientUsers, interBrokerUser, controllerUser and zookeeperUser
 ## Create this secret running the command below where SECRET_NAME is the name of the secret you want to create:
 ## kubectl create secret generic SECRET_NAME --from-literal=client-passwords=CLIENT_PASSWORD1,CLIENT_PASSWORD2 --from-literal=inter-broker-password=INTER_BROKER_PASSWORD --from-literal=inter-broker-client-secret=INTER_BROKER_CLIENT_SECRET --from-literal=controller-password=CONTROLLER_PASSWORD --from-literal=controller-client-secret=CONTROLLER_CLIENT_SECRET --from-literal=zookeeper-password=ZOOKEEPER_PASSWORD
-## The client secrets are only required when using oauthbearer as sasl mechanism.
-##
+## The client secrets are only required when using oauthbearer as sasl mechanism.
+## Client, interbroker and controller passwords are only required if the sasl mechanism includes something other than oauthbearer.
+##
 existingSecret: ""
 ## @section Kafka TLS parameters
 ## Kafka TLS settings, required if SSL or SASL_SSL listeners are configured
@@ -1530,7 +1534,7 @@ externalAccess:
 ## @param externalAccess.autoDiscovery.containerSecurityContext.allowPrivilegeEscalation Set Kafka auto-discovery containers' Security Context allowPrivilegeEscalation
 ## @param externalAccess.autoDiscovery.containerSecurityContext.readOnlyRootFilesystem Set Kafka auto-discovery containers' Security Context readOnlyRootFilesystem
 ## @param externalAccess.autoDiscovery.containerSecurityContext.capabilities.drop Set Kafka auto-discovery containers' Security Context capabilities to be dropped
-## @param externalAccess.autoDiscovery.containerSecurityContext.seccompProfile.type Set Kafka auto-discovery seccomp profile type
+## @param externalAccess.autoDiscovery.containerSecurityContext.seccompProfile.type Set Kafka auto-discovery seccomp profile type
 ## e.g:
 ## containerSecurityContext:
 ##   enabled: true