[bitnami/*] Adapt values.yaml of Spark, Spring Cloud Data Flow and SuiteCRM charts (#6951)

* spark: Adapt values.yaml to readme-generator

* spark: Organize values.yaml in sections

* spark: Minor fix

* spark: Generate README

* spring-cloud-dataflow: Adapt values.yaml to readme-generator

* spring-cloud-dataflow: Organize values.yaml in sections

* spring-cloud-dataflow: Minor fixes

* spring-cloud-dataflow: Generate README

* suitecrm: Adapt values.yaml to readme-generator

* suitecrm: Organize values.yaml in sections

* suitecrm: Generate README

* spring-cloud-dataflow: Fix linting issues

* Bump charts patch versions

* Add values.yaml paths to GitHub Actions workflow
Author: Pablo Galego
Date: 2021-07-15 12:39:28 +02:00
Committed by: GitHub
Parent: 97543ce596
Commit: 647d72d3b1
10 changed files with 1995 additions and 1728 deletions


@@ -46,6 +46,9 @@ on:
       - 'bitnami/prestashop/values.yaml'
       - 'bitnami/pytorch/values.yaml'
       - 'bitnami/rabbitmq/values.yaml'
+      - 'bitnami/spark/values.yaml'
+      - 'bitnami/spring-cloud-dataflow/values.yaml'
+      - 'bitnami/suitecrm/values.yaml'
 jobs:
   generate-chart-readme:


@@ -22,4 +22,4 @@ name: spark
 sources:
   - https://github.com/bitnami/bitnami-docker-spark
   - https://spark.apache.org/
-version: 5.6.1
+version: 5.6.2


@@ -45,198 +45,196 @@ The command removes all the Kubernetes components associated with the chart and
## Parameters ## Parameters
The following tables lists the configurable parameters of the Apache Spark chart and their default values.
### Global parameters ### Global parameters
| Parameter | Description | Default | | Name | Description | Value |
|---------------------------|-------------------------------------------------|---------------------------------------------------------| | ------------------------- | ----------------------------------------------- | ----- |
| `global.imageRegistry` | Global Docker image registry | `nil` | | `global.imageRegistry` | Global Docker image registry | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | | `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
### Common parameters ### Common parameters
| Parameter | Description | Default | | Name | Description | Value |
|---------------------|-----------------------------------------------------------------------------------------------------------|--------------------------------| | ------------------ | -------------------------------------------------------------------------------------------- | ----- |
| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` | | `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` |
| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` | | `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `nil` |
| `commonLabels` | Labels to add to all deployed objects | `{}` | | `fullnameOverride` | String to fully override common.names.fullname template | `nil` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` | | `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` (evaluated as a template) |
### Spark parameters ### Spark parameters
| Parameter | Description | Default | | Name | Description | Value |
|---------------------|-----------------------------------------------------------------------------------------|---------------------------------------------------------| | ------------------- | ------------------------------------------------ | --------------------- |
| `image.registry` | spark image registry | `docker.io` | | `image.registry` | Spark image registry | `docker.io` |
| `image.repository` | spark Image name | `bitnami/spark` | | `image.repository` | Spark image repository | `bitnami/spark` |
| `image.tag` | spark Image tag | `{TAG_NAME}` | | `image.tag` | Spark image tag (immutable tags are recommended) | `3.1.2-debian-10-r18` |
| `image.pullPolicy` | spark image pull policy | `IfNotPresent` | | `image.pullPolicy` | Spark image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | | `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `hostNetwork` | Use Host-Network for the PODs (if true, also dnsPolicy: ClusterFirstWithHostNet is set) | `false` | | `image.debug` | Enable image debug mode | `false` |
| `hostNetwork` | Enable HOST Network | `false` |
### Spark master parameters ### Spark master parameters
| Parameter | Description | Default | | Name | Description | Value |
|---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------| | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------ |
| `master.debug` | Specify if debug values should be set on the master | `false` | | `master.configurationConfigMap` | Set a custom configuration by using an existing configMap with the configuration file. | `nil` |
| `master.webPort` | Specify the port where the web interface will listen on the master | `8080` | | `master.webPort` | Specify the port where the web interface will listen on the master | `8080` |
| `master.clusterPort` | Specify the port where the master listens to communicate with workers | `7077` | | `master.clusterPort` | Specify the port where the master listens to communicate with workers | `7077` |
| `master.hostAliases` | Add deployment host aliases | `[]` | | `master.hostAliases` | Deployment pod host aliases | `[]` |
| `master.daemonMemoryLimit` | Set the memory limit for the master daemon | No default | | `master.daemonMemoryLimit` | Set the memory limit for the master daemon | `nil` |
| `master.configOptions` | Optional configuration if the form `-Dx=y` | No default | | `master.configOptions` | Use a string to set the config options for in the form "-Dx=y" | `nil` |
| `master.securityContext.enabled` | Enable security context | `true` | | `master.extraEnvVars` | Extra environment variables to pass to the master container | `nil` |
| `master.securityContext.fsGroup` | Group ID for the container | `1001` | | `master.securityContext.enabled` | Enable security context | `true` |
| `master.securityContext.runAsUser` | User ID for the container | `1001` | | `master.securityContext.fsGroup` | Group ID for the container | `1001` |
| `master.securityContext.runAsGroup` | Group ID for the container | `0` | | `master.securityContext.runAsUser` | User ID for the container | `1001` |
| `master.securityContext.seLinuxOptions` | SELinux options for the container | `{}` | | `master.securityContext.runAsGroup` | Group ID for the container | `0` |
| `master.podAnnotations` | Annotations for pods in StatefulSet | `{}` (The value is evaluated as a template) | | `master.securityContext.seLinuxOptions` | SELinux options for the container | `{}` |
| `master.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` (The value is evaluated as a template) | | `master.podAnnotations` | Annotations for pods in StatefulSet | `{}` |
| `master.podAffinityPreset` | Spark master pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` | | `master.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` |
| `master.podAntiAffinityPreset` | Spark master pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `soft` | | `master.podAffinityPreset` | Spark master pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `master.nodeAffinityPreset.type` | Spark master node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` | | `master.podAntiAffinityPreset` | Spark master pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `master.nodeAffinityPreset.key` | Spark master node label key to match Ignored if `master.affinity` is set. | `""` | | `master.nodeAffinityPreset.type` | Spark master node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `master.nodeAffinityPreset.values` | Spark master node label values to match. Ignored if `master.affinity` is set. | `[]` | | `master.nodeAffinityPreset.key` | Spark master node label key to match Ignored if `master.affinity` is set. | `""` |
| `master.affinity` | Spark master affinity for pod assignment | `{}` (evaluated as a template) | | `master.nodeAffinityPreset.values` | Spark master node label values to match. Ignored if `master.affinity` is set. | `[]` |
| `master.nodeSelector` | Spark master node labels for pod assignment | `{}` (evaluated as a template) | | `master.affinity` | Spark master affinity for pod assignment | `{}` |
| `master.tolerations` | Spark master tolerations for pod assignment | `[]` (evaluated as a template) | | `master.nodeSelector` | Spark master node labels for pod assignment | `{}` |
| `master.resources` | CPU/Memory resource requests/limits | `{}` | | `master.tolerations` | Spark master tolerations for pod assignment | `[]` |
| `master.extraEnvVars` | Extra environment variables to pass to the master container | `{}` | | `master.resources.limits` | The resources limits for the container | `{}` |
| `master.extraVolumes` | Array of extra volumes to be added to the Spark master deployment (evaluated as template). Requires setting `master.extraVolumeMounts` | `nil` | | `master.resources.requests` | The requested resources for the container | `{}` |
| `master.extraVolumeMounts` | Array of extra volume mounts to be added to the Spark master deployment (evaluated as template). Normally used with `master.extraVolumes`. | `nil` | | `master.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `master.livenessProbe.enabled` | Turn on and off liveness probe | `true` | | `master.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `180` |
| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 10 | | `master.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe | 10 | | `master.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `master.livenessProbe.timeoutSeconds` | When the probe times out | 5 | | `master.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 2 | | `master.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | | `master.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `master.readinessProbe.enabled` | Turn on and off readiness probe | `true` | | `master.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
| `master.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 | | `master.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe | 10 | | `master.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `master.readinessProbe.timeoutSeconds` | When the probe times out | 5 | | `master.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | | `master.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | | `master.initContainers` | Add initContainers to the master pods. | `{}` |
### Spark worker parameters ### Spark worker parameters
| Parameter | Description | Default | | Name | Description | Value |
|---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------| | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------- |
| `worker.debug` | Specify if debug values should be set on workers | `false` | | `worker.configurationConfigMap` | Set a custom configuration by using an existing configMap with the configuration file. | `nil` |
| `worker.webPort` | Specify the port where the web interface will listen on the worker | `8080` | | `worker.webPort` | Specify the port where the web interface will listen on the worker | `8081` |
| `worker.clusterPort` | Specify the port where the worker listens to communicate with the master | `7077` | | `worker.clusterPort` | Specify the port where the worker listens to communicate with the master | `nil` |
| `worker.extraPorts` | Specify the port where the running jobs inside the workers listens, [ContainerPort spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#containerport-v1-core) | `[]` | | `worker.hostAliases` | Add deployment host aliases | `[]` |
| `worker.daemonMemoryLimit` | Set the memory limit for the worker daemon | No default | | `worker.extraPorts` | Specify the port where the running jobs inside the workers listens | `[]` |
| `worker.memoryLimit` | Set the maximum memory the worker is allowed to use | No default | | `worker.daemonMemoryLimit` | Set the memory limit for the worker daemon | `nil` |
| `worker.coreLimit` | Se the maximum number of cores that the worker can use | No default | | `worker.memoryLimit` | Set the maximum memory the worker is allowed to use | `nil` |
| `worker.dir` | Set a custom working directory for the application | No default | | `worker.coreLimit` | Se the maximum number of cores that the worker can use | `nil` |
| `worker.hostAliases` | Add deployment host aliases | `[]` | | `worker.dir` | Set a custom working directory for the application | `nil` |
| `worker.javaOptions` | Set options for the JVM in the form `-Dx=y` | No default | | `worker.javaOptions` | Set options for the JVM in the form `-Dx=y` | `nil` |
| `worker.configOptions` | Set extra options to configure the worker in the form `-Dx=y` | No default | | `worker.configOptions` | Set extra options to configure the worker in the form `-Dx=y` | `nil` |
| `worker.replicaCount` | Set the number of workers | `2` | | `worker.extraEnvVars` | An array to add extra env vars | `nil` |
| `worker.podManagementPolicy` | Statefulset Pod Management Policy Type | `OrderedReady` | | `worker.replicaCount` | Number of spark workers (will be the minimum number when autoscaling is enabled) | `2` |
| `worker.autoscaling.enabled` | Enable autoscaling depending on CPU | `false` | | `worker.podManagementPolicy` | Statefulset Pod Management Policy Type | `OrderedReady` |
| `worker.autoscaling.CpuTargetPercentage` | k8s hpa cpu targetPercentage | `50` | | `worker.securityContext.enabled` | Enable security context | `true` |
| `worker.autoscaling.replicasMax` | Maximum number of workers when using autoscaling | `5` | | `worker.securityContext.fsGroup` | Group ID for the container | `1001` |
| `worker.securityContext.enabled` | Enable security context | `true` | | `worker.securityContext.runAsUser` | User ID for the container | `1001` |
| `worker.securityContext.fsGroup` | Group ID for the container | `1001` | | `worker.securityContext.runAsGroup` | Group ID for the container | `0` |
| `worker.securityContext.runAsUser` | User ID for the container | `1001` | | `worker.securityContext.seLinuxOptions` | SELinux options for the container | `{}` |
| `worker.securityContext.runAsGroup` | Group ID for the container | `0` | | `worker.podAnnotations` | Annotations for pods in StatefulSet | `{}` |
| `worker.securityContext.seLinuxOptions` | SELinux options for the container | `{}` | | `worker.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` |
| `worker.podAnnotations` | Annotations for pods in StatefulSet | `{}` | | `worker.podAffinityPreset` | Spark worker pod affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `worker.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` (The value is evaluated as a template) | | `worker.podAntiAffinityPreset` | Spark worker pod anti-affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `worker.podAffinityPreset` | Spark worker pod affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` | | `worker.nodeAffinityPreset.type` | Spark worker node affinity preset type. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `worker.podAntiAffinityPreset` | Spark worker pod anti-affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `soft` | | `worker.nodeAffinityPreset.key` | Spark worker node label key to match Ignored if `worker.affinity` is set. | `""` |
| `worker.nodeAffinityPreset.type` | Spark worker node affinity preset type. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` | | `worker.nodeAffinityPreset.values` | Spark worker node label values to match. Ignored if `worker.affinity` is set. | `[]` |
| `worker.nodeAffinityPreset.key` | Spark worker node label key to match Ignored if `worker.affinity` is set. | `""` | | `worker.affinity` | Spark worker affinity for pod assignment | `{}` |
| `worker.nodeAffinityPreset.values` | Spark worker node label values to match. Ignored if `worker.affinity` is set. | `[]` | | `worker.nodeSelector` | Spark worker node labels for pod assignment | `{}` |
| `worker.affinity` | Spark worker affinity for pod assignment | `{}` (evaluated as a template) | | `worker.tolerations` | Spark worker tolerations for pod assignment | `[]` |
| `worker.nodeSelector` | Spark worker node labels for pod assignment | `{}` (evaluated as a template) | | `worker.resources.limits` | The resources limits for the container | `{}` |
| `worker.tolerations` | Spark worker tolerations for pod assignment | `[]` (evaluated as a template) | | `worker.resources.requests` | The requested resources for the container | `{}` |
| `worker.resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` | | `worker.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `worker.livenessProbe.enabled` | Turn on and off liveness probe | `true` | | `worker.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `180` |
| `worker.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 10 | | `worker.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `worker.livenessProbe.periodSeconds` | How often to perform the probe | 10 | | `worker.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `worker.livenessProbe.timeoutSeconds` | When the probe times out | 5 | | `worker.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 2 | | `worker.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `worker.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | | `worker.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `worker.readinessProbe.enabled` | Turn on and off readiness probe | `true` | | `worker.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
| `worker.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 | | `worker.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `worker.readinessProbe.periodSeconds` | How often to perform the probe | 10 | | `worker.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `worker.readinessProbe.timeoutSeconds` | When the probe times out | 5 | | `worker.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `worker.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | | `worker.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `worker.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | | `worker.initContainers` | Add initContainers to the master pods. | `{}` |
| `master.extraEnvVars` | Extra environment variables to pass to the worker container | `{}` | | `worker.autoscaling.enabled` | Enable replica autoscaling depending on CPU | `false` |
| `worker.extraVolumes` | Array of extra volumes to be added to the Spark worker deployment (evaluated as template). Requires setting `worker.extraVolumeMounts` | `nil` | | `worker.autoscaling.CpuTargetPercentage` | Kubernetes HPA CPU target percentage | `50` |
| `worker.extraVolumeMounts` | Array of extra volume mounts to be added to the Spark worker deployment (evaluated as template). Normally used with `worker.extraVolumes`. | `nil` | | `worker.autoscaling.replicasMax` | Maximum number of workers when using autoscaling | `5` |
### Security parameters ### Security parameters
| Parameter | Description | Default | | Name | Description | Value |
|--------------------------------------|-------------------------------------------------------------------------|------------| | ------------------------------------ | ----------------------------------------------------------------------------- | --------- |
| `security.passwordsSecretName` | Secret to use when using security configuration to set custom passwords | No default | | `security.passwordsSecretName` | Name of the secret that contains all the passwords | `nil` |
| `security.rpc.authenticationEnabled` | Enable the RPC authentication | `false` | | `security.rpc.authenticationEnabled` | Enable the RPC authentication | `false` |
| `security.rpc.encryptionEnabled` | Enable the encryption for RPC | `false` | | `security.rpc.encryptionEnabled` | Enable the encryption for RPC | `false` |
| `security.storageEncryptionEnabled` | Enable the encryption of the storage | `false` | | `security.storageEncryptionEnabled` | Enables local storage encryption | `false` |
| `security.ssl.enabled` | Enable the SSL configuration | `false` | | `security.certificatesSecretName` | Name of the secret that contains the certificates. | `nil` |
| `security.ssl.needClientAuth` | Enable the client authentication | `false` | | `security.ssl.enabled` | Enable the SSL configuration | `false` |
| `security.ssl.protocol` | Set the SSL protocol | `TLSv1.2` | | `security.ssl.needClientAuth` | Enable the client authentication | `false` |
| `security.ssl.existingSecret` | Set the name of the secret that contains the certificates | No default | | `security.ssl.protocol` | Set the SSL protocol | `TLSv1.2` |
| `security.ssl.keystorePassword` | Set the password of the JKS Keystore | No default | | `security.ssl.existingSecret` | Name of the existing secret containing the TLS certificates | `nil` |
| `security.ssl.existingSecret` | Set the password of the JKS Truststore | No default | | `security.ssl.autoGenerated` | Create self-signed TLS certificates. Currently only supports PEM certificates | `false` |
| `security.ssl.autoGenerated` | Generate automatically self-signed TLS certificates | `false` | | `security.ssl.keystorePassword` | Set the password of the JKS Keystore | `nil` |
| `security.ssl.resources.limits` | The resources limits for the TLS | `{}` | | `security.ssl.truststorePassword` | Truststore password. | `nil` |
| `security.ssl.resources.requests` | The requested resources for the TLS init | `{}` | | `security.ssl.resources.limits` | The resources limits for the container | `{}` |
| `security.ssl.resources.requests` | The requested resources for the container | `{}` |
### Exposure parameters
| Parameter | Description | Default | ### Traffic Exposure parameters
|----------------------------------|---------------------------------------------------------------|--------------------------------|
| `service.type` | Kubernetes Service type | `ClusterIP` | | Name | Description | Value |
| `service.webPort` | Spark client port | `80` | | --------------------------- | ------------------------------------------------------------------------------------------------------ | ------------------------ |
| `service.clusterPort` | Spark cluster port | `7077` | | `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.nodePort` | Port to bind to for NodePort service type (client port) | `nil` | | `service.clusterPort` | Spark cluster port | `7077` |
| `service.nodePorts.cluster` | Kubernetes cluster node port | `""` | | `service.webPort` | Spark client port | `80` |
| `service.nodePorts.web` | Kubernetes web node port | `""` | | `service.nodePorts.cluster` | Kubernetes cluster node port | `""` |
| `service.annotations` | Annotations for spark service | {} | | `service.nodePorts.web` | Kubernetes web node port | `""` |
| `service.loadBalancerIP` | loadBalancerIP if spark service type is `LoadBalancer` | `nil` | | `service.loadBalancerIP` | Load balancer IP if spark service type is `LoadBalancer` | `nil` |
| `ingress.enabled` | Enable ingress controller resource | `false` | | `service.annotations` | Annotations for spark service | `{}` |
| `ingress.certManager` | Add annotations for cert-manager | `false` | | `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.hostname` | Default host for the ingress resource | `spark.local` | | `ingress.certManager` | Set this to true in order to add the corresponding annotations for cert-manager | `false` |
| `ingress.path` | Default path for the ingress resource | `/` | | `ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `ingress.tls` | Create TLS Secret | `false` | | `ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `nil` |
| `ingress.annotations` | Ingress annotations | `[]` (evaluated as a template) | | `ingress.hostname` | Default host for the ingress resource | `spark.local` |
| `ingress.extraHosts[0].name` | Additional hostnames to be covered | `nil` | | `ingress.path` | The Path to Spark. You may need to set this to '/*' in order to use this with ALB ingress controllers. | `ImplementationSpecific` |
| `ingress.extraHosts[0].path` | Additional hostnames to be covered | `nil` | | `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.extraPaths` | Additional arbitrary path/backend objects | `nil` | | `ingress.tls` | Enable TLS configuration for the hostname defined at ingress.hostname parameter | `false` |
| `ingress.extraTls[0].hosts[0]` | TLS configuration for additional hostnames to be covered | `nil` | | `ingress.extraHosts` | The list of additional hostnames to be covered with this ingress record. | `[]` |
| `ingress.extraTls[0].secretName` | TLS configuration for additional hostnames to be covered | `nil` | | `ingress.extraPaths` | Any additional arbitrary paths that may need to be added to the ingress under the main host. | `[]` |
| `ingress.secrets[0].name` | TLS Secret Name | `nil` | | `ingress.extraTls` | The tls configuration for additional hostnames to be covered with this ingress record. | `[]` |
| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` | | `ingress.secrets` | If you're providing your own certificates, please use this to add the certificates as secrets | `[]` |
| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `` |
| `ingress.path` | Ingress path | `/` |
| `ingress.pathType` | Ingress path type | `ImplementationSpecific` |
### Metrics parameters ### Metrics parameters
| Parameter | Description | Default | | Name | Description | Value |
|--------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------| | ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `metrics.enabled` | Start a side-car prometheus exporter | `false` | | `metrics.enabled` | Start a side-car prometheus exporter | `false` |
| `metrics.masterAnnotations` | Annotations for enabling prometheus to access the metrics endpoint of the master nodes | `{prometheus.io/scrape: "true", prometheus.io/path: "/metrics/", prometheus.io/port: "8080"}` | | `metrics.masterAnnotations` | Annotations for the Prometheus metrics on master nodes | `{}` |
| `metrics.workerAnnotations` | Annotations for enabling prometheus to access the metrics endpoint of the worker nodes | `{prometheus.io/scrape: "true", prometheus.io/path: "/metrics/", prometheus.io/port: "8081"}` | | `metrics.workerAnnotations` | Annotations for the Prometheus metrics on worker nodes | `{}` |
| `metrics.resources.limits` | The resources limits for the metrics exporter container | `{}` | | `metrics.podMonitor.enabled` | If the operator is installed in your cluster, set to true to create a PodMonitor Resource for scraping metrics using PrometheusOperator | `false` |
| `metrics.resources.requests` | The requested resources for the metrics exporter container | `{}` | | `metrics.podMonitor.extraMetricsEndpoints` | Add metrics endpoints for monitoring the jobs running in the worker nodes | `[]` |
| `metrics.podMonitor.enabled` | Create PodMonitor Resource for scraping metrics using PrometheusOperator | `false` | | `metrics.podMonitor.namespace` | Specify the namespace in which the podMonitor resource will be created | `""` |
| `metrics.podMonitor.extraMetricsEndpoints` | Add metrics endpoints for monitoring the jobs running in the worker nodes, [MetricsEndpoint](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmetricsendpoint) | `[]` | | `metrics.podMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` |
| `metrics.podMonitor.namespace` | Namespace where podmonitor resource should be created | `nil` | | `metrics.podMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `nil` |
| `metrics.podMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` | | `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitors will be discovered by Prometheus | `{}` |
| `metrics.podMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `nil` | | `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus | `false` |
| `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitors will be discovered by Prometheus | `{}` | | `metrics.prometheusRule.namespace` | Namespace where the prometheusRules resource should be created | `""` |
| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus | `false` | | `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | | `metrics.prometheusRule.rules` | Custom Prometheus [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) | `[]` |
| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as spark |
| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. | `[]` |
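The metrics values above combine as in the following sketch. The alert rule is purely illustrative and not a chart default, and the metric name is an assumption to be verified against the exporter's `/metrics` output:

```yaml
metrics:
  enabled: true
  podMonitor:
    enabled: true
    interval: 30s
    additionalLabels:
      release: prometheus          # label your Prometheus Operator selects on (assumption)
  prometheusRule:
    enabled: true
    additionalLabels:
      release: prometheus
    rules:
      # Illustrative alert. The metric name is an assumption; check it against
      # the exporter's output before relying on it.
      - alert: SparkWorkersDown
        expr: metrics_master_workers_Value == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "No Spark workers registered with the master"
```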
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
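`helm install my-release --set worker.replicaCount=3 bitnami/spark` overrides a single value on the command line. Equivalently, the overrides can be kept in a values file (the file name below is hypothetical) and passed with `-f`:

```yaml
# custom-values.yaml (hypothetical file name)
worker:
  replicaCount: 3
master:
  webPort: 8080
```

Then install with `helm install my-release -f custom-values.yaml bitnami/spark`.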


@@ -1,14 +1,44 @@
## @section Global parameters
## Global Docker image parameters ## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value ## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets ## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## ##
# global: global:
# imageRegistry: myRegistryName imageRegistry:
# imagePullSecrets: ## E.g.
# - myRegistryKeySecretName ## imagePullSecrets:
## - myRegistryKeySecretName
##
imagePullSecrets: []
## @section Common parameters
## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion:
## @param nameOverride String to partially override common.names.fullname template (will maintain the release name)
##
nameOverride:
## @param fullnameOverride String to fully override common.names.fullname template
##
fullnameOverride:
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
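As a sketch, `extraDeploy` accepts a list of complete Kubernetes manifests that are rendered and deployed together with the release; the ConfigMap below is purely illustrative:

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: spark-extra-config   # hypothetical name
    data:
      note: "rendered and deployed together with the chart"
```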
## @section Spark parameters
## Bitnami Spark image version ## Bitnami Spark image version
## ref: https://hub.docker.com/r/bitnami/spark/tags/ ## ref: https://hub.docker.com/r/bitnami/spark/tags/
## @param image.registry Spark image registry
## @param image.repository Spark image repository
## @param image.tag Spark image tag (immutable tags are recommended)
## @param image.pullPolicy Spark image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
## @param image.debug Enable image debug mode
## ##
image: image:
registry: docker.io registry: docker.io
@@ -19,68 +49,61 @@ image:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
## ##
pullPolicy: IfNotPresent pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Pull secret for this image ## Secrets must be manually created in the namespace.
# pullSecrets: ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
# - myRegistryKeySecretName ## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Set to true if you would like to see extra information on logs ## Set to true if you would like to see extra information on logs
## It turns BASH and/or NAMI debugging in the image ## It turns BASH and/or NAMI debugging in the image
## ##
debug: false debug: false
## @param hostNetwork Enable HOST Network
## Enable HOST Network ## If hostNetwork is true, then dnsPolicy is set to ClusterFirstWithHostNet
## If hostNetwork true -> dnsPolicy is set to ClusterFirstWithHostNet
## ##
hostNetwork: false hostNetwork: false
## Force target Kubernetes version (using Helm capabilites if not set) ## @section Spark master parameters
##
kubeVersion:
## String to partially override common.names.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override common.names.fullname template
##
# fullnameOverride:
## Spark master specific configuration ## Spark master specific configuration
## ##
master: master:
## Set a custom configuration by using an existing configMap with the configuration file. ## @param master.configurationConfigMap Set a custom configuration by using an existing configMap with the configuration file.
## ##
# configurationConfigMap: configurationConfigMap:
## @param master.webPort Specify the port where the web interface will listen on the master
## Spark container ports
## ##
webPort: 8080 webPort: 8080
## @param master.clusterPort Specify the port where the master listens to communicate with workers
##
clusterPort: 7077 clusterPort: 7077
## @param master.hostAliases Deployment pod host aliases
## Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
## ##
hostAliases: [] hostAliases: []
## @param master.daemonMemoryLimit Set the memory limit for the master daemon
## Set the master daemon memory limit.
## ##
# daemonMemoryLimit: daemonMemoryLimit:
## @param master.configOptions Use a string to set the config options in the form "-Dx=y"
## Use a string to set the config options for in the form "-Dx=y"
## ##
# configOptions: configOptions:
## @param master.extraEnvVars Extra environment variables to pass to the master container
## An array to add extra env vars
## For example: ## For example:
## extraEnvVars: ## extraEnvVars:
## - name: SPARK_DAEMON_JAVA_OPTS ## - name: SPARK_DAEMON_JAVA_OPTS
## value: -Dx=y ## value: -Dx=y
## ##
# extraEnvVars: extraEnvVars:
## Kubernetes Security Context ## Kubernetes Security Context
## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
## @param master.securityContext.enabled Enable security context
## @param master.securityContext.fsGroup Group ID for the container
## @param master.securityContext.runAsUser User ID for the container
## @param master.securityContext.runAsGroup Group ID for the container
## @param master.securityContext.seLinuxOptions SELinux options for the container
## ##
securityContext: securityContext:
enabled: true enabled: true
@@ -88,84 +111,80 @@ master:
runAsUser: 1001 runAsUser: 1001
runAsGroup: 0 runAsGroup: 0
seLinuxOptions: {} seLinuxOptions: {}
## @param master.podAnnotations Annotations for pods in StatefulSet
## Annotations to add to the statefulset
##
## ##
podAnnotations: {} podAnnotations: {}
## @param master.extraPodLabels Extra labels for pods in StatefulSet
## Labes to add to the statefulset
##
## ##
extraPodLabels: {} extraPodLabels: {}
## @param master.podAffinityPreset Spark master pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`
## Spark master pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
## ##
podAffinityPreset: '' podAffinityPreset: ''
## @param master.podAntiAffinityPreset Spark master pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`
## Spark master pod anti-affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
## ##
podAntiAffinityPreset: soft podAntiAffinityPreset: soft
## Spark master node affinity preset ## Spark master node affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
## ##
nodeAffinityPreset: nodeAffinityPreset:
## Node affinity type ## @param master.nodeAffinityPreset.type Spark master node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`
## Allowed values: soft, hard
## ##
type: '' type: ''
## Node label key to match ## @param master.nodeAffinityPreset.key Spark master node label key to match. Ignored if `master.affinity` is set.
## E.g. ## E.g.
## key: "kubernetes.io/e2e-az-name" ## key: "kubernetes.io/e2e-az-name"
## ##
key: '' key: ''
## Node label values to match ## @param master.nodeAffinityPreset.values Spark master node label values to match. Ignored if `master.affinity` is set.
## E.g. ## E.g.
## values: ## values:
## - e2e-az1 ## - e2e-az1
## - e2e-az2 ## - e2e-az2
## ##
values: [] values: []
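Putting the three `nodeAffinityPreset` fields together, a hard node-affinity pin for the master might look like the sketch below (the label key and values are illustrative):

```yaml
master:
  nodeAffinityPreset:
    type: hard                         # require, rather than prefer, matching nodes
    key: topology.kubernetes.io/zone   # illustrative label key
    values:
      - us-east-1a
      - us-east-1b
```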
## @param master.affinity Spark master affinity for pod assignment
## Affinity for Spark master pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: master.podAffinityPreset, master.podAntiAffinityPreset, and master.nodeAffinityPreset will be ignored when it's set ## Note: master.podAffinityPreset, master.podAntiAffinityPreset, and master.nodeAffinityPreset will be ignored when it's set
## ##
affinity: {} affinity: {}
## @param master.nodeSelector Spark master node labels for pod assignment
## Node labels for Spark master pods assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/ ## ref: https://kubernetes.io/docs/user-guide/node-selection/
## ##
nodeSelector: {} nodeSelector: {}
## @param master.tolerations Spark master tolerations for pod assignment
## Tolerations for Spark master pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
## ##
tolerations: [] tolerations: []
## Container resource requests and limits
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
## @param master.resources.limits The resources limits for the container
## @param master.resources.requests The requested resources for the container
## ##
resources: resources:
# We usually recommend not to specify default resources and to leave this as a conscious ## Example:
# choice for the user. This also increases chances charts run on environments with little ## limits:
# resources, such as Minikube. If you do want to specify resources, uncomment the following ## cpu: 250m
# lines, adjust them as necessary, and remove the curly braces after 'resources:'. ## memory: 256Mi
limits: {} limits: {}
# cpu: 250m ## Examples:
# memory: 256Mi ## requests:
## cpu: 250m
## memory: 256Mi
requests: {} requests: {}
# cpu: 250m ## Configure extra options for liveness probe
# memory: 256Mi ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param master.livenessProbe.enabled Enable livenessProbe
## Configure liveness and readiness probes ## @param master.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) ## @param master.livenessProbe.periodSeconds Period seconds for livenessProbe
## @param master.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param master.livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param master.livenessProbe.successThreshold Success threshold for livenessProbe
## ##
livenessProbe: livenessProbe:
enabled: true enabled: true
@@ -174,6 +193,15 @@ master:
timeoutSeconds: 5 timeoutSeconds: 5
failureThreshold: 6 failureThreshold: 6
successThreshold: 1 successThreshold: 1
## Configure extra options for readiness probe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param master.readinessProbe.enabled Enable readinessProbe
## @param master.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param master.readinessProbe.periodSeconds Period seconds for readinessProbe
## @param master.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param master.readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param master.readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe: readinessProbe:
enabled: true enabled: true
initialDelaySeconds: 30 initialDelaySeconds: 30
@@ -181,8 +209,7 @@ master:
timeoutSeconds: 5 timeoutSeconds: 5
failureThreshold: 6 failureThreshold: 6
successThreshold: 1 successThreshold: 1
## @param master.initContainers Add initContainers to the master pods.
## Add initContainers to the master pods.
## Example: ## Example:
## initContainers: ## initContainers:
## - name: your-image-name ## - name: your-image-name
@@ -194,73 +221,70 @@ master:
## ##
initContainers: {} initContainers: {}
## @section Spark worker parameters
## Spark worker specific configuration ## Spark worker specific configuration
## ##
worker: worker:
## Set a custom configuration by using an existing configMap with the configuration file. ## @param worker.configurationConfigMap Set a custom configuration by using an existing configMap with the configuration file.
## ##
# configurationConfigMap: configurationConfigMap:
## @param worker.webPort Specify the port where the web interface will listen on the worker
## Spark container ports
## ##
webPort: 8081 webPort: 8081
# clusterPort: ## @param worker.clusterPort Specify the port where the worker listens to communicate with the master
##
## Deployment pod host aliases clusterPort:
## @param worker.hostAliases Add deployment host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
## ##
hostAliases: [] hostAliases: []
## @param worker.extraPorts Specify the ports where the jobs running inside the worker nodes listen
## Add ports for exposing jobs running inside the worker nodes
## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#containerport-v1-core ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#containerport-v1-core
## e.g:
## - name: myapp
## containerPort: 8000
## protocol: TCP
## ##
extraPorts: [] extraPorts: []
# - name: myapp ## @param worker.daemonMemoryLimit Set the memory limit for the worker daemon
# containerPort: 8000
# protocol: TCP
## Set the daemonMemoryLimit as the daemon max memory
## ##
# daemonMemoryLimit: daemonMemoryLimit:
## @param worker.memoryLimit Set the maximum memory the worker is allowed to use
## Set the worker memory limit
## ##
# memoryLimit: memoryLimit:
## @param worker.coreLimit Set the maximum number of cores that the worker can use
## Set the maximum number of cores
## ##
# coreLimit: coreLimit:
## @param worker.dir Set a custom working directory for the application
## Working directory for the application
## ##
# dir: dir:
## @param worker.javaOptions Set options for the JVM in the form `-Dx=y`
## Options for the JVM as "-Dx=y"
## ##
# javaOptions: javaOptions:
## @param worker.configOptions Set extra options to configure the worker in the form `-Dx=y`
## Configuration options in the form "-Dx=y"
## ##
# configOptions: configOptions:
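The worker sizing values above can be combined as in this sketch; every value shown is illustrative and should be adjusted to the cluster at hand:

```yaml
worker:
  daemonMemoryLimit: 512m    # memory for the worker daemon process itself
  memoryLimit: 2g            # total memory the worker may offer to executors
  coreLimit: 2               # total cores the worker may offer
  dir: /tmp/spark-work       # illustrative scratch directory
  javaOptions: -Dspark.worker.cleanup.enabled=true   # illustrative JVM flag
```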
## @param worker.extraEnvVars An array to add extra env vars
## An array to add extra env vars
## For example: ## For example:
## extraEnvVars: ## extraEnvVars:
## - name: SPARK_DAEMON_JAVA_OPTS ## - name: SPARK_DAEMON_JAVA_OPTS
## value: -Dx=y ## value: -Dx=y
# extraEnvVars: extraEnvVars:
## @param worker.replicaCount Number of spark workers (will be the minimum number when autoscaling is enabled)
## Number of spark workers (will be the min number when autoscaling is enabled)
## ##
replicaCount: 2 replicaCount: 2
## @param worker.podManagementPolicy Statefulset Pod Management Policy Type
## Pod management policy
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
## ##
podManagementPolicy: OrderedReady podManagementPolicy: OrderedReady
## Kubernetes Security Context ## Kubernetes Security Context
## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
## @param worker.securityContext.enabled Enable security context
## @param worker.securityContext.fsGroup Group ID for the container
## @param worker.securityContext.runAsUser User ID for the container
## @param worker.securityContext.runAsGroup Group ID for the container
## @param worker.securityContext.seLinuxOptions SELinux options for the container
## ##
securityContext: securityContext:
enabled: true enabled: true
@@ -268,84 +292,80 @@ worker:
runAsUser: 1001 runAsUser: 1001
runAsGroup: 0 runAsGroup: 0
seLinuxOptions: {} seLinuxOptions: {}
## @param worker.podAnnotations Annotations for pods in StatefulSet
## Annotations to add to the statefulset
##
## ##
podAnnotations: {} podAnnotations: {}
## @param worker.extraPodLabels Extra labels for pods in StatefulSet
## Labes to add to the statefulset
##
## ##
extraPodLabels: {} extraPodLabels: {}
## @param worker.podAffinityPreset Spark worker pod affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard`
## Spark worker pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
## ##
podAffinityPreset: '' podAffinityPreset: ''
## @param worker.podAntiAffinityPreset Spark worker pod anti-affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard`
## Spark worker pod anti-affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
## ##
podAntiAffinityPreset: soft podAntiAffinityPreset: soft
## Spark worker node affinity preset ## Spark worker node affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
## ##
nodeAffinityPreset: nodeAffinityPreset:
## Node affinity type ## @param worker.nodeAffinityPreset.type Spark worker node affinity preset type. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard`
## Allowed values: soft, hard
## ##
type: '' type: ''
## Node label key to match ## @param worker.nodeAffinityPreset.key Spark worker node label key to match. Ignored if `worker.affinity` is set.
## E.g. ## E.g.
## key: "kubernetes.io/e2e-az-name" ## key: "kubernetes.io/e2e-az-name"
## ##
key: '' key: ''
## Node label values to match ## @param worker.nodeAffinityPreset.values Spark worker node label values to match. Ignored if `worker.affinity` is set.
## E.g. ## E.g.
## values: ## values:
## - e2e-az1 ## - e2e-az1
## - e2e-az2 ## - e2e-az2
## ##
values: [] values: []
## @param worker.affinity Spark worker affinity for pod assignment
## Affinity for Spark worker pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: worker.podAffinityPreset, worker.podAntiAffinityPreset, and worker.nodeAffinityPreset will be ignored when it's set ## Note: worker.podAffinityPreset, worker.podAntiAffinityPreset, and worker.nodeAffinityPreset will be ignored when it's set
## ##
affinity: {} affinity: {}
## @param worker.nodeSelector Spark worker node labels for pod assignment
## Node labels for Spark worker pods assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/ ## ref: https://kubernetes.io/docs/user-guide/node-selection/
## ##
nodeSelector: {} nodeSelector: {}
## @param worker.tolerations Spark worker tolerations for pod assignment
## Tolerations for Spark master worker assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
## ##
tolerations: [] tolerations: []
## Container resource requests and limits
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
## @param worker.resources.limits The resources limits for the container
## @param worker.resources.requests The requested resources for the container
## ##
resources: resources:
# We usually recommend not to specify default resources and to leave this as a conscious ## Example:
# choice for the user. This also increases chances charts run on environments with little ## limits:
# resources, such as Minikube. If you do want to specify resources, uncomment the following ## cpu: 250m
# lines, adjust them as necessary, and remove the curly braces after 'resources:'. ## memory: 256Mi
limits: {} limits: {}
# cpu: 250m ## Examples:
# memory: 256Mi ## requests:
## cpu: 250m
## memory: 256Mi
requests: {} requests: {}
# cpu: 250m ## Configure extra options for liveness probe
# memory: 256Mi ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param worker.livenessProbe.enabled Enable livenessProbe
## Configure liveness and readiness probes ## @param worker.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) ## @param worker.livenessProbe.periodSeconds Period seconds for livenessProbe
## @param worker.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param worker.livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param worker.livenessProbe.successThreshold Success threshold for livenessProbe
## ##
livenessProbe: livenessProbe:
enabled: true enabled: true
@@ -354,6 +374,15 @@ worker:
timeoutSeconds: 5 timeoutSeconds: 5
failureThreshold: 6 failureThreshold: 6
successThreshold: 1 successThreshold: 1
## Configure extra options for readiness probe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param worker.readinessProbe.enabled Enable readinessProbe
## @param worker.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param worker.readinessProbe.periodSeconds Period seconds for readinessProbe
## @param worker.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param worker.readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param worker.readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe: readinessProbe:
enabled: true enabled: true
initialDelaySeconds: 30 initialDelaySeconds: 30
@@ -361,8 +390,7 @@ worker:
timeoutSeconds: 5 timeoutSeconds: 5
failureThreshold: 6 failureThreshold: 6
successThreshold: 1 successThreshold: 1
## @param worker.initContainers Add initContainers to the worker pods.
## Add initContainers to the master pods.
## Example: ## Example:
## initContainers: ## initContainers:
## - name: your-image-name ## - name: your-image-name
@@ -373,7 +401,6 @@ worker:
## containerPort: 1234 ## containerPort: 1234
## ##
initContainers: {} initContainers: {}
## Array to add extra volumes ## Array to add extra volumes
## ##
## extraVolumes: ## extraVolumes:
@@ -383,156 +410,162 @@ worker:
## Autoscaling parameters ## Autoscaling parameters
## ##
autoscaling: autoscaling:
## Enable replica autoscaling depending on CPU ## @param worker.autoscaling.enabled Enable replica autoscaling depending on CPU
## ##
enabled: false enabled: false
## @param worker.autoscaling.CpuTargetPercentage Kubernetes HPA CPU target percentage
##
CpuTargetPercentage: 50 CpuTargetPercentage: 50
## Max number of workers when using autoscaling ## @param worker.autoscaling.replicasMax Maximum number of workers when using autoscaling
## ##
replicasMax: 5 replicasMax: 5
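For instance, to let the chart create an HPA that scales the workers between `replicaCount` and `replicasMax` on CPU usage (a sketch; the HPA typically needs metrics-server in the cluster and a CPU request on the workers to compute utilization):

```yaml
worker:
  replicaCount: 2            # floor when autoscaling is enabled
  resources:
    requests:
      cpu: 500m              # needed so the HPA can compute CPU utilization
  autoscaling:
    enabled: true
    CpuTargetPercentage: 60
    replicasMax: 10
```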
## @section Security parameters
## Security configuration ## Security configuration
## ##
security: security:
## Name of the secret that contains all the passwords. This is optional, by default random passwords are generated. ## @param security.passwordsSecretName Name of the secret that contains all the passwords
## This is optional, by default random passwords are generated
## ##
# passwordsSecretName: passwordsSecretName:
## RPC configuration ## RPC configuration
## @param security.rpc.authenticationEnabled Enable the RPC authentication
## @param security.rpc.encryptionEnabled Enable the encryption for RPC
## ##
rpc: rpc:
authenticationEnabled: false authenticationEnabled: false
encryptionEnabled: false encryptionEnabled: false
## @param security.storageEncryptionEnabled Enables local storage encryption
## Enables local storage encryption
## ##
storageEncryptionEnabled: false storageEncryptionEnabled: false
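A minimal hardening sketch using the values above; the Secret name is hypothetical, and RPC encryption in Spark builds on RPC authentication, so both are enabled together:

```yaml
security:
  passwordsSecretName: spark-secrets   # hypothetical pre-created Secret
  rpc:
    authenticationEnabled: true
    encryptionEnabled: true            # builds on RPC authentication
  storageEncryptionEnabled: true
```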
## @param security.certificatesSecretName Name of the secret that contains the certificates.
## Name of the secret that contains the certificates.
## It should contain two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format. ## It should contain two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format.
## DEPRECATED. Use `security.ssl.existingSecret` instead ## DEPRECATED. Use `security.ssl.existingSecret` instead
## ##
# certificatesSecretName: certificatesSecretName:
## SSL configuration ## SSL configuration
## ##
ssl: ssl:
## @param security.ssl.enabled Enable the SSL configuration
##
enabled: false enabled: false
## @param security.ssl.needClientAuth Enable the client authentication
##
needClientAuth: false needClientAuth: false
## @param security.ssl.protocol Set the SSL protocol
##
protocol: TLSv1.2 protocol: TLSv1.2
## Name of the existing secret containing the TLS certificates. ## @param security.ssl.existingSecret Name of the existing secret containing the TLS certificates
## It should contain two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format. ## It should contain two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format.
## ##
existingSecret: existingSecret:
## Create self-signed TLS certificates. Currently only supports PEM certificates. ## @param security.ssl.autoGenerated Create self-signed TLS certificates. Currently only supports PEM certificates
## The Spark container will generate a JKS keystore and truststore using the PEM certificates. ## The Spark container will generate a JKS keystore and truststore using the PEM certificates.
## ##
autoGenerated: false autoGenerated: false
## Key, Keystore and Truststore passwords. ## @param security.ssl.keystorePassword Set the password of the JKS Keystore
## ##
keystorePassword: keystorePassword:
## @param security.ssl.truststorePassword Set the password of the JKS Truststore
##
truststorePassword: truststorePassword:
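Tying the SSL values together, a sketch using a pre-created Secret (the Secret name and passwords are illustrative placeholders):

```yaml
security:
  ssl:
    enabled: true
    needClientAuth: true
    protocol: TLSv1.2
    # Hypothetical Secret; it must hold spark-keystore.jks and spark-truststore.jks
    existingSecret: spark-ssl-jks
    keystorePassword: replace-me       # illustrative only
    truststorePassword: replace-me
```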
## Container resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
    ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    ## @param security.ssl.resources.limits The resources limits for the container
    ## @param security.ssl.resources.requests The requested resources for the container
    ##
    resources:
      ## Example:
      ## limits:
      ##    cpu: 100m
      ##    memory: 128Mi
      limits: {}
      ## Examples:
      ## requests:
      ##    cpu: 100m
      ##    memory: 128Mi
      requests: {}

## @section Traffic Exposure parameters

## Service parameters
##
service:
  ## @param service.type Kubernetes Service type
  ##
  type: ClusterIP
  ## @param service.clusterPort Spark cluster port
  ##
  clusterPort: 7077
  ## @param service.webPort Spark client port
  ##
  webPort: 80
  ## Specify the nodePort(s) value(s) for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ## @param service.nodePorts.cluster Kubernetes cluster node port
  ## @param service.nodePorts.web Kubernetes web node port
  ##
  nodePorts:
    cluster: ''
    web: ''
  ## @param service.loadBalancerIP Load balancer IP if spark service type is `LoadBalancer`
  ## Set the LoadBalancer service type to internal only
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  loadBalancerIP:
  ## @param service.annotations Annotations for spark service
  ## This can be used to set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
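As a sketch of how these service parameters combine, a user-supplied override file might look like the following. The IP address and annotation are placeholders for illustration, not values shipped with the chart:

```yaml
## Hypothetical my-values.yaml: expose the Spark master through a
## LoadBalancer instead of the default ClusterIP service.
service:
  type: LoadBalancer
  clusterPort: 7077
  webPort: 80
  ## Placeholder address; only honored by providers that support static LB IPs
  loadBalancerIP: 203.0.113.10
  annotations:
    ## Example provider-specific annotation to keep the LB internal
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```

Such a file would be applied with `helm install my-release bitnami/spark -f my-values.yaml`.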
## Configure the ingress resource that allows you to access the
## Spark installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## @param ingress.enabled Enable ingress controller resource
  ##
  enabled: false
  ## @param ingress.certManager Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: false
  ## @param ingress.pathType Ingress path type
  ##
  pathType: ImplementationSpecific
  ## @param ingress.apiVersion Force Ingress API version (automatically detected if not set)
  ##
  apiVersion:
  ## @param ingress.hostname Default host for the ingress resource
  ##
  hostname: spark.local
  ## @param ingress.path The Path to Spark. You may need to set this to '/*' in order to use this with ALB ingress controllers.
  ##
  path: /
  ## @param ingress.annotations Ingress annotations
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ##
  annotations: {}
  ## @param ingress.tls Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret or rely on cert-manager to create it
  ##
  tls: false
  ## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ## - name: spark.local
  ##   path: /
  ##
  extraHosts: []
  ## @param ingress.extraPaths Any additional arbitrary paths that may need to be added to the ingress under the main host.
  ## For example: The ALB ingress controller requires a special rule for handling SSL redirection.
  ## extraPaths:
  ## - path: /*
@@ -540,16 +573,16 @@ ingress:
  ##     serviceName: ssl-redirect
  ##     servicePort: use-annotation
  ##
  extraPaths: []
  ## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ## - hosts:
  ##     - spark.local
  ##   secretName: spark.local-tls
  ##
  extraTls: []
  ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
@@ -558,67 +591,78 @@ ingress:
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ## e.g:
  ## - name: spark.local-tls
  ##   key:
  ##   certificate:
  ##
  secrets: []
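Putting the ingress parameters together, a minimal override enabling TLS might look like this. The hostname is a placeholder; the secret name follows the chart's `{hostname}-tls` convention, and the key/certificate bodies are elided on purpose:

```yaml
## Hypothetical override: publish the Spark UI through an nginx ingress
## with a chart-managed TLS secret.
ingress:
  enabled: true
  hostname: spark.example.com          # placeholder hostname
  tls: true
  annotations:
    kubernetes.io/ingress.class: nginx
  secrets:
    - name: spark.example.com-tls      # must match {hostname}-tls
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        # ...your private key here...
      certificate: |
        -----BEGIN CERTIFICATE-----
        # ...your certificate here...
```

Alternatively, setting `certManager: true` and leaving `secrets` empty delegates certificate creation to cert-manager.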
## @section Metrics parameters

## Metrics configuration
##
metrics:
  ## @param metrics.enabled Start a side-car prometheus exporter
  ##
  enabled: false
  ## @param metrics.masterAnnotations [object] Annotations for the Prometheus metrics on master nodes
  ##
  masterAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics/'
    prometheus.io/port: '{{ .Values.master.webPort }}'
  ## @param metrics.workerAnnotations [object] Annotations for the Prometheus metrics on worker nodes
  ##
  workerAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics/'
    prometheus.io/port: '{{ .Values.worker.webPort }}'
  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
  ##
  podMonitor:
    ## @param metrics.podMonitor.enabled If the operator is installed in your cluster, set to true to create a PodMonitor Resource for scraping metrics using PrometheusOperator
    ##
    enabled: false
    ## @param metrics.podMonitor.extraMetricsEndpoints Add metrics endpoints for monitoring the jobs running in the worker nodes
    ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmetricsendpoint
    ## e.g:
    ## - port: myapp
    ##   path: /metrics/
    ##
    extraMetricsEndpoints: []
    ## @param metrics.podMonitor.namespace Specify the namespace in which the podMonitor resource will be created
    ##
    namespace: ""
    ## @param metrics.podMonitor.interval Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## @param metrics.podMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
    ## e.g:
    ## scrapeTimeout: 30s
    ##
    scrapeTimeout:
    ## @param metrics.podMonitor.additionalLabels Additional labels that can be used so PodMonitors will be discovered by Prometheus
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels: {}
  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    ## @param metrics.prometheusRule.enabled Set this to true to create prometheusRules for Prometheus
    ##
    enabled: false
    ## @param metrics.prometheusRule.namespace Namespace where the prometheusRules resource should be created
    ##
    namespace: ''
    ## @param metrics.prometheusRule.additionalLabels Additional labels that can be used so prometheusRules will be discovered by Prometheus
    ##
    additionalLabels: {}
    ## @param metrics.prometheusRule.rules Custom Prometheus [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
    ## These are just example rules, please adapt them to your needs.
    ## Make sure to constrain the rules to the current service.
    ## rules:
@@ -632,7 +676,3 @@ metrics:
    ##       summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s).
    ##
    rules: []
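For reference, a minimal override wiring these metrics parameters together could look like this, assuming the Prometheus Operator is already installed in the cluster (the namespace and label values are placeholders):

```yaml
## Hypothetical override: enable the exporter side-car and a PodMonitor
## so Prometheus Operator scrapes both master and worker pods.
metrics:
  enabled: true
  podMonitor:
    enabled: true
    namespace: "monitoring"      # placeholder; where your Prometheus looks for PodMonitors
    interval: 30s
    additionalLabels:
      release: prometheus        # placeholder; must match your Prometheus podMonitorSelector
```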

View File

@@ -39,4 +39,4 @@ sources:
- https://github.com/bitnami/bitnami-docker-spring-cloud-dataflow
- https://github.com/bitnami/bitnami-docker-spring-cloud-skipper
- https://dataflow.spring.io/
version: 3.0.1

View File

@@ -44,309 +44,327 @@ helm uninstall my-release

## Parameters

### Global parameters

| Name                      | Description                                     | Value |
| ------------------------- | ----------------------------------------------- | ----- |
| `global.imageRegistry`    | Global Docker image registry                    | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]`  |
| `global.storageClass`     | Global StorageClass for Persistent Volume(s)    | `nil` |

### Common parameters

| Name               | Description                                                                           | Value           |
| ------------------ | ------------------------------------------------------------------------------------- | --------------- |
| `nameOverride`     | String to partially override scdf.fullname template (will maintain the release name). | `nil`           |
| `fullnameOverride` | String to fully override scdf.fullname template.                                      | `nil`           |
| `kubeVersion`      | Force target Kubernetes version (using Helm capabilities if not set)                  | `nil`           |
| `clusterDomain`    | Default Kubernetes cluster domain                                                     | `cluster.local` |
| `extraDeploy`      | Array of extra objects to deploy with the release                                     | `[]`            |
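As an illustration of these global and common parameters, a small override file could set a private registry for every image in the release (the registry and secret names below are placeholders):

```yaml
## Hypothetical override: pull all chart images from a private registry
## and shorten the generated resource names.
global:
  imageRegistry: registry.example.com   # placeholder private registry
  imagePullSecrets:
    - my-registry-secret                # placeholder pull secret name
  storageClass: standard
fullnameOverride: scdf
clusterDomain: cluster.local
```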
### Dataflow Server parameters ### Dataflow Server parameters
| Parameter | Description | Default | | Name | Description | Value |
|-----------------------------------------------|------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------| | -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- |
| `server.image.registry` | Spring Cloud Dataflow image registry | `docker.io` | | `server.image.registry` | Spring Cloud Dataflow image registry | `docker.io` |
| `server.image.repository` | Spring Cloud Dataflow image name | `bitnami/spring-cloud-dataflow` | | `server.image.repository` | Spring Cloud Dataflow image repository | `bitnami/spring-cloud-dataflow` |
| `server.image.tag` | Spring Cloud Dataflow image tag | `{TAG_NAME}` | | `server.image.tag` | Spring Cloud Dataflow image tag (immutable tags are recommended) | `2.8.1-debian-10-r0` |
| `server.image.pullPolicy` | Spring Cloud Dataflow image pull policy | `IfNotPresent` | | `server.image.pullPolicy` | Spring Cloud Dataflow image pull policy | `IfNotPresent` |
| `server.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | | `server.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `server.composedTaskRunner.image.registry` | Spring Cloud Dataflow Composed Task Runner image registry | `docker.io` | | `server.image.debug` | Enable image debug mode | `false` |
| `server.composedTaskRunner.image.repository` | Spring Cloud Dataflow Composed Task Runner image name | `bitnami/spring-cloud-dataflow-composed-task-runner` | | `server.hostAliases` | Deployment pod host aliases | `[]` |
| `server.composedTaskRunner.image.tag` | Spring Cloud Dataflow Composed Task Runner image tag | `{TAG_NAME}` | | `server.composedTaskRunner.image.registry` | Spring Cloud Dataflow Composed Task Runner image registry | `docker.io` |
| `server.composedTaskRunner.image.pullPolicy` | Spring Cloud Dataflow Composed Task Runner image pull policy | `IfNotPresent` | | `server.composedTaskRunner.image.repository` | Spring Cloud Dataflow Composed Task Runner image repository | `bitnami/spring-cloud-dataflow-composed-task-runner` |
| `server.composedTaskRunner.image.pullSecrets` | Spring Cloud Dataflow Composed Task Runner image pull secrets | `[]` | | `server.composedTaskRunner.image.tag` | Spring Cloud Dataflow Composed Task Runner image tag (immutable tags are recommended) | `2.8.1-debian-10-r0` |
| `server.command` | Override sever command | `nil` | | `server.configuration.streamingEnabled` | Enables or disables streaming data processing | `true` |
| `server.args` | Override server args | `nil` | | `server.configuration.batchEnabled` | Enables or disables batch data (tasks and schedules) processing | `true` |
| `server.configuration.streamingEnabled` | Enables or disables streaming data processing | `true` | | `server.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` |
| `server.configuration.batchEnabled` | Enables or disables bath data (tasks and schedules) processing | `true` | | `server.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` |
| `server.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` | | `server.configuration.containerRegistries` | Container registries configuration | `{}` |
| `server.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` | | `server.configuration.grafanaInfo` | Endpoint to the grafana instance (Deprecated: use the metricsDashboard instead) | `nil` |
| `server.configuration.containerRegistries` | Container registries configuration | `{}` (check `values.yaml` for more information) | | `server.configuration.metricsDashboard` | Endpoint to the metricsDashboard instance | `nil` |
| `server.configuration.metricsDashboard` | Endpoint to the metricsDashboard instance | `nil` | | `server.existingConfigmap` | ConfigMap with Spring Cloud Dataflow Server Configuration | `nil` |
| `server.existingConfigmap` | Name of existing ConfigMap with Dataflow server configuration | `nil` | | `server.extraEnvVars` | Extra environment variables to be set on Dataflow server container | `[]` |
| `server.extraEnvVars` | Extra environment variables to be set on Dataflow server container | `{}` | | `server.extraEnvVarsCM` | ConfigMap with extra environment variables | `nil` |
| `server.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `nil` | | `server.extraEnvVarsSecret` | Secret with extra environment variables | `nil` |
| `server.extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `nil` | | `server.replicaCount` | Number of Dataflow server replicas to deploy | `1` |
| `server.replicaCount` | Number of Dataflow server replicas to deploy | `1` | | `server.strategyType` | StrategyType, can be set to RollingUpdate or Recreate by default | `RollingUpdate` |
| `server.hostAliases` | Add deployment host aliases | `[]` | | `server.podAffinityPreset` | Dataflow server pod affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `server.strategyType` | Deployment Strategy Type | `RollingUpdate` | | `server.podAntiAffinityPreset` | Dataflow server pod anti-affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `server.podAffinityPreset` | Dataflow server pod affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` | | `server.containerPort` | Dataflow server port | `8080` |
| `server.podAntiAffinityPreset` | Dataflow server pod anti-affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `soft` | | `server.nodeAffinityPreset.type` | Dataflow server node affinity preset type. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `server.nodeAffinityPreset.type` | Dataflow server node affinity preset type. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` | | `server.nodeAffinityPreset.key` | Dataflow server node label key to match Ignored if `server.affinity` is set. | `""` |
| `server.nodeAffinityPreset.key` | Dataflow server node label key to match Ignored if `server.affinity` is set. | `""` | | `server.nodeAffinityPreset.values` | Dataflow server node label values to match. Ignored if `server.affinity` is set. | `[]` |
| `server.nodeAffinityPreset.values` | Dataflow server node label values to match. Ignored if `server.affinity` is set. | `[]` | | `server.affinity` | Dataflow server affinity for pod assignment | `{}` |
| `server.affinity` | Dataflow server affinity for pod assignment | `{}` (evaluated as a template) | | `server.nodeSelector` | Dataflow server node labels for pod assignment | `{}` |
| `server.nodeSelector` | Dataflow server node labels for pod assignment | `{}` (evaluated as a template) | | `server.tolerations` | Dataflow server tolerations for pod assignment | `[]` |
| `server.tolerations` | Dataflow server tolerations for pod assignment | `[]` (evaluated as a template) | | `server.podAnnotations` | Annotations for Dataflow server pods | `{}` |
| `server.priorityClassName` | Controller priorityClassName | `nil` | | `server.priorityClassName` | Dataflow Server pods' priority | `""` |
| `server.podSecurityContext` | Dataflow server pods' Security Context | `{ fsGroup: "1001" }` | | `server.podSecurityContext.fsGroup` | Group ID for the volumes of the pod | `1001` |
| `server.containerSecurityContext` | Dataflow server containers' Security Context | `{ runAsUser: "1001" }` | | `server.containerSecurityContext.runAsUser` | Set Dataflow Server container's Security Context runAsUser | `1001` |
| `server.resources.limits` | The resources limits for the Dataflow server container | `{}` | | `server.resources.limits` | The resources limits for the Dataflow server container | `{}` |
| `server.resources.requests` | The requested resources for the Dataflow server container | `{}` | | `server.resources.requests` | The requested resources for the Dataflow server container | `{}` |
| `server.podAnnotations` | Annotations for Dataflow server pods | `{}` | | `server.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `server.livenessProbe` | Liveness probe configuration for Dataflow server | Check `values.yaml` file | | `server.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `120` |
| `server.readinessProbe` | Readiness probe configuration for Dataflow server | Check `values.yaml` file | | `server.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `server.customLivenessProbe` | Override default liveness probe | `nil` | | `server.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `1` |
| `server.customReadinessProbe` | Override default readiness probe | `nil` | | `server.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `server.service.type` | Kubernetes service type | `ClusterIP` | | `server.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `server.service.port` | Service HTTP port | `8080` | | `server.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `server.service.nodePort` | Service HTTP node port | `nil` | | `server.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `120` |
| `server.service.clusterIP` | Dataflow server service clusterIP IP | `None` | | `server.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `20` |
| `server.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` | | `server.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
| `server.service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | | `server.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `server.service.loadBalancerSourceRanges` | Address that are allowed when service is LoadBalancer | `[]` | | `server.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `server.service.annotations` | Annotations for Dataflow server service | `{}` | | `server.customLivenessProbe` | Override default liveness probe | `{}` |
| `server.containerPort` | Dataflow server port | `8080 | | `server.customReadinessProbe` | Override default readiness probe | `{}` |
| `server.ingress.enabled` | Enable ingress controller resource | `false` | | `server.service.type` | Kubernetes service type | `ClusterIP` |
| `server.ingress.pathType` | Ingress path type | `ImplementationSpecific` | | `server.service.port` | Service HTTP port | `8080` |
| `server.ingress.path` | Ingress path | `/` | | `server.service.nodePort` | Specify the nodePort value for the LoadBalancer and NodePort service types | `nil` |
| `server.ingress.certManager` | Add annotations for cert-manager | `false` | | `server.service.clusterIP` | Dataflow server service cluster IP | `nil` |
| `server.ingress.hostname` | Default host for the ingress resource | `dataflow.local` | | `server.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` |
| `server.ingress.annotations` | Ingress annotations | `[]` | | `server.service.loadBalancerIP` | Load balancer IP if service type is `LoadBalancer` | `nil` |
| `server.ingress.extraHosts[0].name` | Additional hostnames to be covered | `nil` | | `server.service.loadBalancerSourceRanges` | Addresses that are allowed when service is LoadBalancer | `[]` |
| `server.ingress.extraHosts[0].path` | Additional hostnames to be covered | `nil` | | `server.service.annotations` | Provide any additional annotations which may be required. Evaluated as a template. | `{}` |
| `server.ingress.extraTls[0].hosts[0]` | TLS configuration for additional hostnames to be covered | `nil` | | `server.ingress.enabled` | Enable ingress controller resource | `false` |
| `server.ingress.extraTls[0].secretName` | TLS configuration for additional hostnames to be covered | `nil` | | `server.ingress.path` | The Path to WordPress. You may need to set this to '/*' in order to use this with ALB ingress controllers. | `/` |
| `server.ingress.tls` | Enables TLS configuration for the Ingress component | `false` | | `server.ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `server.ingress.secrets[0].name` | TLS Secret Name | `nil` | | `server.ingress.certManager` | Set this to true in order to add the corresponding annotations for cert-manager | `false` |
| `server.ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` | | `server.ingress.hostname` | Default host for the ingress resource | `dataflow.local` |
| `server.ingress.secrets[0].key` | TLS Secret Key | `nil` | | `server.ingress.annotations` | Ingress annotations | `{}` |
| `server.initContainers` | Add additional init containers to the Dataflow server pods | `{}` (evaluated as a template) | | `server.ingress.tls` | Enable TLS configuration for the hostname defined at ingress.hostname parameter | `false` |
| `server.sidecars` | Add additional sidecar containers to the Dataflow server pods | `{}` (evaluated as a template) | | `server.ingress.extraHosts` | The list of additional hostnames to be covered with this ingress record. | `[]` |
| `server.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | | `server.ingress.extraTls` | The tls configuration for additional hostnames to be covered with this ingress record. | `[]` |
| `server.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | | `server.ingress.secrets` | If you're providing your own certificates, please use this to add the certificates as secrets | `[]` |
| `server.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | | `server.initContainers` | Add init containers to the Dataflow Server pods | `{}` |
| `server.autoscaling.enabled` | Enable autoscaling for Dataflow server | `false` | | `server.sidecars` | Add sidecars to the Dataflow Server pods | `{}` |
| `server.autoscaling.minReplicas` | Minimum number of Dataflow server replicas | `nil` | | `server.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` |
| `server.autoscaling.maxReplicas` | Maximum number of Dataflow server replicas | `nil` | | `server.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `server.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | | `server.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` |
| `server.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | | `server.autoscaling.enabled` | Enable autoscaling for Dataflow server | `false` |
| `server.jdwp.enabled` | Enable Java Debug Wire Protocol (JDWP) | `false` | | `server.autoscaling.minReplicas` | Minimum number of Dataflow server replicas | `nil` |
| `server.jdwp.port` | JDWP TCP port | `5005` | | `server.autoscaling.maxReplicas` | Maximum number of Dataflow server replicas | `nil` |
| `server.extraVolumes` | Extra Volumes to be set on the Dataflow Server Pod | `nil` | | `server.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` |
| `server.extraVolumeMounts` | Extra VolumeMounts to be set on the Dataflow Container | `nil` | | `server.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` |
| `server.proxy.host` | Proxy host | `nil` | | `server.extraVolumes` | Extra Volumes to be set on the Dataflow Server Pod | `[]` |
| `server.proxy.port` | Proxy port | `nil` | | `server.extraVolumeMounts` | Extra VolumeMounts to be set on the Dataflow Container | `[]` |
| `server.proxy.user` | Proxy username (if authentication is required) | `nil` | | `server.jdwp.enabled` | Set to true to enable Java debugger | `false` |
| `server.proxy.password` | Proxy password (if authentication is required) | `nil` | | `server.jdwp.port` | Specify port for remote debugging | `5005` |
| `server.proxy` | Add proxy configuration for SCDF server | `{}` |
### Dataflow Skipper parameters ### Dataflow Skipper parameters
| Parameter | Description | Default | | Name | Description | Value |
|--------------------------------------------|-----------------------------------------------------------------------------------------------------------|---------------------------------------------------------| | -------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------ |
| `skipper.enabled` | Enable Spring Cloud Skipper component | `true` |
| `skipper.hostAliases` | Deployment pod host aliases | `[]` |
| `skipper.image.registry` | Spring Cloud Skipper image registry | `docker.io` |
| `skipper.image.repository` | Spring Cloud Skipper image repository | `bitnami/spring-cloud-skipper` |
| `skipper.image.tag` | Spring Cloud Skipper image tag (immutable tags are recommended) | `2.7.0-debian-10-r4` |
| `skipper.image.pullPolicy` | Spring Cloud Skipper image pull policy | `IfNotPresent` |
| `skipper.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `skipper.image.debug` | Enable image debug mode | `false` |
| `skipper.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` |
| `skipper.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` |
| `skipper.existingConfigmap` | Name of existing ConfigMap with Skipper server configuration | `nil` |
| `skipper.extraEnvVars` | Extra environment variables to be set on Skipper server container | `[]` |
| `skipper.extraEnvVarsCM` | Name of existing ConfigMap containing extra environment variables | `nil` |
| `skipper.extraEnvVarsSecret` | Name of existing Secret containing extra environment variables | `nil` |
| `skipper.replicaCount` | Number of Skipper server replicas to deploy | `1` |
| `skipper.strategyType` | Deployment Strategy Type | `RollingUpdate` |
| `skipper.podAffinityPreset` | Skipper pod affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `skipper.podAntiAffinityPreset` | Skipper pod anti-affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `skipper.nodeAffinityPreset.type` | Skipper node affinity preset type. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `skipper.nodeAffinityPreset.key` | Skipper node label key to match. Ignored if `skipper.affinity` is set. | `""` |
| `skipper.nodeAffinityPreset.values` | Skipper node label values to match. Ignored if `skipper.affinity` is set. | `[]` |
| `skipper.affinity` | Skipper affinity for pod assignment | `{}` |
| `skipper.nodeSelector` | Skipper node labels for pod assignment | `{}` |
| `skipper.tolerations` | Skipper tolerations for pod assignment | `[]` |
| `skipper.podAnnotations` | Annotations for Skipper server pods | `{}` |
| `skipper.priorityClassName` | Controller priorityClassName | `""` |
| `skipper.podSecurityContext.fsGroup` | Group ID for the volumes of the pod | `1001` |
| `skipper.containerSecurityContext.runAsUser` | Set Dataflow Skipper container's Security Context runAsUser | `1001` |
| `skipper.resources.limits` | The resources limits for the Skipper server container | `{}` |
| `skipper.resources.requests` | The requested resources for the Skipper server container | `{}` |
| `skipper.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `skipper.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `120` |
| `skipper.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` |
| `skipper.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `1` |
| `skipper.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `skipper.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `skipper.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `skipper.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `120` |
| `skipper.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `20` |
| `skipper.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
| `skipper.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `skipper.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `skipper.customLivenessProbe` | Override default liveness probe | `{}` |
| `skipper.customReadinessProbe` | Override default readiness probe | `{}` |
| `skipper.service.type` | Kubernetes service type | `ClusterIP` |
| `skipper.service.port` | Service HTTP port | `80` |
| `skipper.service.nodePort` | Service HTTP node port | `nil` |
| `skipper.service.clusterIP` | Skipper server service cluster IP | `nil` |
| `skipper.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` |
| `skipper.service.loadBalancerIP` | Load balancer IP if service type is `LoadBalancer` | `nil` |
| `skipper.service.loadBalancerSourceRanges` | Addresses that are allowed when service is LoadBalancer | `[]` |
| `skipper.service.annotations` | Annotations for Skipper server service | `{}` |
| `skipper.initContainers` | Add init containers to the Dataflow Skipper pods | `{}` |
| `skipper.sidecars` | Add sidecars to the Skipper pods | `{}` |
| `skipper.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` |
| `skipper.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `skipper.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` |
| `skipper.autoscaling.enabled` | Enable autoscaling for Skipper server | `false` |
| `skipper.autoscaling.minReplicas` | Minimum number of Skipper server replicas | `nil` |
| `skipper.autoscaling.maxReplicas` | Maximum number of Skipper server replicas | `nil` |
| `skipper.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` |
| `skipper.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` |
| `skipper.extraVolumes` | Extra Volumes to be set on the Skipper Pod | `[]` |
| `skipper.extraVolumeMounts` | Extra VolumeMounts to be set on the Skipper Container | `[]` |
| `skipper.jdwp.enabled` | Enable Java Debug Wire Protocol (JDWP) | `false` |
| `skipper.jdwp.port` | JDWP TCP port for remote debugging | `5005` |
| `externalSkipper.host` | Host of a external Skipper Server | `localhost` |
| `externalSkipper.port` | External Skipper Server port number | `7577` |
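
As an illustrative sketch of how the Skipper parameters above compose in an override file (every value below is an example, not a recommendation, and the release name `my-release` is assumed):

```yaml
# skipper-values.yaml -- illustrative overrides only
skipper:
  replicaCount: 2
  jdwp:
    enabled: true   # opens the JDWP port for remote debugging
    port: 5005
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
```

Apply with `helm install my-release bitnami/spring-cloud-dataflow -f skipper-values.yaml`.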

### Deployer parameters

| Name | Description | Value |
| --------------------------------------------- | ------------------------------------------------------------------------------------------- | ------ |
| `deployer.resources.limits` | Streaming applications resource limits | `{}` |
| `deployer.resources.requests` | Streaming applications resource requests | `{}` |
| `deployer.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `90` |
| `deployer.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `120` |
| `deployer.nodeSelector` | The node selectors to apply to the streaming applications deployments in "key:value" format | `""` |
| `deployer.tolerations` | Streaming applications tolerations | `{}` |
| `deployer.volumeMounts` | Streaming applications extra volume mounts | `{}` |
| `deployer.volumes` | Streaming applications extra volumes | `{}` |
| `deployer.environmentVariables` | Streaming applications environment variables | `""` |
| `deployer.podSecurityContext.runAsUser` | Set Dataflow Streams container's Security Context runAsUser | `1001` |
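
Note that `deployer.*` values are not applied to the chart's own pods; they are forwarded to the streaming applications that Data Flow deploys. A minimal, hypothetical override (values are placeholders):

```yaml
deployer:
  resources:
    limits:
      cpu: 500m
      memory: 1024Mi
  # Passed to deployed streaming apps as a single string
  environmentVariables: "JAVA_TOOL_OPTIONS=-Xmx512m"
  podSecurityContext:
    runAsUser: 1001
```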

### RBAC parameters

| Name | Description | Value |
| ----------------------- | ----------------------------------------------------------------------------------- | ------ |
| `serviceAccount.create` | Enable the creation of a ServiceAccount for Dataflow server and Skipper server pods | `true` |
| `serviceAccount.name` | Name of the created serviceAccount | `""` |
| `rbac.create` | Whether to create and use RBAC resources or not | `true` |

### Metrics parameters

| Name | Description | Value |
| -------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- |
| `metrics.enabled` | Enable Prometheus metrics | `false` |
| `metrics.image.registry` | Prometheus Rsocket Proxy image registry | `docker.io` |
| `metrics.image.repository` | Prometheus Rsocket Proxy image repository | `bitnami/prometheus-rsocket-proxy` |
| `metrics.image.tag` | Prometheus Rsocket Proxy image tag (immutable tags are recommended) | `1.3.0-debian-10-r187` |
| `metrics.image.pullPolicy` | Prometheus Rsocket Proxy image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `metrics.resources.limits` | The resources limits for the Prometheus Rsocket Proxy container | `{}` |
| `metrics.resources.requests` | The requested resources for the Prometheus Rsocket Proxy container | `{}` |
| `metrics.replicaCount` | Number of Prometheus Rsocket Proxy replicas to deploy | `1` |
| `metrics.podAffinityPreset` | Prometheus Rsocket Proxy pod affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `metrics.podAntiAffinityPreset` | Prometheus Rsocket Proxy pod anti-affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `metrics.nodeAffinityPreset.type` | Prometheus Rsocket Proxy node affinity preset type. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `metrics.nodeAffinityPreset.key` | Prometheus Rsocket Proxy node label key to match. Ignored if `metrics.affinity` is set. | `""` |
| `metrics.nodeAffinityPreset.values` | Prometheus Rsocket Proxy node label values to match. Ignored if `metrics.affinity` is set. | `[]` |
| `metrics.affinity` | Prometheus Rsocket Proxy affinity for pod assignment | `{}` |
| `metrics.nodeSelector` | Prometheus Rsocket Proxy node labels for pod assignment | `{}` |
| `metrics.tolerations` | Prometheus Rsocket Proxy tolerations for pod assignment | `[]` |
| `metrics.podAnnotations` | Annotations for Prometheus Rsocket Proxy pods | `{}` |
| `metrics.priorityClassName` | Prometheus Rsocket Proxy pods' priority. | `""` |
| `metrics.service.httpPort` | Prometheus Rsocket Proxy HTTP port | `8080` |
| `metrics.service.rsocketPort` | Prometheus Rsocket Proxy Rsocket port | `7001` |
| `metrics.service.annotations` | Annotations for the Prometheus Rsocket Proxy service | `{}` |
| `metrics.serviceMonitor.enabled` | if `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` |
| `metrics.serviceMonitor.extraLabels` | Labels to add to ServiceMonitor, in case prometheus operator is configured with serviceMonitorSelector | `{}` |
| `metrics.serviceMonitor.namespace` | Namespace in which ServiceMonitor is created if different from release | `nil` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. | `nil` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `nil` |
| `metrics.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` |
| `metrics.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `metrics.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` |
| `metrics.autoscaling.enabled` | Enable autoscaling for Prometheus Rsocket Proxy | `false` |
| `metrics.autoscaling.minReplicas` | Minimum number of Prometheus Rsocket Proxy replicas | `nil` |
| `metrics.autoscaling.maxReplicas` | Maximum number of Prometheus Rsocket Proxy replicas | `nil` |
| `metrics.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` |
| `metrics.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` |
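
To scrape these metrics with the Prometheus Operator, both flags must be enabled, as noted in the `metrics.serviceMonitor.enabled` description. An illustrative override (the `monitoring` namespace is an assumption, not a chart default):

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true          # requires metrics.enabled=true and the Prometheus Operator CRDs
    namespace: monitoring  # example namespace; omit to use the release namespace
    interval: 30s
```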

### Init Container parameters

| Name | Description | Value |
| ------------------------------------ | ------------------------------------------------------------------------------------------------- | ---------------------- |
| `waitForBackends.enabled` | Wait for the database and other services (such as Kafka or RabbitMQ) used when enabling streaming | `true` |
| `waitForBackends.image.registry` | Init container wait-for-backend image registry | `docker.io` |
| `waitForBackends.image.repository` | Init container wait-for-backend image name | `bitnami/kubectl` |
| `waitForBackends.image.tag` | Init container wait-for-backend image tag | `1.19.12-debian-10-r6` |
| `waitForBackends.image.pullPolicy` | Init container wait-for-backend image pull policy | `IfNotPresent` |
| `waitForBackends.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `waitForBackends.resources.limits` | Init container wait-for-backend resource limits | `{}` |
| `waitForBackends.resources.requests` | Init container wait-for-backend resource requests | `{}` |

### Database parameters

| Name | Description | Value |
| ----------------------------------------- | --------------------------------------------------------------------------------------------------- | ------------ |
| `mariadb.enabled` | Enable/disable MariaDB chart installation | `true` |
| `mariadb.architecture` | MariaDB architecture. Allowed values: `standalone` or `replication` | `standalone` |
| `mariadb.auth.rootPassword` | Password for the MariaDB `root` user | `""` |
| `mariadb.auth.username` | Username of new user to create | `dataflow` |
| `mariadb.auth.password` | Password for the new user | `change-me` |
| `mariadb.auth.database` | Database name to create | `dataflow` |
| `mariadb.auth.forcePassword` | Force users to specify required passwords in the database | `false` |
| `mariadb.auth.usePasswordFiles` | Mount credentials as a file instead of using an environment variable | `false` |
| `mariadb.initdbScripts` | Specify dictionary of scripts to be run at first boot | `{}` |
| `externalDatabase.host` | Host of the external database | `localhost` |
| `externalDatabase.port` | External database port number | `3306` |
| `externalDatabase.driver` | The fully qualified name of the JDBC Driver class | `nil` |
| `externalDatabase.scheme` | The scheme is a vendor-specific or shared protocol string that follows the "jdbc:" of the URL | `nil` |
| `externalDatabase.password` | Password for the above username | `""` |
| `externalDatabase.existingPasswordSecret` | Existing secret with database password | `nil` |
| `externalDatabase.existingPasswordKey` | Key of the existing secret with database password, defaults to `datasource-password` | `nil` |
| `externalDatabase.dataflow.url` | JDBC URL for dataflow server. Overrides external scheme, host, port, database, and jdbc parameters. | `""` |
| `externalDatabase.dataflow.database` | Name of the existing database to be used by Dataflow server | `dataflow` |
| `externalDatabase.dataflow.username` | Existing username in the external db to be used by Dataflow server | `dataflow` |
| `externalDatabase.skipper.url` | JDBC URL for skipper. Overrides external scheme, host, port, database, and jdbc parameters. | `""` |
| `externalDatabase.skipper.database` | Name of the existing database to be used by Skipper server | `skipper` |
| `externalDatabase.skipper.username` | Existing username in the external db to be used by Skipper server | `skipper` |
| `externalDatabase.hibernateDialect` | Hibernate Dialect used by Dataflow/Skipper servers | `""` |
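
To point both servers at an existing database instead of the bundled MariaDB, disable the in-cluster chart and set the `externalDatabase.*` values. A hypothetical sketch (host and credentials are placeholders):

```yaml
mariadb:
  enabled: false                # disable the bundled MariaDB
externalDatabase:
  host: mariadb.example.com     # placeholder host
  port: 3306
  password: change-me           # placeholder; prefer existingPasswordSecret
  dataflow:
    database: dataflow
    username: dataflow
  skipper:
    database: skipper
    username: skipper
```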

### RabbitMQ chart parameters

| Name | Description | Value |
| ----------------------------------------- | ------------------------------------------------------------------- | ----------- |
| `rabbitmq.enabled` | Enable/disable RabbitMQ chart installation | `true` |
| `rabbitmq.auth.username` | RabbitMQ username | `user` |
| `externalRabbitmq.enabled` | Enable/disable external RabbitMQ | `false` |
| `externalRabbitmq.host` | Host of the external RabbitMQ | `localhost` |
| `externalRabbitmq.port` | External RabbitMQ port number | `5672` |
| `externalRabbitmq.username` | External RabbitMQ username | `guest` |
| `externalRabbitmq.password` | External RabbitMQ password. It will be saved in a kubernetes secret | `guest` |
| `externalRabbitmq.vhost` | External RabbitMQ virtual host. It will be saved in a kubernetes secret | `nil` |
| `externalRabbitmq.existingPasswordSecret` | Existing secret with RabbitMQ password | `nil` |

### Kafka chart parameters

| Name | Description | Value |
| ------------------------------------- | --------------------------------------- | ---------------- |
| `kafka.enabled` | Enable/disable Kafka chart installation | `false` |
| `kafka.replicaCount` | Number of Kafka brokers | `1` |
| `kafka.offsetsTopicReplicationFactor` | Kafka offsets topic replication factor | `1` |
| `kafka.zookeeper.replicaCount` | Number of Zookeeper replicas | `1` |
| `externalKafka.enabled` | Enable/disable external Kafka | `false` |
| `externalKafka.brokers` | External Kafka brokers | `localhost:9092` |
| `externalKafka.zkNodes` | External Zookeeper nodes | `localhost:2181` |
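
Similarly, an external Kafka can back the streaming support; in that case the embedded RabbitMQ (the default binder) is disabled. The broker and Zookeeper addresses below are placeholders:

```yaml
rabbitmq:
  enabled: false
externalKafka:
  enabled: true
  brokers: kafka.example.com:9092      # placeholder broker list
  zkNodes: zookeeper.example.com:2181  # placeholder Zookeeper nodes
```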
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
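a sketch of such an invocation, using the external Kafka parameters from the table above purely as illustrative values:

```console
$ helm install my-release \
  --set externalKafka.enabled=true \
  --set externalKafka.brokers=localhost:9092 \
    bitnami/spring-cloud-dataflow
```

Alternatively, a YAML file that specifies the values for the parameters can be provided with the `-f` flag while installing the chart.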

File diff suppressed because it is too large.


@@ -29,4 +29,4 @@ name: suitecrm
sources:
- https://github.com/bitnami/bitnami-docker-suitecrm
- https://www.suitecrm.com/
version: 9.3.15


@@ -48,197 +48,216 @@ The command removes all the Kubernetes components associated with the chart and
## Parameters
### Global parameters

| Name                      | Description                                     | Value |
| ------------------------- | ----------------------------------------------- | ----- |
| `global.imageRegistry`    | Global Docker image registry                    | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]`  |
| `global.storageClass`     | Global StorageClass for Persistent Volume(s)    | `nil` |
### Common parameters

| Name                | Description                                                                                                  | Value |
| ------------------- | ------------------------------------------------------------------------------------------------------------ | ----- |
| `kubeVersion`       | Force target Kubernetes version (using Helm capabilities if not set)                                         | `nil` |
| `nameOverride`      | String to partially override suitecrm.fullname template (will maintain the release name)                     | `nil` |
| `fullnameOverride`  | String to fully override suitecrm.fullname template                                                          | `nil` |
| `extraDeploy`       | Array with extra yaml to deploy with the chart. Evaluated as a template                                      | `[]`  |
| `commonAnnotations` | Common annotations to add to all SuiteCRM resources (sub-charts are not considered). Evaluated as a template | `{}`  |
| `commonLabels`      | Common labels to add to all SuiteCRM resources (sub-charts are not considered). Evaluated as a template      | `{}`  |
### SuiteCRM parameters

| Name                                 | Description                                                                               | Value                   |
| ------------------------------------ | ----------------------------------------------------------------------------------------- | ----------------------- |
| `image.registry`                     | SuiteCRM image registry                                                                   | `docker.io`             |
| `image.repository`                   | SuiteCRM image repository                                                                 | `bitnami/suitecrm`      |
| `image.tag`                          | SuiteCRM image tag (immutable tags are recommended)                                       | `7.11.20-debian-10-r22` |
| `image.pullPolicy`                   | SuiteCRM image pull policy                                                                | `IfNotPresent`          |
| `image.pullSecrets`                  | Specify docker-registry secret names as an array                                          | `[]`                    |
| `image.debug`                        | Specify if debug logs should be enabled                                                   | `false`                 |
| `replicaCount`                       | Number of replicas (requires ReadWriteMany PVC support)                                   | `1`                     |
| `suitecrmSkipInstall`                | Skip SuiteCRM installation wizard. Useful for migrations and restoring from SQL dump      | `false`                 |
| `suitecrmValidateUserIP`             | Whether to validate the user IP address or not                                            | `false`                 |
| `suitecrmHost`                       | SuiteCRM host to create application URLs                                                  | `nil`                   |
| `suitecrmUsername`                   | User of the application                                                                   | `user`                  |
| `suitecrmPassword`                   | Application password                                                                      | `nil`                   |
| `suitecrmEmail`                      | Admin email                                                                               | `user@example.com`      |
| `allowEmptyPassword`                 | Allow DB blank passwords                                                                  | `false`                 |
| `command`                            | Override default container command (useful when using custom images)                      | `nil`                   |
| `args`                               | Override default container args (useful when using custom images)                         | `nil`                   |
| `hostAliases`                        | Deployment pod host aliases                                                               | `[]`                    |
| `updateStrategy.type`                | Update strategy - only really applicable for deployments with RWO PVs attached            | `RollingUpdate`         |
| `extraEnvVars`                       | An array to add extra environment variables                                               | `[]`                    |
| `extraEnvVarsCM`                     | ConfigMap containing extra environment variables                                          | `nil`                   |
| `extraEnvVarsSecret`                 | Secret containing extra environment variables                                             | `nil`                   |
| `extraVolumes`                       | Extra volumes to add to the deployment. Requires setting `extraVolumeMounts`              | `[]`                    |
| `extraVolumeMounts`                  | Extra volume mounts to add to the container. Requires setting `extraVolumes`              | `[]`                    |
| `initContainers`                     | Extra init containers to add to the deployment                                            | `[]`                    |
| `sidecars`                           | Extra sidecar containers to add to the deployment                                         | `[]`                    |
| `tolerations`                        | Tolerations for pod assignment. Evaluated as a template.                                  | `[]`                    |
| `existingSecret`                     | Name of a secret with the application password                                            | `nil`                   |
| `suitecrmSmtpHost`                   | SMTP host                                                                                 | `nil`                   |
| `suitecrmSmtpPort`                   | SMTP port                                                                                 | `nil`                   |
| `suitecrmSmtpUser`                   | SMTP user                                                                                 | `nil`                   |
| `suitecrmSmtpPassword`               | SMTP password                                                                             | `nil`                   |
| `suitecrmSmtpProtocol`               | SMTP protocol [`ssl`, `tls`]                                                              | `nil`                   |
| `suitecrmNotifyAddress`              | SuiteCRM notify address                                                                   | `nil`                   |
| `suitecrmNotifyName`                 | SuiteCRM notify name                                                                      | `nil`                   |
| `containerPorts`                     | Container ports                                                                           | `{}`                    |
| `sessionAffinity`                    | Control where client requests go, to the same pod or round-robin                          | `None`                  |
| `podAffinityPreset`                  | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`       | `""`                    |
| `podAntiAffinityPreset`              | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`  | `soft`                  |
| `nodeAffinityPreset.type`            | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""`                    |
| `nodeAffinityPreset.key`             | Node label key to match. Ignored if `affinity` is set.                                    | `""`                    |
| `nodeAffinityPreset.values`          | Node label values to match. Ignored if `affinity` is set.                                 | `[]`                    |
| `affinity`                           | Affinity for pod assignment                                                               | `{}`                    |
| `nodeSelector`                       | Node labels for pod assignment. Evaluated as a template.                                  | `{}`                    |
| `resources.requests`                 | The requested resources for the container                                                 | `{}`                    |
| `podSecurityContext.enabled`         | Enable SuiteCRM pods' Security Context                                                    | `true`                  |
| `podSecurityContext.fsGroup`         | SuiteCRM pods' group ID                                                                   | `1001`                  |
| `containerSecurityContext.enabled`   | Enable SuiteCRM containers' Security Context                                              | `true`                  |
| `containerSecurityContext.runAsUser` | SuiteCRM containers' user ID                                                              | `1001`                  |
| `livenessProbe.enabled`              | Enable livenessProbe                                                                      | `true`                  |
| `livenessProbe.path`                 | Request path for livenessProbe                                                            | `/index.php`            |
| `livenessProbe.initialDelaySeconds`  | Initial delay seconds for livenessProbe                                                   | `600`                   |
| `livenessProbe.periodSeconds`        | Period seconds for livenessProbe                                                          | `10`                    |
| `livenessProbe.timeoutSeconds`       | Timeout seconds for livenessProbe                                                         | `5`                     |
| `livenessProbe.failureThreshold`     | Failure threshold for livenessProbe                                                       | `6`                     |
| `livenessProbe.successThreshold`     | Success threshold for livenessProbe                                                       | `1`                     |
| `readinessProbe.enabled` | Enable readinessProbe | `true` |
| `readinessProbe.path` | Request path for readinessProbe | `/index.php` |
| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `3` |
| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `startupProbe.enabled` | Enable startupProbe | `false` |
| `startupProbe.path` | Request path for startupProbe | `/index.php` |
| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `3` |
| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `60` |
| `startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `customLivenessProbe` | Override default liveness probe | `{}` |
| `customReadinessProbe` | Override default readiness probe | `{}` |
| `customStartupProbe` | Override default startup probe | `{}` |
| `lifecycleHooks` | lifecycleHooks for the container to automate configuration before or after startup | `nil` |
| `podAnnotations` | Pod annotations | `{}` |
| `podLabels` | Pod extra labels | `{}` |
### Database parameters

| Name                                        | Description                                                                              | Value              |
| ------------------------------------------- | ---------------------------------------------------------------------------------------- | ------------------ |
| `mariadb.enabled`                           | Whether to deploy a mariadb server to satisfy the application's database requirements    | `true`             |
| `mariadb.architecture`                      | MariaDB architecture. Allowed values: `standalone` or `replication`                      | `standalone`       |
| `mariadb.auth.rootPassword`                 | Password for the MariaDB `root` user                                                     | `""`               |
| `mariadb.auth.database`                     | Database name to create                                                                  | `bitnami_suitecrm` |
| `mariadb.auth.username`                     | Database user to create                                                                  | `bn_suitecrm`      |
| `mariadb.auth.password`                     | Password for the database                                                                | `""`               |
| `mariadb.primary.persistence.enabled`       | Enable database persistence using PVC                                                    | `true`             |
| `mariadb.primary.persistence.storageClass`  | MariaDB data Persistent Volume Storage Class                                             | `nil`              |
| `mariadb.primary.persistence.accessModes`   | Database Persistent Volume Access Modes                                                  | `[]`               |
| `mariadb.primary.persistence.size`          | Database Persistent Volume Size                                                          | `8Gi`              |
| `mariadb.primary.persistence.hostPath`      | Set path in case you want to use local host path volumes (not recommended in production) | `nil`              |
| `mariadb.primary.persistence.existingClaim` | Name of an existing `PersistentVolumeClaim` for MariaDB primary replicas                 | `nil`              |
| `externalDatabase.host`                     | Host of the existing database                                                            | `nil`              |
| `externalDatabase.port`                     | Port of the existing database                                                            | `3306`             |
| `externalDatabase.user`                     | Existing username in the external database                                               | `bn_suitecrm`      |
| `externalDatabase.password`                 | Password for the above username                                                          | `nil`              |
| `externalDatabase.database`                 | Name of the existing database                                                            | `bitnami_suitecrm` |
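As a sketch of how the parameters above combine, the bundled MariaDB can be disabled in favour of an existing server via a custom values file (host and credentials below are placeholders, not defaults):

```yaml
# Hypothetical values-external-db.yaml: skip the mariadb sub-chart
# and point SuiteCRM at an existing database server instead.
mariadb:
  enabled: false
externalDatabase:
  host: mariadb.example.com   # placeholder host
  port: 3306
  user: bn_suitecrm
  password: changeme          # placeholder credential
  database: bitnami_suitecrm
```

Such a file would be passed with `helm install -f values-external-db.yaml`.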
### Persistence parameters

| Name                        | Description                              | Value           |
| --------------------------- | ---------------------------------------- | --------------- |
| `persistence.enabled`       | Enable persistence using PVC             | `true`          |
| `persistence.storageClass`  | PVC Storage Class for SuiteCRM volume    | `nil`           |
| `persistence.accessMode`    | PVC Access Mode for SuiteCRM volume      | `ReadWriteOnce` |
| `persistence.size`          | PVC Storage Request for SuiteCRM volume  | `8Gi`           |
| `persistence.existingClaim` | An Existing PVC name for SuiteCRM volume | `nil`           |
| `persistence.hostPath`      | Host mount path for SuiteCRM volume      | `nil`           |
### Volume Permissions parameters

| Name                                   | Description                                                                                                                                               | Value                   |
| -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
| `volumePermissions.enabled`            | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false`                 |
| `volumePermissions.image.registry`     | Init container volume-permissions image registry                                                                                                          | `docker.io`             |
| `volumePermissions.image.repository`   | Init container volume-permissions image repository                                                                                                        | `bitnami/bitnami-shell` |
| `volumePermissions.image.tag`          | Init container volume-permissions image tag                                                                                                               | `10-debian-10-r123`     |
| `volumePermissions.image.pullPolicy`   | Init container volume-permissions image pull policy                                                                                                       | `Always`                |
| `volumePermissions.image.pullSecrets`  | Specify docker-registry secret names as an array                                                                                                          | `[]`                    |
| `volumePermissions.resources.limits`   | The resources limits for the container                                                                                                                    | `{}`                    |
| `volumePermissions.resources.requests` | The requested resources for the container                                                                                                                 | `{}`                    |
### Traffic Exposure Parameters

| Name                            | Description                                                                                   | Value                    |
| ------------------------------- | --------------------------------------------------------------------------------------------- | ------------------------ |
| `service.type`                  | Kubernetes Service type                                                                       | `LoadBalancer`           |
| `service.port`                  | Service HTTP port                                                                             | `8080`                   |
| `service.httpsPort`             | Service HTTPS port                                                                            | `8443`                   |
| `service.nodePorts.http`        | Kubernetes HTTP node port                                                                     | `""`                     |
| `service.nodePorts.https`       | Kubernetes HTTPS node port                                                                    | `""`                     |
| `service.externalTrafficPolicy` | Enable client source IP preservation                                                          | `Cluster`                |
| `ingress.enabled`               | Enable ingress controller resource                                                            | `false`                  |
| `ingress.certManager`           | Set this to true in order to add the corresponding annotations for cert-manager               | `false`                  |
| `ingress.hostname`              | Default host for the ingress resource                                                         | `suitecrm.local`         |
| `ingress.annotations`           | Ingress annotations                                                                           | `{}`                     |
| `ingress.hosts`                 | The list of additional hostnames to be covered with this ingress record                       | `nil`                    |
| `ingress.tls`                   | The TLS configuration for the ingress                                                         | `nil`                    |
| `ingress.secrets`               | If you're providing your own certificates, please use this to add the certificates as secrets | `nil`                    |
| `ingress.apiVersion`            | Force Ingress API version (automatically detected if not set)                                 | `nil`                    |
| `ingress.path`                  | Ingress path                                                                                  | `/`                      |
| `ingress.pathType`              | Ingress path type                                                                             | `ImplementationSpecific` |
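To illustrate the ingress parameters above, a minimal values fragment could expose the application through an ingress controller instead of a `LoadBalancer` service (hostname and annotation are illustrative placeholders, not chart defaults):

```yaml
# Illustrative snippet: serve SuiteCRM behind an ingress controller.
service:
  type: ClusterIP          # no external LoadBalancer needed with an ingress
ingress:
  enabled: true
  hostname: suitecrm.example.com        # placeholder hostname
  annotations:
    kubernetes.io/ingress.class: nginx  # assumes an NGINX ingress controller
```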
### Metrics parameters

| Name                        | Description                                                | Value                     |
| --------------------------- | ---------------------------------------------------------- | ------------------------- |
| `metrics.enabled`           | Start a side-car prometheus exporter                       | `false`                   |
| `metrics.image.registry`    | Apache exporter image registry                             | `docker.io`               |
| `metrics.image.repository`  | Apache exporter image repository                           | `bitnami/apache-exporter` |
| `metrics.image.tag`         | Apache exporter image tag (immutable tags are recommended) | `0.9.0-debian-10-r21`     |
| `metrics.image.pullPolicy`  | Image pull policy                                          | `IfNotPresent`            |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array           | `[]`                      |
| `metrics.resources`         | Metrics exporter resource requests and limits              | `{}`                      |
| `metrics.podAnnotations`    | Additional annotations for Metrics exporter pod            | `{}`                      |
### Certificate injection parameters

| Name                                                 | Description                                                               | Value                                    |
| ---------------------------------------------------- | ------------------------------------------------------------------------- | ---------------------------------------- |
| `certificates.customCertificate.certificateSecret`   | Secret containing the certificate and key to add                          | `""`                                     |
| `certificates.customCertificate.chainSecret.name`    | Name of the secret containing the certificate chain                       | `nil`                                    |
| `certificates.customCertificate.chainSecret.key`     | Key of the certificate chain file inside the secret                       | `nil`                                    |
| `certificates.customCertificate.certificateLocation` | Location in the container to store the certificate                        | `/etc/ssl/certs/ssl-cert-snakeoil.pem`   |
| `certificates.customCertificate.keyLocation`         | Location in the container to store the private key                        | `/etc/ssl/private/ssl-cert-snakeoil.key` |
| `certificates.customCertificate.chainLocation`       | Location in the container to store the certificate chain                  | `/etc/ssl/certs/mychain.pem`             |
| `certificates.customCAs`                             | Defines a list of secrets to import into the container trust store        | `[]`                                     |
| `certificates.command`                               | Override default container command (useful when using custom images)      | `nil`                                    |
| `certificates.args`                                  | Override default container args (useful when using custom images)         | `nil`                                    |
| `certificates.extraEnvVars`                          | Container sidecar extra environment variables                             | `[]`                                     |
| `certificates.extraEnvVarsCM`                        | ConfigMap containing extra environment variables                          | `nil`                                    |
| `certificates.extraEnvVarsSecret`                    | Secret containing extra environment variables (in case of sensitive data) | `nil`                                    |
| `certificates.image.registry`                        | Container sidecar registry                                                | `docker.io`                              |
| `certificates.image.repository`                      | Container sidecar image repository                                        | `bitnami/bitnami-shell`                  |
| `certificates.image.tag`                             | Container sidecar image tag (immutable tags are recommended)              | `10-debian-10-r123`                      |
| `certificates.image.pullPolicy`                      | Container sidecar image pull policy                                       | `IfNotPresent`                           |
| `certificates.image.pullSecrets`                     | Container sidecar image pull secrets                                      | `[]`                                     |
The above parameters map to the env variables defined in [bitnami/suitecrm](http://github.com/bitnami/bitnami-docker-suitecrm). For more information please refer to the [bitnami/suitecrm](http://github.com/bitnami/bitnami-docker-suitecrm) image documentation.

File diff suppressed because it is too large.