diff --git a/.github/workflows/generate-chart-readme.yml b/.github/workflows/generate-chart-readme.yml index a876109a4f..40a013bde1 100644 --- a/.github/workflows/generate-chart-readme.yml +++ b/.github/workflows/generate-chart-readme.yml @@ -46,6 +46,9 @@ on: - 'bitnami/prestashop/values.yaml' - 'bitnami/pytorch/values.yaml' - 'bitnami/rabbitmq/values.yaml' + - 'bitnami/spark/values.yaml' + - 'bitnami/spring-cloud-dataflow/values.yaml' + - 'bitnami/suitecrm/values.yaml' jobs: generate-chart-readme: diff --git a/bitnami/spark/Chart.yaml b/bitnami/spark/Chart.yaml index 8a2f9fa395..f81ac4ee15 100644 --- a/bitnami/spark/Chart.yaml +++ b/bitnami/spark/Chart.yaml @@ -22,4 +22,4 @@ name: spark sources: - https://github.com/bitnami/bitnami-docker-spark - https://spark.apache.org/ -version: 5.6.1 +version: 5.6.2 diff --git a/bitnami/spark/README.md b/bitnami/spark/README.md index 06ced699d0..1c969e577d 100644 --- a/bitnami/spark/README.md +++ b/bitnami/spark/README.md @@ -45,198 +45,196 @@ The command removes all the Kubernetes components associated with the chart and ## Parameters -The following tables lists the configurable parameters of the Apache Spark chart and their default values. 
- ### Global parameters -| Parameter | Description | Default | -|---------------------------|-------------------------------------------------|---------------------------------------------------------| -| `global.imageRegistry` | Global Docker image registry | `nil` | -| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | +| Name | Description | Value | +| ------------------------- | ----------------------------------------------- | ----- | +| `global.imageRegistry` | Global Docker image registry | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` | + ### Common parameters -| Parameter | Description | Default | -|---------------------|-----------------------------------------------------------------------------------------------------------|--------------------------------| -| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` | -| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` | -| `commonLabels` | Labels to add to all deployed objects | `{}` | -| `commonAnnotations` | Annotations to add to all deployed objects | `{}` | -| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` | -| `extraDeploy` | Array of extra objects to deploy with the release | `[]` (evaluated as a template) | +| Name | Description | Value | +| ------------------ | -------------------------------------------------------------------------------------------- | ----- | +| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` | +| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `nil` | +| `fullnameOverride` | String to fully override common.names.fullname template | `nil` | +| `extraDeploy` | Array of 
extra objects to deploy with the release | `[]` | + ### Spark parameters -| Parameter | Description | Default | -|---------------------|-----------------------------------------------------------------------------------------|---------------------------------------------------------| -| `image.registry` | spark image registry | `docker.io` | -| `image.repository` | spark Image name | `bitnami/spark` | -| `image.tag` | spark Image tag | `{TAG_NAME}` | -| `image.pullPolicy` | spark image pull policy | `IfNotPresent` | -| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `hostNetwork` | Use Host-Network for the PODs (if true, also dnsPolicy: ClusterFirstWithHostNet is set) | `false` | +| Name | Description | Value | +| ------------------- | ------------------------------------------------ | --------------------- | +| `image.registry` | Spark image registry | `docker.io` | +| `image.repository` | Spark image repository | `bitnami/spark` | +| `image.tag` | Spark image tag (immutable tags are recommended) | `3.1.2-debian-10-r18` | +| `image.pullPolicy` | Spark image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `image.debug` | Enable image debug mode | `false` | +| `hostNetwork` | Enable HOST Network | `false` | + ### Spark master parameters -| Parameter | Description | Default | -|---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------| -| `master.debug` | Specify if debug values should be set on the master | `false` | -| `master.webPort` | Specify the port where the web interface will listen on the master | `8080` | -| `master.clusterPort` | Specify the port where the master listens to communicate with workers | `7077` | -| 
`master.hostAliases` | Add deployment host aliases | `[]` | -| `master.daemonMemoryLimit` | Set the memory limit for the master daemon | No default | -| `master.configOptions` | Optional configuration if the form `-Dx=y` | No default | -| `master.securityContext.enabled` | Enable security context | `true` | -| `master.securityContext.fsGroup` | Group ID for the container | `1001` | -| `master.securityContext.runAsUser` | User ID for the container | `1001` | -| `master.securityContext.runAsGroup` | Group ID for the container | `0` | -| `master.securityContext.seLinuxOptions` | SELinux options for the container | `{}` | -| `master.podAnnotations` | Annotations for pods in StatefulSet | `{}` (The value is evaluated as a template) | -| `master.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` (The value is evaluated as a template) | -| `master.podAffinityPreset` | Spark master pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `master.podAntiAffinityPreset` | Spark master pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `soft` | -| `master.nodeAffinityPreset.type` | Spark master node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `master.nodeAffinityPreset.key` | Spark master node label key to match Ignored if `master.affinity` is set. | `""` | -| `master.nodeAffinityPreset.values` | Spark master node label values to match. Ignored if `master.affinity` is set. 
| `[]` | -| `master.affinity` | Spark master affinity for pod assignment | `{}` (evaluated as a template) | -| `master.nodeSelector` | Spark master node labels for pod assignment | `{}` (evaluated as a template) | -| `master.tolerations` | Spark master tolerations for pod assignment | `[]` (evaluated as a template) | -| `master.resources` | CPU/Memory resource requests/limits | `{}` | -| `master.extraEnvVars` | Extra environment variables to pass to the master container | `{}` | -| `master.extraVolumes` | Array of extra volumes to be added to the Spark master deployment (evaluated as template). Requires setting `master.extraVolumeMounts` | `nil` | -| `master.extraVolumeMounts` | Array of extra volume mounts to be added to the Spark master deployment (evaluated as template). Normally used with `master.extraVolumes`. | `nil` | -| `master.livenessProbe.enabled` | Turn on and off liveness probe | `true` | -| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 10 | -| `master.livenessProbe.periodSeconds` | How often to perform the probe | 10 | -| `master.livenessProbe.timeoutSeconds` | When the probe times out | 5 | -| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 2 | -| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | -| `master.readinessProbe.enabled` | Turn on and off readiness probe | `true` | -| `master.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 | -| `master.readinessProbe.periodSeconds` | How often to perform the probe | 10 | -| `master.readinessProbe.timeoutSeconds` | When the probe times out | 5 | -| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. 
| 6 |
-| `master.readinessProbe.successThreshold`    | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
+| Name                                        | Description                                                                                                   | Value  |
+| ------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------ |
+| `master.configurationConfigMap`             | Set a custom configuration by using an existing configMap with the configuration file.                        | `nil`  |
+| `master.webPort`                            | Specify the port where the web interface will listen on the master                                            | `8080` |
+| `master.clusterPort`                        | Specify the port where the master listens to communicate with workers                                         | `7077` |
+| `master.hostAliases`                        | Deployment pod host aliases                                                                                   | `[]`   |
+| `master.daemonMemoryLimit`                  | Set the memory limit for the master daemon                                                                    | `nil`  |
+| `master.configOptions`                      | Use a string to set the config options in the form "-Dx=y"                                                    | `nil`  |
+| `master.extraEnvVars`                       | Extra environment variables to pass to the master container                                                   | `nil`  |
+| `master.securityContext.enabled`            | Enable security context                                                                                       | `true` |
+| `master.securityContext.fsGroup`            | Group ID for the container                                                                                    | `1001` |
+| `master.securityContext.runAsUser`          | User ID for the container                                                                                     | `1001` |
+| `master.securityContext.runAsGroup`         | Group ID for the container                                                                                    | `0`    |
+| `master.securityContext.seLinuxOptions`     | SELinux options for the container                                                                             | `{}`   |
+| `master.podAnnotations`                     | Annotations for pods in StatefulSet                                                                           | `{}`   |
+| `master.extraPodLabels`                     | Extra labels for pods in StatefulSet                                                                          | `{}`   |
+| `master.podAffinityPreset`                  | Spark master pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`       | `""`   |
+| `master.podAntiAffinityPreset`              | Spark master pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`  | `soft` |
+| `master.nodeAffinityPreset.type`            | Spark master node affinity preset type. Ignored if `master.affinity` is set. 
Allowed values: `soft` or `hard` | `""` | +| `master.nodeAffinityPreset.key` | Spark master node label key to match Ignored if `master.affinity` is set. | `""` | +| `master.nodeAffinityPreset.values` | Spark master node label values to match. Ignored if `master.affinity` is set. | `[]` | +| `master.affinity` | Spark master affinity for pod assignment | `{}` | +| `master.nodeSelector` | Spark master node labels for pod assignment | `{}` | +| `master.tolerations` | Spark master tolerations for pod assignment | `[]` | +| `master.resources.limits` | The resources limits for the container | `{}` | +| `master.resources.requests` | The requested resources for the container | `{}` | +| `master.livenessProbe.enabled` | Enable livenessProbe | `true` | +| `master.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `180` | +| `master.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` | +| `master.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` | +| `master.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` | +| `master.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` | +| `master.readinessProbe.enabled` | Enable readinessProbe | `true` | +| `master.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` | +| `master.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` | +| `master.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` | +| `master.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` | +| `master.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` | +| `master.initContainers` | Add initContainers to the master pods. 
| `{}` | + ### Spark worker parameters -| Parameter | Description | Default | -|---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------| -| `worker.debug` | Specify if debug values should be set on workers | `false` | -| `worker.webPort` | Specify the port where the web interface will listen on the worker | `8080` | -| `worker.clusterPort` | Specify the port where the worker listens to communicate with the master | `7077` | -| `worker.extraPorts` | Specify the port where the running jobs inside the workers listens, [ContainerPort spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#containerport-v1-core) | `[]` | -| `worker.daemonMemoryLimit` | Set the memory limit for the worker daemon | No default | -| `worker.memoryLimit` | Set the maximum memory the worker is allowed to use | No default | -| `worker.coreLimit` | Se the maximum number of cores that the worker can use | No default | -| `worker.dir` | Set a custom working directory for the application | No default | -| `worker.hostAliases` | Add deployment host aliases | `[]` | -| `worker.javaOptions` | Set options for the JVM in the form `-Dx=y` | No default | -| `worker.configOptions` | Set extra options to configure the worker in the form `-Dx=y` | No default | -| `worker.replicaCount` | Set the number of workers | `2` | -| `worker.podManagementPolicy` | Statefulset Pod Management Policy Type | `OrderedReady` | -| `worker.autoscaling.enabled` | Enable autoscaling depending on CPU | `false` | -| `worker.autoscaling.CpuTargetPercentage` | k8s hpa cpu targetPercentage | `50` | -| `worker.autoscaling.replicasMax` | Maximum number of workers when using autoscaling | `5` | -| `worker.securityContext.enabled` | Enable security context | `true` | -| `worker.securityContext.fsGroup` 
| Group ID for the container | `1001` | -| `worker.securityContext.runAsUser` | User ID for the container | `1001` | -| `worker.securityContext.runAsGroup` | Group ID for the container | `0` | -| `worker.securityContext.seLinuxOptions` | SELinux options for the container | `{}` | -| `worker.podAnnotations` | Annotations for pods in StatefulSet | `{}` | -| `worker.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` (The value is evaluated as a template) | -| `worker.podAffinityPreset` | Spark worker pod affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `worker.podAntiAffinityPreset` | Spark worker pod anti-affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `soft` | -| `worker.nodeAffinityPreset.type` | Spark worker node affinity preset type. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `worker.nodeAffinityPreset.key` | Spark worker node label key to match Ignored if `worker.affinity` is set. | `""` | -| `worker.nodeAffinityPreset.values` | Spark worker node label values to match. Ignored if `worker.affinity` is set. 
| `[]` | -| `worker.affinity` | Spark worker affinity for pod assignment | `{}` (evaluated as a template) | -| `worker.nodeSelector` | Spark worker node labels for pod assignment | `{}` (evaluated as a template) | -| `worker.tolerations` | Spark worker tolerations for pod assignment | `[]` (evaluated as a template) | -| `worker.resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` | -| `worker.livenessProbe.enabled` | Turn on and off liveness probe | `true` | -| `worker.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 10 | -| `worker.livenessProbe.periodSeconds` | How often to perform the probe | 10 | -| `worker.livenessProbe.timeoutSeconds` | When the probe times out | 5 | -| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 2 | -| `worker.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | -| `worker.readinessProbe.enabled` | Turn on and off readiness probe | `true` | -| `worker.readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 | -| `worker.readinessProbe.periodSeconds` | How often to perform the probe | 10 | -| `worker.readinessProbe.timeoutSeconds` | When the probe times out | 5 | -| `worker.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | -| `worker.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | -| `master.extraEnvVars` | Extra environment variables to pass to the worker container | `{}` | -| `worker.extraVolumes` | Array of extra volumes to be added to the Spark worker deployment (evaluated as template). 
Requires setting `worker.extraVolumeMounts` | `nil` |
-| `worker.extraVolumeMounts`                  | Array of extra volume mounts to be added to the Spark worker deployment (evaluated as template). Normally used with `worker.extraVolumes`. | `nil` |
+| Name                                        | Description                                                                                                   | Value          |
+| ------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------- |
+| `worker.configurationConfigMap`             | Set a custom configuration by using an existing configMap with the configuration file.                        | `nil`          |
+| `worker.webPort`                            | Specify the port where the web interface will listen on the worker                                            | `8081`         |
+| `worker.clusterPort`                        | Specify the port where the worker listens to communicate with the master                                      | `nil`          |
+| `worker.hostAliases`                        | Add deployment host aliases                                                                                   | `[]`           |
+| `worker.extraPorts`                         | Specify the port where the running jobs inside the workers listens                                            | `[]`           |
+| `worker.daemonMemoryLimit`                  | Set the memory limit for the worker daemon                                                                    | `nil`          |
+| `worker.memoryLimit`                        | Set the maximum memory the worker is allowed to use                                                           | `nil`          |
+| `worker.coreLimit`                          | Set the maximum number of cores that the worker can use                                                       | `nil`          |
+| `worker.dir`                                | Set a custom working directory for the application                                                            | `nil`          |
+| `worker.javaOptions`                        | Set options for the JVM in the form `-Dx=y`                                                                   | `nil`          |
+| `worker.configOptions`                      | Set extra options to configure the worker in the form `-Dx=y`                                                 | `nil`          |
+| `worker.extraEnvVars`                       | An array to add extra env vars                                                                                | `nil`          |
+| `worker.replicaCount`                       | Number of spark workers (will be the minimum number when autoscaling is enabled)                              | `2`            |
+| `worker.podManagementPolicy`                | Statefulset Pod Management Policy Type                                                                        | `OrderedReady` |
+| `worker.securityContext.enabled`            | Enable security context                                                                                       | `true`         |
+| `worker.securityContext.fsGroup`            | Group ID for the container                                                                                    | `1001`         |
+| `worker.securityContext.runAsUser`          | User ID for the container                                                                                     | `1001`         |
+| 
`worker.securityContext.runAsGroup` | Group ID for the container | `0` | +| `worker.securityContext.seLinuxOptions` | SELinux options for the container | `{}` | +| `worker.podAnnotations` | Annotations for pods in StatefulSet | `{}` | +| `worker.extraPodLabels` | Extra labels for pods in StatefulSet | `{}` | +| `worker.podAffinityPreset` | Spark worker pod affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `worker.podAntiAffinityPreset` | Spark worker pod anti-affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `worker.nodeAffinityPreset.type` | Spark worker node affinity preset type. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `worker.nodeAffinityPreset.key` | Spark worker node label key to match Ignored if `worker.affinity` is set. | `""` | +| `worker.nodeAffinityPreset.values` | Spark worker node label values to match. Ignored if `worker.affinity` is set. 
| `[]`           |
+| `worker.affinity`                           | Spark worker affinity for pod assignment                                                                      | `{}`           |
+| `worker.nodeSelector`                       | Spark worker node labels for pod assignment                                                                   | `{}`           |
+| `worker.tolerations`                        | Spark worker tolerations for pod assignment                                                                   | `[]`           |
+| `worker.resources.limits`                   | The resources limits for the container                                                                        | `{}`           |
+| `worker.resources.requests`                 | The requested resources for the container                                                                     | `{}`           |
+| `worker.livenessProbe.enabled`              | Enable livenessProbe                                                                                          | `true`         |
+| `worker.livenessProbe.initialDelaySeconds`  | Initial delay seconds for livenessProbe                                                                       | `180`          |
+| `worker.livenessProbe.periodSeconds`        | Period seconds for livenessProbe                                                                              | `20`           |
+| `worker.livenessProbe.timeoutSeconds`       | Timeout seconds for livenessProbe                                                                             | `5`            |
+| `worker.livenessProbe.failureThreshold`     | Failure threshold for livenessProbe                                                                           | `6`            |
+| `worker.livenessProbe.successThreshold`     | Success threshold for livenessProbe                                                                           | `1`            |
+| `worker.readinessProbe.enabled`             | Enable readinessProbe                                                                                         | `true`         |
+| `worker.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe                                                                      | `30`           |
+| `worker.readinessProbe.periodSeconds`       | Period seconds for readinessProbe                                                                             | `10`           |
+| `worker.readinessProbe.timeoutSeconds`      | Timeout seconds for readinessProbe                                                                            | `5`            |
+| `worker.readinessProbe.failureThreshold`    | Failure threshold for readinessProbe                                                                          | `6`            |
+| `worker.readinessProbe.successThreshold`    | Success threshold for readinessProbe                                                                          | `1`            |
+| `worker.initContainers`                     | Add initContainers to the worker pods. 
| `{}` | +| `worker.autoscaling.enabled` | Enable replica autoscaling depending on CPU | `false` | +| `worker.autoscaling.CpuTargetPercentage` | Kubernetes HPA CPU target percentage | `50` | +| `worker.autoscaling.replicasMax` | Maximum number of workers when using autoscaling | `5` | + ### Security parameters -| Parameter | Description | Default | -|--------------------------------------|-------------------------------------------------------------------------|------------| -| `security.passwordsSecretName` | Secret to use when using security configuration to set custom passwords | No default | -| `security.rpc.authenticationEnabled` | Enable the RPC authentication | `false` | -| `security.rpc.encryptionEnabled` | Enable the encryption for RPC | `false` | -| `security.storageEncryptionEnabled` | Enable the encryption of the storage | `false` | -| `security.ssl.enabled` | Enable the SSL configuration | `false` | -| `security.ssl.needClientAuth` | Enable the client authentication | `false` | -| `security.ssl.protocol` | Set the SSL protocol | `TLSv1.2` | -| `security.ssl.existingSecret` | Set the name of the secret that contains the certificates | No default | -| `security.ssl.keystorePassword` | Set the password of the JKS Keystore | No default | -| `security.ssl.existingSecret` | Set the password of the JKS Truststore | No default | -| `security.ssl.autoGenerated` | Generate automatically self-signed TLS certificates | `false` | -| `security.ssl.resources.limits` | The resources limits for the TLS | `{}` | -| `security.ssl.resources.requests` | The requested resources for the TLS init | `{}` | +| Name | Description | Value | +| ------------------------------------ | ----------------------------------------------------------------------------- | --------- | +| `security.passwordsSecretName` | Name of the secret that contains all the passwords | `nil` | +| `security.rpc.authenticationEnabled` | Enable the RPC authentication | `false` | +| 
`security.rpc.encryptionEnabled` | Enable the encryption for RPC | `false` | +| `security.storageEncryptionEnabled` | Enables local storage encryption | `false` | +| `security.certificatesSecretName` | Name of the secret that contains the certificates. | `nil` | +| `security.ssl.enabled` | Enable the SSL configuration | `false` | +| `security.ssl.needClientAuth` | Enable the client authentication | `false` | +| `security.ssl.protocol` | Set the SSL protocol | `TLSv1.2` | +| `security.ssl.existingSecret` | Name of the existing secret containing the TLS certificates | `nil` | +| `security.ssl.autoGenerated` | Create self-signed TLS certificates. Currently only supports PEM certificates | `false` | +| `security.ssl.keystorePassword` | Set the password of the JKS Keystore | `nil` | +| `security.ssl.truststorePassword` | Truststore password. | `nil` | +| `security.ssl.resources.limits` | The resources limits for the container | `{}` | +| `security.ssl.resources.requests` | The requested resources for the container | `{}` | -### Exposure parameters -| Parameter | Description | Default | -|----------------------------------|---------------------------------------------------------------|--------------------------------| -| `service.type` | Kubernetes Service type | `ClusterIP` | -| `service.webPort` | Spark client port | `80` | -| `service.clusterPort` | Spark cluster port | `7077` | -| `service.nodePort` | Port to bind to for NodePort service type (client port) | `nil` | -| `service.nodePorts.cluster` | Kubernetes cluster node port | `""` | -| `service.nodePorts.web` | Kubernetes web node port | `""` | -| `service.annotations` | Annotations for spark service | {} | -| `service.loadBalancerIP` | loadBalancerIP if spark service type is `LoadBalancer` | `nil` | -| `ingress.enabled` | Enable ingress controller resource | `false` | -| `ingress.certManager` | Add annotations for cert-manager | `false` | -| `ingress.hostname` | Default host for the ingress resource | 
`spark.local` | -| `ingress.path` | Default path for the ingress resource | `/` | -| `ingress.tls` | Create TLS Secret | `false` | -| `ingress.annotations` | Ingress annotations | `[]` (evaluated as a template) | -| `ingress.extraHosts[0].name` | Additional hostnames to be covered | `nil` | -| `ingress.extraHosts[0].path` | Additional hostnames to be covered | `nil` | -| `ingress.extraPaths` | Additional arbitrary path/backend objects | `nil` | -| `ingress.extraTls[0].hosts[0]` | TLS configuration for additional hostnames to be covered | `nil` | -| `ingress.extraTls[0].secretName` | TLS configuration for additional hostnames to be covered | `nil` | -| `ingress.secrets[0].name` | TLS Secret Name | `nil` | -| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` | -| `ingress.secrets[0].key` | TLS Secret Key | `nil` | -| `ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `` | -| `ingress.path` | Ingress path | `/` | -| `ingress.pathType` | Ingress path type | `ImplementationSpecific` | +### Traffic Exposure parameters + +| Name | Description | Value | +| --------------------------- | ------------------------------------------------------------------------------------------------------ | ------------------------ | +| `service.type` | Kubernetes Service type | `ClusterIP` | +| `service.clusterPort` | Spark cluster port | `7077` | +| `service.webPort` | Spark client port | `80` | +| `service.nodePorts.cluster` | Kubernetes cluster node port | `""` | +| `service.nodePorts.web` | Kubernetes web node port | `""` | +| `service.loadBalancerIP` | Load balancer IP if spark service type is `LoadBalancer` | `nil` | +| `service.annotations` | Annotations for spark service | `{}` | +| `ingress.enabled` | Enable ingress controller resource | `false` | +| `ingress.certManager` | Set this to true in order to add the corresponding annotations for cert-manager | `false` | +| `ingress.pathType` | Ingress path type | 
`ImplementationSpecific` |
+| `ingress.apiVersion`        | Force Ingress API version (automatically detected if not set)                                          | `nil`                    |
+| `ingress.hostname`          | Default host for the ingress resource                                                                  | `spark.local`            |
+| `ingress.path`              | The Path to Spark. You may need to set this to '/*' in order to use this with ALB ingress controllers. | `/`                      |
+| `ingress.annotations`       | Ingress annotations                                                                                    | `{}`                     |
+| `ingress.tls`               | Enable TLS configuration for the hostname defined at ingress.hostname parameter                        | `false`                  |
+| `ingress.extraHosts`        | The list of additional hostnames to be covered with this ingress record.                               | `[]`                     |
+| `ingress.extraPaths`        | Any additional arbitrary paths that may need to be added to the ingress under the main host.           | `[]`                     |
+| `ingress.extraTls`          | The tls configuration for additional hostnames to be covered with this ingress record.                 | `[]`                     |
+| `ingress.secrets`           | If you're providing your own certificates, please use this to add the certificates as secrets          | `[]`                     |
+
### Metrics parameters
-| Parameter                                  | Description                                                                            | Default                                                                                       |
-|--------------------------------------------|----------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|
-| `metrics.enabled`                          | Start a side-car prometheus exporter                                                   | `false`                                                                                       |
-| `metrics.masterAnnotations`                | Annotations for enabling prometheus to access the metrics endpoint of the master nodes | `{prometheus.io/scrape: "true", prometheus.io/path: "/metrics/", prometheus.io/port: "8080"}` |
-| `metrics.workerAnnotations`                | Annotations for enabling prometheus to access the metrics endpoint of the worker nodes | `{prometheus.io/scrape: "true", prometheus.io/path: "/metrics/", prometheus.io/port: "8081"}` |
-| `metrics.resources.limits`                 | The resources limits for the metrics 
exporter container | `{}` | -| `metrics.resources.requests` | The requested resources for the metrics exporter container | `{}` | -| `metrics.podMonitor.enabled` | Create PodMonitor Resource for scraping metrics using PrometheusOperator | `false` | -| `metrics.podMonitor.extraMetricsEndpoints` | Add metrics endpoints for monitoring the jobs running in the worker nodes, [MetricsEndpoint](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmetricsendpoint) | `[]` | -| `metrics.podMonitor.namespace` | Namespace where podmonitor resource should be created | `nil` | -| `metrics.podMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` | -| `metrics.podMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `nil` | -| `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitors will be discovered by Prometheus | `{}` | -| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus | `false` | -| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | -| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as spark | -| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. 
| `[]` | +| Name | Description | Value | +| ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- | ------- | +| `metrics.enabled` | Start a side-car prometheus exporter | `false` | +| `metrics.masterAnnotations` | Annotations for the Prometheus metrics on master nodes | `{}` | +| `metrics.workerAnnotations` | Annotations for the Prometheus metrics on worker nodes | `{}` | +| `metrics.podMonitor.enabled` | If the operator is installed in your cluster, set to true to create a PodMonitor Resource for scraping metrics using PrometheusOperator | `false` | +| `metrics.podMonitor.extraMetricsEndpoints` | Add metrics endpoints for monitoring the jobs running in the worker nodes | `[]` | +| `metrics.podMonitor.namespace` | Specify the namespace in which the podMonitor resource will be created | `""` | +| `metrics.podMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` | +| `metrics.podMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `nil` | +| `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitors will be discovered by Prometheus | `{}` | +| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus | `false` | +| `metrics.prometheusRule.namespace` | Namespace where the prometheusRules resource should be created | `""` | +| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | +| `metrics.prometheusRule.rules` | Custom Prometheus [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) | `[]` | + Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. 
For example, diff --git a/bitnami/spark/values.yaml b/bitnami/spark/values.yaml index 2ec12afa8e..c5c9fbd448 100644 --- a/bitnami/spark/values.yaml +++ b/bitnami/spark/values.yaml @@ -1,14 +1,44 @@ +## @section Global parameters ## Global Docker image parameters ## Please, note that this will override the image parameters, including dependencies, configured to use the global value -## Current available global Docker image parameters: imageRegistry and imagePullSecrets +## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass + +## @param global.imageRegistry Global Docker image registry +## @param global.imagePullSecrets Global Docker registry secret names as an array ## -# global: -# imageRegistry: myRegistryName -# imagePullSecrets: -# - myRegistryKeySecretName +global: + imageRegistry: + ## E.g. + ## imagePullSecrets: + ## - myRegistryKeySecretName + ## + imagePullSecrets: [] + +## @section Common parameters + +## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set) +## +kubeVersion: +## @param nameOverride String to partially override common.names.fullname template (will maintain the release name) +## +nameOverride: +## @param fullnameOverride String to fully override common.names.fullname template +## +fullnameOverride: +## @param extraDeploy Array of extra objects to deploy with the release +## +extraDeploy: [] + +## @section Spark parameters ## Bitnami Spark image version ## ref: https://hub.docker.com/r/bitnami/spark/tags/ +## @param image.registry Spark image registry +## @param image.repository Spark image repository +## @param image.tag Spark image tag (immutable tags are recommended) +## @param image.pullPolicy Spark image pull policy +## @param image.pullSecrets Specify docker-registry secret names as an array +## @param image.debug Enable image debug mode ## image: registry: docker.io @@ -19,68 +49,61 @@ image: ## ref: 
http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent - - ## Pull secret for this image - # pullSecrets: - # - myRegistryKeySecretName - + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName + ## + pullSecrets: [] ## Set to true if you would like to see extra information on logs ## It turns BASH and/or NAMI debugging in the image ## debug: false - -## Enable HOST Network -## If hostNetwork true -> dnsPolicy is set to ClusterFirstWithHostNet +## @param hostNetwork Enable HOST Network +## If hostNetwork is true, then dnsPolicy is set to ClusterFirstWithHostNet ## hostNetwork: false -## Force target Kubernetes version (using Helm capabilites if not set) -## -kubeVersion: - -## String to partially override common.names.fullname template (will maintain the release name) -## -# nameOverride: - -## String to fully override common.names.fullname template -## -# fullnameOverride: +## @section Spark master parameters ## Spark master specific configuration ## master: - ## Set a custom configuration by using an existing configMap with the configuration file. + ## @param master.configurationConfigMap Set a custom configuration by using an existing configMap with the configuration file. 
## - # configurationConfigMap: - - ## Spark container ports + configurationConfigMap: + ## @param master.webPort Specify the port where the web interface will listen on the master ## webPort: 8080 + ## @param master.clusterPort Specify the port where the master listens to communicate with workers + ## clusterPort: 7077 - - ## Deployment pod host aliases + ## @param master.hostAliases Deployment pod host aliases ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## hostAliases: [] - - ## Set the master daemon memory limit. + ## @param master.daemonMemoryLimit Set the memory limit for the master daemon ## - # daemonMemoryLimit: - - ## Use a string to set the config options for in the form "-Dx=y" + daemonMemoryLimit: + ## @param master.configOptions Use a string to set the config options in the form "-Dx=y" ## - # configOptions: - - ## An array to add extra env vars + configOptions: + ## @param master.extraEnvVars Extra environment variables to pass to the master container ## For example: ## extraEnvVars: ## - name: SPARK_DAEMON_JAVA_OPTS ## value: -Dx=y ## - # extraEnvVars: - + extraEnvVars: ## Kubernetes Security Context ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## @param master.securityContext.enabled Enable security context + ## @param master.securityContext.fsGroup Group ID for the container + ## @param master.securityContext.runAsUser User ID for the container + ## @param master.securityContext.runAsGroup Group ID for the container + ## @param master.securityContext.seLinuxOptions SELinux options for the container ## securityContext: enabled: true @@ -88,84 +111,80 @@ master: runAsUser: 1001 runAsGroup: 0 seLinuxOptions: {} - - ## Annotations to add to the statefulset - ## + ## @param master.podAnnotations Annotations for pods in StatefulSet ## podAnnotations: {} - - ## Labes to add to the statefulset - ## + ## @param master.extraPodLabels Extra labels for pods in
StatefulSet ## extraPodLabels: {} - - ## Spark master pod affinity preset + ## @param master.podAffinityPreset Spark master pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard ## podAffinityPreset: '' - - ## Spark master pod anti-affinity preset + ## @param master.podAntiAffinityPreset Spark master pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard ## podAntiAffinityPreset: soft - ## Spark master node affinity preset ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity - ## Allowed values: soft, hard ## nodeAffinityPreset: - ## Node affinity type - ## Allowed values: soft, hard + ## @param master.nodeAffinityPreset.type Spark master node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` ## type: '' - ## Node label key to match + ## @param master.nodeAffinityPreset.key Spark master node label key to match. Ignored if `master.affinity` is set. ## E.g. ## key: "kubernetes.io/e2e-az-name" ## key: '' - ## Node label values to match + ## @param master.nodeAffinityPreset.values Spark master node label values to match. Ignored if `master.affinity` is set. ## E.g.
## values: ## - e2e-az1 ## - e2e-az2 ## values: [] - - ## Affinity for Spark master pods assignment + ## @param master.affinity Spark master affinity for pod assignment ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## Note: master.podAffinityPreset, master.podAntiAffinityPreset, and master.nodeAffinityPreset will be ignored when it's set ## affinity: {} - - ## Node labels for Spark master pods assignment + ## @param master.nodeSelector Spark master node labels for pod assignment ## ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} - - ## Tolerations for Spark master pods assignment + ## @param master.tolerations Spark master tolerations for pod assignment ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## tolerations: [] - - ## Configure resource requests and limits + ## Container resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param master.resources.limits The resources limits for the container + ## @param master.resources.requests The requested resources for the container ## resources: - # We usually recommend not to specify default resources and to leave this as a conscious - # choice for the user. This also increases chances charts run on environments with little - # resources, such as Minikube. If you do want to specify resources, uncomment the following - # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ ## Example: + ## limits: + ## cpu: 250m + ## memory: 256Mi limits: {} - # cpu: 250m - # memory: 256Mi + ## Examples: + ## requests: + ## cpu: 250m + ## memory: 256Mi requests: {} - # cpu: 250m - # memory: 256Mi - - ## Configure liveness and readiness probes - ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) + ## Configure extra options for liveness probe + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param master.livenessProbe.enabled Enable livenessProbe + ## @param master.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe + ## @param master.livenessProbe.periodSeconds Period seconds for livenessProbe + ## @param master.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe + ## @param master.livenessProbe.failureThreshold Failure threshold for livenessProbe + ## @param master.livenessProbe.successThreshold Success threshold for livenessProbe ## livenessProbe: enabled: true @@ -174,6 +193,15 @@ master: timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 + ## Configure extra options for readiness probe + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param master.readinessProbe.enabled Enable readinessProbe + ## @param master.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe + ## @param master.readinessProbe.periodSeconds Period seconds for readinessProbe + ## @param master.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe + ## @param master.readinessProbe.failureThreshold Failure threshold for readinessProbe + ## @param master.readinessProbe.successThreshold Success threshold for readinessProbe + ## readinessProbe: enabled: true initialDelaySeconds: 30 @@ -181,8 +209,7 @@ master: timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 - - ## Add 
initContainers to the master pods. + ## @param master.initContainers Add initContainers to the master pods. ## Example: ## initContainers: ## - name: your-image-name @@ -194,73 +221,70 @@ master: ## initContainers: {} +## @section Spark worker parameters + ## Spark worker specific configuration ## worker: - ## Set a custom configuration by using an existing configMap with the configuration file. + ## @param worker.configurationConfigMap Set a custom configuration by using an existing configMap with the configuration file. ## - # configurationConfigMap: - - ## Spark container ports + configurationConfigMap: + ## @param worker.webPort Specify the port where the web interface will listen on the worker ## webPort: 8081 - # clusterPort: - - ## Deployment pod host aliases + ## @param worker.clusterPort Specify the port where the worker listens to communicate with the master + ## + clusterPort: + ## @param worker.hostAliases Add deployment host aliases ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## hostAliases: [] - - ## Add ports for exposing jobs running inside the worker nodes + ## @param worker.extraPorts Specify the ports where the jobs running inside the workers listen ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#containerport-v1-core + ## e.g: + ## - name: myapp + ## containerPort: 8000 + ## protocol: TCP ## extraPorts: [] - # - name: myapp - # containerPort: 8000 - # protocol: TCP - - ## Set the daemonMemoryLimit as the daemon max memory + ## @param worker.daemonMemoryLimit Set the memory limit for the worker daemon ## - # daemonMemoryLimit: - - ## Set the worker memory limit + daemonMemoryLimit: + ## @param worker.memoryLimit Set the maximum memory the worker is allowed to use ## - # memoryLimit: - - ## Set the maximum number of cores + memoryLimit: + ## @param worker.coreLimit Set the maximum number of cores that the worker can use ## - # coreLimit: - - ## Working
directory for the application + coreLimit: + ## @param worker.dir Set a custom working directory for the application ## - # dir: - - ## Options for the JVM as "-Dx=y" + dir: + ## @param worker.javaOptions Set options for the JVM in the form `-Dx=y` ## - # javaOptions: - - ## Configuration options in the form "-Dx=y" + javaOptions: + ## @param worker.configOptions Set extra options to configure the worker in the form `-Dx=y` ## - # configOptions: - - ## An array to add extra env vars + configOptions: + ## @param worker.extraEnvVars An array to add extra env vars ## For example: ## extraEnvVars: ## - name: SPARK_DAEMON_JAVA_OPTS ## value: -Dx=y - # extraEnvVars: - - ## Number of spark workers (will be the min number when autoscaling is enabled) + extraEnvVars: + ## @param worker.replicaCount Number of spark workers (will be the minimum number when autoscaling is enabled) ## replicaCount: 2 - - ## Pod management policy + ## @param worker.podManagementPolicy Statefulset Pod Management Policy Type ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies ## podManagementPolicy: OrderedReady - ## Kubernetes Security Context ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## @param worker.securityContext.enabled Enable security context + ## @param worker.securityContext.fsGroup Group ID for the container + ## @param worker.securityContext.runAsUser User ID for the container + ## @param worker.securityContext.runAsGroup Group ID for the container + ## @param worker.securityContext.seLinuxOptions SELinux options for the container ## securityContext: enabled: true @@ -268,84 +292,80 @@ worker: runAsUser: 1001 runAsGroup: 0 seLinuxOptions: {} - - ## Annotations to add to the statefulset - ## + ## @param worker.podAnnotations Annotations for pods in StatefulSet ## podAnnotations: {} - - ## Labes to add to the statefulset - ## + ## @param worker.extraPodLabels Extra labels for pods in StatefulSet ## 
extraPodLabels: {} - - ## Spark worker pod affinity preset + ## @param worker.podAffinityPreset Spark worker pod affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard ## podAffinityPreset: '' - - ## Spark worker pod anti-affinity preset + ## @param worker.podAntiAffinityPreset Spark worker pod anti-affinity preset. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard ## podAntiAffinityPreset: soft - ## Spark worker node affinity preset ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity - ## Allowed values: soft, hard ## nodeAffinityPreset: - ## Node affinity type - ## Allowed values: soft, hard + ## @param worker.nodeAffinityPreset.type Spark worker node affinity preset type. Ignored if `worker.affinity` is set. Allowed values: `soft` or `hard` ## type: '' - ## Node label key to match + ## @param worker.nodeAffinityPreset.key Spark worker node label key to match. Ignored if `worker.affinity` is set. ## E.g. ## key: "kubernetes.io/e2e-az-name" ## key: '' - ## Node label values to match + ## @param worker.nodeAffinityPreset.values Spark worker node label values to match. Ignored if `worker.affinity` is set. ## E.g.
## values: ## - e2e-az1 ## - e2e-az2 ## values: [] - - ## Affinity for Spark worker pods assignment + ## @param worker.affinity Spark worker affinity for pod assignment ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## Note: worker.podAffinityPreset, worker.podAntiAffinityPreset, and worker.nodeAffinityPreset will be ignored when it's set ## affinity: {} - - ## Node labels for Spark worker pods assignment + ## @param worker.nodeSelector Spark worker node labels for pod assignment ## ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} - - ## Tolerations for Spark master worker assignment + ## @param worker.tolerations Spark worker tolerations for pod assignment ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## tolerations: [] - - ## Configure resource requests and limits + ## Container resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param worker.resources.limits The resources limits for the container + ## @param worker.resources.requests The requested resources for the container ## resources: - # We usually recommend not to specify default resources and to leave this as a conscious - # choice for the user. This also increases chances charts run on environments with little - # resources, such as Minikube. If you do want to specify resources, uncomment the following - # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ ## Example: + ## limits: + ## cpu: 250m + ## memory: 256Mi limits: {} - # cpu: 250m - # memory: 256Mi + ## Examples: + ## requests: + ## cpu: 250m + ## memory: 256Mi requests: {} - # cpu: 250m - # memory: 256Mi - - ## Configure liveness and readiness probes - ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) + ## Configure extra options for liveness probe + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param worker.livenessProbe.enabled Enable livenessProbe + ## @param worker.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe + ## @param worker.livenessProbe.periodSeconds Period seconds for livenessProbe + ## @param worker.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe + ## @param worker.livenessProbe.failureThreshold Failure threshold for livenessProbe + ## @param worker.livenessProbe.successThreshold Success threshold for livenessProbe ## livenessProbe: enabled: true @@ -354,6 +374,15 @@ worker: timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 + ## Configure extra options for readiness probe + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param worker.readinessProbe.enabled Enable readinessProbe + ## @param worker.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe + ## @param worker.readinessProbe.periodSeconds Period seconds for readinessProbe + ## @param worker.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe + ## @param worker.readinessProbe.failureThreshold Failure threshold for readinessProbe + ## @param worker.readinessProbe.successThreshold Success threshold for readinessProbe + ## readinessProbe: enabled: true initialDelaySeconds: 30 @@ -361,8 +390,7 @@ worker: timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 - - ## Add 
initContainers to the master pods. + ## @param worker.initContainers Add initContainers to the worker pods. ## Example: ## initContainers: ## - name: your-image-name @@ -373,7 +401,6 @@ worker: ## containerPort: 1234 ## initContainers: {} - ## Array to add extra volumes ## ## extraVolumes: @@ -383,156 +410,162 @@ worker: ## Autoscaling parameters ## autoscaling: - ## Enable replica autoscaling depending on CPU + ## @param worker.autoscaling.enabled Enable replica autoscaling depending on CPU ## enabled: false + ## @param worker.autoscaling.CpuTargetPercentage Kubernetes HPA CPU target percentage + ## CpuTargetPercentage: 50 - ## Max number of workers when using autoscaling + ## @param worker.autoscaling.replicasMax Maximum number of workers when using autoscaling ## replicasMax: 5 +## @section Security parameters + ## Security configuration ## security: - ## Name of the secret that contains all the passwords. This is optional, by default random passwords are generated. + ## @param security.passwordsSecretName Name of the secret that contains all the passwords + ## This is optional, by default random passwords are generated ## - # passwordsSecretName: - + passwordsSecretName: ## RPC configuration + ## @param security.rpc.authenticationEnabled Enable the RPC authentication + ## @param security.rpc.encryptionEnabled Enable the encryption for RPC ## rpc: authenticationEnabled: false encryptionEnabled: false - - ## Enables local storage encryption + ## @param security.storageEncryptionEnabled Enables local storage encryption ## storageEncryptionEnabled: false - - ## Name of the secret that contains the certificates. + ## @param security.certificatesSecretName Name of the secret that contains the certificates. ## It should contains two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format. ## DEPRECATED.
Use `security.ssl.existingSecret` instead ## - # certificatesSecretName: - + certificatesSecretName: ## SSL configuration ## ssl: + ## @param security.ssl.enabled Enable the SSL configuration + ## enabled: false + ## @param security.ssl.needClientAuth Enable the client authentication + ## needClientAuth: false + ## @param security.ssl.protocol Set the SSL protocol + ## protocol: TLSv1.2 - ## Name of the existing secret containing the TLS certificates. + ## @param security.ssl.existingSecret Name of the existing secret containing the TLS certificates ## It should contains two keys called "spark-keystore.jks" and "spark-truststore.jks" with the files in JKS format. ## existingSecret: - ## Create self-signed TLS certificates. Currently only supports PEM certificates. + ## @param security.ssl.autoGenerated Create self-signed TLS certificates. Currently only supports PEM certificates ## The Spark container will generate a JKS keystore and trustore using the PEM certificates. ## autoGenerated: false - ## Key, Keystore and Truststore passwords. + ## @param security.ssl.keystorePassword Set the password of the JKS Keystore ## keystorePassword: + ## @param security.ssl.truststorePassword Truststore password. + ## truststorePassword: - + ## Container resource requests and limits + ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ ## @param security.ssl.resources.limits The resources limits for the container + ## @param security.ssl.resources.requests The requested resources for the container + ## resources: - ## We usually recommend not to specify default resources and to leave this as a conscious - ## choice for the user. This also increases chances charts run on environments with little - ## resources, such as Minikube. If you do want to specify resources, uncomment the following - ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. - ## + ## Example: + ## limits: + ## cpu: 100m + ## memory: 128Mi limits: {} - ## cpu: 100m - ## memory: 128Mi - ## + ## Examples: + ## requests: + ## cpu: 100m + ## memory: 128Mi requests: {} - ## cpu: 100m - ## memory: 128Mi - ## + +## @section Traffic Exposure parameters ## Service parameters ## service: - ## Kubernetes service type + ## @param service.type Kubernetes Service type ## type: ClusterIP - - ## Cluster Service port + ## @param service.clusterPort Spark cluster port ## clusterPort: 7077 - - ## Web Service port + ## @param service.webPort Spark client port ## webPort: 80 - ## Specify the nodePort(s) value(s) for the LoadBalancer and NodePort service types. ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + ## @param service.nodePorts.cluster Kubernetes cluster node port + ## @param service.nodePorts.web Kubernetes web node port ## nodePorts: cluster: '' web: '' - - ## Set the LoadBalancer service type to internal only. + ## @param service.loadBalancerIP Load balancer IP if spark service type is `LoadBalancer` + ## Set the LoadBalancer service type to internal only ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer ## - # loadBalancerIP: - - ## Provide any additional annotations which may be required. This can be used to - ## set the LoadBalancer service type to internal only. 
+ loadBalancerIP: + ## @param service.annotations Annotations for spark service + ## This can be used to set the LoadBalancer service type to internal only. ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer ## annotations: {} - ## Configure the ingress resource that allows you to access the ## Spark installation. Set up the URL ## ref: http://kubernetes.io/docs/user-guide/ingress/ ## ingress: - ## Set to true to enable ingress record generation + ## @param ingress.enabled Enable ingress controller resource ## enabled: false - - ## Set this to true in order to add the corresponding annotations for cert-manager + ## @param ingress.certManager Set this to true in order to add the corresponding annotations for cert-manager ## certManager: false - - ## Ingress Path type + ## @param ingress.pathType Ingress path type ## pathType: ImplementationSpecific - - ## Override API Version (automatically detected if not set) + ## @param ingress.apiVersion Force Ingress API version (automatically detected if not set) ## apiVersion: - - ## When the ingress is enabled, a host pointing to this will be created + ## @param ingress.hostname Default host for the ingress resource ## hostname: spark.local - - ## The Path to Spark. You may need to set this to '/*' in order to use this - ## with ALB ingress controllers. + ## @param ingress.path The Path to Spark. You may need to set this to '/*' in order to use this with ALB ingress controllers. 
## path: / - - ## Ingress annotations done as key:value pairs + ## @param ingress.annotations Ingress annotations ## For a full list of possible ingress annotations, please see ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md ## ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set ## annotations: {} - - ## Enable TLS configuration for the hostname defined at ingress.hostname parameter + ## @param ingress.tls Enable TLS configuration for the hostname defined at ingress.hostname parameter ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }} ## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it ## tls: false - - ## The list of additional hostnames to be covered with this ingress record. + ## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record. ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array ## extraHosts: ## - name: spark.local ## path: / ## - - ## Any additional arbitrary paths that may need to be added to the ingress under the main host. + extraHosts: [] + ## @param ingress.extraPaths Any additional arbitrary paths that may need to be added to the ingress under the main host. ## For example: The ALB ingress controller requires a special rule for handling SSL redirection. ## extraPaths: ## - path: /* @@ -540,16 +573,16 @@ ingress: ## serviceName: ssl-redirect ## servicePort: use-annotation ## - - ## The tls configuration for additional hostnames to be covered with this ingress record. + extraPaths: [] + ## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record. 
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls ## extraTls: ## - hosts: ## - spark.local ## secretName: spark.local-tls ## - - ## If you're providing your own certificates, please use this to add the certificates as secrets + extraTls: [] + ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets ## key and certificate should start with -----BEGIN CERTIFICATE----- or ## -----BEGIN RSA PRIVATE KEY----- ## @@ -558,67 +591,78 @@ ingress: ## ## It is also possible to create and manage the certificates outside of this helm chart ## Please see README.md for more information - ## - secrets: [] + ## e.g: ## - name: spark.local-tls ## key: ## certificate: ## + secrets: [] + +## @section Metrics parameters ## Metrics configuration ## metrics: + ## @param metrics.enabled Start a side-car prometheus exporter + ## enabled: false - - ## Annotations for the Prometheus metrics on master nodes + ## @param metrics.masterAnnotations [object] Annotations for the Prometheus metrics on master nodes ## masterAnnotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics/' prometheus.io/port: '{{ .Values.master.webPort }}' - ## Annotations for the Prometheus metrics on worker nodes + ## @param metrics.workerAnnotations [object] Annotations for the Prometheus metrics on worker nodes ## workerAnnotations: prometheus.io/scrape: 'true' prometheus.io/path: '/metrics/' prometheus.io/port: '{{ .Values.worker.webPort }}' - ## Prometheus Service Monitor ## ref: https://github.com/coreos/prometheus-operator ## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint ## podMonitor: - ## If the operator is installed in your cluster, set to true to create a PodMonitor entry + ## @param metrics.podMonitor.enabled If the operator is installed in your cluster, set to true to create a PodMonitor Resource for scraping metrics using PrometheusOperator ## enabled: false - ## 
Add metrics endpoints for monitoring the jobs running in the worker nodes + ## @param metrics.podMonitor.extraMetricsEndpoints Add metrics endpoints for monitoring the jobs running in the worker nodes ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmetricsendpoint + ## e.g: + ## - port: myapp + ## path: /metrics/ ## extraMetricsEndpoints: [] - # - port: myapp - # path: /metrics/ - ## Specify the namespace in which the podMonitor resource will be created + ## @param metrics.podMonitor.namespace Specify the namespace in which the podMonitor resource will be created ## - # namespace: "" - ## Specify the interval at which metrics should be scraped + namespace: "" + ## @param metrics.podMonitor.interval Specify the interval at which metrics should be scraped ## interval: 30s - ## Specify the timeout after which the scrape is ended + ## @param metrics.podMonitor.scrapeTimeout Specify the timeout after which the scrape is ended + ## e.g: + ## scrapeTimeout: 30s ## - # scrapeTimeout: 30s - ## Used to pass Labels that are used by the Prometheus installed in your cluster to select PodMonitors to work with + scrapeTimeout: + ## @param metrics.podMonitor.additionalLabels Additional labels that can be used so PodMonitors will be discovered by Prometheus ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec ## additionalLabels: {} - ## Custom PrometheusRule to be defined ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions ## prometheusRule: + ## @param metrics.prometheusRule.enabled Set this to true to create prometheusRules for Prometheus + ## enabled: false - additionalLabels: {} + ## @param metrics.prometheusRule.namespace Namespace where the prometheusRules resource should be created + ## namespace: '' + ## @param 
metrics.prometheusRule.additionalLabels Additional labels that can be used so prometheusRules will be discovered by Prometheus + ## + additionalLabels: {} + ## @param metrics.prometheusRule.rules Custom Prometheus [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) ## These are just examples rules, please adapt them to your needs. ## Make sure to constraint the rules to the current postgresql service. ## rules: @@ -632,7 +676,3 @@ metrics: ## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s). ## rules: [] - -## Extra objects to deploy (value evaluated as a template) -## -extraDeploy: [] diff --git a/bitnami/spring-cloud-dataflow/Chart.yaml b/bitnami/spring-cloud-dataflow/Chart.yaml index 7e9ff1edf5..12e75e439b 100644 --- a/bitnami/spring-cloud-dataflow/Chart.yaml +++ b/bitnami/spring-cloud-dataflow/Chart.yaml @@ -39,4 +39,4 @@ sources: - https://github.com/bitnami/bitnami-docker-spring-cloud-dataflow - https://github.com/bitnami/bitnami-docker-spring-cloud-skipper - https://dataflow.spring.io/ -version: 3.0.0 +version: 3.0.1 diff --git a/bitnami/spring-cloud-dataflow/README.md b/bitnami/spring-cloud-dataflow/README.md index eeee582a54..69595720e0 100644 --- a/bitnami/spring-cloud-dataflow/README.md +++ b/bitnami/spring-cloud-dataflow/README.md @@ -44,309 +44,327 @@ helm uninstall my-release ## Parameters -The following tables lists the configurable parameters of the Spring Cloud Data Flow chart and their default values per section/component: - ### Global parameters -| Parameter | Description | Default | -|---------------------------|-------------------------------------------------|---------------------------------------------------------| -| `global.imageRegistry` | Global Docker image registry | `nil` | -| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `global.storageClass` | Global storage class for dynamic 
provisioning | `nil` | +| Name | Description | Value | +| ------------------------- | ----------------------------------------------- | ----- | +| `global.imageRegistry` | Global Docker image registry | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` | +| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `nil` | + ### Common parameters -| Parameter | Description | Default | -|---------------------|----------------------------------------------------------------------|--------------------------------| -| `nameOverride` | String to partially override common.names.fullname | `nil` | -| `fullnameOverride` | String to fully override common.names.fullname | `nil` | -| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` | -| `commonLabels` | Labels to add to all deployed objects | `{}` | -| `commonAnnotations` | Annotations to add to all deployed objects | `{}` | -| `extraDeploy` | Array of extra objects to deploy with the release | `[]` (evaluated as a template) | -| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` | +| Name | Description | Value | +| ------------------ | ------------------------------------------------------------------------------------- | --------------- | +| `nameOverride` | String to partially override scdf.fullname template (will maintain the release name). | `nil` | +| `fullnameOverride` | String to fully override scdf.fullname template. 
| `nil` | +| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` | +| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` | +| `extraDeploy` | Array of extra objects to deploy with the release | `[]` | + ### Dataflow Server parameters -| Parameter | Description | Default | -|-----------------------------------------------|------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------| -| `server.image.registry` | Spring Cloud Dataflow image registry | `docker.io` | -| `server.image.repository` | Spring Cloud Dataflow image name | `bitnami/spring-cloud-dataflow` | -| `server.image.tag` | Spring Cloud Dataflow image tag | `{TAG_NAME}` | -| `server.image.pullPolicy` | Spring Cloud Dataflow image pull policy | `IfNotPresent` | -| `server.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `server.composedTaskRunner.image.registry` | Spring Cloud Dataflow Composed Task Runner image registry | `docker.io` | -| `server.composedTaskRunner.image.repository` | Spring Cloud Dataflow Composed Task Runner image name | `bitnami/spring-cloud-dataflow-composed-task-runner` | -| `server.composedTaskRunner.image.tag` | Spring Cloud Dataflow Composed Task Runner image tag | `{TAG_NAME}` | -| `server.composedTaskRunner.image.pullPolicy` | Spring Cloud Dataflow Composed Task Runner image pull policy | `IfNotPresent` | -| `server.composedTaskRunner.image.pullSecrets` | Spring Cloud Dataflow Composed Task Runner image pull secrets | `[]` | -| `server.command` | Override sever command | `nil` | -| `server.args` | Override server args | `nil` | -| `server.configuration.streamingEnabled` | Enables or disables streaming data processing | `true` | -| `server.configuration.batchEnabled` | Enables or disables bath data (tasks and schedules) 
processing | `true` | -| `server.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` | -| `server.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` | -| `server.configuration.containerRegistries` | Container registries configuration | `{}` (check `values.yaml` for more information) | -| `server.configuration.metricsDashboard` | Endpoint to the metricsDashboard instance | `nil` | -| `server.existingConfigmap` | Name of existing ConfigMap with Dataflow server configuration | `nil` | -| `server.extraEnvVars` | Extra environment variables to be set on Dataflow server container | `{}` | -| `server.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `nil` | -| `server.extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `nil` | -| `server.replicaCount` | Number of Dataflow server replicas to deploy | `1` | -| `server.hostAliases` | Add deployment host aliases | `[]` | -| `server.strategyType` | Deployment Strategy Type | `RollingUpdate` | -| `server.podAffinityPreset` | Dataflow server pod affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `server.podAntiAffinityPreset` | Dataflow server pod anti-affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `soft` | -| `server.nodeAffinityPreset.type` | Dataflow server node affinity preset type. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `server.nodeAffinityPreset.key` | Dataflow server node label key to match Ignored if `server.affinity` is set. | `""` | -| `server.nodeAffinityPreset.values` | Dataflow server node label values to match. Ignored if `server.affinity` is set. 
| `[]` | -| `server.affinity` | Dataflow server affinity for pod assignment | `{}` (evaluated as a template) | -| `server.nodeSelector` | Dataflow server node labels for pod assignment | `{}` (evaluated as a template) | -| `server.tolerations` | Dataflow server tolerations for pod assignment | `[]` (evaluated as a template) | -| `server.priorityClassName` | Controller priorityClassName | `nil` | -| `server.podSecurityContext` | Dataflow server pods' Security Context | `{ fsGroup: "1001" }` | -| `server.containerSecurityContext` | Dataflow server containers' Security Context | `{ runAsUser: "1001" }` | -| `server.resources.limits` | The resources limits for the Dataflow server container | `{}` | -| `server.resources.requests` | The requested resources for the Dataflow server container | `{}` | -| `server.podAnnotations` | Annotations for Dataflow server pods | `{}` | -| `server.livenessProbe` | Liveness probe configuration for Dataflow server | Check `values.yaml` file | -| `server.readinessProbe` | Readiness probe configuration for Dataflow server | Check `values.yaml` file | -| `server.customLivenessProbe` | Override default liveness probe | `nil` | -| `server.customReadinessProbe` | Override default readiness probe | `nil` | -| `server.service.type` | Kubernetes service type | `ClusterIP` | -| `server.service.port` | Service HTTP port | `8080` | -| `server.service.nodePort` | Service HTTP node port | `nil` | -| `server.service.clusterIP` | Dataflow server service clusterIP IP | `None` | -| `server.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` | -| `server.service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | -| `server.service.loadBalancerSourceRanges` | Address that are allowed when service is LoadBalancer | `[]` | -| `server.service.annotations` | Annotations for Dataflow server service | `{}` | -| `server.containerPort` | Dataflow server port | `8080 | -| `server.ingress.enabled` | Enable 
ingress controller resource | `false` | -| `server.ingress.pathType` | Ingress path type | `ImplementationSpecific` | -| `server.ingress.path` | Ingress path | `/` | -| `server.ingress.certManager` | Add annotations for cert-manager | `false` | -| `server.ingress.hostname` | Default host for the ingress resource | `dataflow.local` | -| `server.ingress.annotations` | Ingress annotations | `[]` | -| `server.ingress.extraHosts[0].name` | Additional hostnames to be covered | `nil` | -| `server.ingress.extraHosts[0].path` | Additional hostnames to be covered | `nil` | -| `server.ingress.extraTls[0].hosts[0]` | TLS configuration for additional hostnames to be covered | `nil` | -| `server.ingress.extraTls[0].secretName` | TLS configuration for additional hostnames to be covered | `nil` | -| `server.ingress.tls` | Enables TLS configuration for the Ingress component | `false` | -| `server.ingress.secrets[0].name` | TLS Secret Name | `nil` | -| `server.ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` | -| `server.ingress.secrets[0].key` | TLS Secret Key | `nil` | -| `server.initContainers` | Add additional init containers to the Dataflow server pods | `{}` (evaluated as a template) | -| `server.sidecars` | Add additional sidecar containers to the Dataflow server pods | `{}` (evaluated as a template) | -| `server.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | -| `server.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | -| `server.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | -| `server.autoscaling.enabled` | Enable autoscaling for Dataflow server | `false` | -| `server.autoscaling.minReplicas` | Minimum number of Dataflow server replicas | `nil` | -| `server.autoscaling.maxReplicas` | Maximum number of Dataflow server replicas | `nil` | -| `server.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | -| 
`server.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | -| `server.jdwp.enabled` | Enable Java Debug Wire Protocol (JDWP) | `false` | -| `server.jdwp.port` | JDWP TCP port | `5005` | -| `server.extraVolumes` | Extra Volumes to be set on the Dataflow Server Pod | `nil` | -| `server.extraVolumeMounts` | Extra VolumeMounts to be set on the Dataflow Container | `nil` | -| `server.proxy.host` | Proxy host | `nil` | -| `server.proxy.port` | Proxy port | `nil` | -| `server.proxy.user` | Proxy username (if authentication is required) | `nil` | -| `server.proxy.password` | Proxy password (if authentication is required) | `nil` | +| Name | Description | Value | +| -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | +| `server.image.registry` | Spring Cloud Dataflow image registry | `docker.io` | +| `server.image.repository` | Spring Cloud Dataflow image repository | `bitnami/spring-cloud-dataflow` | +| `server.image.tag` | Spring Cloud Dataflow image tag (immutable tags are recommended) | `2.8.1-debian-10-r0` | +| `server.image.pullPolicy` | Spring Cloud Dataflow image pull policy | `IfNotPresent` | +| `server.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `server.image.debug` | Enable image debug mode | `false` | +| `server.hostAliases` | Deployment pod host aliases | `[]` | +| `server.composedTaskRunner.image.registry` | Spring Cloud Dataflow Composed Task Runner image registry | `docker.io` | +| `server.composedTaskRunner.image.repository` | Spring Cloud Dataflow Composed Task Runner image repository | `bitnami/spring-cloud-dataflow-composed-task-runner` | +| `server.composedTaskRunner.image.tag` | Spring Cloud Dataflow Composed Task Runner image tag (immutable tags are recommended) | `2.8.1-debian-10-r0` | +| `server.configuration.streamingEnabled` 
| Enables or disables streaming data processing | `true` | +| `server.configuration.batchEnabled` | Enables or disables batch data (tasks and schedules) processing | `true` | +| `server.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` | +| `server.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` | +| `server.configuration.containerRegistries` | Container registries configuration | `{}` | +| `server.configuration.grafanaInfo` | Endpoint to the grafana instance (Deprecated: use the metricsDashboard instead) | `nil` | +| `server.configuration.metricsDashboard` | Endpoint to the metricsDashboard instance | `nil` | +| `server.existingConfigmap` | ConfigMap with Spring Cloud Dataflow Server Configuration | `nil` | +| `server.extraEnvVars` | Extra environment variables to be set on Dataflow server container | `[]` | +| `server.extraEnvVarsCM` | ConfigMap with extra environment variables | `nil` | +| `server.extraEnvVarsSecret` | Secret with extra environment variables | `nil` | +| `server.replicaCount` | Number of Dataflow server replicas to deploy | `1` | +| `server.strategyType` | StrategyType, can be set to RollingUpdate or Recreate by default | `RollingUpdate` | +| `server.podAffinityPreset` | Dataflow server pod affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `server.podAntiAffinityPreset` | Dataflow server pod anti-affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `server.containerPort` | Dataflow server port | `8080` | +| `server.nodeAffinityPreset.type` | Dataflow server node affinity preset type. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `server.nodeAffinityPreset.key` | Dataflow server node label key to match. Ignored if `server.affinity` is set. 
| `""` | +| `server.nodeAffinityPreset.values` | Dataflow server node label values to match. Ignored if `server.affinity` is set. | `[]` | +| `server.affinity` | Dataflow server affinity for pod assignment | `{}` | +| `server.nodeSelector` | Dataflow server node labels for pod assignment | `{}` | +| `server.tolerations` | Dataflow server tolerations for pod assignment | `[]` | +| `server.podAnnotations` | Annotations for Dataflow server pods | `{}` | +| `server.priorityClassName` | Dataflow Server pods' priority | `""` | +| `server.podSecurityContext.fsGroup` | Group ID for the volumes of the pod | `1001` | +| `server.containerSecurityContext.runAsUser` | Set Dataflow Server container's Security Context runAsUser | `1001` | +| `server.resources.limits` | The resources limits for the Dataflow server container | `{}` | +| `server.resources.requests` | The requested resources for the Dataflow server container | `{}` | +| `server.livenessProbe.enabled` | Enable livenessProbe | `true` | +| `server.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `120` | +| `server.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` | +| `server.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `1` | +| `server.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` | +| `server.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` | +| `server.readinessProbe.enabled` | Enable readinessProbe | `true` | +| `server.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `120` | +| `server.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `20` | +| `server.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` | +| `server.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` | +| `server.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` | +| `server.customLivenessProbe` 
| Override default liveness probe | `{}` | +| `server.customReadinessProbe` | Override default readiness probe | `{}` | +| `server.service.type` | Kubernetes service type | `ClusterIP` | +| `server.service.port` | Service HTTP port | `8080` | +| `server.service.nodePort` | Specify the nodePort value for the LoadBalancer and NodePort service types | `nil` | +| `server.service.clusterIP` | Dataflow server service cluster IP | `nil` | +| `server.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` | +| `server.service.loadBalancerIP` | Load balancer IP if service type is `LoadBalancer` | `nil` | +| `server.service.loadBalancerSourceRanges` | Addresses that are allowed when service is LoadBalancer | `[]` | +| `server.service.annotations` | Provide any additional annotations which may be required. Evaluated as a template. | `{}` | +| `server.ingress.enabled` | Enable ingress controller resource | `false` | +| `server.ingress.path` | The path to the Dataflow server. You may need to set this to '/*' in order to use this with ALB ingress controllers. | `/` | +| `server.ingress.pathType` | Ingress path type | `ImplementationSpecific` | +| `server.ingress.certManager` | Set this to true in order to add the corresponding annotations for cert-manager | `false` | +| `server.ingress.hostname` | Default host for the ingress resource | `dataflow.local` | +| `server.ingress.annotations` | Ingress annotations | `{}` | +| `server.ingress.tls` | Enable TLS configuration for the hostname defined at ingress.hostname parameter | `false` | +| `server.ingress.extraHosts` | The list of additional hostnames to be covered with this ingress record. | `[]` | +| `server.ingress.extraTls` | The TLS configuration for additional hostnames to be covered with this ingress record. 
| `[]` | +| `server.ingress.secrets` | If you're providing your own certificates, please use this to add the certificates as secrets | `[]` | +| `server.initContainers` | Add init containers to the Dataflow Server pods | `{}` | +| `server.sidecars` | Add sidecars to the Dataflow Server pods | `{}` | +| `server.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | +| `server.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | +| `server.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | +| `server.autoscaling.enabled` | Enable autoscaling for Dataflow server | `false` | +| `server.autoscaling.minReplicas` | Minimum number of Dataflow server replicas | `nil` | +| `server.autoscaling.maxReplicas` | Maximum number of Dataflow server replicas | `nil` | +| `server.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | +| `server.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | +| `server.extraVolumes` | Extra Volumes to be set on the Dataflow Server Pod | `[]` | +| `server.extraVolumeMounts` | Extra VolumeMounts to be set on the Dataflow Container | `[]` | +| `server.jdwp.enabled` | Set to true to enable Java debugger | `false` | +| `server.jdwp.port` | Specify port for remote debugging | `5005` | +| `server.proxy` | Add proxy configuration for SCDF server | `{}` | + ### Dataflow Skipper parameters -| Parameter | Description | Default | -|--------------------------------------------|-----------------------------------------------------------------------------------------------------------|---------------------------------------------------------| -| `skipper.enabled` | Enable Spring Cloud Skipper component | `true` | -| `skipper.image.registry` | Spring Cloud Skipper image registry | `docker.io` | -| `skipper.image.repository` | Spring Cloud Skipper image name | `bitnami/spring-cloud-dataflow` | -| `skipper.image.tag` | Spring 
Cloud Skipper image tag | `{TAG_NAME}` | -| `skipper.image.pullPolicy` | Spring Cloud Skipper image pull policy | `IfNotPresent` | -| `skipper.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `skipper.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` | -| `skipper.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` | -| `skipper.existingConfigmap` | Name of existing ConfigMap with Skipper server configuration | `nil` | -| `skipper.extraEnvVars` | Extra environment variables to be set on Skipper server container | `{}` | -| `skipper.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `nil` | -| `skipper.extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `nil` | -| `skipper.replicaCount` | Number of Skipper server replicas to deploy | `1` | -| `skipper.strategyType` | Deployment Strategy Type | `RollingUpdate` | -| `skipper.podAffinityPreset` | Skipper pod affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `skipper.podAntiAffinityPreset` | Skipper pod anti-affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `soft` | -| `skipper.nodeAffinityPreset.type` | Skipper node affinity preset type. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `skipper.nodeAffinityPreset.key` | Skipper node label key to match Ignored if `skipper.affinity` is set. | `""` | -| `skipper.nodeAffinityPreset.values` | Skipper node label values to match. Ignored if `skipper.affinity` is set. 
| `[]` | -| `skipper.hostAliases` | Add deployment host aliases | `[]` | -| `skipper.affinity` | Skipper affinity for pod assignment | `{}` (evaluated as a template) | -| `skipper.nodeSelector` | Skipper node labels for pod assignment | `{}` (evaluated as a template) | -| `skipper.tolerations` | Skipper tolerations for pod assignment | `[]` (evaluated as a template) | -| `skipper.priorityClassName` | Controller priorityClassName | `nil` | -| `skipper.podSecurityContext` | Skipper server pods' Security Context | `{ fsGroup: "1001" }` | -| `skipper.containerSecurityContext` | Skipper server containers' Security Context | `{ runAsUser: "1001" }` | -| `skipper.resources.limits` | The resources limits for the Skipper server container | `{}` | -| `skipper.resources.requests` | The requested resources for the Skipper server container | `{}` | -| `skipper.podAnnotations` | Annotations for Skipper server pods | `{}` | -| `skipper.livenessProbe` | Liveness probe configuration for Skipper server | Check `values.yaml` file | -| `skipper.readinessProbe` | Readiness probe configuration for Skipper server | Check `values.yaml` file | -| `skipper.customLivenessProbe` | Override default liveness probe | `nil` | -| `skipper.customReadinessProbe` | Override default readiness probe | `nil` | -| `skipper.service.type` | Kubernetes service type | `ClusterIP` | -| `skipper.service.port` | Service HTTP port | `8080` | -| `skipper.service.nodePort` | Service HTTP node port | `nil` | -| `skipper.service.clusterIP` | Skipper server service clusterIP IP | `None` | -| `skipper.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` | -| `skipper.service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | -| `skipper.service.loadBalancerSourceRanges` | Address that are allowed when service is LoadBalancer | `[]` | -| `skipper.service.annotations` | Annotations for Skipper server service | `{}` | -| `skipper.initContainers` | Add additional init 
containers to the Skipper pods | `{}` (evaluated as a template) | -| `skipper.sidecars` | Add additional sidecar containers to the Skipper pods | `{}` (evaluated as a template) | -| `skipper.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | -| `skipper.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | -| `skipper.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | -| `skipper.autoscaling.enabled` | Enable autoscaling for Skipper server | `false` | -| `skipper.autoscaling.minReplicas` | Minimum number of Skipper server replicas | `nil` | -| `skipper.autoscaling.maxReplicas` | Maximum number of Skipper server replicas | `nil` | -| `skipper.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | -| `skipper.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | -| `skipper.jdwp.enabled` | Enable Java Debug Wire Protocol (JDWP) | `false` | -| `skipper.jdwp.port` | JDWP TCP port | `5005` | -| `skipper.extraVolumes` | Extra Volumes to be set on the Skipper Pod | `nil` | -| `skipper.extraVolumeMounts` | Extra VolumeMounts to be set on the Skipper Container | `nil` | -| `externalSkipper.host` | Host of a external Skipper Server | `localhost` | -| `externalSkipper.port` | External Skipper Server port number | `7577` | +| Name | Description | Value | +| -------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------ | +| `skipper.enabled` | Enable Spring Cloud Skipper component | `true` | +| `skipper.hostAliases` | Deployment pod host aliases | `[]` | +| `skipper.image.registry` | Spring Cloud Skipper image registry | `docker.io` | +| `skipper.image.repository` | Spring Cloud Skipper image repository | `bitnami/spring-cloud-skipper` | +| `skipper.image.tag` | Spring Cloud Skipper image tag (immutable tags are 
recommended) | `2.7.0-debian-10-r4` | +| `skipper.image.pullPolicy` | Spring Cloud Skipper image pull policy | `IfNotPresent` | +| `skipper.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `skipper.image.debug` | Enable image debug mode | `false` | +| `skipper.configuration.accountName` | The name of the account to configure for the Kubernetes platform | `default` | +| `skipper.configuration.trustK8sCerts` | Trust K8s certificates when querying the Kubernetes API | `false` | +| `skipper.existingConfigmap` | Name of existing ConfigMap with Skipper server configuration | `nil` | +| `skipper.extraEnvVars` | Extra environment variables to be set on Skipper server container | `[]` | +| `skipper.extraEnvVarsCM` | Name of existing ConfigMap containing extra environment variables | `nil` | +| `skipper.extraEnvVarsSecret` | Name of existing Secret containing extra environment variables | `nil` | +| `skipper.replicaCount` | Number of Skipper server replicas to deploy | `1` | +| `skipper.strategyType` | Deployment Strategy Type | `RollingUpdate` | +| `skipper.podAffinityPreset` | Skipper pod affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `skipper.podAntiAffinityPreset` | Skipper pod anti-affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `skipper.nodeAffinityPreset.type` | Skipper node affinity preset type. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `skipper.nodeAffinityPreset.key` | Skipper node label key to match. Ignored if `skipper.affinity` is set. | `""` | +| `skipper.nodeAffinityPreset.values` | Skipper node label values to match. Ignored if `skipper.affinity` is set. 
| `[]` | +| `skipper.affinity` | Skipper affinity for pod assignment | `{}` | +| `skipper.nodeSelector` | Skipper node labels for pod assignment | `{}` | +| `skipper.tolerations` | Skipper tolerations for pod assignment | `[]` | +| `skipper.podAnnotations` | Annotations for Skipper server pods | `{}` | +| `skipper.priorityClassName` | Skipper server pods' priority class name | `""` | +| `skipper.podSecurityContext.fsGroup` | Group ID for the volumes of the pod | `1001` | +| `skipper.containerSecurityContext.runAsUser` | Set Dataflow Skipper container's Security Context runAsUser | `1001` | +| `skipper.resources.limits` | The resources limits for the Skipper server container | `{}` | +| `skipper.resources.requests` | The requested resources for the Skipper server container | `{}` | +| `skipper.livenessProbe.enabled` | Enable livenessProbe | `true` | +| `skipper.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `120` | +| `skipper.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `20` | +| `skipper.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `1` | +| `skipper.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` | +| `skipper.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` | +| `skipper.readinessProbe.enabled` | Enable readinessProbe | `true` | +| `skipper.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `120` | +| `skipper.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `20` | +| `skipper.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` | +| `skipper.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` | +| `skipper.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` | +| `skipper.customLivenessProbe` | Override default liveness probe | `{}` | +| `skipper.customReadinessProbe` | Override default readiness probe | `{}` | +| 
`skipper.service.type` | Kubernetes service type | `ClusterIP` | +| `skipper.service.port` | Service HTTP port | `80` | +| `skipper.service.nodePort` | Service HTTP node port | `nil` | +| `skipper.service.clusterIP` | Skipper server service cluster IP | `nil` | +| `skipper.service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` | +| `skipper.service.loadBalancerIP` | Load balancer IP if service type is `LoadBalancer` | `nil` | +| `skipper.service.loadBalancerSourceRanges` | Addresses that are allowed when service is LoadBalancer | `[]` | +| `skipper.service.annotations` | Annotations for Skipper server service | `{}` | +| `skipper.initContainers` | Add init containers to the Dataflow Skipper pods | `{}` | +| `skipper.sidecars` | Add sidecars to the Skipper pods | `{}` | +| `skipper.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | +| `skipper.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | +| `skipper.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | +| `skipper.autoscaling.enabled` | Enable autoscaling for Skipper server | `false` | +| `skipper.autoscaling.minReplicas` | Minimum number of Skipper server replicas | `nil` | +| `skipper.autoscaling.maxReplicas` | Maximum number of Skipper server replicas | `nil` | +| `skipper.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | +| `skipper.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | +| `skipper.extraVolumes` | Extra Volumes to be set on the Skipper Pod | `[]` | +| `skipper.extraVolumeMounts` | Extra VolumeMounts to be set on the Skipper Container | `[]` | +| `skipper.jdwp.enabled` | Enable Java Debug Wire Protocol (JDWP) | `false` | +| `skipper.jdwp.port` | JDWP TCP port for remote debugging | `5005` | +| `externalSkipper.host` | Host of an external Skipper Server | `localhost` | +| `externalSkipper.port` | External Skipper Server port 
number | `7577` | + ### Deployer parameters -| Parameter | Description | Default | -|-------------------------------------|--------------------------------------------------|-------------------------------------| -| `deployer.resources.limits` | Streaming applications resource limits | `{ cpu: "500m", memory: "1024Mi" }` | -| `deployer.resources.requests` | Streaming applications resource requests | `{}` | -| `deployer.resources.readinessProbe` | Streaming applications readiness probes requests | Check `values.yaml` file | -| `deployer.resources.livenessProbe` | Streaming applications liveness probes requests | Check `values.yaml` file | -| `deployer.nodeSelector` | Streaming applications nodeSelector | `""` | -| `deployer.tolerations` | Streaming applications tolerations | `{}` | -| `deployer.volumeMounts` | Streaming applications extra volume mounts | `{}` | -| `deployer.volumes` | Streaming applications extra volumes | `{}` | -| `deployer.environmentVariables` | Streaming applications environment variables | `""` | -| `deployer.podSecurityContext` | Streaming applications Security Context. 
| `{runAsUser: 1001}` | +| Name | Description | Value | +| --------------------------------------------- | ------------------------------------------------------------------------------------------- | ------ | +| `deployer.resources.limits` | Streaming applications resource limits | `{}` | +| `deployer.resources.requests` | Streaming applications resource requests | `{}` | +| `deployer.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `90` | +| `deployer.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `120` | +| `deployer.nodeSelector` | The node selectors to apply to the streaming applications deployments in "key:value" format | `""` | +| `deployer.tolerations` | Streaming applications tolerations | `{}` | +| `deployer.volumeMounts` | Streaming applications extra volume mounts | `{}` | +| `deployer.volumes` | Streaming applications extra volumes | `{}` | +| `deployer.environmentVariables` | Streaming applications environment variables | `""` | +| `deployer.podSecurityContext.runAsUser` | Set Dataflow Streams container's Security Context runAsUser | `1001` | + ### RBAC parameters -| Parameter | Description | Default | -|-------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------| -| `serviceAccount.create` | Enable the creation of a ServiceAccount for Dataflow server and Skipper server pods | `true` | -| `serviceAccount.name` | Name of the created serviceAccount | Generated using the `common.names.fullname` template | -| `rbac.create` | Whether to create & use RBAC resources or not | `true` | +| Name | Description | Value | +| ----------------------- | ----------------------------------------------------------------------------------- | ------ | +| `serviceAccount.create` | Enable the creation of a ServiceAccount for Dataflow server and Skipper server pods | `true` | +| `serviceAccount.name` | Name 
of the created serviceAccount | `""` | +| `rbac.create` | Whether to create and use RBAC resources or not | `true` | + ### Metrics parameters -| Parameter | Description | Default | -|----------------------------------------|------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------| -| `metrics.metrics` | Enable the export of Prometheus metrics | `false` | -| `metrics.image.registry` | Prometheus Rsocket Proxy image registry | `docker.io` | -| `metrics.image.repository` | Prometheus Rsocket Proxy image name | `bitnami/prometheus-rsocket-proxy` | -| `metrics.image.tag` | Prometheus Rsocket Proxy image tag | `{TAG_NAME}` | -| `metrics.image.pullPolicy` | Prometheus Rsocket Proxy image pull policy | `IfNotPresent` | -| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `metrics.replicaCount` | Number of Prometheus Rsocket Proxy replicas to deploy | `1` | -| `metrics.podAffinityPreset` | Prometheus Rsocket Proxy pod affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `metrics.podAntiAffinityPreset` | Prometheus Rsocket Proxy pod anti-affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `soft` | -| `metrics.nodeAffinityPreset.type` | Prometheus Rsocket Proxy node affinity preset type. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `metrics.nodeAffinityPreset.key` | Prometheus Rsocket Proxy node label key to match Ignored if `metrics.affinity` is set. | `""` | -| `metrics.nodeAffinityPreset.values` | Prometheus Rsocket Proxy node label values to match. Ignored if `metrics.affinity` is set. 
| `[]` | -| `metrics.affinity` | Prometheus Rsocket Proxy affinity for pod assignment | `{}` (evaluated as a template) | -| `metrics.nodeSelector` | Prometheus Rsocket Proxy node labels for pod assignment | `{}` (evaluated as a template) | -| `metrics.tolerations` | Prometheus Rsocket Proxy tolerations for pod assignment | `[]` (evaluated as a template) | -| `metrics.priorityClassName` | Controller priorityClassName | `nil` | -| `metrics.resources.limits` | The resources limits for the Prometheus Rsocket Proxy container | `{}` | -| `metrics.resources.requests` | The requested resources for the Prometheus Rsocket Proxy container | `{}` | -| `metrics.podAnnotations` | Annotations for Prometheus Rsocket Proxy pods | `{}` | -| `metrics.kafka.service.httpPort` | Prometheus Rsocket Proxy HTTP port | `8080` | -| `metrics.kafka.service.rsocketPort` | Prometheus Rsocket Proxy Rsocket port | `8080` | -| `metrics.kafka.service.annotations` | Annotations for Prometheus Rsocket Proxy service | `Check values.yaml file` | -| `metrics.serviceMonitor.enabled` | if `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` | -| `metrics.serviceMonitor.namespace` | Namespace in which ServiceMonitor is created if different from release | `nil` | -| `metrics.serviceMonitor.extraLabels` | Labels to add to ServiceMonitor | `{}` | -| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. 
| `nil` (Prometheus Operator default value) | -| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `nil` (Prometheus Operator default value) | -| `metrics.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | -| `metrics.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | -| `metrics.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | -| `metrics.autoscaling.enabled` | Enable autoscaling for Prometheus Rsocket Proxy | `false` | -| `metrics.autoscaling.minReplicas` | Minimum number of Prometheus Rsocket Proxy replicas | `nil` | -| `metrics.autoscaling.maxReplicas` | Maximum number of Prometheus Rsocket Proxy replicas | `nil` | -| `metrics.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | -| `metrics.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | +| Name | Description | Value | +| -------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | +| `metrics.enabled` | Enable Prometheus metrics | `false` | +| `metrics.image.registry` | Prometheus Rsocket Proxy image registry | `docker.io` | +| `metrics.image.repository` | Prometheus Rsocket Proxy image repository | `bitnami/prometheus-rsocket-proxy` | +| `metrics.image.tag` | Prometheus Rsocket Proxy image tag (immutable tags are recommended) | `1.3.0-debian-10-r187` | +| `metrics.image.pullPolicy` | Prometheus Rsocket Proxy image pull policy | `IfNotPresent` | +| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `metrics.resources.limits` | The resources limits for the Prometheus Rsocket Proxy container | `{}` | +| `metrics.resources.requests` | The requested resources for the Prometheus Rsocket Proxy container | `{}` | +| `metrics.replicaCount` | Number of 
Prometheus Rsocket Proxy replicas to deploy | `1` | +| `metrics.podAffinityPreset` | Prometheus Rsocket Proxy pod affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `metrics.podAntiAffinityPreset` | Prometheus Rsocket Proxy pod anti-affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `metrics.nodeAffinityPreset.type` | Prometheus Rsocket Proxy node affinity preset type. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `metrics.nodeAffinityPreset.key` | Prometheus Rsocket Proxy node label key to match. Ignored if `metrics.affinity` is set. | `""` | +| `metrics.nodeAffinityPreset.values` | Prometheus Rsocket Proxy node label values to match. Ignored if `metrics.affinity` is set. | `[]` | +| `metrics.affinity` | Prometheus Rsocket Proxy affinity for pod assignment | `{}` | +| `metrics.nodeSelector` | Prometheus Rsocket Proxy node labels for pod assignment | `{}` | +| `metrics.tolerations` | Prometheus Rsocket Proxy tolerations for pod assignment | `[]` | +| `metrics.podAnnotations` | Annotations for Prometheus Rsocket Proxy pods | `{}` | +| `metrics.priorityClassName` | Prometheus Rsocket Proxy pods' priority. 
| `""` | +| `metrics.service.httpPort` | Prometheus Rsocket Proxy HTTP port | `8080` | +| `metrics.service.rsocketPort` | Prometheus Rsocket Proxy Rsocket port | `7001` | +| `metrics.service.annotations` | Annotations for the Prometheus Rsocket Proxy service | `{}` | +| `metrics.serviceMonitor.enabled` | if `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` | +| `metrics.serviceMonitor.extraLabels` | Labels to add to ServiceMonitor, in case prometheus operator is configured with serviceMonitorSelector | `{}` | +| `metrics.serviceMonitor.namespace` | Namespace in which ServiceMonitor is created if different from release | `nil` | +| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. | `nil` | +| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `nil` | +| `metrics.pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` | +| `metrics.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` | +| `metrics.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` | +| `metrics.autoscaling.enabled` | Enable autoscaling for Prometheus Rsocket Proxy | `false` | +| `metrics.autoscaling.minReplicas` | Minimum number of Prometheus Rsocket Proxy replicas | `nil` | +| `metrics.autoscaling.maxReplicas` | Maximum number of Prometheus Rsocket Proxy replicas | `nil` | +| `metrics.autoscaling.targetCPU` | Target CPU utilization percentage | `nil` | +| `metrics.autoscaling.targetMemory` | Target Memory utilization percentage | `nil` | ### Init Container parameters -| Parameter | Description | Default | -|--------------------------------------|---------------------------------------------------------------------------------------------------|---------------------------------------------------------| -| `waitForBackends.enabled` | Wait for the database and other services (such as 
Kafka or RabbitMQ) used when enabling streaming | `true` | -| `waitForBackends.image.registry` | Init container wait-for-backend image registry | `docker.io` | -| `waitForBackends.image.repository` | Init container wait-for-backend image name | `bitnami/kubectl` | -| `waitForBackends.image.tag` | Init container wait-for-backend image tag | `{TAG_NAME}` | -| `waitForBackends.image.pullPolicy` | Init container wait-for-backend image pull policy | `IfNotPresent` | -| `waitForBackends.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `waitForBackends.resources.limits` | Init container wait-for-backend resource limits | `{}` | -| `waitForBackends.resources.requests` | Init container wait-for-backend resource requests | `{}` | +| Name | Description | Value | +| ------------------------------------ | ------------------------------------------------------------------------------------------------- | ---------------------- | +| `waitForBackends.enabled` | Wait for the database and other services (such as Kafka or RabbitMQ) used when enabling streaming | `true` | +| `waitForBackends.image.registry` | Init container wait-for-backend image registry | `docker.io` | +| `waitForBackends.image.repository` | Init container wait-for-backend image name | `bitnami/kubectl` | +| `waitForBackends.image.tag` | Init container wait-for-backend image tag | `1.19.12-debian-10-r6` | +| `waitForBackends.image.pullPolicy` | Init container wait-for-backend image pull policy | `IfNotPresent` | +| `waitForBackends.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `waitForBackends.resources.limits` | Init container wait-for-backend resource limits | `{}` | +| `waitForBackends.resources.requests` | Init container wait-for-backend resource requests | `{}` | + ### Database parameters -| Parameter | Description | Default | 
-|-------------------------------------------|-----------------------------------------------------------------------------------------------------|-------------------------------------------| -| `mariadb.enabled` | Enable/disable MariaDB chart installation | `true` | -| `mariadb.architecture` | MariaDB architecture (`standalone` or `replication`) | `standalone` | -| `mariadb.auth.database` | Database name to create | `dataflow` | -| `mariadb.auth.username` | Username of new user to create | `dataflow` | -| `mariadb.auth.password` | Password for the new user | `change-me` | -| `mariadb.auth.rootPassword` | Password for the MariaDB `root` user | _random 10 character alphanumeric string_ | -| `mariadb.initdbScripts` | Dictionary of initdb scripts | Check `values.yaml` file | -| `externalDatabase.driver` | The fully qualified name of the JDBC Driver class | `""` | -| `externalDatabase.scheme` | The scheme is a vendor-specific or shared protocol string that follows the "jdbc:" of the URL | `""` | -| `externalDatabase.host` | Host of the external database | `localhost` | -| `externalDatabase.port` | External database port number | `3306` | -| `externalDatabase.password` | Password for the above username | `""` | -| `externalDatabase.existingPasswordSecret` | Existing secret with database password | `""` | -| `externalDatabase.existingPasswordKey` | Key of the existing secret with database password | `datasource-password` | -| `externalDatabase.dataflow.url` | JDBC URL for dataflow server. Overrides external scheme, host, port, database, and jdbc parameters. | `""` | -| `externalDatabase.dataflow.username` | Existing username in the external db to be used by Dataflow server | `dataflow` | -| `externalDatabase.dataflow.database` | Name of the existing database to be used by Dataflow server | `dataflow` | -| `externalDatabase.skipper.url` | JDBC URL for skipper. Overrides external scheme, host, port, database, and jdbc parameters. 
| `""` | -| `externalDatabase.skipper.username` | Existing username in the external db to be used by Skipper server | `skipper` | -| `externalDatabase.skipper.database` | Name of the existing database to be used by Skipper server | `skipper` | -| `externalDatabase.hibernateDialect` | Hibernate Dialect used by Dataflow/Skipper servers | `""` | +| Name | Description | Value | +| ----------------------------------------- | --------------------------------------------------------------------------------------------------- | ------------ | +| `mariadb.enabled` | Enable/disable MariaDB chart installation | `true` | +| `mariadb.architecture` | MariaDB architecture. Allowed values: `standalone` or `replication` | `standalone` | +| `mariadb.auth.rootPassword` | Password for the MariaDB `root` user | `""` | +| `mariadb.auth.username` | Username of new user to create | `dataflow` | +| `mariadb.auth.password` | Password for the new user | `change-me` | +| `mariadb.auth.database` | Database name to create | `dataflow` | +| `mariadb.auth.forcePassword` | Force users to specify required passwords in the database | `false` | +| `mariadb.auth.usePasswordFiles` | Mount credentials as a file instead of using an environment variable | `false` | +| `mariadb.initdbScripts` | Specify dictionary of scripts to be run at first boot | `{}` | +| `externalDatabase.host` | Host of the external database | `localhost` | +| `externalDatabase.port` | External database port number | `3306` | +| `externalDatabase.driver` | The fully qualified name of the JDBC Driver class | `nil` | +| `externalDatabase.scheme` | The scheme is a vendor-specific or shared protocol string that follows the "jdbc:" of the URL | `nil` | +| `externalDatabase.password` | Password for the above username | `""` | +| `externalDatabase.existingPasswordSecret` | Existing secret with database password | `nil` | +| `externalDatabase.existingPasswordKey` | Key of the existing secret with database password, defaults to 
`datasource-password` | `nil` | +| `externalDatabase.dataflow.url` | JDBC URL for dataflow server. Overrides external scheme, host, port, database, and jdbc parameters. | `""` | +| `externalDatabase.dataflow.database` | Name of the existing database to be used by Dataflow server | `dataflow` | +| `externalDatabase.dataflow.username` | Existing username in the external db to be used by Dataflow server | `dataflow` | +| `externalDatabase.skipper.url` | JDBC URL for skipper. Overrides external scheme, host, port, database, and jdbc parameters. | `""` | +| `externalDatabase.skipper.database` | Name of the existing database to be used by Skipper server | `skipper` | +| `externalDatabase.skipper.username` | Existing username in the external db to be used by Skipper server | `skipper` | +| `externalDatabase.hibernateDialect` | Hibernate Dialect used by Dataflow/Skipper servers | `""` | + ### RabbitMQ chart parameters -| Parameter | Description | Default | -|-------------------------------------------|--------------------------------------------|-------------------------------------------| -| `rabbitmq.enabled` | Enable/disable RabbitMQ chart installation | `true` | -| `rabbitmq.auth.username` | RabbitMQ username | `user` | -| `rabbitmq.auth.password` | RabbitMQ password | _random 40 character alphanumeric string_ | -| `externalRabbitmq.enabled` | Enable/disable external RabbitMQ | `false` | -| `externalRabbitmq.host` | Host of the external RabbitMQ | `localhost` | -| `externalRabbitmq.port` | External RabbitMQ port number | `5672` | -| `externalRabbitmq.username` | External RabbitMQ username | `guest` | -| `externalRabbitmq.password` | External RabbitMQ password | `guest` | -| `externalRabbitmq.vhost` | External RabbitMQ virtual host | `/` | -| `externalRabbitmq.existingPasswordSecret` | Existing secret with RabbitMQ password | `""` | +| Name | Description | Value | +| ----------------------------------------- | 
------------------------------------------------------------------------------- | ----------- | +| `rabbitmq.enabled` | Enable/disable RabbitMQ chart installation | `true` | +| `rabbitmq.auth.username` | RabbitMQ username | `user` | +| `externalRabbitmq.enabled` | Enable/disable external RabbitMQ | `false` | +| `externalRabbitmq.host` | Host of the external RabbitMQ | `localhost` | +| `externalRabbitmq.port` | External RabbitMQ port number | `5672` | +| `externalRabbitmq.username` | External RabbitMQ username | `guest` | +| `externalRabbitmq.password` | External RabbitMQ password. It will be saved in a Kubernetes secret | `guest` | +| `externalRabbitmq.vhost` | External RabbitMQ virtual host. It will be saved in a Kubernetes secret | `nil` | +| `externalRabbitmq.existingPasswordSecret` | Existing secret with RabbitMQ password | `nil` | + ### Kafka chart parameters -| Parameter | Description | Default | -|---------------------------------------|---------------------------------------------|------------------| -| `kafka.enabled` | Enable/disable Kafka chart installation | `false` | -| `kafka.replicaCount` | Number of Kafka brokers | `1` | -| `kafka.offsetsTopicReplicationFactor` | Kafka Secret Key | `1` | -| `kafka.zookeeper.enabled` | Enable/disable Zookeeper chart installation | `nil` | -| `kafka.zookeeper.replicaCount` | Number of Zookeeper replicas | `1` | -| `externalKafka.enabled` | Enable/disable external Kafka | `false` | -| `externalKafka.brokers` | External Kafka brokers | `localhost:9092` | -| `externalKafka.zkNodes` | External Zookeeper nodes | `localhost:2181` | +| Name | Description | Value | +| ------------------------------------- | --------------------------------------- | ---------------- | +| `kafka.enabled` | Enable/disable Kafka chart installation | `false` | +| `kafka.replicaCount` | Number of Kafka brokers | `1` | +| `kafka.offsetsTopicReplicationFactor` | Kafka offsets topic replication factor | `1` | +| 
`kafka.zookeeper.replicaCount` | Number of Zookeeper replicas | `1` | +| `externalKafka.enabled` | Enable/disable external Kafka | `false` | +| `externalKafka.brokers` | External Kafka brokers | `localhost:9092` | +| `externalKafka.zkNodes` | External Zookeeper nodes | `localhost:2181` | + Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, diff --git a/bitnami/spring-cloud-dataflow/values.yaml b/bitnami/spring-cloud-dataflow/values.yaml index e7a7c09da6..e04fb12641 100644 --- a/bitnami/spring-cloud-dataflow/values.yaml +++ b/bitnami/spring-cloud-dataflow/values.yaml @@ -1,35 +1,52 @@ +## @section Global parameters ## Global Docker image parameters ## Please, note that this will override the image parameters, including dependencies, configured to use the global value -## Current available global Docker image parameters: imageRegistry, imagePullSecrets, and storageClass -## -# global: -# imageRegistry: myRegistryName -# imagePullSecrets: -# - myRegistryKeySecretName -# storageClass: myStorageClass +## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass -## String to partially override scdf.fullname template (will maintain the release name). +## @param global.imageRegistry Global Docker image registry +## @param global.imagePullSecrets Global Docker registry secret names as an array +## @param global.storageClass Global StorageClass for Persistent Volume(s) ## -# nameOverride: +global: + imageRegistry: + ## E.g. + ## imagePullSecrets: + ## - myRegistryKeySecretName + ## + imagePullSecrets: [] + storageClass: -## String to fully override scdf.fullname template. +## @section Common parameters + +## @param nameOverride String to partially override scdf.fullname template (will maintain the release name). 
## -# fullnameOverride: - -## Force target Kubernetes version (using Helm capabilites if not set) +nameOverride: +## @param fullnameOverride String to fully override scdf.fullname template. +## +fullnameOverride: +## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set) ## kubeVersion: - -## Kubernetes Cluster Domain. +## @param clusterDomain Default Kubernetes cluster domain ## clusterDomain: cluster.local - +## @param extraDeploy Array of extra objects to deploy with the release ## +extraDeploy: [] + +## @section Dataflow Server parameters + ## Spring Cloud Dataflow Server parameters. ## server: ## Bitnami Spring Cloud Dataflow Server image ## ref: https://hub.docker.com/r/bitnami/spring-cloud-dataflow/tags/ + ## @param server.image.registry Spring Cloud Dataflow image registry + ## @param server.image.repository Spring Cloud Dataflow image repository + ## @param server.image.tag Spring Cloud Dataflow image tag (immutable tags are recommended) + ## @param server.image.pullPolicy Spring Cloud Dataflow image pull policy + ## @param server.image.pullSecrets Specify docker-registry secret names as an array + ## @param server.image.debug Enable image debug mode ## image: registry: docker.io @@ -39,45 +56,48 @@ server: ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent - ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace) + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName ## - # pullSecrets: - # - myRegistryKeySecretName + pullSecrets: [] ## Set to true if you would like to see extra information on logs ## debug: false - - ## Deployment pod host aliases + ## @param server.hostAliases Deployment pod host aliases ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## hostAliases: [] - composedTaskRunner: ## Bitnami Spring Cloud Dataflow Composed Task Runner image ## ref: https://hub.docker.com/r/bitnami/spring-cloud-dataflow/tags/ + ## @param server.composedTaskRunner.image.registry Spring Cloud Dataflow Composed Task Runner image registry + ## @param server.composedTaskRunner.image.repository Spring Cloud Dataflow Composed Task Runner image repository + ## @param server.composedTaskRunner.image.tag Spring Cloud Dataflow Composed Task Runner image tag (immutable tags are recommended) ## image: registry: docker.io repository: bitnami/spring-cloud-dataflow-composed-task-runner tag: 2.8.1-debian-10-r0 - ## Spring Cloud Dataflow Server configuration parameters ## configuration: - ## Enables or disables streams + ## @param server.configuration.streamingEnabled Enables or disables streaming data processing ## streamingEnabled: true - ## Enables or disables tasks and schedules + ## @param server.configuration.batchEnabled Enables or disables batch data (tasks and schedules) processing ## batchEnabled: true - ## The name of the account to configure for the Kubernetes platform. + ## @param server.configuration.accountName The name of the account to configure for the Kubernetes platform ## accountName: default - ## Trust K8s certificates when querying the Kubernetes API. 
+ ## @param server.configuration.trustK8sCerts Trust K8s certificates when querying the Kubernetes API ## trustK8sCerts: false - ## Container registries configuration parameters + ## @param server.configuration.containerRegistries Container registries configuration ## Example: ## containerRegistries: ## default: @@ -85,136 +105,127 @@ server: ## authorization-type: dockeroauth2 ## containerRegistries: {} - ## Endpoint to the grafana instance (Deprecated: use the metricsDashboard instead) + ## @param server.configuration.grafanaInfo Endpoint to the grafana instance (Deprecated: use the metricsDashboard instead) ## grafanaInfo: - ## Endpoint to the metricsDashboard instance + ## @param server.configuration.metricsDashboard Endpoint to the metricsDashboard instance ## metricsDashboard: - - ## ConfigMap with Spring Cloud Dataflow Server Configuration + ## @param server.existingConfigmap ConfigMap with Spring Cloud Dataflow Server Configuration ## NOTE: When it's set the server.configuration.* and deployer.* ## parameters are ignored, ## - # existingConfigmap: - - ## Additional environment variables to set + existingConfigmap: + ## @param server.extraEnvVars Extra environment variables to be set on Dataflow server container ## E.g: ## extraEnvVars: ## - name: FOO ## value: BAR ## extraEnvVars: [] - - ## ConfigMap with extra environment variables + ## @param server.extraEnvVarsCM ConfigMap with extra environment variables ## - # extraEnvVarsCM: - - ## Secret with extra environment variables + extraEnvVarsCM: + ## @param server.extraEnvVarsSecret Secret with extra environment variables ## - # extraEnvVarsSecret: - - ## Number of Dataflow Server replicas to deploy. + extraEnvVarsSecret: + ## @param server.replicaCount Number of Dataflow server replicas to deploy ## replicaCount: 1 - - ## StrategyType, can be set to RollingUpdate or Recreate by default. 
+ ## @param server.strategyType StrategyType, can be set to RollingUpdate (default) or Recreate ## strategyType: RollingUpdate - - ## Dataflow Server pod affinity preset + ## @param server.podAffinityPreset Dataflow server pod affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard ## podAffinityPreset: "" - - ## Dataflow Server pod anti-affinity preset + ## @param server.podAntiAffinityPreset Dataflow server pod anti-affinity preset. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard ## podAntiAffinityPreset: soft - - ## Dataflow Server port + ## @param server.containerPort Dataflow server port ## containerPort: 8080 - ## Dataflow Server node affinity preset ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity - ## Allowed values: soft, hard ## nodeAffinityPreset: - ## Node affinity type - ## Allowed values: soft, hard + ## @param server.nodeAffinityPreset.type Dataflow server node affinity preset type. Ignored if `server.affinity` is set. Allowed values: `soft` or `hard` ## type: "" - ## Node label key to match + ## @param server.nodeAffinityPreset.key Dataflow server node label key to match. Ignored if `server.affinity` is set. ## E.g. ## key: "kubernetes.io/e2e-az-name" ## key: "" - ## Node label values to match + ## @param server.nodeAffinityPreset.values Dataflow server node label values to match. Ignored if `server.affinity` is set. ## E.g. 
## values: ## - e2e-az1 ## - e2e-az2 ## values: [] - - ## Affinity for Dataflow Server pods assignment + ## @param server.affinity Dataflow server affinity for pod assignment ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## Note: server.podAffinityPreset, server.podAntiAffinityPreset, and server.nodeAffinityPreset will be ignored when it's set ## affinity: {} - - ## Node labels for Dataflow Server pods assignment + ## @param server.nodeSelector Dataflow server node labels for pod assignment ## ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: {} - - ## Tolerations for Dataflow Server pods assignment + ## @param server.tolerations Dataflow server tolerations for pod assignment ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## tolerations: [] - - ## Annotations for server pods. + ## @param server.podAnnotations Annotations for Dataflow server pods ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ ## podAnnotations: {} - - ## Dataflow Server pods' priority. + ## @param server.priorityClassName Dataflow Server pods' priority ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ ## - # priorityClassName: "" - + priorityClassName: "" ## Dataflow Server pods' Security Context. ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod + ## @param server.podSecurityContext.fsGroup Group ID for the volumes of the pod ## podSecurityContext: fsGroup: 1001 - ## Dataflow Server containers' Security Context (only main container). 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container + ## @param server.containerSecurityContext.runAsUser Set Dataflow Server container's Security Context runAsUser ## containerSecurityContext: runAsUser: 1001 - ## Dataflow Server containers' resource requests and limits. ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param server.resources.limits The resources limits for the Dataflow server container + ## @param server.resources.requests The requested resources for the Dataflow server container ## resources: - # We usually recommend not to specify default resources and to leave this as a conscious - # choice for the user. This also increases chances charts run on environments with little - # resources, such as Minikube. If you do want to specify resources, uncomment the following - # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## Example: + ## limits: + ## cpu: 100m + ## memory: 128Mi limits: {} - # cpu: 100m - # memory: 128Mi + ## Examples: + ## requests: + ## cpu: 100m + ## memory: 128Mi requests: {} - # cpu: 100m - # memory: 128Mi - - ## Dataflow Server pods' liveness and readiness probes. Evaluated as a template. - ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes + ## Dataflow Server pods' liveness probes. Evaluated as a template. 
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param server.livenessProbe.enabled Enable livenessProbe + ## @param server.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe + ## @param server.livenessProbe.periodSeconds Period seconds for livenessProbe + ## @param server.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe + ## @param server.livenessProbe.failureThreshold Failure threshold for livenessProbe + ## @param server.livenessProbe.successThreshold Success threshold for livenessProbe ## livenessProbe: enabled: true @@ -223,6 +234,15 @@ server: periodSeconds: 20 failureThreshold: 6 successThreshold: 1 + ## Dataflow Server pods' readiness probes. Evaluated as a template. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param server.readinessProbe.enabled Enable readinessProbe + ## @param server.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe + ## @param server.readinessProbe.periodSeconds Period seconds for readinessProbe + ## @param server.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe + ## @param server.readinessProbe.failureThreshold Failure threshold for readinessProbe + ## @param server.readinessProbe.successThreshold Success threshold for readinessProbe + ## readinessProbe: enabled: true initialDelaySeconds: 120 @@ -230,102 +250,95 @@ server: periodSeconds: 20 failureThreshold: 6 successThreshold: 1 - - ## Custom Liveness probes for Dataflow Server pods + ## @param server.customLivenessProbe Override default liveness probe ## customLivenessProbe: {} - - ## Custom Rediness probes Dataflow Server pods + ## @param server.customReadinessProbe Override default readiness probe ## customReadinessProbe: {} - ## Dataflow Server Service parameters. ## service: - ## Service type. 
+ ## @param server.service.type Kubernetes service type ## type: ClusterIP - ## Service port. + ## @param server.service.port Service HTTP port ## port: 8080 - ## Specify the nodePort value for the LoadBalancer and NodePort service types. + ## @param server.service.nodePort Specify the nodePort value for the LoadBalancer and NodePort service types ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport ## - # nodePort: - ## Service clusterIP. + nodePort: + ## @param server.service.clusterIP Dataflow server service cluster IP + ## e.g: + ## clusterIP: None ## - # clusterIP: None - ## Enable client source IP preservation + clusterIP: + ## @param server.service.externalTrafficPolicy Enable client source IP preservation ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip ## externalTrafficPolicy: Cluster - ## Set the LoadBalancer service type to internal only. + ## @param server.service.loadBalancerIP Load balancer IP if service type is `LoadBalancer` + ## Set the LoadBalancer service type to internal only ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer ## - # loadBalancerIP: - ## Load Balancer sources. + loadBalancerIP: + ## @param server.service.loadBalancerSourceRanges Addresses that are allowed when service is LoadBalancer ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service + ## e.g: + ## loadBalancerSourceRanges: + ## - 10.10.10.0/24 ## - # loadBalancerSourceRanges: - # - 10.10.10.0/24 - ## Provide any additional annotations which may be required. Evaluated as a template. + loadBalancerSourceRanges: [] + ## @param server.service.annotations Provide any additional annotations which may be required. Evaluated as a template. 
 ##
   annotations: {}
-
   ## Configure the ingress resource that allows you to access Dataflow Server
   ##
   ingress:
-    ## Set to true to enable ingress record generation
+    ## @param server.ingress.enabled Enable ingress controller resource
     ##
     enabled: false
-
-    ## The Path to WordPress. You may need to set this to '/*' in order to use this
-    ## with ALB ingress controllers.
+    ## @param server.ingress.path Default path for the ingress resource. You may need to set this to '/*' in order to use this with ALB ingress controllers.
     ##
     path: /
-
-    ## Ingress Path type
+    ## @param server.ingress.pathType Ingress path type
     ##
     pathType: ImplementationSpecific
-
-    ## Set this to true in order to add the corresponding annotations for cert-manager
+    ## @param server.ingress.certManager Set this to true in order to add the corresponding annotations for cert-manager
     ##
     certManager: false
-
-    ## When the ingress is enabled, a host pointing to this will be created
+    ## @param server.ingress.hostname Default host for the ingress resource
    ##
     hostname: dataflow.local
-
-    ## Ingress annotations done as key:value pairs
+    ## @param server.ingress.annotations Ingress annotations
     ## For a full list of possible ingress annotations, please see
     ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
     ##
     ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
     ##
     annotations: {}
-
-    ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
+    ## @param server.ingress.tls Enable TLS configuration for the hostname defined at ingress.hostname parameter
     ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
     ## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it
     ##
     tls: false
-
-    ## The list of additional hostnames to be covered with this ingress record.
+ ## @param server.ingress.extraHosts The list of additional hostnames to be covered with this ingress record. ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array ## extraHosts: ## - name: dataflow.local ## path: / ## - - ## The tls configuration for additional hostnames to be covered with this ingress record. + extraHosts: [] + ## @param server.ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record. ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls ## extraTls: ## - hosts: ## - dataflow.local ## secretName: dataflow.local-tls ## - - ## If you're providing your own certificates, please use this to add the certificates as secrets + extraTls: [] + ## @param server.ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets ## key and certificate should start with -----BEGIN CERTIFICATE----- or ## -----BEGIN RSA PRIVATE KEY----- ## @@ -334,14 +347,13 @@ server: ## ## It is also possible to create and manage the certificates outside of this helm chart ## Please see README.md for more information - ## - secrets: [] + ## e.g: ## - name: dataflow.local-tls ## key: ## certificate: ## - - ## Add init containers to the Dataflow Server pods. + secrets: [] + ## @param server.initContainers Add init containers to the Dataflow Server pods ## Example: ## initContainers: ## - name: your-image-name @@ -352,8 +364,7 @@ server: ## containerPort: 1234 ## initContainers: {} - - ## Add sidecars to the Dataflow Server pods. 
+ ## @param server.sidecars Add sidecars to the Dataflow Server pods ## Example: ## sidecars: ## - name: your-image-name @@ -364,49 +375,57 @@ server: ## containerPort: 1234 ## sidecars: {} - ## Dataflow Server Pod Disruption Budget configuration ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ ## pdb: + ## @param server.pdb.create Enable/disable a Pod Disruption Budget creation + ## create: false - ## Min number of pods that must still be available after the eviction + ## @param server.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ## minAvailable: 1 - ## Max number of pods that can be unavailable after the eviction + ## @param server.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable ## - # maxUnavailable: 1 - + maxUnavailable: ## Dataflow Server Autoscaling parameters. ## autoscaling: + ## @param server.autoscaling.enabled Enable autoscaling for Dataflow server + ## @param server.autoscaling.minReplicas Minimum number of Dataflow server replicas + ## @param server.autoscaling.maxReplicas Maximum number of Dataflow server replicas + ## @param server.autoscaling.targetCPU Target CPU utilization percentage + ## @param server.autoscaling.targetMemory Target Memory utilization percentage + ## enabled: false - # minReplicas: 1 - # maxReplicas: 11 - # targetCPU: 50 - # targetMemory: 50 - - ## Extra volumes to mount + minReplicas: + maxReplicas: + targetCPU: + targetMemory: + ## @param server.extraVolumes Extra Volumes to be set on the Dataflow Server Pod + ## e.g: + ## extraVolumes: + ## - name: sample + ## emptyDir: {} ## - # extraVolumes: - # - name: sample - # emptyDir: {} - # - # extraVolumeMounts: - # - name: sample - # mountPath: /temp/sample - + extraVolumes: [] + ## @param server.extraVolumeMounts Extra VolumeMounts to be set on the Dataflow Container + ## e.g: + ## extraVolumeMounts: + ## - name: sample + ## mountPath: /temp/sample + ## + extraVolumeMounts: [] ## Java Debug 
Wire Protocol (JDWP) parameters. ## jdwp: - ## Set to true to enable Java debugger. + ## @param server.jdwp.enabled Set to true to enable Java debugger ## enabled: false - ## Specify port for remote debugging. + ## @param server.jdwp.port Specify port for remote debugging ## port: 5005 - - ## Add proxy configuration for SCDF server + ## @param server.proxy Add proxy configuration for SCDF server ## Example: ## proxy: ## host: "myproxy.com" @@ -416,22 +435,27 @@ server: ## proxy: {} -## +## @section Dataflow Skipper parameters + ## Spring Cloud Skipper parameters. ## skipper: - ## Set to true to enable Spring Cloud Skipper component. + ## @param skipper.enabled Enable Spring Cloud Skipper component ## Note: it'll be also enabled if streams are enabled in Dataflow server configuration. ## enabled: true - - ## Deployment pod host aliases + ## @param skipper.hostAliases Deployment pod host aliases ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## hostAliases: [] - ## Bitnami Spring Cloud Skipper image ## ref: https://hub.docker.com/r/bitnami/spring-cloud-skipper/tags/ + ## @param skipper.image.registry Spring Cloud Skipper image registry + ## @param skipper.image.repository Spring Cloud Skipper image repository + ## @param skipper.image.tag Spring Cloud Skipper image tag (immutable tags are recommended) + ## @param skipper.image.pullPolicy Spring Cloud Skipper image pull policy + ## @param skipper.image.pullSecrets Specify docker-registry secret names as an array + ## @param skipper.image.debug Enable image debug mode ## image: registry: docker.io @@ -441,144 +465,138 @@ skipper: ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent - ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace) + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName ## - # pullSecrets: - # - myRegistryKeySecretName + pullSecrets: [] ## Set to true if you would like to see extra information on logs ## debug: false - ## Skipper Server configuration parameters ## configuration: - ## The name of the account to configure for the Kubernetes platform. + ## @param skipper.configuration.accountName The name of the account to configure for the Kubernetes platform ## accountName: default - ## Trust K8s certificates when querying the Kubernetes API. + ## @param skipper.configuration.trustK8sCerts Trust K8s certificates when querying the Kubernetes API ## trustK8sCerts: false - - ## ConfigMap with Spring Cloud Dataflow Server Configuration + ## @param skipper.existingConfigmap Name of existing ConfigMap with Skipper server configuration ## NOTE: When it's set the server.configuration.* and deployer.* ## parameters are ignored, ## - # existingConfigmap: - - ## Additional environment variables to set + existingConfigmap: + ## @param skipper.extraEnvVars Extra environment variables to be set on Skipper server container ## E.g: ## extraEnvVars: ## - name: FOO ## value: BAR ## extraEnvVars: [] - - ## ConfigMap with extra environment variables + ## @param skipper.extraEnvVarsCM Name of existing ConfigMap containing extra environment variables ## - # extraEnvVarsCM: - - ## Secret with extra environment variables + extraEnvVarsCM: + ## @param skipper.extraEnvVarsSecret Name of existing Secret containing extra environment variables ## - # extraEnvVarsSecret: - - ## Number of Skipper replicas to deploy. + extraEnvVarsSecret: + ## @param skipper.replicaCount Number of Skipper server replicas to deploy ## replicaCount: 1 - - ## StrategyType, can be set to RollingUpdate or Recreate by default. 
+  ## @param skipper.strategyType Deployment Strategy Type
   ##
   strategyType: RollingUpdate
-
-  ## Skipper pod affinity preset
+  ## @param skipper.podAffinityPreset Skipper pod affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard`
   ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
-  ## Allowed values: soft, hard
   ##
   podAffinityPreset: ""
-
-  ## Skipper pod anti-affinity preset
+  ## @param skipper.podAntiAffinityPreset Skipper pod anti-affinity preset. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard`
   ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
-  ## Allowed values: soft, hard
   ##
   podAntiAffinityPreset: soft
-
   ## Skipper node affinity preset
   ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
-  ## Allowed values: soft, hard
   ##
   nodeAffinityPreset:
-    ## Node affinity type
-    ## Allowed values: soft, hard
+    ## @param skipper.nodeAffinityPreset.type Skipper node affinity preset type. Ignored if `skipper.affinity` is set. Allowed values: `soft` or `hard`
     ##
     type: ""
-    ## Node label key to match
+    ## @param skipper.nodeAffinityPreset.key Skipper node label key to match. Ignored if `skipper.affinity` is set.
     ## E.g.
     ## key: "kubernetes.io/e2e-az-name"
     ##
     key: ""
-    ## Node label values to match
+    ## @param skipper.nodeAffinityPreset.values Skipper node label values to match. Ignored if `skipper.affinity` is set.
     ## E.g.
 ## values:
     ## - e2e-az1
     ## - e2e-az2
     ##
     values: []
-
-  ## Affinity for Skipper pods assignment
+  ## @param skipper.affinity Skipper affinity for pod assignment
   ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
   ## Note: skipper.podAffinityPreset, skipper.podAntiAffinityPreset, and skipper.nodeAffinityPreset will be ignored when it's set
   ##
   affinity: {}
-
-  ## Node labels for Skipper pods assignment
+  ## @param skipper.nodeSelector Skipper node labels for pod assignment
   ## ref: https://kubernetes.io/docs/user-guide/node-selection/
   ##
   nodeSelector: {}
-
-  ## Tolerations for Skipper pods assignment
+  ## @param skipper.tolerations Skipper tolerations for pod assignment
   ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
   ##
   tolerations: []
-
-  ## Annotations for Skipper pods.
+  ## @param skipper.podAnnotations Annotations for Skipper server pods
   ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
   ##
   podAnnotations: {}
-
-  ## Skipper pods' priority.
+  ## @param skipper.priorityClassName Skipper pods' priority class name
   ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
   ##
-  # priorityClassName: ""
-
+  priorityClassName: ""
   ## Skipper pods' Security Context.
   ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
+  ## @param skipper.podSecurityContext.fsGroup Group ID for the volumes of the pod
   ##
   podSecurityContext:
     fsGroup: 1001
-
   ## Skipper containers' Security Context (only main container).
   ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
+  ## @param skipper.containerSecurityContext.runAsUser Set Dataflow Skipper container's Security Context runAsUser
   ##
   containerSecurityContext:
     runAsUser: 1001
-
   ## Skipper containers' resource requests and limits.
## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param skipper.resources.limits The resources limits for the Skipper server container + ## @param skipper.resources.requests The requested resources for the Skipper server container ## resources: - # We usually recommend not to specify default resources and to leave this as a conscious - # choice for the user. This also increases chances charts run on environments with little - # resources, such as Minikube. If you do want to specify resources, uncomment the following - # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## Example: + ## limits: + ## cpu: 100m + ## memory: 128Mi limits: {} - # cpu: 100m - # memory: 128Mi + ## Examples: + ## requests: + ## cpu: 100m + ## memory: 128Mi requests: {} - # cpu: 100m - # memory: 128Mi - - ## Skipper pods' liveness and readiness probes. Evaluated as a template. 
- ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes + ## Configure extra options for liveness probe + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param skipper.livenessProbe.enabled Enable livenessProbe + ## @param skipper.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe + ## @param skipper.livenessProbe.periodSeconds Period seconds for livenessProbe + ## @param skipper.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe + ## @param skipper.livenessProbe.failureThreshold Failure threshold for livenessProbe + ## @param skipper.livenessProbe.successThreshold Success threshold for livenessProbe ## livenessProbe: enabled: true @@ -587,6 +605,15 @@ skipper: periodSeconds: 20 failureThreshold: 6 successThreshold: 1 + ## Configure extra options for readiness probe + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes + ## @param skipper.readinessProbe.enabled Enable readinessProbe + ## @param skipper.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe + ## @param skipper.readinessProbe.periodSeconds Period seconds for readinessProbe + ## @param skipper.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe + ## @param skipper.readinessProbe.failureThreshold Failure threshold for readinessProbe + ## @param skipper.readinessProbe.successThreshold Success threshold for readinessProbe + ## readinessProbe: enabled: true initialDelaySeconds: 120 @@ -594,49 +621,50 @@ skipper: periodSeconds: 20 failureThreshold: 6 successThreshold: 1 - - ## Custom Liveness probes for Skipper pods + ## @param skipper.customLivenessProbe Override default liveness probe ## customLivenessProbe: {} - - ## Custom Rediness probes Skipper pods + ## @param skipper.customReadinessProbe Override default readiness probe ## customReadinessProbe: {} - ## 
Skipper Service parameters.
   ##
   service:
-    ## Service type.
+    ## @param skipper.service.type Kubernetes service type
     ##
     type: ClusterIP
-    ## Service port.
+    ## @param skipper.service.port Service HTTP port
     ##
     port: 80
-    ## Specify the nodePort value for the LoadBalancer and NodePort service types.
+    ## @param skipper.service.nodePort Service HTTP node port
     ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
     ##
-    # nodePort:
-    ## Service clusterIP.
+    nodePort:
+    ## @param skipper.service.clusterIP Skipper server service cluster IP
+    ## e.g:
+    ## clusterIP: None
     ##
-    # clusterIP: None
-    ## Enable client source IP preservation
+    clusterIP:
+    ## @param skipper.service.externalTrafficPolicy Enable client source IP preservation
     ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
     ##
     externalTrafficPolicy: Cluster
-    ## Set the LoadBalancer service type to internal only.
+    ## @param skipper.service.loadBalancerIP Load balancer IP if service type is `LoadBalancer`
+    ## Set the LoadBalancer service type to internal only
     ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
     ##
-    # loadBalancerIP:
-    ## Load Balancer sources.
+    loadBalancerIP:
+    ## @param skipper.service.loadBalancerSourceRanges Addresses that are allowed when service is LoadBalancer
     ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
+    ## e.g:
+    ## loadBalancerSourceRanges:
+    ## - 10.10.10.0/24
     ##
-    # loadBalancerSourceRanges:
-    # - 10.10.10.0/24
-    ## Provide any additional annotations which may be required. Evaluated as a template.
+    loadBalancerSourceRanges: []
+    ## @param skipper.service.annotations Annotations for Skipper server service
     ##
     annotations: {}
-
-  ## Add init containers to the Dataflow Skipper pods.
+ ## @param skipper.initContainers Add init containers to the Dataflow Skipper pods ## Example: ## initContainers: ## - name: your-image-name @@ -647,8 +675,7 @@ skipper: ## containerPort: 1234 ## initContainers: {} - - ## Add sidecars to the Skipper pods. + ## @param skipper.sidecars Add sidecars to the Skipper pods ## Example: ## sidecars: ## - name: your-image-name @@ -659,101 +686,328 @@ skipper: ## containerPort: 1234 ## sidecars: {} - ## Skipper Pod Disruption Budget configuration ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ ## pdb: + ## @param skipper.pdb.create Enable/disable a Pod Disruption Budget creation + ## create: false - ## Min number of pods that must still be available after the eviction + ## @param skipper.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ## minAvailable: 1 - ## Max number of pods that can be unavailable after the eviction + ## @param skipper.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable ## - # maxUnavailable: 1 - + maxUnavailable: ## Skipper Autoscaling parameters. 
 ##
   autoscaling:
+    ## @param skipper.autoscaling.enabled Enable autoscaling for Skipper server
+    ## @param skipper.autoscaling.minReplicas Minimum number of Skipper server replicas
+    ## @param skipper.autoscaling.maxReplicas Maximum number of Skipper server replicas
+    ## @param skipper.autoscaling.targetCPU Target CPU utilization percentage
+    ## @param skipper.autoscaling.targetMemory Target Memory utilization percentage
+    ##
     enabled: false
-    # minReplicas: 1
-    # maxReplicas: 11
-    # targetCPU: 50
-    # targetMemory: 50
-
-  ## Extra volumes to mount
-  # extraVolumes:
-  #   - name: sample
-  #     emptyDir: {}
-  #
-  # extraVolumeMounts:
-  #   - name: sample
-  #     mountPath: /temp/sample
-
+    minReplicas:
+    maxReplicas:
+    targetCPU:
+    targetMemory:
+  ## @param skipper.extraVolumes Extra Volumes to be set on the Skipper Pod
+  ## e.g:
+  ## extraVolumes:
+  ##   - name: sample
+  ##     emptyDir: {}
+  ##
+  extraVolumes: []
+  ## @param skipper.extraVolumeMounts Extra VolumeMounts to be set on the Skipper Container
+  ## e.g:
+  ## extraVolumeMounts:
+  ##   - name: sample
+  ##     mountPath: /temp/sample
+  ##
+  extraVolumeMounts: []
   ## Java Debug Wire Protocol (JDWP) parameters.
   ##
   jdwp:
-    ## Set to true to enable Java debugger.
+    ## @param skipper.jdwp.enabled Enable Java Debug Wire Protocol (JDWP)
     ##
     enabled: false
-    ## Specify port for remote debugging.
+    ## @param skipper.jdwp.port JDWP TCP port for remote debugging
     ##
     port: 5005
-
-##
 ## External Skipper Configuration
-##
 ## All of these values are ignored when skipper.enabled is set to true
 ##
 externalSkipper:
-  ## External Skipper server host and port
+  ## @param externalSkipper.host Host of an external Skipper Server
   ##
   host: localhost
+  ## @param externalSkipper.port External Skipper Server port number
+  ##
   port: 7577
+## @section Deployer parameters
+
 ## Spring Cloud Deployer for Kubernetes parameters.
 ##
 deployer:
   ## Streaming applications resource requests and limits.
 ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+  ## @param deployer.resources.limits [object] Streaming applications resource limits
+  ## @param deployer.resources.requests Streaming applications resource requests
   ##
   resources:
     limits:
       cpu: 500m
       memory: 1024Mi
+    ## Examples:
+    ## requests:
+    ##   cpu: 100m
+    ##   memory: 128Mi
     requests: {}
-    # cpu: 100m
-    # memory: 128Mi
+  ## Configure extra options for readiness probe
+  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+  ## @param deployer.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
+  ##
   readinessProbe:
     initialDelaySeconds: 120
+  ## Configure extra options for liveness probe
+  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+  ## @param deployer.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
+  ##
   livenessProbe:
     initialDelaySeconds: 90
-  ## The node selectors to apply to the streaming applications deployments in "key:value" format.
+  ## @param deployer.nodeSelector The node selectors to apply to the streaming applications deployments in "key:value" format
   ## Multiple node selectors are comma separated.
   ##
   nodeSelector: ''
+  ## @param deployer.tolerations Streaming applications tolerations
+  ##
   tolerations: {}
-  ## Extra volume mounts.
+  ## @param deployer.volumeMounts Streaming applications extra volume mounts
   ##
   volumeMounts: {}
-  ## Extra volumes.
+  ## @param deployer.volumes Streaming applications extra volumes
   ##
   volumes: {}
-  ## List of extra environment variables to set for any deployed app container. These environments will not override
+  ## @param deployer.environmentVariables Streaming applications environment variables. These will not override
   ## RabbitMQ/Kafka envs. Multiple values are comma separated.
   ##
   environmentVariables: ''
   ## Streams containers' Security Context.
This security context will be use in every deployed stream.
   ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
+  ## @param deployer.podSecurityContext.runAsUser Set Dataflow Streams container's Security Context runAsUser
   ##
   podSecurityContext:
     runAsUser: 1001
+## @section RBAC parameters
+
+## K8s Service Account.
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+##
+serviceAccount:
+  ## @param serviceAccount.create Enable the creation of a ServiceAccount for Dataflow server and Skipper server pods
+  ##
+  create: true
+  ## @param serviceAccount.name Name of the created serviceAccount
+  ## If not set and create is true, a name is generated using the scdf.fullname template
+  ##
+  name: ""
+## Role Based Access
+## ref: https://kubernetes.io/docs/admin/authorization/rbac/
+##
+rbac:
+  ## @param rbac.create Whether to create and use RBAC resources or not
+  ## binding Spring Cloud Dataflow ServiceAccount to a role
+  ## that allows pods querying the K8s API
+  ##
+  create: true
+
+## @section Metrics parameters
+
+## Prometheus metrics
+##
+metrics:
+  ## @param metrics.enabled Enable Prometheus metrics
+  ##
+  enabled: false
+  ## Bitnami Prometheus Rsocket Proxy image
+  ## ref: https://hub.docker.com/r/bitnami/prometheus-rsocket-proxy/tags/
+  ## @param metrics.image.registry Prometheus Rsocket Proxy image registry
+  ## @param metrics.image.repository Prometheus Rsocket Proxy image repository
+  ## @param metrics.image.tag Prometheus Rsocket Proxy image tag (immutable tags are recommended)
+  ## @param metrics.image.pullPolicy Prometheus Rsocket Proxy image pull policy
+  ## @param metrics.image.pullSecrets Specify docker-registry secret names as an array
+  ##
+  image:
+    registry: docker.io
+    repository: bitnami/prometheus-rsocket-proxy
+    tag: 1.3.0-debian-10-r187
+    ## Specify an imagePullPolicy.
Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName + ## + pullSecrets: [] + ## Prometheus Rsocket Proxy containers' resource requests and limits. + ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param metrics.resources.limits The resources limits for the Prometheus Rsocket Proxy container + ## @param metrics.resources.requests The requested resources for the Prometheus Rsocket Proxy container + ## + resources: + ## Example: + ## limits: + ## cpu: 100m + ## memory: 128Mi + limits: {} + ## Examples: + ## requests: + ## cpu: 100m + ## memory: 128Mi + requests: {} + ## @param metrics.replicaCount Number of Prometheus Rsocket Proxy replicas to deploy + ## + replicaCount: 1 + ## @param metrics.podAffinityPreset Prometheus Rsocket Proxy pod affinity preset. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## + podAffinityPreset: "" + ## @param metrics.podAntiAffinityPreset Prometheus Rsocket Proxy pod anti-affinity preset. Ignored if `metrics.affinity` is set. 
Allowed values: `soft` or `hard` + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## + podAntiAffinityPreset: soft + ## Prometheus Rsocket Proxy node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## + nodeAffinityPreset: + ## @param metrics.nodeAffinityPreset.type Prometheus Rsocket Proxy node affinity preset type. Ignored if `metrics.affinity` is set. Allowed values: `soft` or `hard` + ## + type: "" + ## @param metrics.nodeAffinityPreset.key Prometheus Rsocket Proxy node label key to match. Ignored if `metrics.affinity` is set. + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## @param metrics.nodeAffinityPreset.values Prometheus Rsocket Proxy node label values to match. Ignored if `metrics.affinity` is set. + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + ## @param metrics.affinity Prometheus Rsocket Proxy affinity for pod assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: metrics.podAffinityPreset, metrics.podAntiAffinityPreset, and metrics.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + ## @param metrics.nodeSelector Prometheus Rsocket Proxy node labels for pod assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + ## @param metrics.tolerations Prometheus Rsocket Proxy tolerations for pod assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + ## @param metrics.podAnnotations Annotations for Prometheus Rsocket Proxy pods + ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ + ## + podAnnotations: {} + ## @param metrics.priorityClassName Prometheus Rsocket Proxy pods' priority. 
+ ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ + ## + priorityClassName: "" + service: + ## @param metrics.service.httpPort Prometheus Rsocket Proxy HTTP port + ## + httpPort: 8080 + ## @param metrics.service.rsocketPort Prometheus Rsocket Proxy Rsocket port + ## + rsocketPort: 7001 + ## @param metrics.service.annotations [object] Annotations for the Prometheus Rsocket Proxy service + ## + annotations: + prometheus.io/scrape: 'true' + prometheus.io/port: '{{ .Values.metrics.service.httpPort }}' + prometheus.io/path: '/metrics/proxy' + ## Prometheus Operator ServiceMonitor configuration + ## + serviceMonitor: + ## @param metrics.serviceMonitor.enabled if `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) + ## + enabled: false + ## @param metrics.serviceMonitor.extraLabels Labels to add to ServiceMonitor, in case prometheus operator is configured with serviceMonitorSelector + ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#prometheusspec + ## + extraLabels: {} + ## @param metrics.serviceMonitor.namespace Namespace in which ServiceMonitor is created if different from release + ## + namespace: + ## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped. 
+ ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint + ## e.g: + ## interval: 10s + ## + interval: + ## @param metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended + ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint + ## e.g: + ## scrapeTimeout: 10s + ## + scrapeTimeout: + ## Prometheus Rsocket Proxy Pod Disruption Budget configuration + ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ + ## + pdb: + ## @param metrics.pdb.create Enable/disable a Pod Disruption Budget creation + ## + create: false + ## @param metrics.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled + ## + minAvailable: 1 + ## @param metrics.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable + ## + maxUnavailable: + ## Prometheus Rsocket Proxy Autoscaling parameters. + ## @param metrics.autoscaling.enabled Enable autoscaling for Prometheus Rsocket Proxy + ## @param metrics.autoscaling.minReplicas Minimum number of Prometheus Rsocket Proxy replicas + ## @param metrics.autoscaling.maxReplicas Maximum number of Prometheus Rsocket Proxy replicas + ## @param metrics.autoscaling.targetCPU Target CPU utilization percentage + ## @param metrics.autoscaling.targetMemory Target Memory utilization percentage + ## + autoscaling: + enabled: false + minReplicas: + maxReplicas: + targetCPU: + targetMemory: + +## @section Init Container parameters + ## Init containers parameters: ## wait-for-backends: Wait for the database and other services (such as Kafka or RabbitMQ) used when enabling streaming ## waitForBackends: + ## @param waitForBackends.enabled Wait for the database and other services (such as Kafka or RabbitMQ) used when enabling streaming + ## enabled: true + ## @param waitForBackends.image.registry Init container wait-for-backend image registry + ## @param waitForBackends.image.repository Init container 
wait-for-backend image name + ## @param waitForBackends.image.tag Init container wait-for-backend image tag + ## @param waitForBackends.image.pullPolicy Init container wait-for-backend image pull policy + ## @param waitForBackends.image.pullSecrets Specify docker-registry secret names as an array + ## image: registry: docker.io repository: bitnami/kubectl @@ -772,221 +1026,62 @@ waitForBackends: pullSecrets: [] ## Init container resource requests and limits. ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param waitForBackends.resources.limits Init container wait-for-backend resource limits + ## @param waitForBackends.resources.requests Init container wait-for-backend resource requests ## resources: - ## We usually recommend not to specify default resources and to leave this as a conscious - ## choice for the user. This also increases chances charts run on environments with little - ## resources, such as Minikube. If you do want to specify resources, uncomment the following - ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. - ## - limits: {} - # cpu: 100m - # memory: 128Mi - requests: {} - # cpu: 100m - # memory: 128Mi - -## K8s Service Account. -## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ -## -serviceAccount: - ## Specifies whether a ServiceAccount should be created. - ## - create: true - ## The name of the ServiceAccount to use. 
- ## If not set and create is true, a name is generated using the scdf.fullname template - ## - # name: - -## Role Based Access -## ref: https://kubernetes.io/docs/admin/authorization/rbac/ -## -rbac: - ## Specifies whether RBAC rules should be created - ## binding Spring Cloud Dataflow ServiceAccount to a role - ## that allows pods querying the K8s API - ## - create: true - -## Prometheus metrics -## -metrics: - enabled: false - ## Bitnami Prometheus Rsocket Proxy image - ## ref: https://hub.docker.com/r/bitnami/prometheus-rsocket-proxy/tags/ - ## - image: - registry: docker.io - repository: bitnami/prometheus-rsocket-proxy - tag: 1.3.0-debian-10-r187 - ## Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' - ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images - ## - pullPolicy: IfNotPresent - ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace) - ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ - ## - # pullSecrets: - # - myRegistryKeySecretName - ## Prometheus Rsocket Proxy containers' resource requests and limits. - ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ - ## - resources: - # We usually recommend not to specify default resources and to leave this as a conscious - # choice for the user. This also increases chances charts run on environments with little - # resources, such as Minikube. If you do want to specify resources, uncomment the following - # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## Example: + ## limits: + ## cpu: 100m + ## memory: 128Mi limits: {} + ## Examples: + ## requests: + ## cpu: 100m + ## memory: 128Mi requests: {} - ## Number of Prometheus Rsocket Proxy replicas to deploy. 
- ## - replicaCount: 1 +## @section Database parameters - ## Prometheus Rsocket Proxy pod affinity preset - ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard - ## - podAffinityPreset: "" - - ## Prometheus Rsocket Proxy pod anti-affinity preset - ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity - ## Allowed values: soft, hard - ## - podAntiAffinityPreset: soft - - ## Prometheus Rsocket Proxy node affinity preset - ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity - ## Allowed values: soft, hard - ## - nodeAffinityPreset: - ## Node affinity type - ## Allowed values: soft, hard - ## - type: "" - ## Node label key to match - ## E.g. - ## key: "kubernetes.io/e2e-az-name" - ## - key: "" - ## Node label values to match - ## E.g. - ## values: - ## - e2e-az1 - ## - e2e-az2 - ## - values: [] - - ## Affinity for Prometheus Rsocket Proxy pods assignment - ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity - ## Note: metrics.podAffinityPreset, metrics.podAntiAffinityPreset, and metrics.nodeAffinityPreset will be ignored when it's set - ## - affinity: {} - - ## Node labels for Prometheus Rsocket Proxy pods assignment - ## ref: https://kubernetes.io/docs/user-guide/node-selection/ - ## - nodeSelector: {} - - ## Tolerations for Prometheus Rsocket Proxy pods assignment - ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ - ## - tolerations: [] - - ## Annotations for Prometheus Rsocket Proxy pods. - ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ - ## - podAnnotations: {} - - ## Prometheus Rsocket Proxy pods' priority. 
- ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ - ## - # priorityClassName: "" - - service: - ## Prometheus Rsocket Proxy HTTP port - ## - httpPort: 8080 - ## Prometheus Rsocket Proxy Rsocket port - ## - rsocketPort: 7001 - ## Annotations for the Prometheus Rsocket Proxy service - ## - annotations: - prometheus.io/scrape: 'true' - prometheus.io/port: '{{ .Values.metrics.service.httpPort }}' - prometheus.io/path: '/metrics/proxy' - ## Prometheus Operator ServiceMonitor configuration - ## - serviceMonitor: - enabled: false - ## Labels to add to ServiceMonitor, in case prometheus operator is configured with serviceMonitorSelector - ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#prometheusspec - ## - extraLabels: {} - ## Namespace in which ServiceMonitor is created if different from release - ## - # namespace: monitoring - ## Interval at which metrics should be scraped. - ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint - ## - # interval: 10s - ## Timeout after which the scrape is ended - ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint - ## - # scrapeTimeout: 10s - - ## Prometheus Rsocket Proxy Pod Disruption Budget configuration - ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ - ## - pdb: - create: false - ## Min number of pods that must still be available after the eviction - ## - minAvailable: 1 - ## Max number of pods that can be unavailable after the eviction - ## - # maxUnavailable: 1 - - ## Prometheus Rsocket Proxy Autoscaling parameters. 
- ## - autoscaling: - enabled: false - # minReplicas: 1 - # maxReplicas: 11 - # targetCPU: 50 - # targetMemory: 50 - -## ## MariaDB chart configuration -## ## https://github.com/bitnami/charts/blob/master/bitnami/mariadb/values.yaml ## mariadb: + ## @param mariadb.enabled Enable/disable MariaDB chart installation + ## enabled: true - ## MariaDB architecture. Allowed values: standalone or replication + ## @param mariadb.architecture MariaDB architecture. Allowed values: `standalone` or `replication` ## architecture: standalone ## Custom user/db credentials ## auth: - ## MariaDB root password + ## @param mariadb.auth.rootPassword Password for the MariaDB `root` user ## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-the-root-password-on-first-run ## rootPassword: '' - ## MariaDB custom user and database + ## @param mariadb.auth.username Username of new user to create ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-on-first-run - ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run ## username: dataflow + ## @param mariadb.auth.password Password for the new user + ## password: change-me - ## Database to create + ## @param mariadb.auth.database Database name to create ## ref: https://github.com/bitnami/bitnami-docker-mariadb#creating-a-database-on-first-run ## database: dataflow + ## @param mariadb.auth.forcePassword Force users to specify required passwords in the database + ## forcePassword: false + ## @param mariadb.auth.usePasswordFiles Mount credentials as a file instead of using an environment variable + ## usePasswordFiles: false - ## initdb scripts: specify dictionary of scripts to be run at first boot + ## @param mariadb.initdbScripts [object] Specify dictionary of scripts to be run at first boot ## We can only create one database on MariaDB using parameters. 
However, when streaming ## is enabled we need a second database for Skipper. ## Improvements: support creating N users/databases on MariaDB chart. @@ -997,116 +1092,139 @@ mariadb: CREATE DATABASE IF NOT EXISTS `skipper`; GRANT ALL ON skipper.* to 'skipper'@'%'; FLUSH PRIVILEGES; - -## ## External Database Configuration -## ## All of these values are ignored when mariadb.enabled is set to true ## externalDatabase: - - ## Database server host and port + ## @param externalDatabase.host Host of the external database ## host: localhost - port: 3306 - ## Database driver and scheme + ## @param externalDatabase.port External database port number + ## + port: 3306 + ## @param externalDatabase.driver The fully qualified name of the JDBC Driver class + ## + driver: + ## @param externalDatabase.scheme The scheme is a vendor-specific or shared protocol string that follows the "jdbc:" of the URL + ## + scheme: + ## @param externalDatabase.password Password for the above username ## - # driver: - # scheme: password: '' - # existingPasswordSecret: name-of-existing-secret - # existingPasswordKey: key in existingPasswordSecret, defaults to "datasource-password" + ## @param externalDatabase.existingPasswordSecret Existing secret with database password + ## + existingPasswordSecret: + ## @param externalDatabase.existingPasswordKey Key of the existing secret with database password, defaults to `datasource-password` + ## + existingPasswordKey: ## Data Flow user and database ## dataflow: - ## Database JDBC URL + ## @param externalDatabase.dataflow.url JDBC URL for dataflow server. Overrides external scheme, host, port, database, and jdbc parameters. ## This provides a mechanism to define a fully customized JDBC URL for the data flow server rather than having it ## derived from the common, individual attributes. 
This property, when defined, has precedence over the ## individual attributes (scheme, host, port, database) ## url: "" + ## @param externalDatabase.dataflow.database Name of the existing database to be used by Dataflow server + ## database: dataflow + ## @param externalDatabase.dataflow.username Existing username in the external db to be used by Dataflow server + ## username: dataflow ## Skipper and database ## skipper: - ## Database JDBC URL + ## @param externalDatabase.skipper.url JDBC URL for skipper. Overrides external scheme, host, port, database, and jdbc parameters. ## This provides a mechanism to define a fully customized JDBC URL for skipper rather than having it ## derived from the common, individual attributes. This property, when defined, has precedence over the ## individual attributes (scheme, host, port, database) ## url: "" + ## @param externalDatabase.skipper.database Name of the existing database to be used by Skipper server + ## database: skipper + ## @param externalDatabase.skipper.username Existing username in the external db to be used by Skipper server + ## username: skipper - ## Hibernate Dialect + ## @param externalDatabase.hibernateDialect Hibernate Dialect used by Dataflow/Skipper servers ## e.g: org.hibernate.dialect.MariaDB102Dialect ## hibernateDialect: '' -## +## @section RabbitMQ chart parameters + ## RabbitMQ chart configuration -## ## https://github.com/bitnami/charts/blob/master/bitnami/rabbitmq/values.yaml ## rabbitmq: + ## @param rabbitmq.enabled Enable/disable RabbitMQ chart installation + ## enabled: true + ## @param rabbitmq.auth.username RabbitMQ username + ## auth: username: user - -## ## External RabbitMQ Configuration -## ## All of these values are ignored when rabbitmq.enabled is set to true ## externalRabbitmq: - ## Enables or disables external RabbitMQ, can be disabled when Kafka is using + ## @param externalRabbitmq.enabled Enable/disable external RabbitMQ ## enabled: false - ## RabbitMQ host and port + ## @param 
externalRabbitmq.host Host of the external RabbitMQ ## host: localhost + ## @param externalRabbitmq.port External RabbitMQ port number + ## port: 5672 - ## RabbitMQ username and password, password will be saved in a kubernetes secret + ## @param externalRabbitmq.username External RabbitMQ username ## username: guest + ## @param externalRabbitmq.password External RabbitMQ password. It will be saved in a kubernetes secret + ## password: guest - # vhost: / - # existingPasswordSecret: name-of-existing-secret + ## @param externalRabbitmq.vhost External RabbitMQ virtual host + ## e.g: + ## vhost: / + ## + vhost: + ## @param externalRabbitmq.existingPasswordSecret Existing secret with RabbitMQ password + ## + existingPasswordSecret: + +## @section Kafka chart parameters -## ## Kafka chart configuration -## ## https://github.com/bitnami/charts/blob/master/bitnami/kafka/values.yaml ## kafka: + ## @param kafka.enabled Enable/disable Kafka chart installation + ## enabled: false + ## @param kafka.replicaCount Number of Kafka brokers + ## replicaCount: 1 + ## @param kafka.offsetsTopicReplicationFactor Kafka offsets topic replication factor + ## offsetsTopicReplicationFactor: 1 - ## ## Zookeeper chart configuration - ## ## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml + ## @param kafka.zookeeper.replicaCount Number of Zookeeper replicas ## zookeeper: replicaCount: 1 - -## ## External Kafka Configuration -## ## All of these values are ignored when kafka.enabled is set to true ## externalKafka: + ## @param externalKafka.enabled Enable/disable external Kafka + ## enabled: false - - ## External Kafka brokers + ## @param externalKafka.brokers External Kafka brokers ## Multiple brokers can be provided in a comma-separated list, e.g. 
host1:port1,host2:port2 ## brokers: localhost:9092 - - ## External Zookeeper nodes + ## @param externalKafka.zkNodes External Zookeeper nodes ## zkNodes: localhost:2181 - -## Extra objects to deploy (value evaluated as a template) -## -extraDeploy: [] diff --git a/bitnami/suitecrm/Chart.yaml b/bitnami/suitecrm/Chart.yaml index 5400a90393..44ebc68f4f 100644 --- a/bitnami/suitecrm/Chart.yaml +++ b/bitnami/suitecrm/Chart.yaml @@ -29,4 +29,4 @@ name: suitecrm sources: - https://github.com/bitnami/bitnami-docker-suitecrm - https://www.suitecrm.com/ -version: 9.3.14 +version: 9.3.15 diff --git a/bitnami/suitecrm/README.md b/bitnami/suitecrm/README.md index 0cce322a7f..e4fb51dc80 100644 --- a/bitnami/suitecrm/README.md +++ b/bitnami/suitecrm/README.md @@ -48,197 +48,216 @@ The command removes all the Kubernetes components associated with the chart and ## Parameters -The following table lists the configurable parameters of the SuiteCRM chart and their default values per section/component: - ### Global parameters -| Parameter | Description | Default | -|---------------------------|-------------------------------------------------|---------------------------------------------------------| -| `global.imageRegistry` | Global Docker image registry | `nil` | -| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `global.storageClass` | Global storage class for dynamic provisioning | `nil` | +| Name | Description | Value | +| ------------------------- | ----------------------------------------------- | ----- | +| `global.imageRegistry` | Global Docker image registry | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` | +| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `nil` | + ### Common parameters -| Parameter | Description | Default | 
-|---------------------|------------------------------------------------------------------------------|---------------------------------------------------------| -| `image.registry` | SuiteCRM image registry | `docker.io` | -| `image.repository` | SuiteCRM Image name | `bitnami/suitecrm` | -| `image.tag` | SuiteCRM Image tag | `{TAG_NAME}` | -| `image.pullPolicy` | SuiteCRM image pull policy | `IfNotPresent` | -| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `image.debug` | Specify if debug logs should be enabled | `false` | -| `nameOverride` | String to partially override suitecrm.fullname template | `nil` | -| `fullnameOverride` | String to fully override suitecrm.fullname template | `nil` | -| `commonLabels` | Labels to add to all deployed objects | `nil` | -| `commonAnnotations` | Annotations to add to all deployed objects | `[]` | -| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `nil` | -| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` | +| Name | Description | Value | +| ------------------- | ------------------------------------------------------------------------------------------------------------ | ----- | +| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `nil` | +| `nameOverride` | String to partially override suitecrm.fullname template (will maintain the release name) | `nil` | +| `fullnameOverride` | String to fully override suitecrm.fullname template | `nil` | +| `extraDeploy` | Array with extra yaml to deploy with the chart. Evaluated as a template | `[]` | +| `commonAnnotations` | Common annotations to add to all SuiteCRM resources (sub-charts are not considered). Evaluated as a template | `{}` | +| `commonLabels` | Common labels to add to all SuiteCRM resources (sub-charts are not considered). 
Evaluated as a template | `{}` | + ### SuiteCRM parameters -| Parameter | Description | Default | -|--------------------------------------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------| -| `affinity` | Map of node/pod affinities | `{}` | -| `allowEmptyPassword` | Allow DB blank passwords | `yes` | -| `args` | Override default container args (useful when using custom images) | `nil` | -| `command` | Override default container command (useful when using custom images) | `nil` | -| `containerPorts.http` | Sets http port inside NGINX container | `8080` | -| `containerPorts.https` | Sets https port inside NGINX container | `8443` | -| `containerSecurityContext.enabled` | Enable SuiteCRM containers' Security Context | `true` | -| `containerSecurityContext.runAsUser` | SuiteCRM containers' Security Context | `1001` | -| `customLivenessProbe` | Override default liveness probe | `nil` | -| `customReadinessProbe` | Override default readiness probe | `nil` | -| `customStartupProbe` | Override default startup probe | `nil` | -| `existingSecret` | Name of a secret with the application password | `nil` | -| `extraEnvVarsCM` | ConfigMap containing extra env vars | `nil` | -| `extraEnvVarsSecret` | Secret containing extra env vars (in case of sensitive data) | `nil` | -| `extraEnvVars` | Extra environment variables | `nil` | -| `extraVolumeMounts` | Array of extra volume mounts to be added to the container (evaluated as template). Normally used with `extraVolumes`. | `nil` | -| `extraVolumes` | Array of extra volumes to be added to the deployment (evaluated as template). 
Requires setting `extraVolumeMounts` | `nil` | -| `initContainers` | Add additional init containers to the pod (evaluated as a template) | `nil` | -| `lifecycleHooks` | LifecycleHook to set additional configuration at startup Evaluated as a template | `` | -| `livenessProbe` | Liveness probe configuration | `Check values.yaml file` | -| `hostAliases` | Add deployment host aliases | `Check values.yaml` | -| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `nodeAffinityPreset.key` | Node label key to match Ignored if `affinity` is set. | `""` | -| `nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set. | `[]` | -| `nodeSelector` | Node labels for pod assignment | `{}` (The value is evaluated as a template) | -| `suitecrmHost` | SuiteCRM host to create application URLs (when ingress, it will be ignored) | `nil` | -| `suitecrmUsername` | User of the application | `user` | -| `suitecrmPassword` | Application password | _random 10 character alphanumeric string_ | -| `suitecrmEmail` | Admin email | `user@example.com` | -| `suitecrmLastName` | Last name | `Last` | -| `suitecrmSmtpHost` | SMTP host | `nil` | -| `suitecrmSmtpPort` | SMTP port | `nil` | -| `suitecrmSmtpUser` | SMTP user | `nil` | -| `suitecrmSmtpPassword` | SMTP password | `nil` | -| `suitecrmSmtpProtocol` | SMTP protocol [`ssl`, `tls`] | `nil` | -| `suitecrmValidateUserIP` | Whether to validate the user IP address or not | `no` | -| `suitecrmSkipInstall` | Skip SuiteCRM installation wizard (`no` / `yes`) | `false` | -| `podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` | -| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. 
Allowed values: `soft` or `hard` | `soft` | -| `podAnnotations` | Pod annotations | `{}` | -| `podLabels` | Add additional labels to the pod (evaluated as a template) | `nil` | -| `podSecurityContext.enabled` | Enable SuiteCRM pods' Security Context | `true` | -| `podSecurityContext.fsGroup` | SuiteCRM pods' group ID | `1001` | -| `readinessProbe` | Readiness probe configuration | `Check values.yaml file` | -| `replicaCount` | Number of SuiteCRM Pods to run | `1` | -| `resources` | CPU/Memory resource requests/limits | Memory: `512Mi`, CPU: `300m` | -| `sidecars` | Attach additional containers to the pod (evaluated as a template) | `nil` | -| `smtpHost` | SMTP host | `nil` | -| `smtpPort` | SMTP port | `nil` (but suitecrm internal default is 25) | -| `smtpProtocol` | SMTP Protocol (options: ssl,tls, nil) | `nil` | -| `smtpUser` | SMTP user | `nil` | -| `smtpPassword` | SMTP password | `nil` | -| `startupProbe` | Startup probe configuration | `Check values.yaml file` | -| `tolerations` | Tolerations for pod assignment | `[]` (The value is evaluated as a template) | -| `updateStrategy` | Deployment update strategy | `nil` | +| Name | Description | Value | +| ------------------------------------ | ----------------------------------------------------------------------------------------- | ----------------------- | +| `image.registry` | SuiteCRM image registry | `docker.io` | +| `image.repository` | SuiteCRM image repository | `bitnami/suitecrm` | +| `image.tag` | SuiteCRM image tag (immutable tags are recommended) | `7.11.20-debian-10-r22` | +| `image.pullPolicy` | SuiteCRM image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `image.debug` | Specify if debug logs should be enabled | `false` | +| `replicaCount` | Number of replicas (requires ReadWriteMany PVC support) | `1` | +| `suitecrmSkipInstall` | Skip SuiteCRM installation wizard. 
Useful for migrations and restoring from SQL dump | `false` | +| `suitecrmValidateUserIP` | Whether to validate the user IP address or not | `false` | +| `suitecrmHost` | SuiteCRM host to create application URLs | `nil` | +| `suitecrmUsername` | User of the application | `user` | +| `suitecrmPassword` | Application password | `nil` | +| `suitecrmEmail` | Admin email | `user@example.com` | +| `allowEmptyPassword` | Allow DB blank passwords | `false` | +| `command` | Override default container command (useful when using custom images) | `nil` | +| `args` | Override default container args (useful when using custom images) | `nil` | +| `hostAliases` | Deployment pod host aliases | `[]` | +| `updateStrategy.type` | Update strategy - only really applicable for deployments with RWO PVs attached | `RollingUpdate` | +| `extraEnvVars` | An array to add extra environment variables | `[]` | +| `extraEnvVarsCM` | ConfigMap containing extra environment variables | `nil` | +| `extraEnvVarsSecret` | Secret containing extra environment variables | `nil` | +| `extraVolumes` | Extra volumes to add to the deployment. Requires setting `extraVolumeMounts` | `[]` | +| `extraVolumeMounts` | Extra volume mounts to add to the container. Requires setting `extraVolumes` | `[]` | +| `initContainers` | Extra init containers to add to the deployment | `[]` | +| `sidecars` | Extra sidecar containers to add to the deployment | `[]` | +| `tolerations` | Tolerations for pod assignment. Evaluated as a template. 
| `[]` | +| `existingSecret` | Name of a secret with the application password | `nil` | +| `suitecrmSmtpHost` | SMTP host | `nil` | +| `suitecrmSmtpPort` | SMTP port | `nil` | +| `suitecrmSmtpUser` | SMTP user | `nil` | +| `suitecrmSmtpPassword` | SMTP password | `nil` | +| `suitecrmSmtpProtocol` | SMTP protocol [`ssl`, `tls`] | `nil` | +| `suitecrmNotifyAddress` | SuiteCRM notify address | `nil` | +| `suitecrmNotifyName` | SuiteCRM notify name | `nil` | +| `containerPorts` | Container ports | `{}` | +| `sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` | +| `podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set. | `""` | +| `nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set. | `[]` | +| `affinity` | Affinity for pod assignment | `{}` | +| `nodeSelector` | Node labels for pod assignment. Evaluated as a template. 
| `{}` | +| `resources.requests` | The requested resources for the container | `{}` | +| `podSecurityContext.enabled` | Enable SuiteCRM pods' Security Context | `true` | +| `podSecurityContext.fsGroup` | SuiteCRM pods' group ID | `1001` | +| `containerSecurityContext.enabled` | Enable SuiteCRM containers' Security Context | `true` | +| `containerSecurityContext.runAsUser` | SuiteCRM containers' Security Context | `1001` | +| `livenessProbe.enabled` | Enable livenessProbe | `true` | +| `livenessProbe.path` | Request path for livenessProbe | `/index.php` | +| `livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `600` | +| `livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` | +| `livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` | +| `livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` | +| `livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` | +| `readinessProbe.enabled` | Enable readinessProbe | `true` | +| `readinessProbe.path` | Request path for readinessProbe | `/index.php` | +| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` | +| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` | +| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `3` | +| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` | +| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` | +| `startupProbe.enabled` | Enable startupProbe | `false` | +| `startupProbe.path` | Request path for startupProbe | `/index.php` | +| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` | +| `startupProbe.periodSeconds` | Period seconds for startupProbe | `10` | +| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `3` | +| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `60` | +| 
`startupProbe.successThreshold` | Success threshold for startupProbe | `1` | +| `customLivenessProbe` | Override default liveness probe | `{}` | +| `customReadinessProbe` | Override default readiness probe | `{}` | +| `customStartupProbe` | Override default startup probe | `{}` | +| `lifecycleHooks` | lifecycleHooks for the container to automate configuration before or after startup | `nil` | +| `podAnnotations` | Pod annotations | `{}` | +| `podLabels` | Pod extra labels | `{}` | + ### Database parameters -| Parameter | Description | Default | -|---------------------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------| -| `mariadb.enabled` | Whether to use the MariaDB chart | `true` | -| `mariadb.architecture` | MariaDB architecture (`standalone` or `replication`) | `standalone` | -| `mariadb.auth.rootPassword` | Password for the MariaDB `root` user | _random 10 character alphanumeric string_ | -| `mariadb.auth.database` | Database name to create | `bitnami_suitecrm` | -| `mariadb.auth.username` | Database user to create | `bn_suitecrm` | -| `mariadb.auth.password` | Password for the database | _random 10 character long alphanumeric string_ | -| `mariadb.primary.persistence.enabled` | Enable database persistence using PVC | `true` | -| `mariadb.primary.persistence.existingClaim` | Name of an existing `PersistentVolumeClaim` for MariaDB primary replicas | `nil` | -| `mariadb.primary.persistence.accessModes` | Database Persistent Volume Access Modes | `[ReadWriteOnce]` | -| `mariadb.primary.persistence.size` | Database Persistent Volume Size | `8Gi` | -| `mariadb.primary.persistence.hostPath` | Set path in case you want to use local host path volumes (not recommended in production) | `nil` | -| `mariadb.primary.persistence.storageClass` | MariaDB primary persistent volume storage Class | `nil` | -| `externalDatabase.user` | Existing username in the 
external db | `bn_suitecrm` | -| `externalDatabase.password` | Password for the above username | `""` | -| `externalDatabase.database` | Name of the existing database | `bitnami_suitecrm` | -| `externalDatabase.host` | Host of the existing database | `nil` | -| `externalDatabase.port` | Port of the existing database | `3306` | +| Name | Description | Value | +| ------------------------------------------- | ---------------------------------------------------------------------------------------- | ------------------ | +| `mariadb.enabled` | Whether to deploy a mariadb server to satisfy the applications database requirements | `true` | +| `mariadb.architecture` | MariaDB architecture. Allowed values: `standalone` or `replication` | `standalone` | +| `mariadb.auth.rootPassword` | Password for the MariaDB `root` user | `""` | +| `mariadb.auth.database` | Database name to create | `bitnami_suitecrm` | +| `mariadb.auth.username` | Database user to create | `bn_suitecrm` | +| `mariadb.auth.password` | Password for the database | `""` | +| `mariadb.primary.persistence.enabled` | Enable database persistence using PVC | `true` | +| `mariadb.primary.persistence.storageClass` | MariaDB data Persistent Volume Storage Class | `nil` | +| `mariadb.primary.persistence.accessModes` | Database Persistent Volume Access Modes | `[]` | +| `mariadb.primary.persistence.size` | Database Persistent Volume Size | `8Gi` | +| `mariadb.primary.persistence.hostPath` | Set path in case you want to use local host path volumes (not recommended in production) | `nil` | +| `mariadb.primary.persistence.existingClaim` | Name of an existing `PersistentVolumeClaim` for MariaDB primary replicas | `nil` | +| `externalDatabase.host` | Host of the existing database | `nil` | +| `externalDatabase.port` | Port of the existing database | `3306` | +| `externalDatabase.user` | Existing username in the external database | `bn_suitecrm` | +| `externalDatabase.password` | Password for the above username | `nil` | +| 
`externalDatabase.database` | Name of the existing database | `bitnami_suitecrm` |
+
 ### Persistence parameters
 
-| Parameter | Description | Default |
-|-----------------------------|------------------------------------------|---------------------------------------------|
-| `persistence.enabled` | Enable persistence using PVC | `true` |
-| `persistence.storageClass` | PVC Storage Class for SuiteCRM volume | `nil` (uses alpha storage class annotation) |
-| `persistence.existingClaim` | An Existing PVC name for SuiteCRM volume | `nil` (uses alpha storage class annotation) |
-| `persistence.hostPath` | Host mount path for SuiteCRM volume | `nil` (will not mount to a host path) |
-| `persistence.accessMode` | PVC Access Mode for SuiteCRM volume | `ReadWriteOnce` |
-| `persistence.size` | PVC Storage Request for SuiteCRM volume | `8Gi` |
+| Name | Description | Value |
+| --------------------------- | ---------------------------------------- | --------------- |
+| `persistence.enabled` | Enable persistence using PVC | `true` |
+| `persistence.storageClass` | PVC Storage Class for SuiteCRM volume | `nil` |
+| `persistence.accessMode` | PVC Access Mode for SuiteCRM volume | `ReadWriteOnce` |
+| `persistence.size` | PVC Storage Request for SuiteCRM volume | `8Gi` |
+| `persistence.existingClaim` | An existing PVC name for SuiteCRM volume | `nil` |
+| `persistence.hostPath` | Host mount path for SuiteCRM volume | `nil` |
+
 ### Volume Permissions parameters
 
-| Parameter | Description | Default |
-|---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
-| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s 
`runAsUser` and `fsUser` values do not work) | `false` | -| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` | -| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` | -| `volumePermissions.image.tag` | Init container volume-permissions image tag | `"10"` | -| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` | -| `volumePermissions.resources` | Init container resource requests/limit | `nil` | +| Name | Description | Value | +| -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- | +| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` | +| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` | +| `volumePermissions.image.repository` | Init container volume-permissions image repository | `bitnami/bitnami-shell` | +| `volumePermissions.image.tag` | Init container volume-permissions image tag | `10-debian-10-r123` | +| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` | +| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `volumePermissions.resources.limits` | The resources limits for the container | `{}` | +| `volumePermissions.resources.requests` | The requested resources for the container | `{}` | + ### Traffic Exposure Parameters -| Parameter | Description | Default | 
-|----------------------------------|---------------------------------------------------------------|--------------------------| -| `service.type` | Kubernetes Service type | `LoadBalancer` | -| `service.port` | Service HTTP port | `80` | -| `service.httpsPort` | Service HTTPS port | `443` | -| `service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` | -| `service.nodePorts.http` | Kubernetes http node port | `""` | -| `service.nodePorts.https` | Kubernetes https node port | `""` | -| `ingress.enabled` | Enable ingress controller resource | `false` | -| `ingress.certManager` | Add annotations for cert-manager | `false` | -| `ingress.hostname` | Default host for the ingress resource | `suitecrm.local` | -| `ingress.annotations` | Ingress annotations | `{}` | -| `ingress.hosts[0].name` | Hostname to your SuiteCRM installation | `nil` | -| `ingress.hosts[0].path` | Path within the url structure | `nil` | -| `ingress.tls[0].hosts[0]` | TLS hosts | `nil` | -| `ingress.tls[0].secretName` | TLS Secret (certificates) | `nil` | -| `ingress.secrets[0].name` | TLS Secret Name | `nil` | -| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` | -| `ingress.secrets[0].key` | TLS Secret Key | `nil` | -| `ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `` | -| `ingress.path` | Ingress path | `/` | -| `ingress.pathType` | Ingress path type | `ImplementationSpecific` | +| Name | Description | Value | +| ------------------------------- | --------------------------------------------------------------------------------------------- | ------------------------ | +| `service.type` | Kubernetes Service type | `LoadBalancer` | +| `service.port` | Service HTTP port | `8080` | +| `service.httpsPort` | Service HTTPS port | `8443` | +| `service.nodePorts.http` | Kubernetes HTTP node port | `""` | +| `service.nodePorts.https` | Kubernetes HTTPS node port | `""` | +| `service.externalTrafficPolicy` | Enable client source 
IP preservation | `Cluster` | +| `ingress.enabled` | Enable ingress controller resource | `false` | +| `ingress.certManager` | Set this to true in order to add the corresponding annotations for cert-manager | `false` | +| `ingress.hostname` | Default host for the ingress resource | `suitecrm.local` | +| `ingress.annotations` | Ingress annotations | `{}` | +| `ingress.hosts` | The list of additional hostnames to be covered with this ingress record. | `nil` | +| `ingress.tls` | The tls configuration for the ingress | `nil` | +| `ingress.secrets` | If you're providing your own certificates, please use this to add the certificates as secrets | `nil` | +| `ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `nil` | +| `ingress.path` | Ingress path | `/` | +| `ingress.pathType` | Ingress path type | `ImplementationSpecific` | + ### Metrics parameters -| Parameter | Description | Default | -|-----------------------------|--------------------------------------------------|--------------------------------------------------------------| -| `metrics.enabled` | Start a side-car prometheus exporter | `false` | -| `metrics.image.registry` | Apache exporter image registry | `docker.io` | -| `metrics.image.repository` | Apache exporter image name | `bitnami/apache-exporter` | -| `metrics.image.tag` | Apache exporter image tag | `{TAG_NAME}` | -| `metrics.image.pullPolicy` | Image pull policy | `IfNotPresent` | -| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | -| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | `{prometheus.io/scrape: "true", prometheus.io/port: "9117"}` | -| `metrics.resources` | Exporter resource requests/limit | {} | +| Name | Description | Value | +| --------------------------- | ---------------------------------------------------------- | ------------------------- | +| `metrics.enabled` | Start a side-car 
prometheus exporter | `false` | +| `metrics.image.registry` | Apache exporter image registry | `docker.io` | +| `metrics.image.repository` | Apache exporter image repository | `bitnami/apache-exporter` | +| `metrics.image.tag` | Apache exporter image tag (immutable tags are recommended) | `0.9.0-debian-10-r21` | +| `metrics.image.pullPolicy` | Image pull policy | `IfNotPresent` | +| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` | +| `metrics.resources` | Metrics exporter resource requests and limits | `{}` | +| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | `{}` | + ### Certificate injection parameters -| Parameter | Description | Default | -|------------------------------------------------------|----------------------------------------------------------------------|------------------------------------------| -| `certificates.customCertificate.certificateSecret` | Secret containing the certificate and key to add | `""` | -| `certificates.customCertificate.chainSecret.name` | Name of the secret containing the certificate chain | `""` | -| `certificates.customCertificate.chainSecret.key` | Key of the certificate chain file inside the secret | `""` | -| `certificates.customCertificate.certificateLocation` | Location in the container to store the certificate | `/etc/ssl/certs/ssl-cert-snakeoil.pem` | -| `certificates.customCertificate.keyLocation` | Location in the container to store the private key | `/etc/ssl/private/ssl-cert-snakeoil.key` | -| `certificates.customCertificate.chainLocation` | Location in the container to store the certificate chain | `/etc/ssl/certs/chain.pem` | -| `certificates.customCAs` | Defines a list of secrets to import into the container trust store | `[]` | -| `certificates.image.registry` | Container sidecar registry | `docker.io` | -| `certificates.image.repository` | Container sidecar image | `bitnami/bitnami-shell` | -| `certificates.image.tag` | Container sidecar 
image tag | `"10"` | -| `certificates.image.pullPolicy` | Container sidecar image pull policy | `IfNotPresent` | -| `certificates.image.pullSecrets` | Container sidecar image pull secrets | `image.pullSecrets` | -| `certificates.args` | Override default container args (useful when using custom images) | `nil` | -| `certificates.command` | Override default container command (useful when using custom images) | `nil` | -| `certificates.extraEnvVars` | Container sidecar extra environment variables (eg proxy) | `[]` | -| `certificates.extraEnvVarsCM` | ConfigMap containing extra env vars | `nil` | -| `certificates.extraEnvVarsSecret` | Secret containing extra env vars (in case of sensitive data) | `nil` | +| Name | Description | Value | +| ---------------------------------------------------- | ------------------------------------------------------------------------- | ---------------------------------------- | +| `certificates.customCertificate.certificateSecret` | Secret containing the certificate and key to add | `""` | +| `certificates.customCertificate.chainSecret.name` | Name of the secret containing the certificate chain | `nil` | +| `certificates.customCertificate.chainSecret.key` | Key of the certificate chain file inside the secret | `nil` | +| `certificates.customCertificate.certificateLocation` | Location in the container to store the certificate | `/etc/ssl/certs/ssl-cert-snakeoil.pem` | +| `certificates.customCertificate.keyLocation` | Location in the container to store the private key | `/etc/ssl/private/ssl-cert-snakeoil.key` | +| `certificates.customCertificate.chainLocation` | Location in the container to store the certificate chain | `/etc/ssl/certs/mychain.pem` | +| `certificates.customCAs` | Defines a list of secrets to import into the container trust store | `[]` | +| `certificates.command` | Override default container command (useful when using custom images) | `nil` | +| `certificates.args` | Override default container args (useful when using 
custom images) | `nil` | +| `certificates.extraEnvVars` | Container sidecar extra environment variables | `[]` | +| `certificates.extraEnvVarsCM` | ConfigMap containing extra environment variables | `nil` | +| `certificates.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) | `nil` | +| `certificates.image.registry` | Container sidecar registry | `docker.io` | +| `certificates.image.repository` | Container sidecar image repository | `bitnami/bitnami-shell` | +| `certificates.image.tag` | Container sidecar image tag (immutable tags are recommended) | `10-debian-10-r123` | +| `certificates.image.pullPolicy` | Container sidecar image pull policy | `IfNotPresent` | +| `certificates.image.pullSecrets` | Container sidecar image pull secrets | `[]` | + The above parameters map to the env variables defined in [bitnami/suitecrm](http://github.com/bitnami/bitnami-docker-suitecrm). For more information please refer to the [bitnami/suitecrm](http://github.com/bitnami/bitnami-docker-suitecrm) image documentation. 
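As a quick illustration of how the parameters in the tables above are consumed, a minimal custom values file might look like the following. This is a hedged sketch, not chart defaults: the hostname and secret name are placeholders, and only parameters documented above are used.

```yaml
# custom-values.yaml -- illustrative overrides for the SuiteCRM chart.
# crm.example.com and suitecrm-credentials are example placeholders.
suitecrmHost: crm.example.com
suitecrmUsername: admin
existingSecret: suitecrm-credentials  # secret holding the application password

service:
  type: ClusterIP  # instead of the default LoadBalancer

ingress:
  enabled: true
  hostname: crm.example.com

mariadb:
  enabled: true
  auth:
    database: bitnami_suitecrm
    username: bn_suitecrm

persistence:
  enabled: true
  size: 8Gi
```

Such a file would typically be applied with `helm install my-release bitnami/suitecrm -f custom-values.yaml`.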
diff --git a/bitnami/suitecrm/values.yaml b/bitnami/suitecrm/values.yaml index 28a1e34bd1..bc31d208d9 100644 --- a/bitnami/suitecrm/values.yaml +++ b/bitnami/suitecrm/values.yaml @@ -1,19 +1,52 @@ +## @section Global parameters ## Global Docker image parameters ## Please, note that this will override the image parameters, including dependencies, configured to use the global value -## Current available global Docker image parameters: imageRegistry and imagePullSecrets -## -# global: -# imageRegistry: myRegistryName -# imagePullSecrets: -# - myRegistryKeySecretName -# storageClass: myStorageClass +## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass -## Force target Kubernetes version (using Helm capabilites if not set) +## @param global.imageRegistry Global Docker image registry +## @param global.imagePullSecrets Global Docker registry secret names as an array +## @param global.storageClass Global StorageClass for Persistent Volume(s) +## +global: + imageRegistry: + ## E.g. + ## imagePullSecrets: + ## - myRegistryKeySecretName + ## + imagePullSecrets: [] + storageClass: + +## @section Common parameters + +## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set) ## kubeVersion: +## @param nameOverride String to partially override suitecrm.fullname template (will maintain the release name) +## +nameOverride: +## @param fullnameOverride String to fully override suitecrm.fullname template +## +fullnameOverride: +## @param extraDeploy Array with extra yaml to deploy with the chart. Evaluated as a template +## +extraDeploy: [] +## @param commonAnnotations Common annotations to add to all SuiteCRM resources (sub-charts are not considered). Evaluated as a template +## +commonAnnotations: {} +## @param commonLabels Common labels to add to all SuiteCRM resources (sub-charts are not considered). 
Evaluated as a template +## +commonLabels: {} + +## @section SuiteCRM parameters ## Bitnami SuiteCRM image version ## ref: https://hub.docker.com/r/bitnami/suitecrm/tags/ +## @param image.registry SuiteCRM image registry +## @param image.repository SuiteCRM image repository +## @param image.tag SuiteCRM image tag (immutable tags are recommended) +## @param image.pullPolicy SuiteCRM image pull policy +## @param image.pullSecrets Specify docker-registry secret names as an array +## @param image.debug Specify if debug logs should be enabled ## image: registry: docker.io @@ -27,203 +60,307 @@ image: ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName ## - pullSecrets: - # - myRegistryKeySecretName + pullSecrets: [] ## Set to true if you would like to see extra information on logs ## debug: false - -## String to partially override suitecrm.fullname template (will maintain the release name) -## -nameOverride: - -## String to fully override suitecrm.fullname template -## -fullnameOverride: - -## Number of replicas (requires ReadWriteMany PVC support) +## @param replicaCount Number of replicas (requires ReadWriteMany PVC support) ## replicaCount: 1 - -## Skip SuiteCRM installation wizard. Useful for migrations and restoring from SQL dump +## @param suitecrmSkipInstall Skip SuiteCRM installation wizard. 
Useful for migrations and restoring from SQL dump ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration ## suitecrmSkipInstall: false - -## SuiteCRM validate user IP +## @param suitecrmValidateUserIP Whether to validate the user IP address or not ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration ## suitecrmValidateUserIP: false - -## SuiteCRM host to create application URLs +## @param suitecrmHost SuiteCRM host to create application URLs ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration ## suitecrmHost: - -## User of the application +## @param suitecrmUsername User of the application ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration ## suitecrmUsername: user - -## Application password +## @param suitecrmPassword Application password ## Defaults to a random 10-character alphanumeric string if not set ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration ## suitecrmPassword: - -## Admin email +## @param suitecrmEmail Admin email ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#configuration ## suitecrmEmail: user@example.com - -## Set to `yes` to allow the container to be started with blank passwords +## @param allowEmptyPassword Allow DB blank passwords ## ref: https://github.com/bitnami/bitnami-docker-suitecrm#environment-variables ## allowEmptyPassword: false - -## Container command (using container default if not set) +## @param command Override default container command (useful when using custom images) ## command: -## Container args (using container default if ot set) +## @param args Override default container args (useful when using custom images) ## args: - -## Common annotations to add to all SuiteCRM resources (sub-charts are not considered). Evaluated as a template -## -commonAnnotations: {} - -## Common labels to add to all SuiteCRM resources (sub-charts are not considered). 
Evaluated as a template -## -commonLabels: {} - -## Deployment pod host aliases +## @param hostAliases [array] Deployment pod host aliases ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## hostAliases: - # Necessary for apache-exporter to work + ## Necessary for apache-exporter to work + ## - ip: "127.0.0.1" hostnames: - "status.localhost" - -## Update strategy - only really applicable for deployments with RWO PVs attached +## @param updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will ## terminate the single previous pod, so that the new, incoming pod can attach to the PV ## updateStrategy: type: RollingUpdate - -## An array to add extra env vars +## @param extraEnvVars An array to add extra environment variables ## For example: +## - name: BEARER_AUTH +## value: true ## extraEnvVars: [] -# - name: BEARER_AUTH -# value: true - -## ConfigMap with extra environment variables +## @param extraEnvVarsCM ConfigMap containing extra environment variables ## extraEnvVarsCM: - -## Secret with extra environment variables +## @param extraEnvVarsSecret Secret containing extra environment variables ## extraEnvVarsSecret: - -## Extra volumes to add to the deployment +## @param extraVolumes Extra volumes to add to the deployment. Requires setting `extraVolumeMounts` ## extraVolumes: [] - -## Extra volume mounts to add to the container +## @param extraVolumeMounts Extra volume mounts to add to the container. 
Requires setting `extraVolumes`
 ##
 extraVolumeMounts: []
-
-## Extra init containers to add to the deployment
+## @param initContainers Extra init containers to add to the deployment
 ##
 initContainers: []
-
-## Extra sidecar containers to add to the deployment
+## @param sidecars Extra sidecar containers to add to the deployment
 ##
 sidecars: []
-
-## Tolerations for pod assignment. Evaluated as a template.
+## @param tolerations Tolerations for pod assignment. Evaluated as a template.
 ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
 ##
 tolerations: []
-
-## Use existing secret for the application password
+## @param existingSecret Name of a secret with the application password
 ##
 existingSecret:
-
-##
-## External database configuration
-##
-externalDatabase:
-  ## Database host
-  ##
-  host:
-
-  ## Database host
-  ##
-  port: 3306
-
-  ## Database user
-  ##
-  user: bn_suitecrm
-
-  ## Database password
-  ##
-  password:
-
-  ## Database name
-  ##
-  database: bitnami_suitecrm
-
 ## SMTP mail delivery configuration
 ## ref: https://github.com/bitnami/bitnami-docker-suitecrm/#smtp-configuration
+## @param suitecrmSmtpHost SMTP host
+## @param suitecrmSmtpPort SMTP port
+## @param suitecrmSmtpUser SMTP user
+## @param suitecrmSmtpPassword SMTP password
+## @param suitecrmSmtpProtocol SMTP protocol [`ssl`, `tls`]
+## @param suitecrmNotifyAddress SuiteCRM notify address
+## @param suitecrmNotifyName SuiteCRM notify name
 ##
-# suitecrmSmtpHost:
-# suitecrmSmtpPort:
-# suitecrmSmtpUser:
-# suitecrmSmtpPassword:
-# suitecrmSmtpProtocol:
-# suitecrmNotifyAddress: user@example.com
-# suitecrmNotifyName: UserName
+suitecrmSmtpHost:
+suitecrmSmtpPort:
+suitecrmSmtpUser:
+suitecrmSmtpPassword:
+suitecrmSmtpProtocol:
+suitecrmNotifyAddress:
+suitecrmNotifyName:
+## @param containerPorts [object] Container ports
+##
+containerPorts:
+  http: 8080
+  https: 8443
+## @param sessionAffinity Control where client requests go, to the same pod or round-robin
+## 
Values: ClientIP or None
+## ref: https://kubernetes.io/docs/user-guide/services/
+##
+sessionAffinity: "None"
+## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
+## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
+##
+podAffinityPreset: ""
+## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
+## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
+##
+podAntiAffinityPreset: soft
+## Node affinity preset
+## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
+##
+nodeAffinityPreset:
+  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
+  ##
+  type: ""
+  ## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
+  ## E.g.
+  ## key: "kubernetes.io/e2e-az-name"
+  ##
+  key: ""
+  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
+  ## E.g.
+  ## values:
+  ## - e2e-az1
+  ## - e2e-az2
+  ##
+  values: []
+## @param affinity Affinity for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
+##
+affinity: {}
+## @param nodeSelector Node labels for pod assignment. Evaluated as a template.
+## ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+## Container resource requests and limits
+## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+## We usually recommend not to specify default resources and to leave this as a conscious
+## choice for the user. 
This also increases chances charts run on environments with little +## resources, such as Minikube. If you do want to specify resources, uncomment the following +## lines, adjust them as necessary, and remove the curly braces after 'resources:'. +## @param resources.requests The requested resources for the container +## +resources: + ## Examples: + ## requests: + ## cpu: 300m + ## memory: 512Mi + requests: {} +## Configure Pods Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod +## @param podSecurityContext.enabled Enable SuiteCRM pods' Security Context +## @param podSecurityContext.fsGroup SuiteCRM pods' group ID +## +podSecurityContext: + enabled: true + fsGroup: 1001 +## Configure Container Security Context (only main container) +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +## @param containerSecurityContext.enabled Enable SuiteCRM containers' Security Context +## @param containerSecurityContext.runAsUser SuiteCRM containers' Security Context +## +containerSecurityContext: + enabled: true + runAsUser: 1001 +## Configure extra options for liveness probe +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes +## @param livenessProbe.enabled Enable livenessProbe +## @param livenessProbe.path Request path for livenessProbe +## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe +## @param livenessProbe.periodSeconds Period seconds for livenessProbe +## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe +## @param livenessProbe.failureThreshold Failure threshold for livenessProbe +## @param livenessProbe.successThreshold Success threshold for livenessProbe +## +livenessProbe: + enabled: true + path: /index.php + initialDelaySeconds: 600 + periodSeconds: 10 + timeoutSeconds: 5 + 
failureThreshold: 6
+  successThreshold: 1
+## Configure extra options for readiness probe
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+## @param readinessProbe.enabled Enable readinessProbe
+## @param readinessProbe.path Request path for readinessProbe
+## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
+## @param readinessProbe.periodSeconds Period seconds for readinessProbe
+## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
+## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
+## @param readinessProbe.successThreshold Success threshold for readinessProbe
+##
+readinessProbe:
+  enabled: true
+  path: /index.php
+  initialDelaySeconds: 30
+  periodSeconds: 5
+  timeoutSeconds: 3
+  failureThreshold: 6
+  successThreshold: 1
+## Configure extra options for startup probe
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+## @param startupProbe.enabled Enable startupProbe
+## @param startupProbe.path Request path for startupProbe
+## @param startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
+## @param startupProbe.periodSeconds Period seconds for startupProbe
+## @param startupProbe.timeoutSeconds Timeout seconds for startupProbe
+## @param startupProbe.failureThreshold Failure threshold for startupProbe
+## @param startupProbe.successThreshold Success threshold for startupProbe
+##
+startupProbe:
+  enabled: false
+  path: /index.php
+  initialDelaySeconds: 0
+  periodSeconds: 10
+  timeoutSeconds: 3
+  failureThreshold: 60
+  successThreshold: 1
+## @param customLivenessProbe Override default liveness probe
+##
+customLivenessProbe: {}
+## @param customReadinessProbe Override default readiness probe
+##
+customReadinessProbe: {}
+## @param customStartupProbe Override default startup probe
+##
+customStartupProbe: {}
+## 
@param lifecycleHooks lifecycleHooks for the container to automate configuration before or after startup +## +lifecycleHooks: +## @param podAnnotations Pod annotations +## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ +## +podAnnotations: {} +## @param podLabels Pod extra labels +## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ +## +podLabels: {} + +## @section Database parameters ## MariaDB chart configuration -## ## https://github.com/bitnami/charts/blob/master/bitnami/mariadb/values.yaml ## mariadb: - ## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters + ## @param mariadb.enabled Whether to deploy a MariaDB server to satisfy the application's database requirements + ## To use an external database, set this to false and configure the externalDatabase parameters ## enabled: true - - ## MariaDB architecture. Allowed values: standalone or replication + ## @param mariadb.architecture MariaDB architecture. 
Allowed values: `standalone` or `replication` ## architecture: standalone - ## MariaDB Authentication parameters ## auth: - ## MariaDB root password + ## @param mariadb.auth.rootPassword Password for the MariaDB `root` user ## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-the-root-password-on-first-run ## rootPassword: "" - ## MariaDB custom user and database + ## @param mariadb.auth.database Database name to create ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-on-first-run - ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run ## database: bitnami_suitecrm + ## @param mariadb.auth.username Database user to create + ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run + ## username: bn_suitecrm + ## @param mariadb.auth.password Password for the database + ## password: "" - primary: ## Enable persistence using Persistent Volume Claims ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ ## persistence: + ## @param mariadb.primary.persistence.enabled Enable database persistence using PVC + ## enabled: true - ## mariadb data Persistent Volume Storage Class + ## @param mariadb.primary.persistence.storageClass MariaDB data Persistent Volume Storage Class ## If defined, storageClassName: ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is @@ -231,221 +368,92 @@ mariadb: ## GKE, AWS & OpenStack) ## storageClass: + ## @param mariadb.primary.persistence.accessModes Database Persistent Volume Access Modes + ## accessModes: - ReadWriteOnce + ## @param mariadb.primary.persistence.size Database Persistent Volume Size + ## size: 8Gi - ## Set path in case you want to use local host path volumes (not recommended in production) + ## @param mariadb.primary.persistence.hostPath 
Set path in case you want to use local host path volumes (not recommended in production) ## hostPath: - ## Use an existing PVC + ## @param mariadb.primary.persistence.existingClaim Name of an existing `PersistentVolumeClaim` for MariaDB primary replicas ## existingClaim: - -## Container ports +## External database configuration ## -containerPorts: - http: 8080 - https: 8443 +externalDatabase: + ## @param externalDatabase.host Host of the existing database + ## + host: + ## @param externalDatabase.port Port of the existing database + ## + port: 3306 + ## @param externalDatabase.user Existing username in the external database + ## + user: bn_suitecrm + ## @param externalDatabase.password Password for the above username + ## + password: + ## @param externalDatabase.database Name of the existing database + ## + database: bitnami_suitecrm -## Kubernetes configuration -## For minikube, set this to NodePort, elsewhere use LoadBalancer -## -service: - type: LoadBalancer - # HTTP Port - port: 8080 - # HTTPS Port - httpsPort: 8443 - ## clusterIP: "" - ## Control hosts connecting to "LoadBalancer" only - ## loadBalancerSourceRanges: - ## - 0.0.0.0/0 - ## loadBalancerIP for the SuiteCRM Service (optional, cloud specific) - ## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer - ## loadBalancerIP: - ## - ## nodePorts: - ## http: - ## https: - ## - nodePorts: - http: "" - https: "" - ## Enable client source IP preservation - ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip - ## - externalTrafficPolicy: Cluster - -## Configure the ingress resource that allows you to access the -## SuiteCRM installation. 
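With the `externalDatabase` block above, pointing the chart at an existing database means disabling the bundled MariaDB and filling in the external host details. A minimal values override might look like this (the host and password are illustrative placeholders, not chart defaults; the user and database names match the defaults shown above):

```yaml
# values-external-db.yaml (illustrative sketch)
mariadb:
  enabled: false            # skip deploying the bundled MariaDB

externalDatabase:
  host: mariadb.example.com # placeholder hostname
  port: 3306
  user: bn_suitecrm
  password: changeme        # placeholder; prefer a Kubernetes Secret in practice
  database: bitnami_suitecrm
```

This would be passed at install time with something like `helm install my-release -f values-external-db.yaml bitnami/suitecrm`.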
Set up the URL -## ref: http://kubernetes.io/docs/user-guide/ingress/ -## -ingress: - ## Set to true to enable ingress record generation - ## - enabled: false - - ## Set this to true in order to add the corresponding annotations for cert-manager - ## - certManager: false - - ## When the ingress is enabled, a host pointing to this will be created - ## - hostname: suitecrm.local - - ## Ingress annotations done as key:value pairs - ## For a full list of possible ingress annotations, please see - ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md - ## - ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set - ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set - ## - annotations: {} - # kubernetes.io/ingress.class: nginx - - ## The list of additional hostnames to be covered with this ingress record. - ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array - ## hosts: - ## - name: suitecrm.local - ## path: / - ## - hosts: - ## The tls configuration for the ingress - ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls - ## - - ## tls: - ## - hosts: - ## - suitecrm.local - ## secretName: suitecrm.local-tls - ## - tls: - - secrets: - ## If you're providing your own certificates, please use this to add the certificates as secrets - ## key and certificate should start with -----BEGIN CERTIFICATE----- or - ## -----BEGIN RSA PRIVATE KEY----- - ## - ## name should line up with a tlsSecret set further up - ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set - ## - ## It is also possible to create and manage the certificates outside of this helm chart - ## Please see README.md for more information - ## - # - name: suitecrm.local-tls - # key: - # certificate: - - ## Override API Version 
(automatically detected if not set) - ## - apiVersion: - - ## Ingress Path - ## - path: / - - ## Ingress Path type - ## - pathType: ImplementationSpecific - -## Control where client requests go, to the same pod or round-robin -## Values: ClientIP or None -## ref: https://kubernetes.io/docs/user-guide/services/ -## -sessionAffinity: "None" +## @section Persistence parameters ## Enable persistence using Persistent Volume Claims ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ ## persistence: + ## @param persistence.enabled Enable persistence using PVC + ## enabled: true - ## SuiteCRM Data Persistent Volume Storage Class + ## @param persistence.storageClass PVC Storage Class for SuiteCRM volume ## If defined, storageClassName: ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS & OpenStack) ## - # storageClass: "-" - - ## A manually managed Persistent Volume and Claim + storageClass: ## Requires persistence.enabled: true ## If defined, PVC must be created manually before volume will be bound ## + ## @param persistence.accessMode PVC Access Mode for SuiteCRM volume + ## accessMode: ReadWriteOnce + ## @param persistence.size PVC Storage Request for SuiteCRM volume + ## size: 8Gi - - ## A manually managed Persistent Volume Claim + ## @param persistence.existingClaim An existing PVC name for SuiteCRM volume ## Requires persistence.enabled: true ## If defined, PVC must be created manually before volume will be bound ## - # existingClaim: - - ## If defined, the suitecrm-data volume will mount to the specified hostPath. + existingClaim: + ## @param persistence.hostPath Host mount path for SuiteCRM volume ## Requires persistence.enabled: true ## Requires persistence.existingClaim: nil|false ## Default: nil. 
## hostPath: -## Pod affinity preset -## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity -## Allowed values: soft, hard -## -podAffinityPreset: "" - -## Pod anti-affinity preset -## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity -## Allowed values: soft, hard -## -podAntiAffinityPreset: soft - -## Node affinity preset -## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity -## Allowed values: soft, hard -## -nodeAffinityPreset: - ## Node affinity type - ## Allowed values: soft, hard - ## - type: "" - ## Node label key to match - ## E.g. - ## key: "kubernetes.io/e2e-az-name" - ## - key: "" - ## Node label values to match - ## E.g. - ## values: - ## - e2e-az1 - ## - e2e-az2 - ## - values: [] - -## Affinity for pod assignment -## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity -## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set -## -affinity: {} - -## Node labels for pod assignment. Evaluated as a template. -## ref: https://kubernetes.io/docs/user-guide/node-selection/ -## -nodeSelector: {} - -## Configure resource requests and limits -## ref: http://kubernetes.io/docs/user-guide/compute-resources/ -## -resources: {} -# requests: -# memory: 512Mi -# cpu: 300m +## @section Volume Permissions parameters ## Init containers parameters: ## volumePermissions: Change the owner and group of the persistent volume mountpoint to runAsUser:fsGroup values from the securityContext section. 
## volumePermissions: + ## @param volumePermissions.enabled Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsGroup` values do not work) + ## enabled: false + ## @param volumePermissions.image.registry Init container volume-permissions image registry + ## @param volumePermissions.image.repository Init container volume-permissions image repository + ## @param volumePermissions.image.tag Init container volume-permissions image tag + ## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy + ## @param volumePermissions.image.pullSecrets Specify docker-registry secret names as an array + ## image: registry: docker.io repository: bitnami/bitnami-shell @@ -459,94 +467,138 @@ volumePermissions: ## - myRegistryKeySecretName ## Init containers' resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## We usually recommend not to specify default resources and to leave this as a conscious + ## choice for the user. This also increases the chances charts run on environments with little + ## resources, such as Minikube. If you do want to specify resources, uncomment the following + ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. + ## @param volumePermissions.resources.limits The resource limits for the container + ## @param volumePermissions.resources.requests The requested resources for the container ## resources: - ## We usually recommend not to specify default resources and to leave this as a conscious - ## choice for the user. This also increases chances charts run on environments with little - ## resources, such as Minikube. If you do want to specify resources, uncomment the following - ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
- ## + ## Example: + ## limits: + ## cpu: 100m + ## memory: 128Mi limits: {} - ## cpu: 100m - ## memory: 128Mi - ## + ## Example: + ## requests: + ## cpu: 100m + ## memory: 128Mi requests: {} - ## cpu: 100m - ## memory: 128Mi - ## -## Configure Pods Security Context -## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod -## -podSecurityContext: - enabled: true - fsGroup: 1001 +## @section Traffic Exposure Parameters -## Configure Container Security Context (only main container) -## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +## Kubernetes configuration +## For Minikube, set this to NodePort, elsewhere use LoadBalancer ## -containerSecurityContext: - enabled: true - runAsUser: 1001 - -## Configure extra options for liveness, readiness and startup probes -## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes +service: + ## @param service.type Kubernetes Service type + ## + type: LoadBalancer + ## @param service.port Service HTTP port + ## + port: 8080 + ## @param service.httpsPort Service HTTPS port + ## + httpsPort: 8443 + ## clusterIP: "" + ## Control hosts connecting to "LoadBalancer" only + ## loadBalancerSourceRanges: + ## - 0.0.0.0/0 + ## loadBalancerIP for the SuiteCRM Service (optional, cloud specific) + ## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer + ## loadBalancerIP: + ## @param service.nodePorts.http Kubernetes HTTP node port + ## @param service.nodePorts.https Kubernetes HTTPS node port + ## nodePorts: + ## http: + ## https: + ## + nodePorts: + http: "" + https: "" + ## @param service.externalTrafficPolicy Enable client source IP preservation + ## ref: http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip + ## + externalTrafficPolicy: Cluster +## 
Configure the ingress resource that allows you to access the +## SuiteCRM installation. Set up the URL +## ref: http://kubernetes.io/docs/user-guide/ingress/ ## -livenessProbe: - enabled: true - path: /index.php - initialDelaySeconds: 600 - periodSeconds: 10 - timeoutSeconds: 5 - failureThreshold: 6 - successThreshold: 1 -readinessProbe: - enabled: true - path: /index.php - initialDelaySeconds: 30 - periodSeconds: 5 - timeoutSeconds: 3 - failureThreshold: 6 - successThreshold: 1 -startupProbe: +ingress: + ## @param ingress.enabled Enable ingress controller resource + ## enabled: false - path: /index.php - initialDelaySeconds: 0 - periodSeconds: 10 - timeoutSeconds: 3 - failureThreshold: 60 - successThreshold: 1 + ## @param ingress.certManager Set this to true in order to add the corresponding annotations for cert-manager + ## + certManager: false + ## @param ingress.hostname Default host for the ingress resource + ## + hostname: suitecrm.local + ## @param ingress.annotations Ingress annotations + ## For a full list of possible ingress annotations, please see + ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md + ## + ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set + ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set + ## e.g: + ## kubernetes.io/ingress.class: nginx + ## + annotations: {} + ## @param ingress.hosts The list of additional hostnames to be covered with this ingress record. 
+ ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array + ## hosts: + ## - name: suitecrm.local + ## path: / + ## + hosts: + ## @param ingress.tls The tls configuration for the ingress + ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls + ## tls: + ## - hosts: + ## - suitecrm.local + ## secretName: suitecrm.local-tls + ## + tls: + ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets + ## key and certificate should start with -----BEGIN CERTIFICATE----- or + ## -----BEGIN RSA PRIVATE KEY----- + ## + ## name should line up with a tlsSecret set further up + ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set + ## + ## It is also possible to create and manage the certificates outside of this helm chart + ## Please see README.md for more information + ## + ## - name: suitecrm.local-tls + ## key: + ## certificate: + ## + secrets: + ## @param ingress.apiVersion Force Ingress API version (automatically detected if not set) + ## + apiVersion: + ## @param ingress.path Ingress path + ## + path: / + ## @param ingress.pathType Ingress path type + ## + pathType: ImplementationSpecific -## Custom Liveness probe -## -customLivenessProbe: {} - -## Custom Readiness probe -## -customReadinessProbe: {} - -## Custom Startup probe -## -customStartupProbe: {} - -## lifecycleHooks for the container to automate configuration before or after startup. 
-## -lifecycleHooks: - -## Pod annotations -## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ -## -podAnnotations: {} - -## Pod extra labels -## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ -## -podLabels: {} +## @section Metrics parameters ## Prometheus Exporter / Metrics ## metrics: + ## @param metrics.enabled Start a sidecar Prometheus exporter + ## enabled: false + ## @param metrics.image.registry Apache exporter image registry + ## @param metrics.image.repository Apache exporter image repository + ## @param metrics.image.tag Apache exporter image tag (immutable tags are recommended) + ## @param metrics.image.pullPolicy Image pull policy + ## @param metrics.image.pullSecrets Specify docker-registry secret names as an array + ## image: registry: docker.io repository: bitnami/apache-exporter @@ -555,50 +607,68 @@ metrics: ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName ## - pullSecrets: - # - myRegistryKeySecretName - ## Metrics exporter resource requests and limits + pullSecrets: [] + ## @param metrics.resources Metrics exporter resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## - # resources: {} - ## Metrics exporter pod Annotation and Labels + resources: {} + ## @param metrics.podAnnotations [object] Additional annotations for Metrics exporter pod ## podAnnotations: prometheus.io/scrape: "true" prometheus.io/port: "9117" -# Add custom certificates and certificate authorities to SuiteCRM container +## @section Certificate injection parameters + +## Add custom certificates and certificate authorities to SuiteCRM container +## certificates: + ## @param certificates.customCertificate.certificateSecret Secret containing the certificate and key to add + ## @param certificates.customCertificate.chainSecret.name Name of the secret containing the certificate chain + ## @param certificates.customCertificate.chainSecret.key Key of the certificate chain file inside the secret + ## @param certificates.customCertificate.certificateLocation Location in the container to store the certificate + ## @param certificates.customCertificate.keyLocation Location in the container to store the private key + ## @param certificates.customCertificate.chainLocation Location in the container to store the certificate chain + ## customCertificate: certificateSecret: "" - chainSecret: {} - # name: secret-name - # key: secret-key + chainSecret: + name: + key: certificateLocation: /etc/ssl/certs/ssl-cert-snakeoil.pem keyLocation: /etc/ssl/private/ssl-cert-snakeoil.key chainLocation: /etc/ssl/certs/mychain.pem + ## @param certificates.customCAs Defines a list of secrets to import into the container trust store + ## e.g: + ## - secret: custom-CA + ## - secret: more-custom-CAs + ## customCAs: [] - ## Override container command + ## @param certificates.command Override default container command (useful when using custom images) ## command: - ## Override container args + ## @param certificates.args Override default container args (useful when using custom images) ## args: - # - secret: custom-CA - # - secret: more-custom-CAs - ## An array to add extra env vars + ## @param certificates.extraEnvVars Container sidecar extra environment variables ## extraEnvVars: [] - - ## ConfigMap with extra environment variables + ## @param certificates.extraEnvVarsCM ConfigMap containing extra environment variables ## extraEnvVarsCM: - - ## Secret with extra environment variables + ## @param certificates.extraEnvVarsSecret Secret containing extra environment variables (in case of sensitive data) ## extraEnvVarsSecret: - + ## @param certificates.image.registry Container sidecar registry + ## @param certificates.image.repository Container sidecar image repository + ## @param certificates.image.tag Container sidecar image tag (immutable tags are recommended) + ## @param certificates.image.pullPolicy Container sidecar image pull policy + ## @param certificates.image.pullSecrets Container sidecar image pull secrets + ## image: registry: docker.io repository: bitnami/bitnami-shell @@ -608,10 +678,11 @@ certificates: ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent - # pullPolicy: + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## e.g: + ## pullSecrets: + ## - myRegistryKeySecretName + ## pullSecrets: [] - # - myRegistryKeySecretName - -## Array with extra yaml to deploy with the chart. Evaluated as a template -## -extraDeploy: []
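As a usage sketch of the traffic-exposure parameters restructured in this change, the following values override exposes SuiteCRM through an ingress controller with TLS. The hostname, annotation, and secret name are illustrative, mirroring the commented examples in the chart, not defaults:

```yaml
# values-ingress.yaml (illustrative sketch)
service:
  type: ClusterIP                 # let the ingress handle external traffic

ingress:
  enabled: true
  hostname: suitecrm.example.com  # placeholder hostname
  annotations:
    kubernetes.io/ingress.class: nginx
  tls:
    - hosts:
        - suitecrm.example.com
      secretName: suitecrm.example.com-tls  # pre-created TLS secret
```

With `ingress.certManager: true` instead, the `kubernetes.io/tls-acme: "true"` annotation is added automatically and cert-manager is expected to create the TLS secret.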