
Harbor

This Helm chart is based on the goharbor/harbor-helm chart but includes features common to the Bitnami chart library. For example, the following changes have been introduced:

  • Possibility to pull all the required images from a private registry through the Global Docker image parameters.
  • Redis™ and PostgreSQL are managed as chart dependencies.
  • Liveness and Readiness probes for all deployments are configurable via values.yaml.
  • Uses new Helm chart labels formatting.
  • Uses Bitnami container images:
    • non-root by default
    • published for debian-10 and ol-7
  • Support for the optional Harbor components: Chartmuseum, Clair and Notary.

TL;DR

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/harbor

Introduction

This Helm chart installs Harbor in a Kubernetes cluster. Contributions to the chart are welcome.

Prerequisites

  • Kubernetes 1.12+
  • Helm 3.1.0 or later
  • PV provisioner support in the underlying infrastructure
  • ReadWriteMany volumes for deployment scaling

Installing the Chart

Install the Harbor helm chart with a release name my-release:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/harbor
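
The command above deploys Harbor with default parameters. Values can be overridden with a custom values file; as a minimal sketch (the hostname and password below are placeholders, not defaults):

```yaml
# my-values.yaml -- illustrative overrides only
externalURL: https://harbor.example.com   # placeholder hostname
harborAdminPassword: "Harbor12345"        # initial admin password; change it from the portal
service:
  type: ClusterIP                         # default is LoadBalancer
```

Then install with helm install my-release -f my-values.yaml bitnami/harbor.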

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm uninstall my-release

Additionally, if persistence.resourcePolicy is set to keep, you should manually delete the PVCs.
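
Conversely, if you prefer helm delete to remove the PVCs (and the data they hold) along with the release, persistence.resourcePolicy can be left empty, for example:

```yaml
persistence:
  enabled: true
  resourcePolicy: ""   # default is "keep"; when empty, PVCs are removed with the release
```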

Parameters

Global parameters

Name Description Value
global.imageRegistry Global Docker image registry ""
global.imagePullSecrets Global Docker registry secret names as an array []
global.storageClass Global storage class for dynamic provisioning ""
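
For example, a sketch that pulls every image from a private registry (the registry, secret and storage class names below are hypothetical):

```yaml
global:
  imageRegistry: registry.example.com   # hypothetical private registry
  imagePullSecrets:
    - my-registry-secret                # hypothetical pull secret name
  storageClass: standard                # assumed storage class name
```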

Common Parameters

Name Description Value
nameOverride String to partially override common.names.fullname template (will maintain the release name) ""
fullnameOverride String to fully override common.names.fullname template with a string ""
kubeVersion Force target Kubernetes version (using Helm capabilities if not set) ""
commonAnnotations Annotations to add to all deployed objects {}
commonLabels Labels to add to all deployed objects {}
extraDeploy Array of extra objects to deploy with the release (evaluated as a template). []
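
As an illustration of extraDeploy, extra manifests are rendered as templates alongside the chart; a hypothetical ConfigMap could be added like so:

```yaml
commonLabels:
  team: platform                                  # hypothetical label
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "{{ .Release.Name }}-extra-config"    # evaluated as a template
    data:
      example.key: "example-value"
```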

Harbor parameters

Name Description Value
volumePermissions.enabled Enable init container that changes volume permissions in the data directory (for cases where the default k8s runAsUser and fsUser values do not work) false
volumePermissions.image.registry Init container volume-permissions image registry docker.io
volumePermissions.image.repository Init container volume-permissions image name bitnami/bitnami-shell
volumePermissions.image.tag Init container volume-permissions image tag 10-debian-10-r173
volumePermissions.image.pullPolicy Init container volume-permissions image pull policy Always
volumePermissions.image.pullSecrets Specify docker-registry secret names as an array []
volumePermissions.resources.limits The resources limits for the container {}
volumePermissions.resources.requests The requested resources for the container {}
internalTLS.enabled Use TLS in all the supported containers: chartmuseum, clair, core, jobservice, portal, registry and trivy false
ipFamily.ipv6.enabled Enable listening on IPv6 ([::]) for nginx-based components (nginx,portal) true
ipFamily.ipv4.enabled Enable listening on IPv4 for nginx-based components (nginx,portal) true
caBundleSecretName The name of a custom CA bundle secret. The secret must contain a key named "ca.crt", which will be injected into the trust store for the chartmuseum, clair, core, jobservice, registry and trivy components. ""
externalURL The external URL for Harbor core service https://core.harbor.domain
containerSecurityContext.runAsUser Set container's Security Context runAsUser 1001
containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot true
podSecurityContext.fsGroup Set pod's Security Context fsGroup 1001
logLevel The log level used for Harbor services. Allowed values are fatal, error, warn, info, debug and trace debug
forcePassword Option to force users to specify passwords (core.secret, harborAdminPassword, and secretKey). That is required for 'helm upgrade' to work properly. false
harborAdminPassword The initial password of Harbor admin. Change it from portal after launching Harbor ""
proxy.httpProxy The URL of the HTTP proxy server ""
proxy.httpsProxy The URL of the HTTPS proxy server ""
proxy.noProxy The URLs that the proxy settings do not apply to 127.0.0.1,localhost,.local,.internal
proxy.components The component list that the proxy settings apply to []
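
The proxy parameters above can be combined; a sketch for a cluster behind a corporate proxy (the proxy URL is a placeholder, and the component list is one plausible choice):

```yaml
proxy:
  httpProxy: http://proxy.internal:3128    # placeholder proxy endpoint
  httpsProxy: http://proxy.internal:3128
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:                              # components the proxy settings apply to
    - core
    - jobservice
    - trivy
```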

Traffic Exposure Parameters

Name Description Value
service.type How to expose the service: Ingress, ClusterIP, NodePort or LoadBalancer LoadBalancer
service.tls.enabled Enable TLS for external access true
service.tls.existingSecret Existing secret name containing your own TLS certificates. The secret must contain the keys: tls.crt - the certificate (required), tls.key - the private key (required), ca.crt - the certificate of CA (optional). Self-signed TLS certificates will be used otherwise. ""
service.tls.notaryExistingSecret By default, the Notary service will use the same cert and key as described above. Fill in the name of a secret if you want to use a separate one. Only needed when service.type is Ingress. ""
service.tls.commonName The common name used to generate the certificate, it's necessary when the service.type is ClusterIP or NodePort and service.tls.existingSecret is null core.harbor.domain
service.ports.http The service port Harbor listens on when serving with HTTP 80
service.ports.https The service port Harbor listens on when serving with HTTPS 443
service.ports.notary The service port Notary listens on. Only needed when notary.enabled is set to true 4443
service.nodePorts Service parameters when type is "NodePort" {}
service.loadBalancerIP Load Balancer IP ""
service.annotations The annotations attached to the loadBalancer service {}
service.loadBalancerSourceRanges List of IP address ranges to assign to loadBalancerSourceRanges []
service.externalTrafficPolicy Enable client source IP preservation ""
ingress.enabled Deploy ingress rules false
ingress.pathType Ingress path type ImplementationSpecific
ingress.apiVersion Override ingress api version ""
ingress.certManager Add annotations for cert-manager false
ingress.hosts The list of hostnames to be covered with this ingress record {}
ingress.controller The ingress controller type. Currently supports default, gce and ncp default
ingress.annotations Ingress annotations done as key:value pairs {}
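
Putting the exposure parameters together, one plausible combination serves Harbor through ingress rules instead of a LoadBalancer (the hostname and annotation are placeholders that depend on your ingress controller):

```yaml
service:
  type: Ingress                            # expose through ingress rules
ingress:
  enabled: true
  hosts:
    core: harbor.example.com               # placeholder hostname
  annotations:
    kubernetes.io/ingress.class: nginx     # depends on the installed ingress controller
```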

Persistence Parameters

Name Description Value
persistence.enabled Enable the data persistence or not true
persistence.resourcePolicy Set it to keep to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted keep
persistence.persistentVolumeClaim.registry.existingClaim Name of an existing PVC to use; it must be created manually before binding. Specify the subPath if the PVC is shared with other components ""
persistence.persistentVolumeClaim.registry.storageClass Specify the storageClass used to provision the volume; otherwise, the default StorageClass is used. Set it to - to disable dynamic provisioning ""
persistence.persistentVolumeClaim.registry.subPath The sub path used in the volume ""
persistence.persistentVolumeClaim.registry.accessMode The access mode of the volume ReadWriteOnce
persistence.persistentVolumeClaim.registry.size The size of the volume 5Gi
persistence.persistentVolumeClaim.jobservice.existingClaim Name of an existing PVC to use; it must be created manually before binding. Specify the subPath if the PVC is shared with other components ""
persistence.persistentVolumeClaim.jobservice.storageClass Specify the storageClass used to provision the volume; otherwise, the default StorageClass is used. Set it to - to disable dynamic provisioning ""
persistence.persistentVolumeClaim.jobservice.subPath The sub path used in the volume ""
persistence.persistentVolumeClaim.jobservice.accessMode The access mode of the volume ReadWriteOnce
persistence.persistentVolumeClaim.jobservice.size The size of the volume 1Gi
persistence.persistentVolumeClaim.chartmuseum.existingClaim Name of an existing PVC to use; it must be created manually before binding. Specify the subPath if the PVC is shared with other components ""
persistence.persistentVolumeClaim.chartmuseum.storageClass Specify the storageClass used to provision the volume; otherwise, the default StorageClass is used. Set it to - to disable dynamic provisioning ""
persistence.persistentVolumeClaim.chartmuseum.subPath The sub path used in the volume ""
persistence.persistentVolumeClaim.chartmuseum.accessMode The access mode of the volume ReadWriteOnce
persistence.persistentVolumeClaim.chartmuseum.size The size of the volume 5Gi
persistence.persistentVolumeClaim.trivy.storageClass Specify the storageClass used to provision the volume; otherwise, the default StorageClass is used. Set it to - to disable dynamic provisioning ""
persistence.persistentVolumeClaim.trivy.accessMode The access mode of the volume ReadWriteOnce
persistence.persistentVolumeClaim.trivy.size The size of the volume 5Gi
persistence.imageChartStorage.caBundleSecretName Specify the caBundleSecretName if the storage service uses a self-signed certificate. The secret must contain a key named ca.crt, which will be injected into the trust store of the registry and chartmuseum containers. ""
persistence.imageChartStorage.disableredirect The configuration for managing redirects from content backends. For backends that do not support it (such as using MinIO® for the s3 storage type), set it to true to disable redirects. Refer to the guide for more information false
persistence.imageChartStorage.type The type of storage for images and charts: filesystem, azure, gcs, s3, swift or oss. The type must be filesystem if you want to use persistent volumes for registry and chartmuseum. Refer to the guide for more information filesystem
persistence.imageChartStorage.filesystem.rootdirectory Filesystem storage type setting: Storage root directory /storage
persistence.imageChartStorage.filesystem.maxthreads Filesystem storage type setting: Maximum threads ""
persistence.imageChartStorage.azure.accountname Azure storage type setting: Name of the Azure account accountname
persistence.imageChartStorage.azure.accountkey Azure storage type setting: Key of the Azure account base64encodedaccountkey
persistence.imageChartStorage.azure.container Azure storage type setting: Container containername
persistence.imageChartStorage.azure.storagePrefix Azure storage type setting: Storage prefix /azure/harbor/charts
persistence.imageChartStorage.azure.realm Azure storage type setting: Realm of the Azure account ""
persistence.imageChartStorage.gcs.bucket GCS storage type setting: Bucket name bucketname
persistence.imageChartStorage.gcs.encodedkey GCS storage type setting: Base64 encoded key base64-encoded-json-key-file
persistence.imageChartStorage.gcs.rootdirectory GCS storage type setting: Root directory ""
persistence.imageChartStorage.gcs.chunksize GCS storage type setting: Chunk size ""
persistence.imageChartStorage.s3.region S3 storage type setting: Region us-west-1
persistence.imageChartStorage.s3.bucket S3 storage type setting: Bucket name bucketname
persistence.imageChartStorage.s3.accesskey S3 storage type setting: Access key ""
persistence.imageChartStorage.s3.secretkey S3 storage type setting: Secret key ""
persistence.imageChartStorage.s3.regionendpoint S3 storage type setting: Region endpoint ""
persistence.imageChartStorage.s3.encrypt S3 storage type setting: Encrypt ""
persistence.imageChartStorage.s3.keyid S3 storage type setting: Key ID ""
persistence.imageChartStorage.s3.secure S3 storage type setting: Secure ""
persistence.imageChartStorage.s3.skipverify S3 storage type setting: TLS skip verification ""
persistence.imageChartStorage.s3.v4auth S3 storage type setting: V4 authorization ""
persistence.imageChartStorage.s3.chunksize S3 storage type setting: Chunk size ""
persistence.imageChartStorage.s3.rootdirectory S3 storage type setting: Root directory ""
persistence.imageChartStorage.s3.storageClass S3 storage type setting: Storage class ""
persistence.imageChartStorage.s3.sse S3 storage type setting: SSE ""
persistence.imageChartStorage.swift.authurl Swift storage type setting: Authentication URL https://storage.myprovider.com/v3/auth
persistence.imageChartStorage.swift.username Swift storage type setting: Username ""
persistence.imageChartStorage.swift.password Swift storage type setting: Password ""
persistence.imageChartStorage.swift.container Swift storage type setting: Container ""
persistence.imageChartStorage.swift.region Swift storage type setting: Region ""
persistence.imageChartStorage.swift.tenant Swift storage type setting: Tenant ""
persistence.imageChartStorage.swift.tenantid Swift storage type setting: Tenant ID ""
persistence.imageChartStorage.swift.domain Swift storage type setting: Domain ""
persistence.imageChartStorage.swift.domainid Swift storage type setting: Domain ID ""
persistence.imageChartStorage.swift.trustid Swift storage type setting: Trust ID ""
persistence.imageChartStorage.swift.insecureskipverify Swift storage type setting: Skip TLS verification ""
persistence.imageChartStorage.swift.chunksize Swift storage type setting: Chunk size ""
persistence.imageChartStorage.swift.prefix Swift storage type setting: Prefix ""
persistence.imageChartStorage.swift.secretkey Swift storage type setting: Secret key ""
persistence.imageChartStorage.swift.accesskey Swift storage type setting: Access key ""
persistence.imageChartStorage.swift.authversion Swift storage type setting: Auth version ""
persistence.imageChartStorage.swift.endpointtype Swift storage type setting: Endpoint type ""
persistence.imageChartStorage.swift.tempurlcontainerkey Swift storage type setting: Temp URL container key ""
persistence.imageChartStorage.swift.tempurlmethods Swift storage type setting: Temp URL methods ""
persistence.imageChartStorage.oss.accesskeyid OSS storage type setting: Access key ID ""
persistence.imageChartStorage.oss.accesskeysecret OSS storage type setting: Access key secret ""
persistence.imageChartStorage.oss.region OSS storage type setting: Region ""
persistence.imageChartStorage.oss.bucket OSS storage type setting: Bucket name ""
persistence.imageChartStorage.oss.endpoint OSS storage type setting: Endpoint ""
persistence.imageChartStorage.oss.internal OSS storage type setting: Internal ""
persistence.imageChartStorage.oss.encrypt OSS storage type setting: Encrypt ""
persistence.imageChartStorage.oss.secure OSS storage type setting: Secure ""
persistence.imageChartStorage.oss.chunksize OSS storage type setting: Chunk size ""
persistence.imageChartStorage.oss.rootdirectory OSS storage type setting: Root directory ""
persistence.imageChartStorage.oss.secretkey OSS storage type setting: Secret key ""
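
For instance, an external S3 backend for images and charts could be sketched as follows (bucket and credentials are placeholders; disableredirect is shown for S3-compatible stores such as MinIO®):

```yaml
persistence:
  imageChartStorage:
    type: s3
    disableredirect: true         # needed for backends without redirect support, e.g. MinIO®
    s3:
      region: us-west-1
      bucket: my-harbor-bucket    # placeholder bucket name
      accesskey: myaccesskey      # placeholder credentials
      secretkey: mysecretkey
```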

Nginx Parameters

Name Description Value
nginxImage.registry Registry for Nginx image docker.io
nginxImage.repository Repository for Nginx image bitnami/nginx
nginxImage.tag Tag for Nginx image 1.21.1-debian-10-r46
nginxImage.pullPolicy Nginx image pull policy IfNotPresent
nginxImage.pullSecrets Specify docker-registry secret names as an array []
nginxImage.debug Specify if debug logs should be enabled false
nginx.command Override default container command (useful when using custom images) []
nginx.args Override default container args (useful when using custom images) []
nginx.replicas The replica count 1
nginx.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
nginx.customLivenessProbe Override default liveness probe {}
nginx.customReadinessProbe Override default readiness probe {}
nginx.extraEnvVars Array containing extra env vars []
nginx.extraEnvVarsCM ConfigMap containing extra env vars ""
nginx.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
nginx.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
nginx.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
nginx.hostAliases Specify hostAliases for the Pod to use []
nginx.initContainers Add additional init containers to the pod (evaluated as a template) []
nginx.sidecars Attach additional containers to the pod (evaluated as a template) []
nginx.resources.limits The resources limits for the container {}
nginx.resources.requests The requested resources for the container {}
nginx.podAffinityPreset NGINX Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
nginx.podAntiAffinityPreset NGINX Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
nginx.nodeAffinityPreset.type NGINX Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
nginx.nodeAffinityPreset.key NGINX Node label key to match. Ignored if affinity is set. ""
nginx.nodeAffinityPreset.values NGINX Node label values to match. Ignored if affinity is set. []
nginx.affinity NGINX Affinity for pod assignment {}
nginx.nodeSelector NGINX Node labels for pod assignment {}
nginx.tolerations NGINX Tolerations for pod assignment []
nginx.podLabels Add additional labels to the pod (evaluated as a template) {}
nginx.podAnnotations Annotations to add to the nginx pod {}
nginx.behindReverseProxy If nginx is behind another reverse proxy, set to true false
nginx.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
nginx.livenessProbe.enabled Enable livenessProbe true
nginx.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
nginx.livenessProbe.periodSeconds Period seconds for livenessProbe 10
nginx.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
nginx.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
nginx.livenessProbe.successThreshold Success threshold for livenessProbe 1
nginx.readinessProbe.enabled Enable readinessProbe true
nginx.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
nginx.readinessProbe.periodSeconds Period seconds for readinessProbe 10
nginx.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
nginx.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
nginx.readinessProbe.successThreshold Success threshold for readinessProbe 1
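
For example, a sketch tuning the NGINX deployment when it sits behind another reverse proxy (the resource and probe figures are arbitrary examples, not recommendations):

```yaml
nginx:
  behindReverseProxy: true   # another proxy terminates client connections in front of Harbor
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
  livenessProbe:
    initialDelaySeconds: 30  # example: give NGINX longer to start
```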

Harbor Portal Parameters

Name Description Value
portalImage.registry Registry for portal image docker.io
portalImage.repository Repository for portal image bitnami/harbor-portal
portalImage.tag Tag for portal image 2.3.2-debian-10-r2
portalImage.pullPolicy Harbor Portal image pull policy IfNotPresent
portalImage.pullSecrets Specify docker-registry secret names as an array []
portalImage.debug Specify if debug logs should be enabled false
portal.command Override default container command (useful when using custom images) []
portal.args Override default container args (useful when using custom images) []
portal.replicas The replica count 1
portal.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
portal.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be automatically generated ""
portal.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
portal.customLivenessProbe Override default liveness probe {}
portal.customReadinessProbe Override default readiness probe {}
portal.extraEnvVars Array containing extra env vars []
portal.extraEnvVarsCM ConfigMap containing extra env vars ""
portal.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
portal.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
portal.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
portal.hostAliases Specify hostAliases for the Pod to use []
portal.initContainers Add additional init containers to the pod (evaluated as a template) []
portal.sidecars Attach additional containers to the pod (evaluated as a template) []
portal.resources.limits The resources limits for the container {}
portal.resources.requests The requested resources for the container {}
portal.podAffinityPreset Harbor Portal Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
portal.podAntiAffinityPreset Harbor Portal Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
portal.nodeAffinityPreset.type Harbor Portal Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
portal.nodeAffinityPreset.key Harbor Portal Node label key to match. Ignored if affinity is set. ""
portal.nodeAffinityPreset.values Harbor Portal Node label values to match. Ignored if affinity is set. []
portal.affinity Harbor Portal Affinity for pod assignment {}
portal.nodeSelector Harbor Portal Node labels for pod assignment {}
portal.tolerations Harbor Portal Tolerations for pod assignment []
portal.podLabels Add additional labels to the pod (evaluated as a template) {}
portal.podAnnotations Annotations to add to the portal pod {}
portal.automountServiceAccountToken Automount service account token false
portal.livenessProbe.enabled Enable livenessProbe true
portal.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
portal.livenessProbe.periodSeconds Period seconds for livenessProbe 10
portal.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
portal.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
portal.livenessProbe.successThreshold Success threshold for livenessProbe 1
portal.readinessProbe.enabled Enable readinessProbe true
portal.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
portal.readinessProbe.periodSeconds Period seconds for readinessProbe 10
portal.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
portal.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
portal.readinessProbe.successThreshold Success threshold for readinessProbe 1

Harbor Core Parameters

Name Description Value
coreImage.registry Registry for core image docker.io
coreImage.repository Repository for Harbor core image bitnami/harbor-core
coreImage.tag Tag for Harbor core image 2.3.2-debian-10-r2
coreImage.pullPolicy Harbor Core image pull policy IfNotPresent
coreImage.pullSecrets Specify docker-registry secret names as an array []
coreImage.debug Specify if debug logs should be enabled false
core.command Override default container command (useful when using custom images) []
core.args Override default container args (useful when using custom images) []
core.uaaSecretName If using external UAA auth which has a self signed cert, you can provide a pre-created secret containing it under the key ca.crt. ""
core.secretKey The key used for encryption. Must be a string of 16 chars ""
core.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
core.replicas The replica count 1
core.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be automatically generated ""
core.customLivenessProbe Override default liveness probe {}
core.customReadinessProbe Override default readiness probe {}
core.customStartupProbe Override default startup probe {}
core.extraEnvVars Array containing extra env vars []
core.extraEnvVarsCM ConfigMap containing extra env vars ""
core.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
core.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
core.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
core.hostAliases Specify hostAliases for the Pod to use []
core.initContainers Add additional init containers to the pod (evaluated as a template) []
core.sidecars Attach additional containers to the pod (evaluated as a template) []
core.resources.limits The resources limits for the container {}
core.resources.requests The requested resources for the container {}
core.podAffinityPreset Harbor core Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
core.podAntiAffinityPreset Harbor core Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
core.nodeAffinityPreset.type Harbor core Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
core.nodeAffinityPreset.key Harbor core Node label key to match. Ignored if affinity is set. ""
core.nodeAffinityPreset.values Harbor core Node label values to match. Ignored if affinity is set. []
core.affinity Harbor core Affinity for pod assignment {}
core.nodeSelector Harbor core Node labels for pod assignment {}
core.tolerations Harbor core Tolerations for pod assignment []
core.podLabels Add additional labels to the pod (evaluated as a template) {}
core.podAnnotations Annotations to add to the core pod {}
core.secret Secret used when the core server communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars. ""
core.secretName Fill the name of a kubernetes secret if you want to use your own TLS certificate and private key for token encryption/decryption. The secret must contain two keys named: tls.crt - the certificate and tls.key - the private key. The default key pair will be used if it isn't set ""
core.csrfKey The CSRF key. Will be generated automatically if it isn't specified ""
core.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
core.automountServiceAccountToken Automount service account token false
core.livenessProbe.enabled Enable livenessProbe true
core.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
core.livenessProbe.periodSeconds Period seconds for livenessProbe 10
core.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
core.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
core.livenessProbe.successThreshold Success threshold for livenessProbe 1
core.readinessProbe.enabled Enable readinessProbe true
core.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
core.readinessProbe.periodSeconds Period seconds for readinessProbe 10
core.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
core.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
core.readinessProbe.successThreshold Success threshold for readinessProbe 1
core.startupProbe.enabled Enable startupProbe true
core.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 10
core.startupProbe.periodSeconds Period seconds for startupProbe 10
core.startupProbe.timeoutSeconds Timeout seconds for startupProbe 5
core.startupProbe.failureThreshold Failure threshold for startupProbe 30
core.startupProbe.successThreshold Success threshold for startupProbe 1
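
The 16-character secret constraints above can be illustrated with dummy values (do not reuse these; generate your own from a random source):

```yaml
core:
  secretKey: "0123456789abcdef"   # encryption key; must be exactly 16 characters
  secret: "fedcba9876543210"      # inter-component secret; generated by Helm if empty
  csrfKey: ""                     # left empty so it is generated automatically
```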

Harbor Jobservice Parameters

Name Description Value
jobserviceImage.registry Registry for jobservice image docker.io
jobserviceImage.repository Repository for jobservice image bitnami/harbor-jobservice
jobserviceImage.tag Tag for jobservice image 2.3.2-debian-10-r2
jobserviceImage.pullPolicy Harbor Jobservice image pull policy IfNotPresent
jobserviceImage.pullSecrets Specify docker-registry secret names as an array []
jobserviceImage.debug Specify if debug logs should be enabled false
jobservice.command Override default container command (useful when using custom images) []
jobservice.args Override default container args (useful when using custom images) []
jobservice.replicas The replica count 1
jobservice.updateStrategy.type The update strategy for deployments with persistent volumes: RollingUpdate or Recreate. Set it as Recreate when RWM for volumes isn't supported RollingUpdate
jobservice.maxJobWorkers The max job workers 10
jobservice.jobLogger The logger for jobs: file, database or stdout file
jobservice.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be automatically generated ""
jobservice.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
jobservice.customLivenessProbe Override default liveness probe {}
jobservice.customReadinessProbe Override default readiness probe {}
jobservice.extraEnvVars Array containing extra env vars []
jobservice.extraEnvVarsCM ConfigMap containing extra env vars ""
jobservice.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
jobservice.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
jobservice.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
jobservice.hostAliases Specify hostAliases for the Pod to use []
jobservice.initContainers Add additional init containers to the pod (evaluated as a template) []
jobservice.sidecars Attach additional containers to the pod (evaluated as a template) []
jobservice.resources.limits The resources limits for the container {}
jobservice.resources.requests The requested resources for the container {}
jobservice.podAffinityPreset Harbor Jobservice Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
jobservice.podAntiAffinityPreset Harbor Jobservice Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
jobservice.nodeAffinityPreset.type Harbor Jobservice Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
jobservice.nodeAffinityPreset.key Harbor Jobservice Node label key to match. Ignored if affinity is set. ""
jobservice.nodeAffinityPreset.values Harbor Jobservice Node label values to match. Ignored if affinity is set. []
jobservice.affinity Harbor Jobservice Affinity for pod assignment {}
jobservice.nodeSelector Harbor Jobservice Node labels for pod assignment {}
jobservice.tolerations Harbor Jobservice Tolerations for pod assignment []
jobservice.podLabels Add additional labels to the pod (evaluated as a template) {}
jobservice.podAnnotations Annotations to add to the jobservice pod {}
jobservice.secret Secret used when the job service communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars. ""
jobservice.automountServiceAccountToken Automount service account token false
jobservice.livenessProbe.enabled Enable livenessProbe true
jobservice.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
jobservice.livenessProbe.periodSeconds Period seconds for livenessProbe 10
jobservice.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
jobservice.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
jobservice.livenessProbe.successThreshold Success threshold for livenessProbe 1
jobservice.readinessProbe.enabled Enable readinessProbe true
jobservice.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
jobservice.readinessProbe.periodSeconds Period seconds for readinessProbe 10
jobservice.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
jobservice.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
jobservice.readinessProbe.successThreshold Success threshold for readinessProbe 1
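As a sketch, the jobservice scheduling and probe parameters above can be combined in a custom values file. The parameter names come from the table; the node label, architecture value and timings below are illustrative only:

```yaml
# my-values.yaml - illustrative overrides for the jobservice parameters above
jobservice:
  nodeAffinityPreset:
    type: soft                # prefer, but do not require, matching nodes
    key: kubernetes.io/arch   # example node label; adjust to your cluster
    values:
      - amd64
  livenessProbe:
    enabled: true
    initialDelaySeconds: 60   # example: allow more time in slow-starting environments
    periodSeconds: 10
  readinessProbe:
    enabled: true
    initialDelaySeconds: 60
```

Apply it with `helm install my-release bitnami/harbor -f my-values.yaml`.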

Harbor Registry Parameters

Name Description Value
registryImage.registry Registry for registry image docker.io
registryImage.repository Repository for registry image bitnami/harbor-registry
registryImage.tag Tag for registry image 2.3.2-debian-10-r2
registryImage.pullPolicy Harbor Registry image pull policy IfNotPresent
registryImage.pullSecrets Specify docker-registry secret names as an array []
registryImage.debug Specify if debug logs should be enabled false
registryctlImage.registry Registry for registryctl image docker.io
registryctlImage.repository Repository for registryctl controller image bitnami/harbor-registryctl
registryctlImage.tag Tag for registryctl controller image 2.3.2-debian-10-r2
registryctlImage.pullPolicy Harbor Registryctl image pull policy IfNotPresent
registryctlImage.pullSecrets Specify docker-registry secret names as an array []
registryctlImage.debug Specify if debug logs should be enabled false
registry.replicas The replica count 1
registry.updateStrategy.type The update strategy for deployments with persistent volumes: RollingUpdate or Recreate. Set it as Recreate when RWM for volumes isn't supported RollingUpdate
registry.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be generated automatically ""
registry.server.command Override default container command (useful when using custom images) []
registry.server.args Override default container args (useful when using custom images) []
registry.server.extraEnvVars Array containing extra env vars []
registry.server.extraEnvVarsCM ConfigMap containing extra env vars ""
registry.server.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
registry.server.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
registry.server.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
registry.server.resources.limits The resources limits for the container {}
registry.server.resources.requests The requested resources for the container {}
registry.server.livenessProbe.enabled Enable livenessProbe true
registry.server.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 10
registry.server.livenessProbe.periodSeconds Period seconds for livenessProbe 10
registry.server.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
registry.server.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
registry.server.livenessProbe.successThreshold Success threshold for livenessProbe 1
registry.server.readinessProbe.enabled Enable readinessProbe true
registry.server.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 10
registry.server.readinessProbe.periodSeconds Period seconds for readinessProbe 10
registry.server.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
registry.server.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
registry.server.readinessProbe.successThreshold Success threshold for readinessProbe 1
registry.server.customLivenessProbe Override default liveness probe {}
registry.server.customReadinessProbe Override default readiness probe {}
registry.controller.command Override default container command (useful when using custom images) []
registry.controller.args Override default container args (useful when using custom images) []
registry.controller.extraEnvVars Array containing extra env vars []
registry.controller.extraEnvVarsCM ConfigMap containing extra env vars ""
registry.controller.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
registry.controller.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
registry.controller.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
registry.controller.resources.limits The resources limits for the container {}
registry.controller.resources.requests The requested resources for the container {}
registry.controller.livenessProbe.enabled Enable livenessProbe true
registry.controller.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 10
registry.controller.livenessProbe.periodSeconds Period seconds for livenessProbe 10
registry.controller.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
registry.controller.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
registry.controller.livenessProbe.successThreshold Success threshold for livenessProbe 1
registry.controller.readinessProbe.enabled Enable readinessProbe true
registry.controller.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 10
registry.controller.readinessProbe.periodSeconds Period seconds for readinessProbe 10
registry.controller.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
registry.controller.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
registry.controller.readinessProbe.successThreshold Success threshold for readinessProbe 1
registry.controller.customLivenessProbe Override default liveness probe {}
registry.controller.customReadinessProbe Override default readiness probe {}
registry.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
registry.hostAliases Specify hostAliases for the Pod to use []
registry.initContainers Add additional init containers to the pod (evaluated as a template) []
registry.sidecars Attach additional containers to the pod (evaluated as a template) []
registry.podAffinityPreset Harbor Registry Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
registry.podAntiAffinityPreset Harbor Registry Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
registry.nodeAffinityPreset.type Harbor Registry Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
registry.nodeAffinityPreset.key Harbor Registry Node label key to match. Ignored if affinity is set. ""
registry.nodeAffinityPreset.values Harbor Registry Node label values to match. Ignored if affinity is set. []
registry.affinity Harbor Registry Affinity for pod assignment {}
registry.nodeSelector Harbor Registry Node labels for pod assignment {}
registry.tolerations Harbor Registry Tolerations for pod assignment []
registry.podLabels Add additional labels to the pod (evaluated as a template) {}
registry.podAnnotations Annotations to add to the registry pod {}
registry.automountServiceAccountToken Automount service account token false
registry.secret Secret used to secure the upload state between the client and the registry storage backend. See: https://github.com/docker/distribution/blob/master/docs/configuration.md ""
registry.relativeurls Make the registry return relative URLs in Location headers. The client is responsible for resolving the correct URL. false
registry.credentials.username The username for accessing the registry instance, which is hosted in htpasswd auth mode. See the official docs for more details. harbor_registry_user
registry.credentials.password The password for accessing the registry instance, which is hosted in htpasswd auth mode. See the official docs for more details. It is suggested that you update this value before installation. harbor_registry_password
registry.credentials.htpasswd The content of the htpasswd file, based on the values of registry.credentials.username and registry.credentials.password. Helm currently does not support bcrypt in template scripts, so if the credentials are updated you need to regenerate this value manually (e.g. with htpasswd -nbB) harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m
registry.middleware.enabled Middleware adds support for a CDN between the backend storage and the docker pull recipient false
registry.middleware.type CDN type for the middleware cloudFront
registry.middleware.cloudFront.baseurl CloudFront CDN settings: Base URL example.cloudfront.net
registry.middleware.cloudFront.keypairid CloudFront CDN settings: Keypair ID KEYPAIRID
registry.middleware.cloudFront.duration CloudFront CDN settings: Duration 3000s
registry.middleware.cloudFront.ipfilteredby CloudFront CDN settings: IP filters none
registry.middleware.cloudFront.privateKeySecret CloudFront CDN settings: Secret name with the private key my-secret
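A minimal sketch combining the registry credential and CloudFront middleware parameters above. All CloudFront values below are placeholders, and the secret named must already hold your CloudFront private key:

```yaml
# Illustrative registry overrides; CloudFront values are placeholders
registry:
  credentials:
    username: harbor_registry_user
    password: change-me-before-install   # update before installation
  relativeurls: true                     # return relative URLs in Location headers
  middleware:
    enabled: true
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      privateKeySecret: my-secret        # secret containing the CloudFront private key
```

Note that if you change the credentials, registry.credentials.htpasswd must be regenerated to match, as described in the table above.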

ChartMuseum Parameters

Name Description Value
chartMuseumImage.registry Registry for ChartMuseum image docker.io
chartMuseumImage.repository Repository for ChartMuseum image bitnami/chartmuseum
chartMuseumImage.tag Tag for ChartMuseum image 0.13.1-debian-10-r149
chartMuseumImage.pullPolicy ChartMuseum image pull policy IfNotPresent
chartMuseumImage.pullSecrets Specify docker-registry secret names as an array []
chartMuseumImage.debug Specify if debug logs should be enabled false
chartmuseum.enabled Enable ChartMuseum true
chartmuseum.command Override default container command (useful when using custom images) []
chartmuseum.args Override default container args (useful when using custom images) []
chartmuseum.replicas Number of ChartMuseum replicas 1
chartmuseum.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
chartmuseum.useRedisCache Specify if ChartMuseum will use the Redis cache true
chartmuseum.absoluteUrl Specify an absolute URL for ChartMuseum registry false
chartmuseum.chartRepoName Specify the endpoint for the chartmuseum registry. Only applicable if chartmuseum.absoluteUrl is true chartsRepo
chartmuseum.depth Support for multitenancy. See the ChartMuseum documentation for more info 1
chartmuseum.logJson Print logs in JSON format false
chartmuseum.disableMetrics Disable prometheus metrics exposure false
chartmuseum.disableApi Disable all the routes prefixed with /api false
chartmuseum.disableStatefiles Disable use of index-cache.yaml false
chartmuseum.allowOverwrite Allow chart versions to be re-uploaded without force querystring true
chartmuseum.anonymousGet Allow anonymous GET operations false
chartmuseum.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be generated automatically ""
chartmuseum.contextPath Set the base context path for ChartMuseum ""
chartmuseum.indexLimit Limit the number of parallel indexes for ChartMuseum ""
chartmuseum.chartPostFormFieldName Form field which will be queried for the chart file content ""
chartmuseum.provPostFormFieldName Form field which will be queried for the provenance file content ""
chartmuseum.maxStorageObjects Maximum storage objects ""
chartmuseum.maxUploadSize Maximum upload size ""
chartmuseum.storageTimestampTolerance Timestamp tolerance size 1s
chartmuseum.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
chartmuseum.customLivenessProbe Override default liveness probe {}
chartmuseum.customReadinessProbe Override default readiness probe {}
chartmuseum.extraEnvVars Array containing extra env vars []
chartmuseum.extraEnvVarsCM ConfigMap containing extra env vars ""
chartmuseum.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
chartmuseum.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
chartmuseum.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
chartmuseum.hostAliases Specify hostAliases for the Pod to use []
chartmuseum.initContainers Add additional init containers to the pod (evaluated as a template) []
chartmuseum.sidecars Attach additional containers to the pod (evaluated as a template) []
chartmuseum.resources.limits The resources limits for the container {}
chartmuseum.resources.requests The requested resources for the container {}
chartmuseum.podAffinityPreset ChartMuseum Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
chartmuseum.podAntiAffinityPreset ChartMuseum Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
chartmuseum.nodeAffinityPreset.type ChartMuseum Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
chartmuseum.nodeAffinityPreset.key ChartMuseum Node label key to match. Ignored if affinity is set. ""
chartmuseum.nodeAffinityPreset.values ChartMuseum Node label values to match. Ignored if affinity is set. []
chartmuseum.affinity ChartMuseum Affinity for pod assignment {}
chartmuseum.nodeSelector ChartMuseum Node labels for pod assignment {}
chartmuseum.tolerations ChartMuseum Tolerations for pod assignment []
chartmuseum.podLabels Add additional labels to the pod (evaluated as a template) {}
chartmuseum.podAnnotations Annotations to add to the chartmuseum pod {}
chartmuseum.automountServiceAccountToken Automount service account token false
chartmuseum.livenessProbe.enabled Enable livenessProbe true
chartmuseum.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 30
chartmuseum.livenessProbe.periodSeconds Period seconds for livenessProbe 10
chartmuseum.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 20
chartmuseum.livenessProbe.failureThreshold Failure threshold for livenessProbe 10
chartmuseum.livenessProbe.successThreshold Success threshold for livenessProbe 1
chartmuseum.readinessProbe.enabled Enable readinessProbe true
chartmuseum.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
chartmuseum.readinessProbe.periodSeconds Period seconds for readinessProbe 10
chartmuseum.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 20
chartmuseum.readinessProbe.failureThreshold Failure threshold for readinessProbe 10
chartmuseum.readinessProbe.successThreshold Success threshold for readinessProbe 1
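The ChartMuseum behavior flags above can be set together in a values file. This is only a sketch using parameter names from the table; the endpoint name reuses the table's default:

```yaml
# Illustrative ChartMuseum overrides
chartmuseum:
  enabled: true
  absoluteUrl: true
  chartRepoName: chartsRepo   # endpoint used only when absoluteUrl is true
  depth: 1                    # multitenancy depth
  allowOverwrite: true        # allow re-uploading existing chart versions
  anonymousGet: false         # require auth for GET operations
  logJson: true               # emit logs in JSON format
```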

Clair Parameters

Name Description Value
clairImage.registry Registry for clair image docker.io
clairImage.repository Repository for clair image bitnami/harbor-clair
clairImage.tag Tag for clair image 2.3.2-debian-10-r2
clairImage.pullPolicy Harbor clair image pull policy IfNotPresent
clairImage.pullSecrets Specify docker-registry secret names as an array []
clairImage.debug Specify if debug logs should be enabled false
clairAdapterImage.registry Registry for clair adapter image docker.io
clairAdapterImage.repository Repository for clair adapter image bitnami/harbor-adapter-clair
clairAdapterImage.tag Tag for clair adapter image 2.3.2-debian-10-r2
clairAdapterImage.pullPolicy Harbor clair adapter image pull policy IfNotPresent
clairAdapterImage.pullSecrets Specify docker-registry secret names as an array []
clairAdapterImage.debug Specify if debug logs should be enabled false
clair.enabled Enable the Clair scanner and add it as an additional interrogation service, following https://goharbor.io/docs/latest/administration/vulnerability-scanning/pluggable-scanners false
clair.replicas The replica count 1
clair.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be generated automatically ""
clair.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
clair.httpProxy The HTTP proxy used to update the vulnerabilities database from the Internet ""
clair.httpsProxy The HTTPS proxy used to update the vulnerabilities database from the Internet ""
clair.updatersInterval The interval of the Clair updaters (in hours); set to 0 to disable 12
clair.adapter.command Override default container command (useful when using custom images) []
clair.adapter.args Override default container args (useful when using custom images) []
clair.adapter.extraEnvVars Array containing extra env vars []
clair.adapter.extraEnvVarsCM ConfigMap containing extra env vars ""
clair.adapter.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
clair.adapter.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
clair.adapter.livenessProbe.enabled Enable livenessProbe true
clair.adapter.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
clair.adapter.livenessProbe.periodSeconds Period seconds for livenessProbe 10
clair.adapter.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
clair.adapter.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
clair.adapter.livenessProbe.successThreshold Success threshold for livenessProbe 1
clair.adapter.readinessProbe.enabled Enable readinessProbe true
clair.adapter.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
clair.adapter.readinessProbe.periodSeconds Period seconds for readinessProbe 10
clair.adapter.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
clair.adapter.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
clair.adapter.readinessProbe.successThreshold Success threshold for readinessProbe 1
clair.adapter.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
clair.adapter.customLivenessProbe Override default liveness probe {}
clair.adapter.customReadinessProbe Override default readiness probe {}
clair.adapter.resources.limits The resources limits for the container {}
clair.adapter.resources.requests The requested resources for the container {}
clair.server.command Override default container command (useful when using custom images) []
clair.server.args Override default container args (useful when using custom images) []
clair.server.livenessProbe.enabled Enable livenessProbe true
clair.server.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
clair.server.livenessProbe.periodSeconds Period seconds for livenessProbe 10
clair.server.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
clair.server.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
clair.server.livenessProbe.successThreshold Success threshold for livenessProbe 1
clair.server.readinessProbe.enabled Enable readinessProbe true
clair.server.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
clair.server.readinessProbe.periodSeconds Period seconds for readinessProbe 10
clair.server.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
clair.server.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
clair.server.readinessProbe.successThreshold Success threshold for readinessProbe 1
clair.server.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
clair.server.customLivenessProbe Override default liveness probe {}
clair.server.customReadinessProbe Override default readiness probe {}
clair.server.extraEnvVars Array containing extra env vars []
clair.server.extraEnvVarsCM ConfigMap containing extra env vars ""
clair.server.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
clair.server.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
clair.server.resources.limits The resources limits for the container {}
clair.server.resources.requests The requested resources for the container {}
clair.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
clair.hostAliases Specify hostAliases for the Pod to use []
clair.initContainers Add additional init containers to the pod (evaluated as a template) []
clair.sidecars Attach additional containers to the pod (evaluated as a template) []
clair.podAffinityPreset Harbor Clair Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
clair.podAntiAffinityPreset Harbor Clair Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
clair.nodeAffinityPreset.type Harbor Clair Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
clair.nodeAffinityPreset.key Harbor Clair Node label key to match. Ignored if affinity is set. ""
clair.nodeAffinityPreset.values Harbor Clair Node label values to match. Ignored if affinity is set. []
clair.affinity Harbor Clair Affinity for pod assignment {}
clair.nodeSelector Harbor Clair Node labels for pod assignment {}
clair.tolerations Harbor Clair Tolerations for pod assignment []
clair.podLabels Add additional labels to the pod (evaluated as a template) {}
clair.podAnnotations Annotations to add to the clair pod {}
clair.automountServiceAccountToken Automount service account token false
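For example, enabling Clair behind a corporate proxy could look like the following sketch. The proxy URL is hypothetical, and the resource figures are illustrative, not recommendations:

```yaml
# Illustrative Clair overrides
clair:
  enabled: true
  httpsProxy: http://proxy.internal:3128   # hypothetical proxy for DB updates
  updatersInterval: 12                     # hours; 0 disables updaters
  adapter:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
```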

Notary Parameters

Name Description Value
notaryServerImage.registry Registry for notary server image docker.io
notaryServerImage.repository Repository for notary server image bitnami/harbor-notary-server
notaryServerImage.tag Tag for notary server image 2.3.2-debian-10-r2
notaryServerImage.pullPolicy Harbor notary server image pull policy IfNotPresent
notaryServerImage.pullSecrets Specify docker-registry secret names as an array []
notaryServerImage.debug Specify if debug logs should be enabled false
notarySignerImage.registry Registry for notary signer images docker.io
notarySignerImage.repository Repository for notary signer image bitnami/harbor-notary-signer
notarySignerImage.tag Tag for notary signer image 2.3.2-debian-10-r2
notarySignerImage.pullPolicy Harbor notary signer image pull policy IfNotPresent
notarySignerImage.pullSecrets Specify docker-registry secret names as an array []
notarySignerImage.debug Specify if debug logs should be enabled false
notary.enabled Enable Notary true
notary.server.command Override default container command (useful when using custom images) []
notary.server.args Override default container args (useful when using custom images) []
notary.server.replicas The replica count 1
notary.server.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
notary.server.extraEnvVars Array containing extra env vars []
notary.server.extraEnvVarsCM ConfigMap containing extra env vars ""
notary.server.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
notary.server.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
notary.server.hostAliases HostAliases to add to the deployment []
notary.server.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
notary.server.resources.limits The resources limits for the container {}
notary.server.resources.requests The requested resources for the container {}
notary.server.livenessProbe.enabled Enable livenessProbe true
notary.server.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 10
notary.server.livenessProbe.periodSeconds Period seconds for livenessProbe 10
notary.server.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
notary.server.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
notary.server.livenessProbe.successThreshold Success threshold for livenessProbe 1
notary.server.readinessProbe.enabled Enable readinessProbe true
notary.server.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 10
notary.server.readinessProbe.periodSeconds Period seconds for readinessProbe 10
notary.server.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
notary.server.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
notary.server.readinessProbe.successThreshold Success threshold for readinessProbe 1
notary.server.customLivenessProbe Override default liveness probe {}
notary.server.customReadinessProbe Override default readiness probe {}
notary.server.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
notary.server.initContainers Add additional init containers to the pod (evaluated as a template) []
notary.server.sidecars Attach additional containers to the pod (evaluated as a template) []
notary.server.podAffinityPreset Notary server Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
notary.server.podAntiAffinityPreset Notary server Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
notary.server.nodeAffinityPreset.type Notary server Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
notary.server.nodeAffinityPreset.key Notary server Node label key to match. Ignored if affinity is set. ""
notary.server.nodeAffinityPreset.values Notary server Node label values to match. Ignored if affinity is set. []
notary.server.affinity Notary server Affinity for pod assignment {}
notary.server.nodeSelector Notary server Node labels for pod assignment {}
notary.server.tolerations Notary server Tolerations for pod assignment []
notary.server.podLabels Add additional labels to the pod (evaluated as a template) {}
notary.server.podAnnotations Annotations to add to the notary pod {}
notary.server.automountServiceAccountToken Automount service account token false
notary.signer.command Override default container command (useful when using custom images) []
notary.signer.args Override default container args (useful when using custom images) []
notary.signer.replicas The replica count 1
notary.signer.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached RollingUpdate
notary.signer.extraEnvVars Array containing extra env vars []
notary.signer.extraEnvVarsCM ConfigMap containing extra env vars ""
notary.signer.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
notary.signer.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
notary.signer.resources.limits The resources limits for the container {}
notary.signer.resources.requests The requested resources for the container {}
notary.signer.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
notary.signer.hostAliases HostAliases to add to the deployment []
notary.signer.initContainers Add additional init containers to the pod (evaluated as a template) []
notary.signer.sidecars Attach additional containers to the pod (evaluated as a template) []
notary.signer.podAffinityPreset Notary signer Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
notary.signer.podAntiAffinityPreset Notary signer Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
notary.signer.nodeAffinityPreset.type Notary signer Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
notary.signer.nodeAffinityPreset.key Notary signer Node label key to match. Ignored if affinity is set. ""
notary.signer.nodeAffinityPreset.values Notary signer Node label values to match. Ignored if affinity is set. []
notary.signer.affinity Notary signer Affinity for pod assignment {}
notary.signer.nodeSelector Notary signer Node labels for pod assignment {}
notary.signer.tolerations Notary signer Tolerations for pod assignment []
notary.signer.podLabels Add additional labels to the pod (evaluated as a template) {}
notary.signer.podAnnotations Annotations to add to the notary.signer pod {}
notary.signer.livenessProbe.enabled Enable livenessProbe true
notary.signer.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 10
notary.signer.livenessProbe.periodSeconds Period seconds for livenessProbe 10
notary.signer.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
notary.signer.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
notary.signer.livenessProbe.successThreshold Success threshold for livenessProbe 1
notary.signer.readinessProbe.enabled Enable readinessProbe true
notary.signer.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 10
notary.signer.readinessProbe.periodSeconds Period seconds for readinessProbe 10
notary.signer.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
notary.signer.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
notary.signer.readinessProbe.successThreshold Success threshold for readinessProbe 1
notary.signer.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
notary.signer.customLivenessProbe Override default liveness probe {}
notary.signer.customReadinessProbe Override default readiness probe {}
notary.signer.automountServiceAccountToken Automount service account token false
notary.secretName Fill in the name of a Kubernetes secret if you want to use your own TLS certificate authority, certificate and private key for Notary communications. The secret must contain keys named notary-signer-ca.crt, notary-signer.key and notary-signer.crt, containing the CA certificate, the private key and the certificate, respectively. They will be generated if this value is not set. ""
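A sketch of referencing your own Notary TLS material via the parameters above. The secret name is hypothetical; it must already exist in the namespace and contain the three keys listed in the table (notary-signer-ca.crt, notary-signer.key, notary-signer.crt):

```yaml
# Illustrative Notary overrides
notary:
  enabled: true
  secretName: my-notary-tls   # hypothetical pre-created secret with the expected keys
  server:
    replicas: 1
  signer:
    replicas: 1
```

If secretName is left empty, the chart generates the CA, certificate and key itself, as noted above.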

Harbor Trivy Parameters

Name Description Value
trivyImage.registry Registry for trivy image docker.io
trivyImage.repository Repository for trivy image bitnami/harbor-adapter-trivy
trivyImage.tag Tag for trivy image 2.3.2-debian-10-r2
trivyImage.pullPolicy Harbor trivy image pull policy IfNotPresent
trivyImage.pullSecrets Specify docker-registry secret names as an array []
trivyImage.debug Specify if debug logs should be enabled false
trivy.enabled Enable Trivy true
trivy.replicas The replica count 1
trivy.command Override default container command (useful when using custom images) []
trivy.args Override default container args (useful when using custom images) []
trivy.tls.existingSecret Name of a secret with the certificates for internal TLS access. Requires internalTLS.enabled to be set to true. If this value is not set, it will be generated automatically ""
trivy.updateStrategy.type Update strategy RollingUpdate
trivy.debugMode The flag to enable Trivy debug mode false
trivy.vulnType Comma-separated list of vulnerability types. Possible values os and library. os,library
trivy.automountServiceAccountToken Automount service account token in the Trivy containers false
trivy.severity Comma-separated list of severities to be checked UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
trivy.ignoreUnfixed The flag to display only fixed vulnerabilities false
trivy.insecure The flag to skip verifying registry certificate false
trivy.gitHubToken The GitHub access token to download Trivy DB ""
trivy.skipUpdate The flag to disable Trivy DB downloads from GitHub false
trivy.cacheDir Directory to store the cache /bitnami/harbor-adapter-trivy/.cache
trivy.resources The resources to allocate for container {}
trivy.extraEnvVars Array containing extra env vars []
trivy.extraEnvVarsCM ConfigMap containing extra env vars ""
trivy.extraEnvVarsSecret Secret containing extra env vars (in case of sensitive data) ""
trivy.extraVolumes Array of extra volumes to be added to the deployment (evaluated as template). Requires setting extraVolumeMounts []
trivy.extraVolumeMounts Array of extra volume mounts to be added to the container (evaluated as template). Normally used with extraVolumes. []
trivy.hostAliases Specify hostAliases for the Pod to use []
trivy.initContainers Add additional init containers to the pod (evaluated as a template) []
trivy.sidecars Attach additional containers to the pod (evaluated as a template) []
trivy.livenessProbe.enabled Enable livenessProbe true
trivy.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 20
trivy.livenessProbe.periodSeconds Period seconds for livenessProbe 10
trivy.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
trivy.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
trivy.livenessProbe.successThreshold Success threshold for livenessProbe 1
trivy.readinessProbe.enabled Enable readinessProbe true
trivy.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 20
trivy.readinessProbe.periodSeconds Period seconds for readinessProbe 10
trivy.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
trivy.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
trivy.readinessProbe.successThreshold Success threshold for readinessProbe 1
trivy.lifecycleHooks LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template {}
trivy.customLivenessProbe Override default liveness probe {}
trivy.customReadinessProbe Override default readiness probe {}
trivy.podAffinityPreset Trivy Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard ""
trivy.podAntiAffinityPreset Trivy Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard soft
trivy.nodeAffinityPreset.type Trivy Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard ""
trivy.nodeAffinityPreset.key Trivy Node label key to match. Ignored if affinity is set. ""
trivy.nodeAffinityPreset.values Trivy Node label values to match. Ignored if affinity is set. []
trivy.affinity Trivy Affinity for pod assignment {}
trivy.nodeSelector Trivy Node labels for pod assignment {}
trivy.tolerations Trivy Tolerations for pod assignment []
trivy.podLabels Add additional labels to the pod (evaluated as a template) {}
trivy.podAnnotations Annotations to add to the trivy pod {}
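As an illustration of how the parameters above combine, the Trivy scanner could be tuned through a values file. This is only a sketch: the keys come from the table above, but the severity list and resource figures shown are example choices, not recommendations.

```yaml
# Illustrative values.yaml fragment for the Trivy adapter,
# using parameters listed in the table above.
trivy:
  enabled: true
  # Only report vulnerabilities of high impact that already have a fix
  severity: "HIGH,CRITICAL"
  ignoreUnfixed: true
  # Keep downloading the Trivy DB from GitHub (set to true for air-gapped setups)
  skipUpdate: false
  resources:          # example sizing only
    limits:
      memory: 512Mi
```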

PostgreSQL Parameters

Name Description Value
postgresql.enabled If external database is used, set it to false true
postgresql.nameOverride String to partially override common.names.fullname template with a string (will prepend the release name) ""
postgresql.postgresqlUsername Postgresql username postgres
postgresql.postgresqlPassword Postgresql password not-secure-database-password
postgresql.existingSecret Set Postgresql password via an existing secret ""
postgresql.postgresqlExtendedConf Extended runtime config parameters (appended to main or default configuration) {}
postgresql.replication.enabled Enable replicated postgresql false
postgresql.persistence.enabled Enable persistence for PostgreSQL true
postgresql.initdbScripts Initdb scripts to create Harbor databases {}
externalDatabase.host Host of the external database localhost
externalDatabase.user Existing username in the external db bn_harbor
externalDatabase.password Password for the above username ""
externalDatabase.port Port of the external database 5432
externalDatabase.sslmode External database ssl mode disable
externalDatabase.coreDatabase External database name for core ""
externalDatabase.clairDatabase External database name for clair ""
externalDatabase.clairUsername External database username for clair ""
externalDatabase.clairPassword External database password for clair ""
externalDatabase.notaryServerDatabase External database name for notary server ""
externalDatabase.notaryServerUsername External database username for notary server ""
externalDatabase.notaryServerPassword External database password for notary server ""
externalDatabase.notarySignerDatabase External database name for notary signer ""
externalDatabase.notarySignerUsername External database username for notary signer ""
externalDatabase.notarySignerPassword External database password for notary signer ""
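Putting the external database parameters together, pointing Harbor at an existing PostgreSQL instance could look like the following sketch. Hostname, password and database name are placeholders; replace them with your own values.

```yaml
# Sketch: use an external PostgreSQL instead of the bundled subchart
postgresql:
  enabled: false
externalDatabase:
  host: postgres.example.com   # example hostname
  port: 5432
  user: bn_harbor
  password: my-db-password     # example only
  sslmode: require
  coreDatabase: harbor_core    # example database name
```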

Redis™ Parameters

Name Description Value
redis.enabled If external redis is used, set it to false true
redis.nameOverride String to partially override common.names.fullname template with a string (will prepend the release name) ""
redis.auth.enabled Use redis password false
redis.auth.password Redis password ""
redis.architecture Cluster settings standalone
redis.master.persistence.enabled Enable persistence for master Redis true
redis.replica.persistence.enabled Enable persistence for replica Redis true
externalRedis.host Host of the external redis localhost
externalRedis.port Port of the external redis 6379
externalRedis.sentinel.enabled If external redis with sentinel is used, set it to true false
externalRedis.sentinel.masterSet Name of sentinel masterSet if sentinel is used mymaster
externalRedis.sentinel.hosts Sentinel hosts and ports in the format ""
externalRedis.password Password for the external redis ""
externalRedis.coreDatabaseIndex Index for core database 0
externalRedis.jobserviceDatabaseIndex Index for jobservice database 1
externalRedis.registryDatabaseIndex Index for registry database 2
externalRedis.chartmuseumDatabaseIndex Index for chartmuseum database 3
externalRedis.clairAdapterDatabaseIndex Index for clair adapter database 4
externalRedis.trivyAdapterDatabaseIndex Index for trivy adapter database 5
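Similarly, an external Redis™ can be wired in with the parameters above. A minimal sketch, with placeholder host and password:

```yaml
# Sketch: use an external Redis instead of the bundled subchart
redis:
  enabled: false
externalRedis:
  host: redis.example.com       # example hostname
  port: 6379
  password: my-redis-password   # example only
  coreDatabaseIndex: 0
  jobserviceDatabaseIndex: 1
  registryDatabaseIndex: 2
```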

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install my-release \
    --set harborAdminPassword=password \
    bitnami/harbor

The above command sets the Harbor administrator account password to password.

NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

$ helm install my-release -f values.yaml bitnami/harbor

Configuration and installation details

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, significant changes are introduced, or critical vulnerabilities exist.

Configure how to expose the Harbor service:

  • Ingress: The ingress controller must be installed in the Kubernetes cluster. Note: if TLS is disabled, the port must be included in the command when pulling/pushing images. Refer to issue #5291 for details.
  • ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
  • NodePort: Exposes the service on each Node's IP at a static port (the NodePort). You'll be able to contact the NodePort service, from outside the cluster, by requesting NodeIP:NodePort.
  • LoadBalancer: Exposes the service externally using a cloud provider's load balancer.
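As an example of the first option, exposing Harbor through an Ingress can be sketched with the fragment below. ingress.enabled and ingress.hosts.core are chart values referenced elsewhere in this document; the domain is a placeholder.

```yaml
# Sketch: expose Harbor through an Ingress controller
ingress:
  enabled: true
  hosts:
    core: harbor.example.com   # example domain
```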

Sidecars and Init Containers

If you have a need for additional containers to run within the same pod as any of the Harbor components (e.g. an additional metrics or logging exporter), you can do so via the sidecars config parameter inside each component subsection. Simply define your container according to the Kubernetes container spec.

core:
  sidecars:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      ports:
        - name: portname
          containerPort: 1234

Similarly, you can add extra init containers using the initContainers parameter.

core:
  initContainers:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      ports:
        - name: portname
          containerPort: 1234

Adding extra environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property inside each component subsection.

core:
  extraEnvVars:
    - name: LOG_LEVEL
      value: error

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values inside each component subsection.
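For instance, a ConfigMap holding the variables can be created and then referenced from the chart values. The ConfigMap name harbor-extra-env below is hypothetical; any name works as long as the extraEnvVarsCM value matches it.

```yaml
# Hypothetical ConfigMap holding extra environment variables
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-extra-env
data:
  LOG_LEVEL: error
---
# Corresponding values.yaml fragment (shown here as comments):
#
# core:
#   extraEnvVarsCM: harbor-extra-env
```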

Configure the external URL:

The external URL for Harbor core service is used to:

  1. populate the docker/helm commands shown on the portal
  2. populate the token service URL returned to the docker/notary clients

Format: protocol://domain[:port]. Usually:

  • if you expose the service via Ingress, the domain should be the value of ingress.hosts.core
  • if you expose the service via ClusterIP, the domain should be the value of service.clusterIP.name
  • if you expose the service via NodePort, the domain should be the IP address of one Kubernetes node
  • if you expose the service via LoadBalancer, set the domain to your own domain name and add a CNAME record to map the domain name to the one you got from the cloud provider

If Harbor is deployed behind a proxy, set it to the URL of the proxy.
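Assuming the chart exposes this setting through an externalURL value (as in the upstream harbor-helm chart; check values.yaml for the exact key), the configuration would be a one-liner:

```yaml
# Sketch: external URL in the protocol://domain[:port] format
externalURL: https://harbor.example.com   # example domain
```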

Configure data persistence:

  • Disable: The data does not survive the termination of a pod.
  • Persistent Volume Claim (default): A default StorageClass is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in storageClass, or set existingClaim if you already have existing persistent volumes to use.
  • External Storage (only for images and charts): For images and charts, the following external storage options are supported: azure, gcs, s3, swift and oss.
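As an illustration of the external storage option, switching image/chart storage to S3 could look like the fragment below. The persistence.imageChartStorage.* keys are an assumption that mirrors the upstream goharbor chart convention; verify the exact names in this chart's values.yaml before using them.

```yaml
# Sketch: store images and charts in S3 (key names assumed from upstream harbor-helm)
persistence:
  enabled: true
  resourcePolicy: keep
  imageChartStorage:
    type: s3
    s3:
      region: us-east-1          # example values
      bucket: my-harbor-bucket
```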

Configure the secrets:

  • Secret keys: Secret keys are used for secure communication between components. Fill core.secret, jobservice.secret and registry.secret to configure.
  • Certificates: Used for token encryption/decryption. Fill core.secretName to configure.

Secrets and certificates must be set up to avoid changes on every Helm upgrade (see: #107).
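A minimal sketch pinning these secrets so they stay stable across Helm upgrades. The values shown are placeholders; generate your own random strings.

```yaml
# Sketch: fix the component secrets so `helm upgrade` does not regenerate them
core:
  secret: "changeme-core-secret"       # example only; use a random string
jobservice:
  secret: "changeme-job-secret"        # example only
registry:
  secret: "changeme-registry-secret"   # example only
```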

Setting Pod's affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod's affinity in the kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.
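For example, core pods could be forced onto different nodes while Trivy is softly steered towards a labelled node pool. The node label and value below are illustrative:

```yaml
# Sketch: affinity presets per component
core:
  podAntiAffinityPreset: hard   # never co-schedule two core pods on one node
trivy:
  nodeAffinityPreset:
    type: soft
    key: kubernetes.io/arch     # example node label
    values:
      - amd64
```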

Adjust permissions of persistent volume mountpoint

As the images run as non-root by default, it is necessary to adjust the ownership of the persistent volumes so that the containers can write data into them.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
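For example:

```yaml
# Enable the init container that fixes volume ownership before the
# main containers start
volumePermissions:
  enabled: true
```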

Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.

Upgrading

NOTE: If you are upgrading an installation that contains a large amount of data, it is recommended to disable the liveness/readiness probes, as the migration can take a substantial amount of time.

To 10.0.0

This major release updates the Redis™ subchart to its newest major version, 14.0.0, which contains breaking changes. For more information on this subchart's major version and the steps needed to migrate your data from your previous release, please refer to the Redis™ upgrade notes.

To 9.7.0

This new version of the chart bumps the version of Harbor to 2.2.0, which deprecates the built-in Clair. If you still want to use Clair, set clair.enabled to true; the Clair scanner and the Harbor adapter will then be deployed. Follow these steps to add it as an additional interrogation service for Harbor.

Please note that Clair might be fully removed from this chart in future updates.

To 9.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the changes required to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

What changes were introduced in this major version?

  • Previous versions of this Helm Chart use apiVersion: v1 (installable by both Helm 2 and 3), this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • Move dependency information from the requirements.yaml to the Chart.yaml
  • After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts
  • This chart depends on PostgreSQL 10 instead of PostgreSQL 9. Apart from the changes described in this section, there are other major changes because the master/slave nomenclature was replaced by primary/readReplica. Here you can find more information about the changes introduced.

Considerations when upgrading to this version

  • Upgrading to this version using Helm v2 is not supported, as this version no longer supports Helm v2
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3
  • If you want to upgrade to this version from a previous one installed with Helm v3, you should reuse the PVC that holds the PostgreSQL data of your previous release. To do so, follow the instructions below (the following example assumes that the release name is harbor):

NOTE: Please, create a backup of your database before running any of those actions.

Export secrets and required values to update
$ export HARBOR_ADMIN_PASSWORD=$(kubectl get secret --namespace default harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)
$ export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default harbor-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
$ export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=harbor,app.kubernetes.io/name=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
Delete statefulsets

Delete PostgreSQL statefulset. Notice the option --cascade=false:

$ kubectl delete statefulsets.apps harbor-postgresql --cascade=false
Upgrade the chart release
$ helm upgrade harbor bitnami/harbor \
    --set harborAdminPassword=$HARBOR_ADMIN_PASSWORD \
    --set postgresql.postgresqlPassword=$POSTGRESQL_PASSWORD \
    --set postgresql.persistence.existingClaim=$POSTGRESQL_PVC
Force new statefulset to create a new pod for postgresql
$ kubectl delete pod harbor-postgresql-0

Finally, you should see the lines below in the PostgreSQL container logs:

$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
...
postgresql 08:05:12.59 INFO  ==> Deploying PostgreSQL with persisted data...
...

To 8.0.0

The Redis™ dependency version was bumped to the new major version 11.x.x, which introduced breaking changes regarding sentinel. By default, this chart does not use this feature, and hence no issues are expected between upgrades. You may refer to the Redis™ Upgrading Notes for further information.

To 7.0.0

This major version includes a major change in the PostgreSQL subchart labeling. Backwards compatibility from previous versions to this one is not guaranteed during the upgrade.

You can find more information about the changes in the PostgreSQL subchart and a way to workaround the helm upgrade issue in the "Upgrade to 9.0.0" section of the PostgreSQL README.

From 6.0.0 to 6.0.2

Due to an issue with Trivy volumeClaimTemplates, the upgrade needs to be done in two steps:

  • Upgrade the chart to 6.0.2 with trivy.enabled=false
$ helm upgrade my-release bitnami/harbor --version 6.0.2 --set trivy.enabled=false <REST OF THE UPGRADE PARAMETERS>
  • Execute a new upgrade setting trivy.enabled=true
$ helm upgrade my-release bitnami/harbor --set trivy.enabled=true <REST OF THE UPGRADE PARAMETERS>

To 6.0.0

The chart was changed to adapt to the common Bitnami chart standards. Now it includes common elements such as sidecar and init container support, custom commands, custom liveness/readiness probes, extra environment variables support, extra pod annotations and labels, among others. In addition, it adds a new Trivy deployment for image scanning.

No issues are expected between upgrades but please double check the updated parameter list as some of them could have been renamed. Please pay special attention to the following changes:

  • service.type=ingress is not allowed anymore. Instead, set the value ingress.enabled=true.
  • secretKey has been moved to core.secretKey.

To 4.0.0

PostgreSQL and Redis™ dependencies were updated to use the latest major versions, 8.x.x and 10.x.x respectively. These major versions do not include changes that should break backwards compatibility; check the links below for more information:

To 3.0.0

Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment). Also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly which has since been fixed to match the spec.

In c085d396a0, the apiVersion of the deployment resources was updated to apps/v1 in line with the Kubernetes API deprecations, resulting in compatibility breakage.

This major version signifies this change.

To 2.0.0

In this version, two major changes were performed:

For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. You may see errors like the following in the logs:

Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at containers@bitnami.com

INFO  ==> ** Starting PostgreSQL setup **
INFO  ==> Validating settings in POSTGRESQL_* env vars..
INFO  ==> Initializing PostgreSQL database...
INFO  ==> postgresql.conf file not detected. Generating it...
INFO  ==> pg_hba.conf file not detected. Generating it...
INFO  ==> Deploying PostgreSQL with persisted data...
INFO  ==> Configuring replication parameters
INFO  ==> Loading custom scripts...
INFO  ==> Enabling remote connections
INFO  ==> Stopping PostgreSQL...
INFO  ==> ** PostgreSQL setup finished! **

INFO  ==> ** Starting PostgreSQL **
  [1] FATAL:  database files are incompatible with server
  [1] DETAIL:  The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3.

In this case, you should migrate the data from the old PostgreSQL chart to the new one following an approach similar to that described in this section from the official documentation. Basically, create a database dump in the old chart, move and restore it in the new one.