Bitnami package for Harbor
Harbor is an open source trusted cloud-native registry to store, sign, and scan content. It adds functionalities like security, identity, and management to the open source Docker distribution.
TL;DR
helm install my-release oci://registry-1.docker.io/bitnamicharts/harbor
Looking to use Harbor in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.
Introduction
This Helm chart installs Harbor in a Kubernetes cluster.
This Helm chart has been developed based on goharbor/harbor-helm chart but includes some features common to the Bitnami chart library. For example, the following changes have been introduced:
- It is possible to pull all the required images from a private registry through the Global Docker image parameters.
- Redis® and PostgreSQL are managed as chart dependencies.
- Liveness and Readiness probes for all deployments are exposed in `values.yaml`.
- Uses new Helm chart label formatting.
- Uses Bitnami non-root container images by default.
- This chart supports the Harbor optional components.
Prerequisites
- Kubernetes 1.23+
- Helm 3.8.0+
- PV provisioner support in the underlying infrastructure
- ReadWriteMany volumes for deployment scaling
Installing the Chart
To install the chart with the release name my-release:
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/harbor
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
Configuration and installation details
Resource requests and limits
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the `resourcesPreset` values, which automatically set the `resources` section according to different presets. Check these presets in the bitnami/common chart. However, using `resourcesPreset` in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
Rolling vs Immutable tags
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
Prometheus metrics
This chart can be integrated with Prometheus by setting metrics.enabled to true. This will expose the Harbor native Prometheus port in both the containers and services. The services will also have the necessary annotations to be automatically scraped by Prometheus.
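For example, the metrics endpoints can be enabled with a values snippet like the following (a minimal sketch using the `metrics.enabled` value described above):

```yaml
# values.yaml (fragment): expose the Harbor native Prometheus metrics
metrics:
  enabled: true
```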
Prometheus requirements
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
Integration with Prometheus Operator
The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
Install the Bitnami Kube Prometheus helm chart for having the necessary CRDs and the Prometheus Operator.
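As a sketch, enabling both the metrics endpoints and the ServiceMonitor objects could look like this (it assumes the Prometheus Operator CRDs are already installed in the cluster):

```yaml
# values.yaml (fragment): metrics plus Prometheus Operator integration
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
```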
Configure how to expose Harbor core
You can expose Harbor core using two methods:
- An Ingress Controller: `exposureType` should be set to `ingress`.
  - An ingress controller must be installed in the Kubernetes cluster.
  - If TLS is disabled, the port must be included in the command when pulling/pushing images. Refer to issue #5291 for details.
- An NGINX Proxy: `exposureType` should be set to `proxy`. There are three ways to do so depending on the NGINX Proxy service type:
  - ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
  - NodePort: Exposes the service on each Node's IP at a static port (the NodePort). You'll be able to contact the NodePort service, from outside the cluster, by requesting `NodeIP:NodePort`.
  - LoadBalancer: Exposes the service externally using a cloud provider's load balancer.
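For instance, exposing Harbor through the NGINX proxy behind a cloud load balancer can be sketched with the following values (both parameters appear in the Traffic Exposure Parameters table below):

```yaml
# values.yaml (fragment): expose Harbor core via the NGINX proxy
exposureType: proxy
service:
  type: LoadBalancer
```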
Configure the external URL
The external URL for the Harbor core service is used to:

- populate the docker/helm commands shown on the portal
Format: `protocol://domain[:port]`. Usually:

- If exposing the Harbor core service via Ingress, the `domain` should be the value of `ingress.core.hostname`.
- If exposing Harbor core via NGINX proxy using a `ClusterIP` service type, the `domain` should be the value of `service.clusterIP`.
- If exposing Harbor core via NGINX proxy using a `NodePort` service type, the `domain` should be the IP address of one Kubernetes node.
- If exposing Harbor core via NGINX proxy using a `LoadBalancer` service type, set the `domain` as your own domain name and add a CNAME record to map the domain name to the one you got from the cloud provider.

If Harbor is deployed behind a proxy, set the external URL to the URL of the proxy.
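As an illustration, when exposing Harbor via Ingress the external URL usually matches the ingress hostname (`harbor.example.com` is a placeholder domain):

```yaml
# values.yaml (fragment): external URL aligned with the ingress hostname
externalURL: https://harbor.example.com
exposureType: ingress
ingress:
  core:
    hostname: harbor.example.com
```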
Update database schema
In order to update the database schema, the helm chart deploys a special Job that performs the migration. Enable this by setting the migration.enabled=true value.
This Job relies on helm hooks, so any upgrade operation will wait for this Job to succeed.
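A minimal sketch enabling the migration Job:

```yaml
# values.yaml (fragment): run the schema migration Job via helm hooks
migration:
  enabled: true
```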
Securing traffic using TLS
It is possible to configure TLS communication in the core, jobservice, portal, registry and trivy components by setting `internalTLS.enabled=true`. The chart allows two configuration options:

- Provide your own secrets for the Harbor components using the `*.tls.existingSecret` values (under the `core`, `jobservice`, `portal`, `registry` and `trivy` sections).
- Have the chart auto-generate the certificates. This is done when the `*.tls.existingSecret` values are not set.

Additionally, it is possible to add a custom certificate authority to each component's trust store. This is done using the `internalTLS.caBundleSecret` value with the name of a secret containing the corresponding `ca.crt` file.
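Putting it together, a sketch enabling internal TLS with a custom CA could look like this (`my-ca-bundle` and `core-tls-secret` are hypothetical secret names):

```yaml
# values.yaml (fragment): internal TLS with a custom CA and one provided secret
internalTLS:
  enabled: true
  caBundleSecret: my-ca-bundle   # hypothetical secret containing a ca.crt key
core:
  tls:
    existingSecret: core-tls-secret   # hypothetical; leave unset to auto-generate
```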
Backup and restore
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
Sidecars and Init Containers
If you have a need for additional containers to run within the same pod as any of the Harbor components (e.g. an additional metrics or logging exporter), you can do so via the sidecars config parameter inside each component subsection. Simply define your container according to the Kubernetes container spec.
```yaml
core:
  sidecars:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      ports:
        - name: portname
          containerPort: 1234
```
Similarly, you can add extra init containers using the initContainers parameter.
```yaml
core:
  initContainers:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      ports:
        - name: portname
          containerPort: 1234
```
Adding extra environment variables
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property inside each component subsection.
```yaml
core:
  extraEnvVars:
    - name: LOG_LEVEL
      value: error
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values inside each component subsection.
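For instance, a sketch combining both approaches for the core component (`harbor-core-extra-env` and `harbor-core-extra-secrets` are hypothetical object names):

```yaml
# values.yaml (fragment): load extra env vars from a ConfigMap and a Secret
core:
  extraEnvVarsCM: harbor-core-extra-env          # hypothetical ConfigMap name
  extraEnvVarsSecret: harbor-core-extra-secrets  # hypothetical Secret name
```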
Configure data persistence
- **Disable**: The data does not survive the termination of a pod.
- **Persistent Volume Claim** (default): A default `StorageClass` is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in `storageClass` or set `existingClaim` if you have existing persistent volumes to use.
- **External Storage** (only for images and charts): For images and charts, the following external storages are supported: `azure`, `gcs`, `s3`, `swift` and `oss`.
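As an example, switching the registry to external S3 storage could be sketched as follows (bucket name is a placeholder; the parameters appear in the Persistence Parameters table below):

```yaml
# values.yaml (fragment): store images in an external S3 bucket
persistence:
  imageChartStorage:
    type: s3
    s3:
      region: us-west-1
      bucket: my-harbor-bucket   # placeholder bucket name
```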
Configure the secrets
- **Secrets**: Secrets are used for encryption and to secure communication between components. Fill `core.secret`, `jobservice.secret` and `registry.secret` to configure them statically through the Helm values. These values expect the actual key or password, not the name of a Secret object where it is stored.
- **Certificates**: Used for token encryption/decryption. Fill `core.secretName` to configure.

Secrets and certificates must be set up to avoid changes on every Helm upgrade (see: #107).

If you want to manage full Secret objects on your own, you can use the `existingSecret` and `existingEnvVarsSecret` parameters. This can be useful for secure GitOps workflows; note that you will have to define all the expected keys for those secrets.

The core service has two Secret objects: the default one for data and communication, which is critical as it contains the data encryption key of your Harbor instance, and a second one which contains standard passwords, such as the database access password.

Keep in mind that `HARBOR_ADMIN_PASSWORD` is only used to bootstrap your Harbor instance. If you update it after the deployment, the password is updated in the database, but the secret will keep the initial value.
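A GitOps-style sketch using pre-created Secret objects (`my-harbor-core` and `my-harbor-core-env` are hypothetical names; each Secret must define all the keys the chart expects):

```yaml
# values.yaml (fragment): manage the core Secret objects yourself
core:
  existingSecret: my-harbor-core             # hypothetical Secret name
  existingEnvVarsSecret: my-harbor-core-env  # hypothetical Secret name
```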
Setting Pod's affinity
This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod's affinity in the kubernetes documentation.
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
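For example, spreading core replicas across nodes can be sketched with the anti-affinity preset (a sketch, assuming the preset values provided by the bitnami/common chart):

```yaml
# values.yaml (fragment): preset-based pod anti-affinity for the core component
core:
  podAntiAffinityPreset: hard
```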
Adjust permissions of persistent volume mountpoint
As the images run as non-root by default, it is necessary to adjust the ownership of the persistent volumes so that the containers can write data to them.
By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.
You can enable this initContainer by setting volumePermissions.enabled to true.
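A minimal sketch:

```yaml
# values.yaml (fragment): fix volume ownership with an init container
volumePermissions:
  enabled: true
```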
Parameters
Global parameters
| Name | Description | Value |
|---|---|---|
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.defaultStorageClass` | Global default StorageClass for Persistent Volume(s) | `""` |
| `global.storageClass` | DEPRECATED: use `global.defaultStorageClass` instead | `""` |
| `global.security.allowInsecureImages` | Allows skipping image verification | `false` |
| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with the OpenShift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is OpenShift), force (perform the adaptation always), disabled (do not perform adaptation) | `auto` |
Common Parameters
| Name | Description | Value |
|---|---|---|
| `nameOverride` | String to partially override the common.names.fullname template (will maintain the release name) | `""` |
| `fullnameOverride` | String to fully override the common.names.fullname template | `""` |
| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `""` |
| `clusterDomain` | Kubernetes cluster domain | `cluster.local` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template) | `[]` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the deployment(s)/statefulset(s) | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the deployment(s)/statefulset(s) | `["infinity"]` |
Harbor common parameters
| Name | Description | Value |
|---|---|---|
| `adminPassword` | The initial password of the Harbor admin. Change it from the portal after launching Harbor | `""` |
| `externalURL` | The external URL for the Harbor Core service | `https://core.harbor.domain` |
| `proxy.httpProxy` | The URL of the HTTP proxy server | `""` |
| `proxy.httpsProxy` | The URL of the HTTPS proxy server | `""` |
| `proxy.noProxy` | The URLs that the proxy settings do not apply to | `127.0.0.1,localhost,.local,.internal` |
| `proxy.components` | The component list that the proxy settings apply to | `["core","jobservice","trivy"]` |
| `logLevel` | The log level used for Harbor services. Allowed values: `fatal`, `error`, `warn`, `info`, `debug`, `trace` | `debug` |
| `internalTLS.enabled` | Use TLS in all the supported containers: core, jobservice, portal, registry and trivy | `false` |
| `internalTLS.caBundleSecret` | Name of an existing secret with a custom CA that will be injected into the trust store for the core, jobservice, registry and trivy components | `""` |
| `ipFamily.ipv6.enabled` | Enable listening on IPv6 (`[::]`) for NGINX-based components (NGINX, portal) | `true` |
| `ipFamily.ipv4.enabled` | Enable listening on IPv4 for NGINX-based components (NGINX, portal) | `true` |
Traffic Exposure Parameters
| Name | Description | Value |
|---|---|---|
| `exposureType` | The way to expose Harbor. Allowed values: `ingress`, `proxy` | `proxy` |
| `service.type` | NGINX proxy service type | `LoadBalancer` |
| `service.ports.http` | NGINX proxy service HTTP port | `80` |
| `service.ports.https` | NGINX proxy service HTTPS port | `443` |
| `service.nodePorts.http` | Node port for HTTP | `""` |
| `service.nodePorts.https` | Node port for HTTPS | `""` |
| `service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `service.clusterIP` | NGINX proxy service Cluster IP | `""` |
| `service.loadBalancerIP` | NGINX proxy service Load Balancer IP | `""` |
| `service.loadBalancerSourceRanges` | NGINX proxy service Load Balancer sources | `[]` |
| `service.externalTrafficPolicy` | NGINX proxy service external traffic policy | `Cluster` |
| `service.annotations` | Additional custom annotations for the NGINX proxy service | `{}` |
| `service.extraPorts` | Extra ports to expose on the NGINX proxy service | `[]` |
| `ingress.core.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `ingress.core.pathType` | Ingress path type | `ImplementationSpecific` |
| `ingress.core.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `ingress.core.controller` | The ingress controller type. Currently supports `default`, `gce` and `ncp` | `default` |
| `ingress.core.hostname` | Default host for the ingress record | `core.harbor.domain` |
| `ingress.core.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. | `{}` |
| `ingress.core.tls` | Enable TLS configuration for the host defined at the `ingress.core.hostname` parameter | `false` |
| `ingress.core.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `ingress.core.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `ingress.core.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `ingress.core.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `ingress.core.secrets` | Custom TLS certificates as secrets | `[]` |
| `ingress.core.extraRules` | Additional rules to be covered with this ingress record | `[]` |
Persistence Parameters
| Name | Description | Value |
|---|---|---|
| `persistence.enabled` | Enable data persistence | `true` |
| `persistence.resourcePolicy` | Set it to `keep` to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted | `keep` |
| `persistence.persistentVolumeClaim.registry.existingClaim` | Name of an existing PVC to use | `""` |
| `persistence.persistentVolumeClaim.registry.storageClass` | PVC Storage Class for the Harbor Registry data volume | `""` |
| `persistence.persistentVolumeClaim.registry.subPath` | The sub path used in the volume | `""` |
| `persistence.persistentVolumeClaim.registry.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.registry.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.registry.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.registry.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.persistentVolumeClaim.jobservice.existingClaim` | Name of an existing PVC to use | `""` |
| `persistence.persistentVolumeClaim.jobservice.storageClass` | PVC Storage Class for the Harbor Jobservice data volume | `""` |
| `persistence.persistentVolumeClaim.jobservice.subPath` | The sub path used in the volume | `""` |
| `persistence.persistentVolumeClaim.jobservice.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.jobservice.size` | The size of the volume | `1Gi` |
| `persistence.persistentVolumeClaim.jobservice.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.jobservice.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.persistentVolumeClaim.trivy.storageClass` | PVC Storage Class for the Trivy data volume | `""` |
| `persistence.persistentVolumeClaim.trivy.accessModes` | The access mode of the volume | `["ReadWriteOnce"]` |
| `persistence.persistentVolumeClaim.trivy.size` | The size of the volume | `5Gi` |
| `persistence.persistentVolumeClaim.trivy.annotations` | Annotations for the PVC | `{}` |
| `persistence.persistentVolumeClaim.trivy.selector` | Selector to match an existing Persistent Volume | `{}` |
| `persistence.imageChartStorage.caBundleSecret` | Specify the caBundleSecret if the storage service uses a self-signed certificate. The secret must contain a key named `ca.crt`, which will be injected into the trust store of the registry's containers | `""` |
| `persistence.imageChartStorage.disableredirect` | The configuration for managing redirects from content backends. For backends that do not support it (such as using MinIO® for the `s3` storage type), set it to `true` to disable redirects. Refer to the guide for more information | `false` |
| `persistence.imageChartStorage.type` | The type of storage for images and charts: `filesystem`, `azure`, `gcs`, `s3`, `swift` or `oss`. The type must be `filesystem` if you want to use persistent volumes for the registry. Refer to the guide for more information | `filesystem` |
| `persistence.imageChartStorage.filesystem.rootdirectory` | Filesystem storage type setting: Storage root directory | `/storage` |
| `persistence.imageChartStorage.filesystem.maxthreads` | Filesystem storage type setting: Maximum threads | `""` |
| `persistence.imageChartStorage.azure.accountname` | Azure storage type setting: Name of the Azure account | `accountname` |
| `persistence.imageChartStorage.azure.accountkey` | Azure storage type setting: Key of the Azure account | `base64encodedaccountkey` |
| `persistence.imageChartStorage.azure.container` | Azure storage type setting: Container | `containername` |
| `persistence.imageChartStorage.azure.storagePrefix` | Azure storage type setting: Storage prefix | `/azure/harbor/charts` |
| `persistence.imageChartStorage.azure.realm` | Azure storage type setting: Realm of the Azure account | `""` |
| `persistence.imageChartStorage.gcs.bucket` | GCS storage type setting: Bucket name | `bucketname` |
| `persistence.imageChartStorage.gcs.encodedkey` | GCS storage type setting: Base64 encoded key | `""` |
| `persistence.imageChartStorage.gcs.rootdirectory` | GCS storage type setting: Root directory name | `""` |
| `persistence.imageChartStorage.gcs.chunksize` | GCS storage type setting: Chunk size | `""` |
| `persistence.imageChartStorage.s3.region` | S3 storage type setting: Region | `us-west-1` |
| `persistence.imageChartStorage.s3.bucket` | S3 storage type setting: Bucket name | `bucketname` |
| `persistence.imageChartStorage.s3.accesskey` | S3 storage type setting: Access key | `""` |
| `persistence.imageChartStorage.s3.secretkey` | S3 storage type setting: Secret key | `""` |
| `persistence.imageChartStorage.s3.regionendpoint` | S3 storage type setting: Region endpoint | `""` |
| `persistence.imageChartStorage.s3.encrypt` | S3 storage type setting: Encrypt | `""` |
| `persistence.imageChartStorage.s3.keyid` | S3 storage type setting: Key ID | `""` |
| `persistence.imageChartStorage.s3.secure` | S3 storage type setting: Secure | `""` |
| `persistence.imageChartStorage.s3.skipverify` | S3 storage type setting: TLS skip verification | `""` |
| `persistence.imageChartStorage.s3.v4auth` | S3 storage type setting: V4 authorization | `""` |
| `persistence.imageChartStorage.s3.chunksize` | S3 storage type setting: Chunk size | `""` |
| `persistence.imageChartStorage.s3.rootdirectory` | S3 storage type setting: Root directory name | `""` |
| `persistence.imageChartStorage.s3.storageClass` | S3 storage type setting: Storage class | `""` |
| `persistence.imageChartStorage.s3.sse` | S3 storage type setting: SSE | `""` |
| `persistence.imageChartStorage.s3.multipartcopythresholdsize` | S3 storage type setting: Threshold size for multipart copy | `""` |
| `persistence.imageChartStorage.swift.authurl` | Swift storage type setting: Authentication URL | `https://storage.myprovider.com/v3/auth` |
| `persistence.imageChartStorage.swift.username` | Swift storage type setting: Username | `""` |
| `persistence.imageChartStorage.swift.password` | Swift storage type setting: Password | `""` |
| `persistence.imageChartStorage.swift.container` | Swift storage type setting: Container | `""` |
| `persistence.imageChartStorage.swift.region` | Swift storage type setting: Region | `""` |
| `persistence.imageChartStorage.swift.tenant` | Swift storage type setting: Tenant | `""` |
| `persistence.imageChartStorage.swift.tenantid` | Swift storage type setting: TenantID | `""` |
| `persistence.imageChartStorage.swift.domain` | Swift storage type setting: Domain | `""` |
| `persistence.imageChartStorage.swift.domainid` | Swift storage type setting: DomainID | `""` |
| `persistence.imageChartStorage.swift.trustid` | Swift storage type setting: TrustID | `""` |
| `persistence.imageChartStorage.swift.insecureskipverify` | Swift storage type setting: Skip TLS verification | `""` |
| `persistence.imageChartStorage.swift.chunksize` | Swift storage type setting: Chunk size | `""` |
| `persistence.imageChartStorage.swift.prefix` | Swift storage type setting: Prefix | `""` |
| `persistence.imageChartStorage.swift.secretkey` | Swift storage type setting: Secret key | `""` |
| `persistence.imageChartStorage.swift.accesskey` | Swift storage type setting: Access key | `""` |
| `persistence.imageChartStorage.swift.authversion` | Swift storage type setting: Auth version | `""` |
| `persistence.imageChartStorage.swift.endpointtype` | Swift storage type setting: Endpoint type | `""` |
| `persistence.imageChartStorage.swift.tempurlcontainerkey` | Swift storage type setting: Temp URL container key | `""` |
| `persistence.imageChartStorage.swift.tempurlmethods` | Swift storage type setting: Temp URL methods | `""` |
| `persistence.imageChartStorage.oss.accesskeyid` | OSS storage type setting: Access key ID | `""` |
| `persistence.imageChartStorage.oss.accesskeysecret` | OSS storage type setting: Access key secret name containing the token | `""` |
| `persistence.imageChartStorage.oss.region` | OSS storage type setting: Region name | `""` |
| `persistence.imageChartStorage.oss.bucket` | OSS storage type setting: Bucket name | `""` |
| `persistence.imageChartStorage.oss.endpoint` | OSS storage type setting: Endpoint | `""` |
| `persistence.imageChartStorage.oss.internal` | OSS storage type setting: Internal | `""` |
| `persistence.imageChartStorage.oss.encrypt` | OSS storage type setting: Encrypt | `""` |
| `persistence.imageChartStorage.oss.secure` | OSS storage type setting: Secure | `""` |
| `persistence.imageChartStorage.oss.chunksize` | OSS storage type setting: Chunk size | `""` |
| `persistence.imageChartStorage.oss.rootdirectory` | OSS storage type setting: Root directory | `""` |
| `persistence.imageChartStorage.oss.secretkey` | OSS storage type setting: Secret key | `""` |
Migration job parameters
| Name | Description | Value |
|---|---|---|
| `migration.enabled` | Enable the migration job | `false` |
| `migration.podLabels` | Additional pod labels | `{}` |
| `migration.podAnnotations` | Additional pod annotations | `{}` |
| `migration.automountServiceAccountToken` | Mount Service Account token in the pod | `false` |
| `migration.hostAliases` | Migration job host aliases | `[]` |
| `migration.command` | Override default container command (useful when using custom images) | `[]` |
| `migration.args` | Override default container args (useful when using custom images) | `[]` |
| `migration.annotations` | Provide any additional annotations which may be required | `{}` |
| `migration.podSecurityContext.enabled` | Enable migration pods' Security Context | `true` |
| `migration.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `migration.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `migration.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `migration.podSecurityContext.fsGroup` | Set the migration pod's Security Context fsGroup | `1001` |
| `migration.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `migration.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `migration.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `migration.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `1001` |
| `migration.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `migration.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `migration.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `true` |
| `migration.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `migration.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `migration.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `migration.extraEnvVars` | Extra environment variables to be set on the migration container | `[]` |
| `migration.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `""` |
| `migration.extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `""` |
| `migration.extraVolumeMounts` | Optionally specify an extra list of additional volumeMounts for the migration container | `[]` |
| `migration.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `migration.resources` is set (`migration.resources` is recommended for production). | `small` |
| `migration.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `migration.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `migration.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `migration.networkPolicy.allowExternalEgress` | Allow the pod to access any range of ports and all destinations | `true` |
| `migration.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `migration.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `migration.extraVolumes` | Optionally specify an extra list of additional volumes for the migration container | `[]` |
Tracing parameters
| Name | Description | Value |
|---|---|---|
| `tracing.enabled` | Enable tracing | `false` |
| `tracing.sampleRate` | Tracing sample rate from 0 to 1 | `1` |
| `tracing.namespace` | Used to differentiate traces between different Harbor services | `""` |
| `tracing.attributes` | A key-value dict containing user-defined attributes used to initialize the trace provider | `{}` |
| `tracing.jaeger` | Configuration for exporting to Jaeger. If using Jaeger collector mode, use `endpoint`, `username` and `password`. If using Jaeger agent mode, use `agentHost` and `agentPort`. | |
| `tracing.jaeger.enabled` | Enable Jaeger export | `false` |
| `tracing.jaeger.endpoint` | Jaeger endpoint | `""` |
| `tracing.jaeger.username` | Jaeger username | `""` |
| `tracing.jaeger.password` | Jaeger password | `""` |
| `tracing.jaeger.agentHost` | Jaeger agent hostname | `""` |
| `tracing.jaeger.agentPort` | Jaeger agent port | `""` |
| `tracing.otel` | Configuration for exporting to an OTel endpoint | |
| `tracing.otel.enabled` | Enable OTel export | `false` |
| `tracing.otel.endpoint` | The hostname and port for an OTel-compatible backend | `hostname:4318` |
| `tracing.otel.urlpath` | URL path of the OTel endpoint | `/v1/traces` |
| `tracing.otel.compression` | Enable data compression | `false` |
| `tracing.otel.timeout` | The timeout for data transfer | `10s` |
| `tracing.otel.insecure` | Ignore cert verification for the OTel backend | `true` |
Volume Permissions parameters
| Name | Description | Value |
|---|---|---|
| `certificateVolume.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `certificateVolume.resources` is set (`certificateVolume.resources` is recommended for production). | `nano` |
| `certificateVolume.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `REGISTRY_NAME` |
| `volumePermissions.image.repository` | Init container volume-permissions image repository | `REPOSITORY_NAME/os-shell` |
| `volumePermissions.image.digest` | Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | Init container volume-permissions image pull secrets | `[]` |
| `volumePermissions.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `volumePermissions.resources` is set (`volumePermissions.resources` is recommended for production). | `nano` |
| `volumePermissions.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `volumePermissions.containerSecurityContext.enabled` | Enable init container Security Context | `true` |
| `volumePermissions.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `volumePermissions.containerSecurityContext.runAsUser` | User ID for the init container | `0` |
### NGINX Parameters

| Name | Description | Value |
|---|---|---|
| `nginx.image.registry` | NGINX image registry | `REGISTRY_NAME` |
| `nginx.image.repository` | NGINX image repository | `REPOSITORY_NAME/nginx` |
| `nginx.image.digest` | NGINX image digest in the format `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `nginx.image.pullPolicy` | NGINX image pull policy | `IfNotPresent` |
| `nginx.image.pullSecrets` | NGINX image pull secrets | `[]` |
| `nginx.image.debug` | Enable NGINX image debug mode | `false` |
| `nginx.tls.enabled` | Enable TLS termination | `true` |
| `nginx.tls.existingSecret` | Existing secret name containing your own TLS certificates | `""` |
| `nginx.tls.commonName` | The common name used to generate the self-signed TLS certificates | `core.harbor.domain` |
| `nginx.behindReverseProxy` | If NGINX is behind another reverse proxy, set to `true` | `false` |
| `nginx.command` | Override default container command (useful when using custom images) | `[]` |
| `nginx.args` | Override default container args (useful when using custom images) | `[]` |
| `nginx.extraEnvVars` | Array with extra environment variables to add to NGINX pods | `[]` |
| `nginx.extraEnvVarsCM` | ConfigMap containing extra environment variables for NGINX pods | `""` |
| `nginx.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for NGINX pods | `""` |
| `nginx.containerPorts.http` | NGINX HTTP container port | `8080` |
| `nginx.containerPorts.https` | NGINX HTTPS container port | `8443` |
| `nginx.replicaCount` | Number of NGINX replicas | `1` |
| `nginx.livenessProbe.enabled` | Enable livenessProbe on NGINX containers | `true` |
| `nginx.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `nginx.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `nginx.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `nginx.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `nginx.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `nginx.readinessProbe.enabled` | Enable readinessProbe on NGINX containers | `true` |
| `nginx.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `nginx.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `nginx.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `nginx.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `nginx.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `nginx.startupProbe.enabled` | Enable startupProbe on NGINX containers | `false` |
| `nginx.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `10` |
| `nginx.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `nginx.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `nginx.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `nginx.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `nginx.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `nginx.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `nginx.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `nginx.resourcesPreset` | Set container resources according to one common preset (allowed values: `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`). This is ignored if `nginx.resources` is set (`nginx.resources` is recommended for production). | `small` |
| `nginx.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `nginx.podSecurityContext.enabled` | Enable NGINX pods' Security Context | `true` |
| `nginx.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `nginx.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `nginx.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `nginx.podSecurityContext.fsGroup` | Set NGINX pod's Security Context fsGroup | `1001` |
| `nginx.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `nginx.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `nginx.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `nginx.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `1001` |
| `nginx.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `nginx.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `nginx.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `true` |
| `nginx.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `nginx.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `nginx.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `nginx.updateStrategy.type` | NGINX deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `nginx.lifecycleHooks` | LifecycleHook for the NGINX container(s) to automate configuration before or after startup | `{}` |
| `nginx.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `nginx.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `nginx.serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `nginx.serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `false` |
| `nginx.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `nginx.hostAliases` | NGINX pods host aliases | `[]` |
| `nginx.podLabels` | Add additional labels to the NGINX pods (evaluated as a template) | `{}` |
| `nginx.podAnnotations` | Annotations to add to the NGINX pods (evaluated as a template) | `{}` |
| `nginx.podAffinityPreset` | NGINX Pod affinity preset. Ignored if `nginx.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nginx.podAntiAffinityPreset` | NGINX Pod anti-affinity preset. Ignored if `nginx.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `nginx.nodeAffinityPreset.type` | NGINX Node affinity preset type. Ignored if `nginx.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nginx.nodeAffinityPreset.key` | NGINX Node label key to match. Ignored if `nginx.affinity` is set. | `""` |
| `nginx.nodeAffinityPreset.values` | NGINX Node label values to match. Ignored if `nginx.affinity` is set. | `[]` |
| `nginx.affinity` | NGINX Affinity for pod assignment | `{}` |
| `nginx.nodeSelector` | NGINX Node labels for pod assignment | `{}` |
| `nginx.tolerations` | NGINX Tolerations for pod assignment | `[]` |
| `nginx.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `[]` |
| `nginx.priorityClassName` | Priority Class Name | `""` |
| `nginx.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `nginx.sidecars` | Add additional sidecar containers to the NGINX pods | `[]` |
| `nginx.initContainers` | Add additional init containers to the NGINX pods | `[]` |
| `nginx.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `nginx.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `""` |
| `nginx.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `nginx.pdb.minAvailable` and `nginx.pdb.maxUnavailable` are empty. | `""` |
| `nginx.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the NGINX pods | `[]` |
| `nginx.extraVolumes` | Optionally specify extra list of additional volumes for the NGINX pods | `[]` |
| `nginx.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `nginx.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `nginx.networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations | `true` |
| `nginx.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `nginx.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `nginx.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `nginx.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
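For example, to terminate TLS with your own certificate and run two replicas behind an external reverse proxy, a `values.yaml` fragment might look like this (the secret name `my-harbor-tls` is a hypothetical pre-created TLS secret, not something the chart provides):

```yaml
# Illustrative values.yaml fragment for the NGINX frontend.
nginx:
  replicaCount: 2
  behindReverseProxy: true          # NGINX sits behind another reverse proxy
  tls:
    enabled: true
    existingSecret: my-harbor-tls   # hypothetical pre-created TLS secret
```

When `nginx.tls.existingSecret` is set, the chart skips generating the self-signed certificate for `nginx.tls.commonName`.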
### Harbor Portal Parameters

| Name | Description | Value |
|---|---|---|
| `portal.image.registry` | Harbor Portal image registry | `REGISTRY_NAME` |
| `portal.image.repository` | Harbor Portal image repository | `REPOSITORY_NAME/harbor-portal` |
| `portal.image.digest` | Harbor Portal image digest in the format `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `portal.image.pullPolicy` | Harbor Portal image pull policy | `IfNotPresent` |
| `portal.image.pullSecrets` | Harbor Portal image pull secrets | `[]` |
| `portal.image.debug` | Enable Harbor Portal image debug mode | `false` |
| `portal.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `portal.command` | Override default container command (useful when using custom images) | `[]` |
| `portal.args` | Override default container args (useful when using custom images) | `[]` |
| `portal.extraEnvVars` | Array with extra environment variables to add to Harbor Portal pods | `[]` |
| `portal.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Portal pods | `""` |
| `portal.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Portal pods | `""` |
| `portal.containerPorts.http` | Harbor Portal HTTP container port | `8080` |
| `portal.containerPorts.https` | Harbor Portal HTTPS container port | `8443` |
| `portal.replicaCount` | Number of Harbor Portal replicas | `1` |
| `portal.livenessProbe.enabled` | Enable livenessProbe on Harbor Portal containers | `true` |
| `portal.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `portal.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `portal.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `portal.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `portal.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `portal.readinessProbe.enabled` | Enable readinessProbe on Harbor Portal containers | `true` |
| `portal.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `portal.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `portal.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `portal.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `portal.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `portal.startupProbe.enabled` | Enable startupProbe on Harbor Portal containers | `false` |
| `portal.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `portal.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `portal.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `portal.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `portal.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `portal.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `portal.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `portal.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `portal.resourcesPreset` | Set container resources according to one common preset (allowed values: `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`). This is ignored if `portal.resources` is set (`portal.resources` is recommended for production). | `small` |
| `portal.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `portal.podSecurityContext.enabled` | Enable Harbor Portal pods' Security Context | `true` |
| `portal.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `portal.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `portal.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `portal.podSecurityContext.fsGroup` | Set Harbor Portal pod's Security Context fsGroup | `1001` |
| `portal.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `portal.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `portal.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `portal.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `1001` |
| `portal.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `portal.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `portal.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `true` |
| `portal.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `portal.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `portal.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `portal.updateStrategy.type` | Harbor Portal deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `portal.lifecycleHooks` | LifecycleHook for the Harbor Portal container(s) to automate configuration before or after startup | `{}` |
| `portal.hostAliases` | Harbor Portal pods host aliases | `[]` |
| `portal.podLabels` | Add additional labels to the Harbor Portal pods (evaluated as a template) | `{}` |
| `portal.podAnnotations` | Annotations to add to the Harbor Portal pods (evaluated as a template) | `{}` |
| `portal.podAffinityPreset` | Harbor Portal Pod affinity preset. Ignored if `portal.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `portal.podAntiAffinityPreset` | Harbor Portal Pod anti-affinity preset. Ignored if `portal.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `portal.nodeAffinityPreset.type` | Harbor Portal Node affinity preset type. Ignored if `portal.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `portal.nodeAffinityPreset.key` | Harbor Portal Node label key to match. Ignored if `portal.affinity` is set. | `""` |
| `portal.nodeAffinityPreset.values` | Harbor Portal Node label values to match. Ignored if `portal.affinity` is set. | `[]` |
| `portal.affinity` | Harbor Portal Affinity for pod assignment | `{}` |
| `portal.nodeSelector` | Harbor Portal Node labels for pod assignment | `{}` |
| `portal.tolerations` | Harbor Portal Tolerations for pod assignment | `[]` |
| `portal.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `[]` |
| `portal.priorityClassName` | Priority Class Name | `""` |
| `portal.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `portal.sidecars` | Add additional sidecar containers to the Harbor Portal pods | `[]` |
| `portal.initContainers` | Add additional init containers to the Harbor Portal pods | `[]` |
| `portal.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `portal.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `""` |
| `portal.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `portal.pdb.minAvailable` and `portal.pdb.maxUnavailable` are empty. | `""` |
| `portal.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Portal pods | `[]` |
| `portal.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Portal pods | `[]` |
| `portal.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `portal.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `portal.serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `portal.serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `false` |
| `portal.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `portal.service.ports.http` | Harbor Portal HTTP service port | `80` |
| `portal.service.ports.https` | Harbor Portal HTTPS service port | `443` |
| `portal.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `portal.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `portal.networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations | `true` |
| `portal.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `portal.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `portal.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `portal.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
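Putting a few of these together, the fragment below restricts Portal ingress traffic to pods in a labeled namespace (the `ingress-nginx` namespace name is a hypothetical example; adjust it to wherever your ingress controller actually runs):

```yaml
# Illustrative values.yaml fragment: lock down the Portal NetworkPolicy.
portal:
  networkPolicy:
    enabled: true
    allowExternal: false       # require matching labels for inbound traffic
    ingressNSMatchLabels:
      kubernetes.io/metadata.name: ingress-nginx   # hypothetical namespace
```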
### Harbor Core Parameters

| Name | Description | Value |
|---|---|---|
| `core.image.registry` | Harbor Core image registry | `REGISTRY_NAME` |
| `core.image.repository` | Harbor Core image repository | `REPOSITORY_NAME/harbor-core` |
| `core.image.digest` | Harbor Core image digest in the format `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `core.image.pullPolicy` | Harbor Core image pull policy | `IfNotPresent` |
| `core.image.pullSecrets` | Harbor Core image pull secrets | `[]` |
| `core.image.debug` | Enable Harbor Core image debug mode | `false` |
| `core.sessionLifetime` | Explicitly set a session timeout (in seconds) overriding the backend default | `""` |
| `core.uaaSecret` | If using external UAA auth which has a self-signed cert, you can provide a pre-created secret containing it under the key `ca.crt` | `""` |
| `core.secretKey` | The key used for encryption. Must be a string of 16 chars | `""` |
| `core.secret` | Secret used when the core server communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars. | `""` |
| `core.tokenKey` | Key of the certificate used for token encryption/decryption | `""` |
| `core.tokenCert` | Certificate used for token encryption/decryption | `""` |
| `core.secretName` | Name of a Kubernetes secret to use your own TLS certificate and private key for token encryption/decryption. The secret must contain two keys: `tls.crt` (the certificate) and `tls.key` (the private key). The default key pair will be used if it isn't set | `""` |
| `core.existingSecret` | Existing secret for core | `""` |
| `core.existingEnvVarsSecret` | Existing secret for core envvars | `""` |
| `core.csrfKey` | The CSRF key. Will be generated automatically if it isn't specified | `""` |
| `core.tls.existingSecret` | Name of an existing secret with the certificates for internal TLS access | `""` |
| `core.command` | Override default container command (useful when using custom images) | `[]` |
| `core.args` | Override default container args (useful when using custom images) | `[]` |
| `core.extraEnvVars` | Array with extra environment variables to add to Harbor Core pods | `[]` |
| `core.extraEnvVarsCM` | ConfigMap containing extra environment variables for Harbor Core pods | `""` |
| `core.extraEnvVarsSecret` | Secret containing extra environment variables (in case of sensitive data) for Harbor Core pods | `""` |
| `core.configOverwriteJson` | String containing a JSON with configuration overrides | `""` |
| `core.configOverwriteJsonSecret` | Secret containing the JSON configuration overrides | `""` |
| `core.containerPorts.http` | Harbor Core HTTP container port | `8080` |
| `core.containerPorts.https` | Harbor Core HTTPS container port | `8443` |
| `core.containerPorts.metrics` | Harbor Core metrics container port | `8001` |
| `core.replicaCount` | Number of Harbor Core replicas | `1` |
| `core.livenessProbe.enabled` | Enable livenessProbe on Harbor Core containers | `true` |
| `core.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `core.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `core.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `core.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `core.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `core.readinessProbe.enabled` | Enable readinessProbe on Harbor Core containers | `true` |
| `core.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `core.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `core.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `core.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `core.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `core.startupProbe.enabled` | Enable startupProbe on Harbor Core containers | `false` |
| `core.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `core.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `core.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `core.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `core.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `core.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `core.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `core.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `core.resourcesPreset` | Set container resources according to one common preset (allowed values: `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`). This is ignored if `core.resources` is set (`core.resources` is recommended for production). | `small` |
| `core.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `core.podSecurityContext.enabled` | Enable Harbor Core pods' Security Context | `true` |
| `core.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `core.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `core.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `core.podSecurityContext.fsGroup` | Set Harbor Core pod's Security Context fsGroup | `1001` |
| `core.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `core.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `core.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `core.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `1001` |
| `core.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `core.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `core.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `true` |
| `core.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `core.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `core.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `core.updateStrategy.type` | Harbor Core deployment strategy type - only really applicable for deployments with RWO PVs attached | `RollingUpdate` |
| `core.lifecycleHooks` | LifecycleHook for the Harbor Core container(s) to automate configuration before or after startup | `{}` |
| `core.hostAliases` | Harbor Core pods host aliases | `[]` |
| `core.podLabels` | Add additional labels to the Harbor Core pods (evaluated as a template) | `{}` |
| `core.podAnnotations` | Annotations to add to the Harbor Core pods (evaluated as a template) | `{}` |
| `core.podAffinityPreset` | Harbor Core Pod affinity preset. Ignored if `core.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `core.podAntiAffinityPreset` | Harbor Core Pod anti-affinity preset. Ignored if `core.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `core.nodeAffinityPreset.type` | Harbor Core Node affinity preset type. Ignored if `core.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `core.nodeAffinityPreset.key` | Harbor Core Node label key to match. Ignored if `core.affinity` is set. | `""` |
| `core.nodeAffinityPreset.values` | Harbor Core Node label values to match. Ignored if `core.affinity` is set. | `[]` |
| `core.affinity` | Harbor Core Affinity for pod assignment | `{}` |
| `core.nodeSelector` | Harbor Core Node labels for pod assignment | `{}` |
| `core.tolerations` | Harbor Core Tolerations for pod assignment | `[]` |
| `core.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `[]` |
| `core.priorityClassName` | Priority Class Name | `""` |
| `core.schedulerName` | Use an alternate scheduler, e.g. "stork". | `""` |
| `core.sidecars` | Add additional sidecar containers to the Harbor Core pods | `[]` |
| `core.initContainers` | Add additional init containers to the Harbor Core pods | `[]` |
| `core.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `core.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `""` |
| `core.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `core.pdb.minAvailable` and `core.pdb.maxUnavailable` are empty. | `""` |
| `core.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Harbor Core pods | `[]` |
| `core.extraVolumes` | Optionally specify extra list of additional volumes for the Harbor Core pods | `[]` |
| `core.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `core.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `core.serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `core.serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `false` |
| `core.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `core.service.ports.http` | Harbor Core HTTP service port | `80` |
| `core.service.ports.https` | Harbor Core HTTPS service port | `443` |
| `core.service.ports.metrics` | Harbor Core metrics service port | `8001` |
| `core.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `core.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `core.networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations | `true` |
| `core.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `core.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `core.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `core.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
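As an example of the secret-related parameters, the fragment below points Core at a pre-created secret and overrides one configuration key through `core.configOverwriteJson` (the secret name and the `auth_mode` value are illustrative assumptions; consult the Harbor documentation for the full set of overridable keys):

```yaml
# Illustrative values.yaml fragment for Harbor Core.
core:
  existingSecret: harbor-core-secret   # hypothetical pre-created secret
  configOverwriteJson: |
    {"auth_mode": "oidc_auth"}
```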
Harbor Jobservice Parameters
| Name | Description | Value |
|---|---|---|
jobservice.image.registry |
Harbor Jobservice image registry | REGISTRY_NAME |
jobservice.image.repository |
Harbor Jobservice image repository | REPOSITORY_NAME/harbor-jobservice |
jobservice.image.digest |
Harbor Jobservice image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
jobservice.image.pullPolicy |
Harbor Jobservice image pull policy | IfNotPresent |
jobservice.image.pullSecrets |
Harbor Jobservice image pull secrets | [] |
jobservice.image.debug |
Enable Harbor Jobservice image debug mode | false |
jobservice.maxJobWorkers |
The max job workers | 10 |
jobservice.redisNamespace |
Redis namespace for jobservice | harbor_job_service_namespace |
jobservice.jobLogger |
The logger for jobs: file, database or stdout |
file |
jobservice.secret |
Secret used when the job service communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars. | "" |
jobservice.existingSecret |
Existing secret for jobservice | "" |
jobservice.existingEnvVarsSecret |
Existing secret for jobservice envvars | "" |
jobservice.tls.existingSecret |
Name of an existing secret with the certificates for internal TLS access | "" |
jobservice.command |
Override default container command (useful when using custom images) | [] |
jobservice.args |
Override default container args (useful when using custom images) | [] |
jobservice.extraEnvVars |
Array with extra environment variables to add Harbor Jobservice pods | [] |
jobservice.extraEnvVarsCM |
ConfigMap containing extra environment variables for Harbor Jobservice pods | "" |
jobservice.extraEnvVarsSecret |
Secret containing extra environment variables (in case of sensitive data) for Harbor Jobservice pods | "" |
jobservice.containerPorts.http |
Harbor Jobservice HTTP container port | 8080 |
jobservice.containerPorts.https |
Harbor Jobservice HTTPS container port | 8443 |
jobservice.containerPorts.metrics |
Harbor Jobservice metrics container port | 8001 |
jobservice.replicaCount |
Number of Harbor Jobservice replicas | 1 |
jobservice.livenessProbe.enabled |
Enable livenessProbe on Harbor Jobservice containers | true |
jobservice.livenessProbe.initialDelaySeconds |
Initial delay seconds for livenessProbe | 20 |
jobservice.livenessProbe.periodSeconds |
Period seconds for livenessProbe | 10 |
jobservice.livenessProbe.timeoutSeconds |
Timeout seconds for livenessProbe | 5 |
jobservice.livenessProbe.failureThreshold |
Failure threshold for livenessProbe | 6 |
jobservice.livenessProbe.successThreshold |
Success threshold for livenessProbe | 1 |
jobservice.readinessProbe.enabled |
Enable readinessProbe on Harbor Jobservice containers | true |
jobservice.readinessProbe.initialDelaySeconds |
Initial delay seconds for readinessProbe | 20 |
jobservice.readinessProbe.periodSeconds |
Period seconds for readinessProbe | 10 |
jobservice.readinessProbe.timeoutSeconds |
Timeout seconds for readinessProbe | 5 |
jobservice.readinessProbe.failureThreshold |
Failure threshold for readinessProbe | 6 |
jobservice.readinessProbe.successThreshold |
Success threshold for readinessProbe | 1 |
jobservice.startupProbe.enabled |
Enable startupProbe on Harbor Jobservice containers | false |
jobservice.startupProbe.initialDelaySeconds |
Initial delay seconds for startupProbe | 5 |
jobservice.startupProbe.periodSeconds |
Period seconds for startupProbe | 10 |
jobservice.startupProbe.timeoutSeconds |
Timeout seconds for startupProbe | 1 |
jobservice.startupProbe.failureThreshold |
Failure threshold for startupProbe | 15 |
jobservice.startupProbe.successThreshold |
Success threshold for startupProbe | 1 |
jobservice.customLivenessProbe |
Custom livenessProbe that overrides the default one | {} |
jobservice.customReadinessProbe |
Custom readinessProbe that overrides the default one | {} |
jobservice.customStartupProbe |
Custom startupProbe that overrides the default one | {} |
jobservice.resourcesPreset |
Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if jobservice.resources is set (jobservice.resources is recommended for production). | small |
jobservice.resources |
Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| jobservice.podSecurityContext.enabled | Enable Harbor Jobservice pods' Security Context | true |
| jobservice.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always |
| jobservice.podSecurityContext.sysctls | Set kernel settings using the sysctl interface | [] |
| jobservice.podSecurityContext.supplementalGroups | Set filesystem extra groups | [] |
| jobservice.podSecurityContext.fsGroup | Set Harbor Jobservice pod's Security Context fsGroup | 1001 |
| jobservice.containerSecurityContext.enabled | Enable containers' Security Context | true |
| jobservice.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {} |
| jobservice.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001 |
| jobservice.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001 |
| jobservice.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true |
| jobservice.containerSecurityContext.privileged | Set container's Security Context privileged | false |
| jobservice.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true |
| jobservice.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false |
| jobservice.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"] |
| jobservice.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault |
| jobservice.updateStrategy.type | Harbor Jobservice deployment strategy type - only really applicable for deployments with RWO PVs attached | RollingUpdate |
| jobservice.lifecycleHooks | LifecycleHook for the Harbor Jobservice container(s) to automate configuration before or after startup | {} |
| jobservice.hostAliases | Harbor Jobservice pods host aliases | [] |
| jobservice.podLabels | Add additional labels to the Harbor Jobservice pods (evaluated as a template) | {} |
| jobservice.podAnnotations | Annotations to add to the Harbor Jobservice pods (evaluated as a template) | {} |
| jobservice.podAffinityPreset | Harbor Jobservice Pod affinity preset. Ignored if jobservice.affinity is set. Allowed values: soft or hard | "" |
| jobservice.podAntiAffinityPreset | Harbor Jobservice Pod anti-affinity preset. Ignored if jobservice.affinity is set. Allowed values: soft or hard | soft |
| jobservice.nodeAffinityPreset.type | Harbor Jobservice Node affinity preset type. Ignored if jobservice.affinity is set. Allowed values: soft or hard | "" |
| jobservice.nodeAffinityPreset.key | Harbor Jobservice Node label key to match. Ignored if jobservice.affinity is set. | "" |
| jobservice.nodeAffinityPreset.values | Harbor Jobservice Node label values to match. Ignored if jobservice.affinity is set. | [] |
| jobservice.affinity | Harbor Jobservice Affinity for pod assignment | {} |
| jobservice.nodeSelector | Harbor Jobservice Node labels for pod assignment | {} |
| jobservice.tolerations | Harbor Jobservice Tolerations for pod assignment | [] |
| jobservice.topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | [] |
| jobservice.priorityClassName | Priority Class Name | "" |
| jobservice.schedulerName | Use an alternate scheduler, e.g. "stork". | "" |
| jobservice.sidecars | Add additional sidecar containers to the Harbor Jobservice pods | [] |
| jobservice.initContainers | Add additional init containers to the Harbor Jobservice pods | [] |
| jobservice.pdb.create | Enable/disable a Pod Disruption Budget creation | true |
| jobservice.pdb.minAvailable | Minimum number/percentage of pods that should remain scheduled | "" |
| jobservice.pdb.maxUnavailable | Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both jobservice.pdb.minAvailable and jobservice.pdb.maxUnavailable are empty. | "" |
| jobservice.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Harbor Jobservice pods | [] |
| jobservice.extraVolumes | Optionally specify extra list of additional volumes for the Harbor Jobservice pods | [] |
| jobservice.automountServiceAccountToken | Mount Service Account token in pod | false |
| jobservice.serviceAccount.create | Specifies whether a ServiceAccount should be created | false |
| jobservice.serviceAccount.name | The name of the ServiceAccount to use. | "" |
| jobservice.serviceAccount.automountServiceAccountToken | Allows auto mount of ServiceAccountToken on the serviceAccount created | false |
| jobservice.serviceAccount.annotations | Additional custom annotations for the ServiceAccount | {} |
| jobservice.service.ports.http | Harbor Jobservice HTTP service port | 80 |
| jobservice.service.ports.https | Harbor Jobservice HTTPS service port | 443 |
| jobservice.service.ports.metrics | Harbor Jobservice metrics service port | 8001 |
| jobservice.networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | true |
| jobservice.networkPolicy.allowExternal | Don't require server label for connections | true |
| jobservice.networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations. | true |
| jobservice.networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | [] |
| jobservice.networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy | [] |
| jobservice.networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {} |
| jobservice.networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {} |
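As an illustration, several of the Jobservice parameters above (resources and a custom probe) can be combined in a `values.yaml` override. This is a minimal sketch: the resource amounts are arbitrary, and the `/api/v1/stats` probe path mirrors the chart's default Jobservice probe, so verify it against your chart version before relying on it:

```yaml
jobservice:
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
  customLivenessProbe:
    httpGet:
      path: /api/v1/stats
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
    failureThreshold: 3
```

Apply it with `helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/harbor -f values.yaml`.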
### Harbor Registry Parameters

| Name | Description | Value |
|---|---|---|
| registry.secret | Secret is used to secure the upload state from client and registry storage backend. See: https://github.com/docker/distribution/blob/master/docs/configuration.md | "" |
| registry.existingSecret | Existing secret for registry | "" |
| registry.relativeurls | Make the registry return relative URLs in Location headers. The client is responsible for resolving the correct URL. | false |
| registry.credentials.username | The username for accessing the registry instance, which uses htpasswd auth mode. See the official docs for more details. | harbor_registry_user |
| registry.credentials.password | The password for accessing the registry instance, which uses htpasswd auth mode. See the official docs for more details. It is suggested you update this value before installation. | harbor_registry_password |
| registry.credentials.htpasswd | The content of the htpasswd file, based on registry.credentials.username and registry.credentials.password. Helm currently does not support bcrypt in template scripts, so if the credentials are updated this value must be regenerated manually. | harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m |
| registry.middleware.enabled | Middleware is used to add support for a CDN between backend storage and the docker pull recipient. | false |
| registry.middleware.type | CDN type for the middleware | cloudFront |
| registry.middleware.cloudFront.baseurl | CloudFront CDN settings: Base URL | example.cloudfront.net |
| registry.middleware.cloudFront.keypairid | CloudFront CDN settings: Keypair ID | KEYPAIRID |
| registry.middleware.cloudFront.duration | CloudFront CDN settings: Duration | 3000s |
| registry.middleware.cloudFront.ipfilteredby | CloudFront CDN settings: IP filters | none |
| registry.middleware.cloudFront.privateKeySecret | CloudFront CDN settings: Secret name with the private key | my-secret |
| registry.tls.existingSecret | Name of an existing secret with the certificates for internal TLS access | "" |
| registry.replicaCount | Number of Harbor Registry replicas | 1 |
| registry.podSecurityContext.enabled | Enable Harbor Registry pods' Security Context | true |
| registry.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always |
| registry.podSecurityContext.sysctls | Set kernel settings using the sysctl interface | [] |
| registry.podSecurityContext.supplementalGroups | Set filesystem extra groups | [] |
| registry.podSecurityContext.fsGroup | Set Harbor Registry pod's Security Context fsGroup | 1001 |
| registry.updateStrategy.type | Harbor Registry deployment strategy type - only really applicable for deployments with RWO PVs attached | RollingUpdate |
| registry.hostAliases | Harbor Registry pods host aliases | [] |
| registry.podLabels | Add additional labels to the Harbor Registry pods (evaluated as a template) | {} |
| registry.podAnnotations | Annotations to add to the Harbor Registry pods (evaluated as a template) | {} |
| registry.podAffinityPreset | Harbor Registry Pod affinity preset. Ignored if registry.affinity is set. Allowed values: soft or hard | "" |
| registry.podAntiAffinityPreset | Harbor Registry Pod anti-affinity preset. Ignored if registry.affinity is set. Allowed values: soft or hard | soft |
| registry.nodeAffinityPreset.type | Harbor Registry Node affinity preset type. Ignored if registry.affinity is set. Allowed values: soft or hard | "" |
| registry.nodeAffinityPreset.key | Harbor Registry Node label key to match. Ignored if registry.affinity is set. | "" |
| registry.nodeAffinityPreset.values | Harbor Registry Node label values to match. Ignored if registry.affinity is set. | [] |
| registry.affinity | Harbor Registry Affinity for pod assignment | {} |
| registry.nodeSelector | Harbor Registry Node labels for pod assignment | {} |
| registry.tolerations | Harbor Registry Tolerations for pod assignment | [] |
| registry.topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | [] |
| registry.priorityClassName | Priority Class Name | "" |
| registry.schedulerName | Use an alternate scheduler, e.g. "stork". | "" |
| registry.sidecars | Add additional sidecar containers to the Harbor Registry pods | [] |
| registry.initContainers | Add additional init containers to the Harbor Registry pods | [] |
| registry.pdb.create | Enable/disable a Pod Disruption Budget creation | true |
| registry.pdb.minAvailable | Minimum number/percentage of pods that should remain scheduled | "" |
| registry.pdb.maxUnavailable | Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both registry.pdb.minAvailable and registry.pdb.maxUnavailable are empty. | "" |
| registry.extraVolumes | Optionally specify extra list of additional volumes for the Harbor Registry pods | [] |
| registry.automountServiceAccountToken | Mount Service Account token in pod | false |
| registry.serviceAccount.create | Specifies whether a ServiceAccount should be created | true |
| registry.serviceAccount.name | The name of the ServiceAccount to use. | "" |
| registry.serviceAccount.automountServiceAccountToken | Allows auto mount of ServiceAccountToken on the serviceAccount created | false |
| registry.serviceAccount.annotations | Additional custom annotations for the ServiceAccount | {} |
| registry.networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | true |
| registry.networkPolicy.allowExternal | Don't require server label for connections | true |
| registry.networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations. | true |
| registry.networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | [] |
| registry.networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy | [] |
| registry.networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {} |
| registry.networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {} |
| registry.server.image.registry | Harbor Registry image registry | REGISTRY_NAME |
| registry.server.image.repository | Harbor Registry image repository | REPOSITORY_NAME/harbor-registry |
| registry.server.image.digest | Harbor Registry image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| registry.server.image.pullPolicy | Harbor Registry image pull policy | IfNotPresent |
| registry.server.image.pullSecrets | Harbor Registry image pull secrets | [] |
| registry.server.image.debug | Enable Harbor Registry image debug mode | false |
| registry.server.command | Override default container command (useful when using custom images) | [] |
| registry.server.args | Override default container args (useful when using custom images) | [] |
| registry.server.extraEnvVars | Array with extra environment variables to add to Harbor Registry main containers | [] |
| registry.server.extraEnvVarsCM | ConfigMap containing extra environment variables for Harbor Registry main containers | "" |
| registry.server.extraEnvVarsSecret | Secret containing extra environment variables (in case of sensitive data) for Harbor Registry main containers | "" |
| registry.server.containerPorts.http | Harbor Registry HTTP container port | 5000 |
| registry.server.containerPorts.https | Harbor Registry HTTPS container port | 5443 |
| registry.server.containerPorts.debug | Harbor Registry debug container port | 5001 |
| registry.server.containerPorts.metrics | Harbor Registry metrics container port | 8001 |
| registry.server.livenessProbe.enabled | Enable livenessProbe on Harbor Registry main containers | true |
| registry.server.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 20 |
| registry.server.livenessProbe.periodSeconds | Period seconds for livenessProbe | 10 |
| registry.server.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5 |
| registry.server.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6 |
| registry.server.livenessProbe.successThreshold | Success threshold for livenessProbe | 1 |
| registry.server.readinessProbe.enabled | Enable readinessProbe on Harbor Registry main containers | true |
| registry.server.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 20 |
| registry.server.readinessProbe.periodSeconds | Period seconds for readinessProbe | 10 |
| registry.server.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5 |
| registry.server.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6 |
| registry.server.readinessProbe.successThreshold | Success threshold for readinessProbe | 1 |
| registry.server.startupProbe.enabled | Enable startupProbe on Harbor Registry main containers | false |
| registry.server.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5 |
| registry.server.startupProbe.periodSeconds | Period seconds for startupProbe | 10 |
| registry.server.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 1 |
| registry.server.startupProbe.failureThreshold | Failure threshold for startupProbe | 15 |
| registry.server.startupProbe.successThreshold | Success threshold for startupProbe | 1 |
| registry.server.customLivenessProbe | Custom livenessProbe that overrides the default one | {} |
| registry.server.customReadinessProbe | Custom readinessProbe that overrides the default one | {} |
| registry.server.customStartupProbe | Custom startupProbe that overrides the default one | {} |
| registry.server.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if registry.server.resources is set (registry.server.resources is recommended for production). | small |
| registry.server.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| registry.server.containerSecurityContext.enabled | Enable containers' Security Context | true |
| registry.server.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {} |
| registry.server.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001 |
| registry.server.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001 |
| registry.server.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true |
| registry.server.containerSecurityContext.privileged | Set container's Security Context privileged | false |
| registry.server.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true |
| registry.server.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false |
| registry.server.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"] |
| registry.server.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault |
| registry.server.lifecycleHooks | LifecycleHook for the Harbor Registry main container(s) to automate configuration before or after startup | {} |
| registry.server.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Harbor Registry main pods | [] |
| registry.server.service.ports.http | Harbor Registry HTTP service port | 5000 |
| registry.server.service.ports.https | Harbor Registry HTTPS service port | 5443 |
| registry.server.service.ports.metrics | Harbor Registry metrics service port | 8001 |
| registry.controller.image.registry | Harbor Registryctl image registry | REGISTRY_NAME |
| registry.controller.image.repository | Harbor Registryctl image repository | REPOSITORY_NAME/harbor-registryctl |
| registry.controller.image.digest | Harbor Registryctl image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| registry.controller.image.pullPolicy | Harbor Registryctl image pull policy | IfNotPresent |
| registry.controller.image.pullSecrets | Harbor Registryctl image pull secrets | [] |
| registry.controller.image.debug | Enable Harbor Registryctl image debug mode | false |
| registry.controller.command | Override default container command (useful when using custom images) | [] |
| registry.controller.args | Override default container args (useful when using custom images) | [] |
| registry.controller.extraEnvVars | Array with extra environment variables to add to Harbor Registryctl containers | [] |
| registry.controller.extraEnvVarsCM | ConfigMap containing extra environment variables for Harbor Registryctl containers | "" |
| registry.controller.extraEnvVarsSecret | Secret containing extra environment variables (in case of sensitive data) for Harbor Registryctl containers | "" |
| registry.controller.containerPorts.http | Harbor Registryctl HTTP container port | 8080 |
| registry.controller.containerPorts.https | Harbor Registryctl HTTPS container port | 8443 |
| registry.controller.livenessProbe.enabled | Enable livenessProbe on Harbor Registryctl containers | true |
| registry.controller.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 20 |
| registry.controller.livenessProbe.periodSeconds | Period seconds for livenessProbe | 10 |
| registry.controller.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5 |
| registry.controller.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6 |
| registry.controller.livenessProbe.successThreshold | Success threshold for livenessProbe | 1 |
| registry.controller.readinessProbe.enabled | Enable readinessProbe on Harbor Registryctl containers | true |
| registry.controller.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 20 |
| registry.controller.readinessProbe.periodSeconds | Period seconds for readinessProbe | 10 |
| registry.controller.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5 |
| registry.controller.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6 |
| registry.controller.readinessProbe.successThreshold | Success threshold for readinessProbe | 1 |
| registry.controller.startupProbe.enabled | Enable startupProbe on Harbor Registryctl containers | false |
| registry.controller.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5 |
| registry.controller.startupProbe.periodSeconds | Period seconds for startupProbe | 10 |
| registry.controller.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 1 |
| registry.controller.startupProbe.failureThreshold | Failure threshold for startupProbe | 15 |
| registry.controller.startupProbe.successThreshold | Success threshold for startupProbe | 1 |
| registry.controller.customLivenessProbe | Custom livenessProbe that overrides the default one | {} |
| registry.controller.customReadinessProbe | Custom readinessProbe that overrides the default one | {} |
| registry.controller.customStartupProbe | Custom startupProbe that overrides the default one | {} |
| registry.controller.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if registry.controller.resources is set (registry.controller.resources is recommended for production). | small |
| registry.controller.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| registry.controller.containerSecurityContext.enabled | Enable containers' Security Context | true |
| registry.controller.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {} |
| registry.controller.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001 |
| registry.controller.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001 |
| registry.controller.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true |
| registry.controller.containerSecurityContext.privileged | Set container's Security Context privileged | false |
| registry.controller.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true |
| registry.controller.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false |
| registry.controller.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"] |
| registry.controller.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault |
| registry.controller.lifecycleHooks | LifecycleHook for the Harbor Registryctl container(s) to automate configuration before or after startup | {} |
| registry.controller.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Harbor Registryctl pods | [] |
| registry.controller.service.ports.http | Harbor Registryctl HTTP service port | 8080 |
| registry.controller.service.ports.https | Harbor Registryctl HTTPS service port | 8443 |
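As an illustration, the registry credentials described above can be overridden in `values.yaml`. The username and hash below are hypothetical; since Helm templates cannot compute bcrypt, the `htpasswd` value must be generated out of band (for example with the `htpasswd` utility shipped with Apache httpd):

```yaml
registry:
  credentials:
    username: my-registry-user
    password: my-registry-password
    # Output of: htpasswd -nbB my-registry-user my-registry-password
    # (-B forces bcrypt, which is what the registry expects)
    htpasswd: "my-registry-user:$2y$10$<bcrypt-hash>"
```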
### Harbor Adapter Trivy Parameters

| Name | Description | Value |
|---|---|---|
| trivy.image.registry | Harbor Adapter Trivy image registry | REGISTRY_NAME |
| trivy.image.repository | Harbor Adapter Trivy image repository | REPOSITORY_NAME/harbor-adapter-trivy |
| trivy.image.digest | Harbor Adapter Trivy image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| trivy.image.pullPolicy | Harbor Adapter Trivy image pull policy | IfNotPresent |
| trivy.image.pullSecrets | Harbor Adapter Trivy image pull secrets | [] |
| trivy.image.debug | Enable Harbor Adapter Trivy image debug mode | false |
| trivy.enabled | Enable Trivy | true |
| trivy.debugMode | The flag to enable Trivy debug mode | false |
| trivy.vulnType | Comma-separated list of vulnerability types. Possible values: os and library. | os,library |
| trivy.severity | Comma-separated list of severities to be checked | UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL |
| trivy.ignoreUnfixed | The flag to display only fixed vulnerabilities | false |
| trivy.insecure | The flag to skip verifying the registry certificate | false |
| trivy.existingEnvVarsSecret | Existing secret for Trivy | "" |
| trivy.gitHubToken | The GitHub access token to download the Trivy DB | "" |
| trivy.skipUpdate | The flag to disable Trivy DB downloads from GitHub | false |
| trivy.skipJavaDbUpdate | The flag to disable Trivy Java DB downloads. | false |
| trivy.dbRepository | OCI repository(ies) to retrieve the Trivy vulnerability database from | "" |
| trivy.javaDbRepository | OCI repository(ies) to retrieve the Trivy Java vulnerability database from | "" |
| trivy.cacheDir | Directory to store the cache | /bitnami/harbor-adapter-trivy/.cache |
| trivy.tls.existingSecret | Name of an existing secret with the certificates for internal TLS access | "" |
| trivy.command | Override default container command (useful when using custom images) | [] |
| trivy.args | Override default container args (useful when using custom images) | [] |
| trivy.extraEnvVars | Array with extra environment variables to add to Trivy pods | [] |
| trivy.extraEnvVarsCM | ConfigMap containing extra environment variables for Trivy pods | "" |
| trivy.extraEnvVarsSecret | Secret containing extra environment variables (in case of sensitive data) for Trivy pods | "" |
| trivy.containerPorts.http | Trivy HTTP container port | 8080 |
| trivy.containerPorts.https | Trivy HTTPS container port | 8443 |
| trivy.replicaCount | Number of Trivy replicas | 1 |
| trivy.livenessProbe.enabled | Enable livenessProbe on Trivy containers | true |
| trivy.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 20 |
| trivy.livenessProbe.periodSeconds | Period seconds for livenessProbe | 10 |
| trivy.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5 |
| trivy.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6 |
| trivy.livenessProbe.successThreshold | Success threshold for livenessProbe | 1 |
| trivy.readinessProbe.enabled | Enable readinessProbe on Trivy containers | true |
| trivy.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 20 |
| trivy.readinessProbe.periodSeconds | Period seconds for readinessProbe | 10 |
| trivy.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 5 |
| trivy.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 6 |
| trivy.readinessProbe.successThreshold | Success threshold for readinessProbe | 1 |
| trivy.startupProbe.enabled | Enable startupProbe on Trivy containers | false |
| trivy.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 5 |
| trivy.startupProbe.periodSeconds | Period seconds for startupProbe | 10 |
| trivy.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 1 |
| trivy.startupProbe.failureThreshold | Failure threshold for startupProbe | 15 |
| trivy.startupProbe.successThreshold | Success threshold for startupProbe | 1 |
| trivy.customLivenessProbe | Custom livenessProbe that overrides the default one | {} |
| trivy.customReadinessProbe | Custom readinessProbe that overrides the default one | {} |
| trivy.customStartupProbe | Custom startupProbe that overrides the default one | {} |
| trivy.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if trivy.resources is set (trivy.resources is recommended for production). | small |
| trivy.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| trivy.podSecurityContext.enabled | Enable Trivy pods' Security Context | true |
| trivy.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always |
| trivy.podSecurityContext.sysctls | Set kernel settings using the sysctl interface | [] |
| trivy.podSecurityContext.supplementalGroups | Set filesystem extra groups | [] |
| trivy.podSecurityContext.fsGroup | Set Trivy pod's Security Context fsGroup | 1001 |
| trivy.containerSecurityContext.enabled | Enable containers' Security Context | true |
| trivy.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {} |
| trivy.containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001 |
| trivy.containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001 |
| trivy.containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true |
| trivy.containerSecurityContext.privileged | Set container's Security Context privileged | false |
| trivy.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true |
| trivy.containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false |
| trivy.containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"] |
| trivy.containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault |
| trivy.updateStrategy.type | Trivy deployment strategy type - only really applicable for deployments with RWO PVs attached | RollingUpdate |
| trivy.lifecycleHooks | LifecycleHook for the Trivy container(s) to automate configuration before or after startup | {} |
| trivy.hostAliases | Trivy pods host aliases | [] |
| trivy.podLabels | Add additional labels to the Trivy pods (evaluated as a template) | {} |
| trivy.podAnnotations | Annotations to add to the Trivy pods (evaluated as a template) | {} |
| trivy.podAffinityPreset | Trivy Pod affinity preset. Ignored if trivy.affinity is set. Allowed values: soft or hard | "" |
| trivy.podAntiAffinityPreset | Trivy Pod anti-affinity preset. Ignored if trivy.affinity is set. Allowed values: soft or hard | soft |
| trivy.nodeAffinityPreset.type | Trivy Node affinity preset type. Ignored if trivy.affinity is set. Allowed values: soft or hard | "" |
| trivy.nodeAffinityPreset.key | Trivy Node label key to match. Ignored if trivy.affinity is set. | "" |
| trivy.nodeAffinityPreset.values | Trivy Node label values to match. Ignored if trivy.affinity is set. | [] |
| trivy.affinity | Trivy Affinity for pod assignment | {} |
| trivy.nodeSelector | Trivy Node labels for pod assignment | {} |
| trivy.tolerations | Trivy Tolerations for pod assignment | [] |
| trivy.topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | [] |
| trivy.priorityClassName | Priority Class Name | "" |
| trivy.schedulerName | Use an alternate scheduler, e.g. "stork". | "" |
| trivy.sidecars | Add additional sidecar containers to the Trivy pods | [] |
| trivy.initContainers | Add additional init containers to the Trivy pods | [] |
| trivy.pdb.create | Enable/disable a Pod Disruption Budget creation | true |
| trivy.pdb.minAvailable | Minimum number/percentage of pods that should remain scheduled | "" |
| trivy.pdb.maxUnavailable | Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both trivy.pdb.minAvailable and trivy.pdb.maxUnavailable are empty. | "" |
| trivy.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Trivy pods | [] |
| trivy.extraVolumes | Optionally specify extra list of additional volumes for the Trivy pods | [] |
| trivy.automountServiceAccountToken | Mount Service Account token in pod | false |
| trivy.serviceAccount.create | Specifies whether a ServiceAccount should be created | false |
| trivy.serviceAccount.name | The name of the ServiceAccount to use. | "" |
| trivy.serviceAccount.automountServiceAccountToken | Allows auto mount of ServiceAccountToken on the serviceAccount created | false |
| trivy.serviceAccount.annotations | Additional custom annotations for the ServiceAccount | {} |
| trivy.service.ports.http | Trivy HTTP service port | 8080 |
| trivy.service.ports.https | Trivy HTTPS service port | 8443 |
trivy.networkPolicy.enabled |
Specifies whether a NetworkPolicy should be created | true |
trivy.networkPolicy.allowExternal |
Don't require server label for connections | true |
trivy.networkPolicy.allowExternalEgress |
Allow the pod to access any range of port and all destinations. | true |
trivy.networkPolicy.extraIngress |
Add extra ingress rules to the NetworkPolicy | [] |
trivy.networkPolicy.extraEgress |
Add extra ingress rules to the NetworkPolicy | [] |
trivy.networkPolicy.ingressNSMatchLabels |
Labels to match to allow traffic from other namespaces | {} |
trivy.networkPolicy.ingressNSPodMatchLabels |
Pod labels to match to allow traffic from other namespaces | {} |
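As an illustration of the PDB and NetworkPolicy parameters above, a minimal values sketch that keeps one Trivy pod available during disruptions and restricts ingress to a single namespace (the `monitoring` namespace label is a hypothetical example):

```yaml
trivy:
  pdb:
    create: true
    maxUnavailable: 1
  networkPolicy:
    enabled: true
    # Require matching labels instead of allowing all connections
    allowExternal: false
    # Only allow ingress from namespaces carrying this label (placeholder)
    ingressNSMatchLabels:
      kubernetes.io/metadata.name: monitoring
```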
Harbor Exporter Parameters
| Name | Description | Value |
|---|---|---|
| `exporter.image.registry` | Harbor Exporter image registry | `REGISTRY_NAME` |
| `exporter.image.repository` | Harbor Exporter image repository | `REPOSITORY_NAME/harbor-exporter` |
| `exporter.image.digest` | Harbor Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `exporter.image.pullPolicy` | Harbor Exporter image pull policy | `IfNotPresent` |
| `exporter.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `exporter.image.debug` | Specify if debug logs should be enabled | `false` |
| `exporter.command` | Override default container command (useful when using custom images) | `[]` |
| `exporter.args` | Override default container args (useful when using custom images) | `[]` |
| `exporter.extraEnvVars` | Array containing extra env vars | `[]` |
| `exporter.extraEnvVarsCM` | ConfigMap containing extra env vars | `""` |
| `exporter.extraEnvVarsSecret` | Secret containing extra env vars (in case of sensitive data) | `""` |
| `exporter.containerPorts.metrics` | Harbor Exporter HTTP container port | `8001` |
| `exporter.replicaCount` | The replica count | `1` |
| `exporter.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `exporter.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `exporter.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `exporter.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `exporter.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `exporter.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `exporter.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `exporter.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `exporter.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `exporter.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `exporter.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `exporter.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `exporter.startupProbe.enabled` | Enable startupProbe on Harbor Exporter containers | `false` |
| `exporter.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `exporter.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `exporter.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `exporter.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `exporter.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `exporter.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `exporter.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `exporter.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `exporter.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if exporter.resources is set (exporter.resources is recommended for production). | `nano` |
| `exporter.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `exporter.podSecurityContext.enabled` | Enabled Exporter pods' Security Context | `true` |
| `exporter.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `exporter.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `exporter.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `exporter.podSecurityContext.fsGroup` | Set Exporter pod's Security Context fsGroup | `1001` |
| `exporter.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `exporter.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `exporter.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `exporter.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `1001` |
| `exporter.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `exporter.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `exporter.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `true` |
| `exporter.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `exporter.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `exporter.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `exporter.updateStrategy.type` | The update strategy for deployments with persistent volumes: RollingUpdate or Recreate. Set it as Recreate when RWM for volumes isn't supported | `RollingUpdate` |
| `exporter.lifecycleHooks` | LifecycleHook to set additional configuration at startup, e.g. LDAP settings via REST API. Evaluated as a template | `{}` |
| `exporter.hostAliases` | Exporter pods host aliases | `[]` |
| `exporter.podLabels` | Add additional labels to the pod (evaluated as a template) | `{}` |
| `exporter.podAnnotations` | Annotations to add to the exporter pod | `{}` |
| `exporter.podAffinityPreset` | Harbor Exporter Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | `""` |
| `exporter.podAntiAffinityPreset` | Harbor Exporter Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | `soft` |
| `exporter.nodeAffinityPreset.type` | Harbor Exporter Node affinity preset type. Ignored if exporter.affinity is set. Allowed values: soft or hard | `""` |
| `exporter.nodeAffinityPreset.key` | Harbor Exporter Node label key to match. Ignored if exporter.affinity is set. | `""` |
| `exporter.nodeAffinityPreset.values` | Harbor Exporter Node label values to match. Ignored if exporter.affinity is set. | `[]` |
| `exporter.affinity` | Harbor Exporter Affinity for pod assignment | `{}` |
| `exporter.priorityClassName` | Exporter pods Priority Class Name | `""` |
| `exporter.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `exporter.nodeSelector` | Harbor Exporter Node labels for pod assignment | `{}` |
| `exporter.tolerations` | Harbor Exporter Tolerations for pod assignment | `[]` |
| `exporter.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `[]` |
| `exporter.initContainers` | Add additional init containers to the pod (evaluated as a template) | `[]` |
| `exporter.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `exporter.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `""` |
| `exporter.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `exporter.pdb.minAvailable` and `exporter.pdb.maxUnavailable` are empty. | `""` |
| `exporter.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the exporter pods | `[]` |
| `exporter.extraVolumes` | Optionally specify extra list of additional volumes for the exporter pods | `[]` |
| `exporter.sidecars` | Attach additional containers to the pod (evaluated as a template) | `[]` |
| `exporter.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `exporter.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `exporter.serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
| `exporter.serviceAccount.automountServiceAccountToken` | Allows auto mount of ServiceAccountToken on the serviceAccount created | `false` |
| `exporter.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `exporter.service.ports.metrics` | Exporter HTTP service port | `8001` |
| `exporter.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created | `true` |
| `exporter.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `exporter.networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations. | `true` |
| `exporter.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `exporter.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `exporter.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `exporter.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
PostgreSQL Parameters
| Name | Description | Value |
|---|---|---|
| `postgresql.enabled` | Switch to enable or disable the PostgreSQL helm chart | `true` |
| `postgresql.auth.enablePostgresUser` | Assign a password to the "postgres" admin user. Otherwise, remote access will be blocked for this user | `true` |
| `postgresql.auth.postgresPassword` | Password for the "postgres" admin user | `not-secure-database-password` |
| `postgresql.auth.existingSecret` | Name of existing secret to use for PostgreSQL credentials | `""` |
| `postgresql.architecture` | PostgreSQL architecture (standalone or replication) | `standalone` |
| `postgresql.primary.extendedConfiguration` | Extended PostgreSQL Primary configuration (appended to main or default configuration) | `max_connections = 1024` |
| `postgresql.primary.initdb.scripts` | Initdb scripts to create Harbor databases | `{}` |
| `postgresql.primary.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if primary.resources is set (primary.resources is recommended for production). | `nano` |
| `postgresql.primary.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `externalDatabase.host` | Database host | `localhost` |
| `externalDatabase.port` | Database port number | `5432` |
| `externalDatabase.user` | Non-root username for Harbor | `bn_harbor` |
| `externalDatabase.password` | Password for the non-root username for Harbor | `""` |
| `externalDatabase.sslmode` | External database ssl mode | `disable` |
| `externalDatabase.coreDatabase` | External database name for core | `""` |
| `externalDatabase.existingSecret` | The name of an existing secret with database credentials | `""` |
| `externalDatabase.existingSecretPasswordKey` | Password key on the existing secret | `db-password` |
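As an illustration of the external database parameters above, a minimal values sketch that disables the bundled PostgreSQL and points Harbor at an existing server. The hostname, database name and secret names are placeholders, not defaults:

```yaml
postgresql:
  enabled: false
externalDatabase:
  host: postgres.example.com   # placeholder
  port: 5432
  user: bn_harbor
  sslmode: require
  coreDatabase: registry       # placeholder
  # Read the password from an existing secret instead of setting it inline
  existingSecret: harbor-db-credentials
  existingSecretPasswordKey: db-password
```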
Redis® parameters
| Name | Description | Value |
|---|---|---|
| `redis.enabled` | Switch to enable or disable the Redis® helm chart | `true` |
| `redis.tls.enabled` | Enable Redis TLS traffic | `false` |
| `redis.tls.authClients` | Require Redis clients to authenticate. Mutual TLS is not supported by Harbor. | `false` |
| `redis.tls.autoGenerated` | Enable autogenerated Redis TLS certificates | `true` |
| `redis.tls.existingSecret` | The name of the existing secret that contains the Redis TLS certificates | `""` |
| `redis.tls.certFilename` | Name of key in existing secret for the Redis TLS certificate | `""` |
| `redis.tls.certKeyFilename` | Name of key in existing secret for the Redis TLS certificate key | `""` |
| `redis.tls.certCAFilename` | Name of key in existing secret for the Redis CA certificate | `""` |
| `redis.auth.enabled` | Enable password authentication | `false` |
| `redis.auth.password` | Redis® password | `""` |
| `redis.auth.existingSecret` | The name of an existing secret with Redis® credentials | `""` |
| `redis.architecture` | Redis® architecture. Allowed values: standalone or replication | `standalone` |
| `redis.sentinel.enabled` | Use Redis® Sentinel on Redis® pods. | `false` |
| `redis.sentinel.masterSet` | Master set name | `mymaster` |
| `redis.sentinel.service.ports.sentinel` | Redis® service port for Redis® Sentinel | `26379` |
| `redis.master.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if master.resources is set (master.resources is recommended for production). | `nano` |
| `redis.master.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `externalRedis.host` | Redis® host | `localhost` |
| `externalRedis.port` | Redis® port number | `6379` |
| `externalRedis.password` | Redis® password | `""` |
| `externalRedis.coreDatabaseIndex` | Index for core database | `0` |
| `externalRedis.jobserviceDatabaseIndex` | Index for jobservice database | `1` |
| `externalRedis.registryDatabaseIndex` | Index for registry database | `2` |
| `externalRedis.trivyAdapterDatabaseIndex` | Index for trivy adapter database | `5` |
| `externalRedis.tls.enabled` | Enable Redis TLS traffic | `false` |
| `externalRedis.tls.existingSecret` | The name of the existing secret that contains the Redis TLS certificates | `""` |
| `externalRedis.tls.certCAFilename` | Name of key in existing secret for the Redis CA certificate | `""` |
| `externalRedis.sentinel.enabled` | If external Redis® with Sentinel is used, set it to true | `false` |
| `externalRedis.sentinel.masterSet` | Name of sentinel masterSet if sentinel is used | `mymaster` |
| `externalRedis.sentinel.hosts` | Sentinel hosts and ports in the format | `""` |
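As an illustration of the external Redis® parameters above, a minimal values sketch that disables the bundled Redis® and uses an existing server, keeping the default per-component database indexes (the hostname is a placeholder):

```yaml
redis:
  enabled: false
externalRedis:
  host: redis.example.com   # placeholder
  port: 6379
  # Separate logical databases for each Harbor component
  coreDatabaseIndex: 0
  jobserviceDatabaseIndex: 1
  registryDatabaseIndex: 2
  trivyAdapterDatabaseIndex: 5
```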
Harbor metrics parameters
| Name | Description | Value |
|---|---|---|
| `metrics.enabled` | Whether or not to enable metrics for different | `false` |
| `metrics.path` | Path where metrics are exposed | `/metrics` |
| `metrics.serviceMonitor.enabled` | if true, creates a Prometheus Operator ServiceMonitor (requires metrics.enabled to be true) | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.labels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus. | `""` |
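Putting the metrics parameters above together, a hedged values sketch that enables metrics and creates a Prometheus Operator ServiceMonitor. The `monitoring` namespace and the `release` label are placeholders that depend on how your Prometheus Operator discovers ServiceMonitors:

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    # Namespace where Prometheus is running (placeholder)
    namespace: monitoring
    interval: 30s
    # Labels your Prometheus instance is configured to select (placeholder)
    labels:
      release: kube-prometheus-stack
```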
Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,
helm install my-release \
--set adminPassword=password \
oci://REGISTRY_NAME/REPOSITORY_NAME/harbor
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The above command sets the Harbor administrator account password to password.
NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/harbor
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
Troubleshooting
Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.
Upgrading
To 26.0.0
This major updates the Redis® subchart to its newest major, 21.0.0, which updates Redis® from 7.4 to 8.0. Here you can find more information about the changes introduced in that version. No major issues are expected during the upgrade.
To 25.0.0
This version uses the PostgreSQL version provided by the bitnami/postgresql subchart, PostgreSQL 17.x, instead of overriding it with version 14.x.
To 24.1.0
This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. More details at GitHub issue.
To 24.0.1
This version updates the PostgreSQL version to 14.x. Follow the official instructions to upgrade to 14.x.
To 24.0.0
This major updates the PostgreSQL subchart to its newest major, 16.0.0, which uses PostgreSQL 17.x. Follow the official instructions to upgrade to 17.x.
To 23.0.0
This major updates the Redis® subchart to its newest major, 20.0.0. Here you can find more information about the changes introduced in that version.
To 22.0.0
This major version renames the following values:
- `nginx.serviceAccountName` was renamed as `nginx.serviceAccount.name`.
- `portal.serviceAccountName` was renamed as `portal.serviceAccount.name`.
- `core.serviceAccountName` was renamed as `core.serviceAccount.name`.
- `jobservice.serviceAccountName` was renamed as `jobservice.serviceAccount.name`.
- `registry.serviceAccountName` was renamed as `registry.serviceAccount.name`.
- `trivy.serviceAccountName` was renamed as `trivy.serviceAccount.name`.
- `exporter.serviceAccountName` was renamed as `exporter.serviceAccount.name`.
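Applied to a values file, the rename looks like this (using `core` as an example; `my-service-account` is a placeholder name):

```yaml
# Before (chart < 22.0.0)
# core:
#   serviceAccountName: my-service-account

# After (chart >= 22.0.0)
core:
  serviceAccount:
    name: my-service-account
```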
Additionally, this major version adds support for serviceAccount creation in the Helm chart.
To 21.0.0
This major bump changes the following security defaults:
- `runAsGroup` is changed from `0` to `1001`
- `readOnlyRootFilesystem` is set to `true`
- `resourcesPreset` is changed from `none` to the minimum size working in our test suites (NOTE: `resourcesPreset` is not meant for production usage, but `resources` adapted to your use case).
- `global.compatibility.openshift.adaptSecurityContext` is changed from `disabled` to `auto`.
This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
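For instance, a sketch of restoring the previous security defaults for a single component (shown here for `core` as an example; repeat for each component whose customization breaks):

```yaml
core:
  containerSecurityContext:
    # Previous defaults before 21.0.0
    runAsGroup: 0
    readOnlyRootFilesystem: false
```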
To 20.0.0
This major release bumps the PostgreSQL chart version to 14.x.x; no major issues are expected during the upgrade.
To 19.0.0
This major updates the PostgreSQL subchart to its newest major, 13.0.0. Here you can find more information about the changes introduced in that version.
To 18.0.0
This major version deprecates the Harbor Notary server and Harbor Notary signer components. These components were deprecated in Harbor 2.9.0; find more information in the Harbor wiki.
To 17.0.0
This major updates the Redis® subchart to its newest major, 18.0.0. Here you can find more information about the changes introduced in that version.
NOTE: Due to an error in our release process, Redis®' chart versions higher than or equal to 17.15.4 already use Redis® 7.2 by default.
To 16.0.0
This major updates the PostgreSQL subchart to its newest major, 12.0.0. Here you can find more information about the changes introduced in that version.
To 15.0.0
This major updates the Redis® subchart to its newest major, 17.0.0, which updates Redis® from version 6.2 to 7.0.
To 14.x.x
This version of the chart no longer supports Clair, which was deprecated by Harbor in version 2.2.0.
To 13.x.x
This major release updates the PostgreSQL subchart image to the newest 13.x version.
Upgrading Instructions
The upgrade process to 13.x.x from 12.x.x should be done by reusing the PVC(s) used to hold the data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is harbor):
NOTE: Please, create a backup of your database before running any of these actions.
- Select the namespace where Harbor is deployed:
HARBOR_NAMESPACE='default'
- Obtain the credentials and the names of the PVCs used to hold the data on your current release:
HARBOR_PASSWORD=$(kubectl get secret --namespace "${HARBOR_NAMESPACE:?}" harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)
POSTGRESQL_PASSWORD=$(kubectl get secret --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
POSTGRESQL_PVC=$(kubectl get pvc --namespace "${HARBOR_NAMESPACE:?}" -l app.kubernetes.io/instance=harbor,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
- Delete the PostgreSQL statefulset (notice the option `--cascade=orphan`) and secret:
kubectl delete statefulsets.apps --namespace "${HARBOR_NAMESPACE:?}" --cascade=orphan harbor-postgresql
kubectl delete secret --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql
- Upgrade your release using the same PostgreSQL version:
CURRENT_PG_VERSION=$(kubectl exec --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql-0 -c harbor-postgresql -- bash -c 'printenv APP_VERSION')
helm --namespace "${HARBOR_NAMESPACE:?}" upgrade harbor bitnami/harbor \
--set adminPassword="${HARBOR_PASSWORD:?}" \
--set postgresql.image.tag="${CURRENT_PG_VERSION:?}" \
--set postgresql.auth.postgresPassword="${POSTGRESQL_PASSWORD:?}" \
--set postgresql.primary.persistence.existingClaim="${POSTGRESQL_PVC:?}"
- Delete the existing PostgreSQL pod; the new statefulset will create a new one:
kubectl delete pod --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql-0
To 12.x.x
This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository. Additionally, it updates the PostgreSQL and Redis® subcharts to their newest majors, 11.x.x and 16.x.x respectively, which contain similar changes.
- `harborAdminPassword` was renamed to `adminPassword`
- `forcePassword` was deprecated
- Traffic exposure was completely redesigned:
  - The new parameter `exposureType` allows deciding whether to expose Harbor using Ingress or an NGINX proxy
  - `service.type` doesn't accept `Ingress` as a valid value anymore. To configure traffic exposure through Ingress, set `exposureType` to `ingress`
  - `service.tls` map has been renamed to `nginx.tls`
  - To configure TLS termination with Ingress, set the `ingress.core.tls` parameter
  - `ingress` map is completely redefined
- `xxxImage` parameters (e.g. `nginxImage`) have been renamed to `xxx.image` (e.g. `nginx.image`)
- `xxx.replicas` parameters (e.g. `nginx.replicas`) have been renamed to `xxx.replicaCount` (e.g. `nginx.replicaCount`)
- `persistence.persistentVolumeClaim.xxx.accessMode` parameters (e.g. `persistence.persistentVolumeClaim.registry.accessMode`) have been renamed to `persistence.persistentVolumeClaim.xxx.accessModes` (e.g. `persistence.persistentVolumeClaim.registry.accessModes`) and expect an array instead of a string
- `caBundleSecretName` was renamed to `internalTLS.caBundleSecret`
- `persistence.imageChartStorage.caBundleSecretName` was renamed to `persistence.imageChartStorage.caBundleSecret`
- `core.uaaSecretName` was renamed to `core.uaaSecret`
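Under the redesigned exposure parameters, Ingress-based exposure is configured roughly as follows. This is a sketch: the hostname is a placeholder, and the exact fields under `ingress.core` should be checked against the chart's values.yaml:

```yaml
# Expose Harbor through an Ingress instead of the NGINX proxy service
exposureType: ingress
ingress:
  core:
    hostname: harbor.example.com   # placeholder
    # TLS termination at the Ingress (see the ingress.core.tls parameter)
    tls: true
```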
How to upgrade to version 12.0.0
To upgrade to 12.x.x from 11.x, reuse the PVC(s) used to hold the data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is harbor):
NOTE: Please, create a backup of your database before running any of those actions.
- Select the namespace where Harbor is deployed:
HARBOR_NAMESPACE='default'
- Obtain the credentials and the names of the PVCs used to hold the data on your current release:
HARBOR_PASSWORD=$(kubectl get secret --namespace "${HARBOR_NAMESPACE:?}" harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)
POSTGRESQL_PASSWORD=$(kubectl get secret --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
POSTGRESQL_PVC=$(kubectl get pvc --namespace "${HARBOR_NAMESPACE:?}" -l app.kubernetes.io/instance=harbor,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
- Delete the PostgreSQL statefulset (notice the option `--cascade=orphan`) and secret:
kubectl delete statefulsets.apps --namespace "${HARBOR_NAMESPACE:?}" --cascade=orphan harbor-postgresql
kubectl delete secret --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql
- Upgrade your release using the same PostgreSQL version:
CURRENT_PG_VERSION=$(kubectl exec --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql-0 -c harbor-postgresql -- bash -c 'printenv BITNAMI_IMAGE_VERSION')
helm --namespace "${HARBOR_NAMESPACE:?}" upgrade harbor bitnami/harbor \
--set adminPassword="${HARBOR_PASSWORD:?}" \
--set postgresql.image.tag="${CURRENT_PG_VERSION:?}" \
--set postgresql.auth.postgresPassword="${POSTGRESQL_PASSWORD:?}" \
--set postgresql.primary.persistence.existingClaim="${POSTGRESQL_PVC:?}"
- Delete the existing PostgreSQL pod; the new statefulset will create a new one:
kubectl delete pod --namespace "${HARBOR_NAMESPACE:?}" harbor-postgresql-0
NOTE: the instructions above reuse the same PostgreSQL version you were using in your chart release. Otherwise, you will find an error such as the one below when upgrading, since the new chart major version also bumps the application version. To work around this issue you need to upgrade the database; please refer to the official PostgreSQL documentation for more information.
$ kubectl --namespace "${HARBOR_NAMESPACE:?}" logs harbor-postgresql-0 --container harbor-postgresql
...
postgresql 08:10:14.72 INFO  ==> ** Starting PostgreSQL **
2022-02-01 08:10:14.734 GMT [1] FATAL:  database files are incompatible with server
2022-02-01 08:10:14.734 GMT [1] DETAIL:  The data directory was initialized by PostgreSQL version 11, which is not compatible with this version 14.1.
To 11.0.0
This major updates the Redis® subchart to its newest major, 15.0.0. Here you can find more information about the specific changes.
To 10.0.0
This major updates the Redis® subchart to its newest major, 14.0.0, which contains breaking changes. For more information on this subchart's major version and the steps needed to migrate your data from your previous release, please refer to the Redis® upgrade notes.
To 9.7.0
This new version of the chart bumps the version of Harbor to 2.2.0, which deprecates the built-in Clair. If you still want to use Clair, set clair.enabled to true; the Clair scanner and the Harbor adapter will then be deployed. Follow these steps to add it as an additional interrogation service for Harbor.
Please note that Clair might be fully removed from this chart in future updates.
To 9.0.0
On November 13, 2020, Helm v2 support formally ended. This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). Here you can find more information about the `apiVersion` field.
- Move dependency information from the requirements.yaml to the Chart.yaml.
- After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock.
- The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all Bitnami Helm Charts.
- This chart depends on PostgreSQL 10 instead of PostgreSQL 9. Apart from the same changes that are described in this section, there are also other major changes because the master/slave nomenclature was replaced by primary/readReplica. Here you can find more information about the changes introduced.
Considerations when upgrading to this version
- Upgrading to this version using Helm v2 is not supported, as this version does not support Helm v2 anymore.
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3.
How to upgrade to version 9.0.0
To upgrade to 9.0.0 from 8.x, reuse the PVC(s) used to hold the data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is harbor and the release namespace default):
NOTE: Please, create a backup of your database before running any of those actions.
- Obtain the credentials and the names of the PVCs used to hold the data on your current release:
export HARBOR_PASSWORD=$(kubectl get secret --namespace default harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode)
export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default harbor-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=harbor,app.kubernetes.io/name=postgresql,role=master -o jsonpath="{.items[0].metadata.name}")
- Delete the PostgreSQL statefulset (notice the option --cascade=false)
kubectl delete statefulsets.apps --cascade=false harbor-postgresql
- Upgrade your release using the same PostgreSQL version:
export CURRENT_PG_VERSION=$(kubectl exec --namespace default harbor-postgresql-0 -- bash -c 'printenv BITNAMI_IMAGE_VERSION')
helm upgrade harbor bitnami/harbor \
--set harborAdminPassword=$HARBOR_PASSWORD \
--set postgresql.image.tag=$CURRENT_PG_VERSION \
--set postgresql.postgresqlPassword=$POSTGRESQL_PASSWORD \
--set postgresql.persistence.existingClaim=$POSTGRESQL_PVC
- Delete the existing PostgreSQL pod; the new statefulset will create a new one:
kubectl delete pod harbor-postgresql-0
License
Copyright © 2025 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.