From eb80a7eadd32e55caaa9aa3b1b7741e02f2648c7 Mon Sep 17 00:00:00 2001 From: Carlos Rodriguez Hernandez Date: Wed, 23 Oct 2019 11:59:42 +0000 Subject: [PATCH] Adapt README in charts (I) --- bitnami/airflow/Chart.yaml | 2 +- bitnami/airflow/README.md | 53 ++++++------- bitnami/apache/Chart.yaml | 2 +- bitnami/apache/README.md | 39 +++++----- bitnami/cassandra/Chart.yaml | 2 +- bitnami/cassandra/README.md | 80 +++++++++---------- bitnami/consul/Chart.yaml | 2 +- bitnami/consul/README.md | 127 +++++++++++-------------------- bitnami/elasticsearch/Chart.yaml | 2 +- bitnami/elasticsearch/README.md | 40 ++++------ bitnami/etcd/Chart.yaml | 2 +- bitnami/etcd/README.md | 118 +++++++++++----------------- bitnami/grafana/Chart.yaml | 2 +- bitnami/grafana/README.md | 71 +++++++++-------- bitnami/harbor/Chart.yaml | 2 +- bitnami/harbor/README.md | 121 +++++++++++++---------------- 16 files changed, 282 insertions(+), 383 deletions(-) diff --git a/bitnami/airflow/Chart.yaml b/bitnami/airflow/Chart.yaml index 7e1746c897..ed9e9430ed 100644 --- a/bitnami/airflow/Chart.yaml +++ b/bitnami/airflow/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: airflow -version: 4.0.1 +version: 4.0.2 appVersion: 1.10.5 description: Apache Airflow is a platform to programmatically author, schedule and monitor workflows. keywords: diff --git a/bitnami/airflow/README.md b/bitnami/airflow/README.md index 6080a50603..cae36cd492 100644 --- a/bitnami/airflow/README.md +++ b/bitnami/airflow/README.md @@ -42,7 +42,7 @@ $ helm delete my-release The command removes all the Kubernetes components associated with the chart and deletes the release. -## Configuration +## Parameters The following table lists the configurable parameters of the Airflow chart and their default values.
@@ -170,13 +170,17 @@ $ helm install --name my-release -f values.yaml bitnami/airflow > **Tip**: You can use the default [values.yaml](values.yaml) +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + ### Production configuration -This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. - -```console -$ helm install --name my-release -f ./values-production.yaml bitnami/airflow -``` +This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one. - URL used to access to airflow web ui: ```diff @@ -202,46 +206,39 @@ $ helm install --name my-release -f ./values-production.yaml bitnami/airflow + ingress.enabled: true ``` -### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) - -It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. - -Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. - -## Persistence - -The Bitnami Airflow chart relies on the PostgreSQL chart persistence. This means that Airflow does not persist anything. 
- -## Generate a Fernet key +### Generate a Fernet key A Fernet key is required in order to encrypt passwords within connections. The Fernet key must be a base64-encoded 32-byte key. Learn how to generate one [here](https://bcb.github.io/airflow/fernet-key) -## Load DAG files +### Load DAG files There are three different ways to load your custom DAG files into the Airflow chart. All of them are compatible so you can use more than one at the same time. -### Option 1: Load locally from the `files` folder +#### Option 1: Load locally from the `files` folder -If you plan to deploy the chart from your filesystem, you can copy your DAG files inside the `files/dags` directory. A config map will be created with those files and it will be mounted in all airflow nodes.. +If you plan to deploy the chart from your filesystem, you can copy your DAG files inside the `files/dags` directory. A config map will be created with those files and it will be mounted in all airflow nodes. -### Option 2: Specify an existing config map +#### Option 2: Specify an existing config map -You can manually create a config map containing all your DAG files and then pass the name when deploying Airflow chart. For that, you can pass the option `--set airflow.dagsConfigMap`. +You can manually create a config map containing all your DAG files and then pass the name when deploying the Airflow chart. For that, you can pass the option `airflow.dagsConfigMap`. -### Option 3: Get your DAG files from a git repository +#### Option 3: Get your DAG files from a git repository You can store all your DAG files on a GitHub repository and then clone them to the Airflow pods with an initContainer. The repository will be periodically updated using a sidecar container.
In order to do that, you can deploy airflow with the following options: ```console -helm install --name my-release bitnami/airflow \ - --set airflow.cloneDagFilesFromGit.enabled=true \ - --set airflow.cloneDagFilesFromGit.repository=https://github.com/USERNAME/REPOSITORY \ - --set airflow.cloneDagFilesFromGit.branch=master - --set airflow.cloneDagFilesFromGit.interval=60 +airflow.cloneDagFilesFromGit.enabled=true +airflow.cloneDagFilesFromGit.repository=https://github.com/USERNAME/REPOSITORY +airflow.cloneDagFilesFromGit.branch=master +airflow.cloneDagFilesFromGit.interval=60 ``` +## Persistence + +The Bitnami Airflow chart relies on the PostgreSQL chart persistence. This means that Airflow does not persist anything. + ## Notable changes ### 1.0.0 diff --git a/bitnami/apache/Chart.yaml b/bitnami/apache/Chart.yaml index 857f6fa659..da6f43a614 100644 --- a/bitnami/apache/Chart.yaml +++ b/bitnami/apache/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: apache -version: 7.2.0 +version: 7.2.1 appVersion: 2.4.41 description: Chart for Apache HTTP Server keywords: diff --git a/bitnami/apache/README.md b/bitnami/apache/README.md index 247b5e2d3c..ce4ed61aed 100644 --- a/bitnami/apache/README.md +++ b/bitnami/apache/README.md @@ -47,7 +47,7 @@ $ helm delete my-release The command removes all the Kubernetes components associated with the chart and deletes the release. -## Configuration +## Parameters The following tables lists the configurable parameters of the Apache chart and their default values. 
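For reference, the `airflow.cloneDagFilesFromGit.*` parameters listed above can be assembled into a single Helm 2 invocation. A minimal sketch, printed rather than executed so it does not need a running cluster; `my-release` and the GitHub URL are placeholders, exactly as in the parameter list:

```shell
# Compose the helm command from the DAG-from-git parameters described above.
# "my-release" and USERNAME/REPOSITORY are placeholders, not chart defaults.
CMD="helm install --name my-release bitnami/airflow"
CMD="$CMD --set airflow.cloneDagFilesFromGit.enabled=true"
CMD="$CMD --set airflow.cloneDagFilesFromGit.repository=https://github.com/USERNAME/REPOSITORY"
CMD="$CMD --set airflow.cloneDagFilesFromGit.branch=master"
CMD="$CMD --set airflow.cloneDagFilesFromGit.interval=60"
# Printed only; run the printed command against a real cluster.
echo "$CMD"
```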
@@ -121,22 +121,7 @@ $ helm install --name my-release -f values.yaml bitnami/apache > **Tip**: You can use the default [values.yaml](values.yaml) -## Deploying your custom web application -The Apache chart allows you to deploy a custom web application using one of the following methods: - - - Cloning from a git repository: Set `cloneHtdocsFromGit.enabled` to `true` and set the repository and branch using the `cloneHtdocsFromGit.repository` and `cloneHtdocsFromGit.branch` parameters. A sidecar will also pull the latest changes in an interval set by `cloneHtdocsFromGit.interval`. - - Providing a ConfigMap: Set the `htdocsConfigMap` value to mount a ConfigMap in the Apache htdocs folder. - - Using an existing PVC: Set the `htdocsPVC` value to mount an PersistentVolumeClaim with the web application content. - -In the following example you can deploy a example web application using git: - -``` -helm install bitnami/apache --set cloneHtdocsFromGit.enabled=true --set cloneHtdocsFromGit.repository=https://github.com/mdn/beginner-html-site-styled.git --set cloneHtdocsFromGit.branch=master -``` - -To use your own `httpd.conf` file you can mount it using the `httpdConfConfigMap` parameter, which is the name of a Config Map with the contents of your `httpd.conf`. Additionaly, you can copy your `httpd.conf` to `/files/httpd.conf` in your current working directory to mount it to the container. - -You may also want to mount different virtual host configurations. This can be done using the `vhostsConfigMap` value. This is a pointer to a ConfigMap with the desired Apache virtual host configurations. You can also copy your virtual host configurations under the `files/vhosts/` directory in your current working directory to mount them as a Config Map to the container. 
+## Configuration and installation details ### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) @@ -144,6 +129,26 @@ It is strongly recommended to use immutable tags in a production environment. Th Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. +### Deploying your custom web application + +The Apache chart allows you to deploy a custom web application using one of the following methods: + + - Cloning from a git repository: Set `cloneHtdocsFromGit.enabled` to `true` and set the repository and branch using the `cloneHtdocsFromGit.repository` and `cloneHtdocsFromGit.branch` parameters. A sidecar will also pull the latest changes in an interval set by `cloneHtdocsFromGit.interval`. + - Providing a ConfigMap: Set the `htdocsConfigMap` value to mount a ConfigMap in the Apache htdocs folder. + - Using an existing PVC: Set the `htdocsPVC` value to mount a PersistentVolumeClaim with the web application content. + +You can deploy an example web application from a git repository by deploying the chart with the following parameters: + +```console +cloneHtdocsFromGit.enabled=true +cloneHtdocsFromGit.repository=https://github.com/mdn/beginner-html-site-styled.git +cloneHtdocsFromGit.branch=master +``` + +To use your own `httpd.conf` file you can mount it using the `httpdConfConfigMap` parameter, which is the name of a Config Map with the contents of your `httpd.conf`. Additionally, you can copy your `httpd.conf` to `/files/httpd.conf` in your current working directory to mount it to the container. + +You may also want to mount different virtual host configurations. This can be done using the `vhostsConfigMap` value. This is a pointer to a ConfigMap with the desired Apache virtual host configurations.
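The `cloneHtdocsFromGit` parameters above can likewise be combined into one command. A sketch, printed rather than executed (the release is left unnamed, as in the original example):

```shell
# Compose the helm command from the cloneHtdocsFromGit parameters above.
CMD="helm install bitnami/apache"
CMD="$CMD --set cloneHtdocsFromGit.enabled=true"
CMD="$CMD --set cloneHtdocsFromGit.repository=https://github.com/mdn/beginner-html-site-styled.git"
CMD="$CMD --set cloneHtdocsFromGit.branch=master"
# Printed only; run the printed command against a real cluster.
echo "$CMD"
```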
You can also copy your virtual host configurations under the `files/vhosts/` directory in your current working directory to mount them as a Config Map to the container. + ## Notable changes ### 7.0.0 diff --git a/bitnami/cassandra/Chart.yaml b/bitnami/cassandra/Chart.yaml index 7c6dbd63b9..f9e0dc22c5 100644 --- a/bitnami/cassandra/Chart.yaml +++ b/bitnami/cassandra/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: cassandra -version: 4.1.3 +version: 4.1.4 appVersion: 3.11.4 description: Apache Cassandra is a free and open-source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients. icon: https://bitnami.com/assets/stacks/cassandra/img/cassandra-stack-220x234.png diff --git a/bitnami/cassandra/README.md b/bitnami/cassandra/README.md index 21abf9ae24..1bbbb62d6c 100644 --- a/bitnami/cassandra/README.md +++ b/bitnami/cassandra/README.md @@ -44,7 +44,7 @@ $ helm delete my-release The command removes all the Kubernetes components associated with the chart and deletes the release. -## Configuration +## Parameters The following tables lists the configurable parameters of the cassandra chart and their default values. @@ -154,13 +154,17 @@ $ helm install --name my-release -f values.yaml bitnami/cassandra > **Tip**: You can use the default [values.yaml](values.yaml) +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. 
+ +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + ### Production configuration -This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. - -```console -$ helm install --name my-release -f ./values-production.yaml bitnami/cassandra -``` +This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one. - Number of Cassandra and seed nodes: ```diff @@ -194,11 +198,34 @@ $ helm install --name my-release -f ./values-production.yaml bitnami/cassandra + metrics.enabled: true ``` -### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) +### Enable TLS for Cassandra -It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. +You can enable TLS between client and server and between nodes. In order to do so, you need to set the following values: -Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + * For internode cluster encryption, set `cluster.internodeEncryption` to a value different from `none`. Available values are `all`, `dc` or `rack`. + * For client-server encryption, set `cluster.clientEncryption` to `true`. + +In addition to this, you **must** create a secret containing the *keystore* and *truststore* certificates and their corresponding protection passwords. Then, set the `tlsEncryptionSecretName` when deploying the chart.
+ +You can create the secret (named for example `cassandra-tls`) using `--from-file=./keystore`, `--from-file=./truststore`, `--from-literal=keystore-password=PUT_YOUR_KEYSTORE_PASSWORD` and `--from-literal=truststore-password=PUT_YOUR_TRUSTSTORE_PASSWORD` options, assuming you have your certificates in your working directory (replace the PUT_YOUR_KEYSTORE_PASSWORD and PUT_YOUR_TRUSTSTORE_PASSWORD placeholders). To deploy Cassandra with TLS you can use these parameters: + +```console +cluster.internodeEncryption=all +cluster.clientEncryption=true +tlsEncryptionSecretName=cassandra-tls +``` + +### Initializing the database + +The [Bitnami cassandra](https://github.com/bitnami/bitnami-docker-cassandra) image allows having initialization scripts mounted in `/docker-entrypoint.initdb`. This is done in the chart by adding files in the `files/docker-entrypoint-initdb.d` folder (in order to do so, clone this chart) or by setting the `initDBConfigMap` value with a `ConfigMap` (named, for example, `init-db`) that includes the necessary `sh` or `cql` scripts: + +```console +initDBConfigMap=init-db +``` + +### Using a custom Cassandra image + +This chart uses the [Bitnami cassandra](https://github.com/bitnami/bitnami-docker-cassandra) image by default. In case you want to use a different image, you can redefine the container entrypoint by setting the `entrypoint` and `cmd` values. ## Persistence @@ -216,41 +243,6 @@ As an alternative, this chart supports using an initContainer to change the owne You can enable this initContainer by setting `volumePermissions.enabled` to `true`. -## Enable TLS for Cassandra - -You can enable TLS between client and server and between nodes. In order to do so, you need to set the following values: - - * For internode cluster encryption, set `cluster.internodeEncryption` to a value different from `none`. Available values are `all`, `dc` or `rack`. - * For client-server encryption, set `cluster.clientEncryption` to true.
- -In addition to this, you **must** create a secret containing the *keystore* and *truststore* certificates and their corresponding protection passwords. Then, set the `tlsEncryptionSecretName` when deploying the chart. - -You can create the secret with this command assuming you have your certificates in your working directory (replace the PUT_YOUR_KEYSTORE_PASSWORD and PUT_YOUR_TRUSTSTORE_PASSWORD placeholders): - -```console -kubectl create secret generic casssandra-tls --from-file=./keystore --from-file=./truststore --from-literal=keystore-password=PUT_YOUR_KEYSTORE_PASSWORD --from-literal=truststore-password=PUT_YOUR_TRUSTSTORE_PASSWORD -``` - -As an example of Cassandra installed with TLS you can use this command: - -```console -helm install --name my-release bitnami/cassandra --set cluster.internodeEncryption=all \ - --set cluster.clientEncryption=true --set tlsEncryptionSecretName=cassandra-tls \ -``` - -## Initializing the database - -The [Bitnami cassandra](https://github.com/bitnami/bitnami-docker-cassandra) image allows having initialization scripts mounted in `/docker-entrypoint.initdb`. This is done in the chart by adding files in the `files/docker-entrypoint-initdb.d` folder (in order to do so, clone this chart) or by setting the `initDBConfigMap` value with a `ConfigMap` that includes the necessary `sh` or `cql` scripts: - -```bash -kubectl create configmap init-db --from-file=path/to/scripts -helm install bitnami/cassandra --set initDBConfigMap=init-db -``` - -## Using a custom Cassandra image - -This chart uses the [Bitnami cassandra](https://github.com/bitnami/bitnami-docker-cassandra) image by default. In case you want to use a different image, you can redefine the container entrypoint by setting the `entrypoint` and `cmd` values. 
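Put together, the Cassandra secret creation and TLS parameters described above might look like the following sketch. The commands are only printed, since they need a running cluster and real keystore/truststore files; the passwords are the chart's own placeholders:

```shell
# Secret holding the keystore/truststore files and their passwords, as
# described in the Cassandra TLS section above (placeholder passwords).
SECRET_CMD="kubectl create secret generic cassandra-tls \
  --from-file=./keystore --from-file=./truststore \
  --from-literal=keystore-password=PUT_YOUR_KEYSTORE_PASSWORD \
  --from-literal=truststore-password=PUT_YOUR_TRUSTSTORE_PASSWORD"
# Deployment with internode and client-server encryption enabled.
HELM_CMD="helm install --name my-release bitnami/cassandra \
  --set cluster.internodeEncryption=all \
  --set cluster.clientEncryption=true \
  --set tlsEncryptionSecretName=cassandra-tls"
printf '%s\n%s\n' "$SECRET_CMD" "$HELM_CMD"
```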
- ## Upgrade ### 4.0.0 diff --git a/bitnami/consul/Chart.yaml b/bitnami/consul/Chart.yaml index 5a6e4f2a06..3ec6f47aa5 100644 --- a/bitnami/consul/Chart.yaml +++ b/bitnami/consul/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: consul -version: 6.0.4 +version: 6.0.5 appVersion: 1.6.1 description: Highly available and distributed service discovery and key-value store designed with support for the modern data center to make distributed systems and configuration easy. home: https://www.consul.io/ diff --git a/bitnami/consul/README.md b/bitnami/consul/README.md index 756b4dc783..006e1986f0 100644 --- a/bitnami/consul/README.md +++ b/bitnami/consul/README.md @@ -47,7 +47,7 @@ The command removes all the Kubernetes components associated with the chart and $ helm delete --purge my-release ``` -## Configuration +## Parameters The following tables lists the configurable parameters of the HashiCorp Consul chart and their default values. @@ -145,19 +145,7 @@ $ helm install --name my-release -f values.yaml bitnami/consul > **Tip**: You can use the default [values.yaml](values.yaml) -### Production configuration - -This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. - -```console -$ helm install --name my-release -f ./values-production.yaml bitnami/consul -``` - -- Start a side-car prometheus exporter: -```diff -- metrics.enabled: false -+ metrics.enabled: true -``` +## Configuration and installation details ### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) @@ -165,65 +153,41 @@ It is strongly recommended to use immutable tags in a production environment. Th Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. 
-## Persistence +### Production configuration -The [Bitnami HashiCorp Consul](https://github.com/bitnami/bitnami-docker-consul) image stores the HashiCorp Consul data at the `/bitnami` path of the container. +This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one. -Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. -See the [Configuration](#configuration) section to configure the PVC or to disable persistence. +- Start a side-car prometheus exporter: +```diff +- metrics.enabled: false ++ metrics.enabled: true +``` -### Adjust permissions of persistent volume mountpoint +### Ingress -As the image run as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it. - -By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. -As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination. - -You can enable this initContainer by setting `volumePermissions.enabled` to `true`. - -## Ingress - -This chart provides support for ingress resources. If you have an -ingress controller installed on your cluster, such as [nginx-ingress](https://kubeapps.com/charts/stable/nginx-ingress) -or [traefik](https://kubeapps.com/charts/stable/traefik) you can utilize -the ingress controller to service your HashiCorp Consul UI application. +This chart provides support for ingress resources. 
If you have an ingress controller installed on your cluster, such as [nginx-ingress](https://kubeapps.com/charts/stable/nginx-ingress) or [traefik](https://kubeapps.com/charts/stable/traefik), you can utilize the ingress controller to serve your HashiCorp Consul UI application. To enable ingress integration, please set `ingress.enabled` to `true`. -### Hosts -Most likely you will only want to have one hostname that maps to this -HashiCorp Consul installation, however it is possible to have more than one -host. To facilitate this, the `ingress.hosts` object is an array. +#### Hosts +Most likely you will only want to have one hostname that maps to this HashiCorp Consul installation, however it is possible to have more than one host. To facilitate this, the `ingress.hosts` object is an array. -For each item, please indicate a `name`, `tls`, `tlsSecret`, and any -`annotations` that you may want the ingress controller to know about. +For each item, please indicate a `name`, `tls`, `tlsSecret`, and any `annotations` that you may want the ingress controller to know about. -Indicating TLS will cause HashiCorp Consul to generate HTTPS urls, and -HashiCorp Consul will be connected to at port 443. The actual secret that -`tlsSecret` references does not have to be generated by this chart. -However, please note that if TLS is enabled, the ingress record will not -work until this secret exists. +Indicating TLS will cause HashiCorp Consul to generate HTTPS URLs, and HashiCorp Consul will be connected to at port 443. The actual secret that `tlsSecret` references does not have to be generated by this chart. However, please note that if TLS is enabled, the ingress record will not work until this secret exists. -For annotations, please see [this document](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md).
-Not all annotations are supported by all ingress controllers, but this -document does a good job of indicating which annotation is supported by -many popular ingress controllers. +For annotations, please see [this document](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md). Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotation is supported by many popular ingress controllers. ### TLS Secrets -This chart will facilitate the creation of TLS secrets for use with the -ingress controller, however this is not required. There are three -common use cases: +This chart will facilitate the creation of TLS secrets for use with the ingress controller, however this is not required. There are three common use cases: * helm generates / manages certificate secrets * user generates / manages certificates separately -* an additional tool (like [kube-lego](https://kubeapps.com/charts/stable/kube-lego)) -manages the secrets for the application +* an additional tool (like [kube-lego](https://kubeapps.com/charts/stable/kube-lego)) manages the secrets for the application -In the first two cases, one will need a certificate and a key. We would -expect them to look like this: +In the first two cases, one will need a certificate and a key. We would expect them to look like this: -* certificate files should look like (and there can be more than one -certificate if there is a certificate chain) +* certificate files should look like (and there can be more than one certificate if there is a certificate chain) ``` -----BEGIN CERTIFICATE----- @@ -232,40 +196,25 @@ MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7 -----END CERTIFICATE----- ``` + * keys should look like: + ``` -----BEGIN RSA PRIVATE KEY----- MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4 ... 
wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc= -----END RSA PRIVATE KEY----- -```` - -If you are going to use helm to manage the certificates, please copy -these values into the `certificate` and `key` values for a given -`ingress.secrets` entry. - -If you are going to manage TLS secrets outside of helm, please -know that you can create a TLS secret by doing the following: - -``` -kubectl create secret tls consul.local-tls --key /path/to/key.key --cert /path/to/cert.crt ``` -Please see [this example](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls) -for more information. +If you are going to use helm to manage the certificates, please copy these values into the `certificate` and `key` values for a given `ingress.secrets` entry. -## Enable TLS encryption between servers +Please see [this example](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls) for more information. + +#### Enable TLS encryption between servers You must manually create a secret containing your PEM-encoded certificate authority, your PEM-encoded certificate, and your PEM-encoded private key. -``` -kubectl create secret generic consul-tls-encryption \ - --from-file=ca.pem \ - --from-file=consul.pem \ - --from-file=consul-key.pem -``` - > Take into account that you will need to create a config map with the proper configuration. If the secret is specified, the chart will locate those files at `/opt/bitnami/consul/certs/`, so you will want to use the below snippet to configure HashiCorp Consul TLS encryption in your config map: @@ -279,16 +228,28 @@ If the secret is specified, the chart will locate those files at `/opt/bitnami/c "verify_server_hostname": true, -After creating the secret, you can install the helm chart specyfing the secret name: +After creating the secret, you can install the helm chart specifying the secret name using `tlsEncryptionSecretName=consul-tls-encryption`.
-``` -helm install bitnami/consul --set tlsEncryptionSecretName=consul-tls-encryption -``` - -## Metrics +### Metrics The chart can optionally start a metrics exporter endpoint on port `9107` for [prometheus](https://prometheus.io). The data exposed by the endpoint is intended to be consumed by a prometheus chart deployed within the cluster and as such the endpoint is not exposed outside the cluster. +## Persistence + +The [Bitnami HashiCorp Consul](https://github.com/bitnami/bitnami-docker-consul) image stores the HashiCorp Consul data at the `/bitnami` path of the container. + +Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. +See the [Parameters](#parameters) section to configure the PVC or to disable persistence. + +### Adjust permissions of persistent volume mountpoint + +As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it. + +By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. +As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination. + +You can enable this initContainer by setting `volumePermissions.enabled` to `true`.
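The Consul server-encryption secret and the `tlsEncryptionSecretName` parameter described above can be sketched together as follows. The commands are only printed, since they require PEM files issued by your own CA:

```shell
# Secret with the CA, certificate and private key expected by the chart
# at /opt/bitnami/consul/certs/ (file names as described above).
SECRET_CMD="kubectl create secret generic consul-tls-encryption \
  --from-file=ca.pem --from-file=consul.pem --from-file=consul-key.pem"
# Install referencing that secret.
HELM_CMD="helm install bitnami/consul \
  --set tlsEncryptionSecretName=consul-tls-encryption"
printf '%s\n%s\n' "$SECRET_CMD" "$HELM_CMD"
```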
+ ## Upgrading ### To 6.0.0 diff --git a/bitnami/elasticsearch/Chart.yaml b/bitnami/elasticsearch/Chart.yaml index aa2505e5fa..bbba59dadb 100644 --- a/bitnami/elasticsearch/Chart.yaml +++ b/bitnami/elasticsearch/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: elasticsearch -version: 6.3.11 +version: 6.3.12 appVersion: 7.4.0 description: A highly scalable open-source full-text search and analytics engine keywords: diff --git a/bitnami/elasticsearch/README.md b/bitnami/elasticsearch/README.md index b6cfd0cadb..31cc3fbed3 100644 --- a/bitnami/elasticsearch/README.md +++ b/bitnami/elasticsearch/README.md @@ -48,7 +48,7 @@ The command removes all the Kubernetes components associated with the chart and $ helm delete --purge my-release ``` -## Configuration +## Parameters The following table lists the configurable parameters of the Elasticsearch chart and their default values. @@ -210,13 +210,17 @@ $ helm install --name my-release -f values.yaml bitnami/elasticsearch > **Tip**: You can use the default [values.yaml](values.yaml). +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + ### Production configuration -This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. 
- -```console -$ helm install --name my-release -f ./values-production.yaml bitnami/elasticsearch -``` +This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one. - Desired number of Elasticsearch master-eligible nodes: ```diff @@ -370,11 +374,14 @@ $ helm install --name my-release -f ./values-production.yaml bitnami/elasticsear + metrics.enabled: true ``` -### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) +### Troubleshooting -It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. +Currently, Elasticsearch requires some changes in the kernel of the host machine to work as expected. If those values are not set in the underlying operating system, the ES containers fail to boot with ERROR messages. More information about these requirements can be found in the links below: -Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. +- [File Descriptor requirements](https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html) +- [Virtual memory requirements](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html) + +You can use a **privileged** initContainer (using the `sysctlImage.enabled=true` parameter) to change those settings in the Kernel. ## Persistence @@ -391,21 +398,6 @@ As an alternative, this chart supports using an initContainer to change the owne You can enable this initContainer by setting `volumePermissions.enabled` to `true`. -## Troubleshooting - -Currently, Elasticsearch requires some changes in the kernel of the host machine to work as expected. 
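The `sysctlImage.enabled=true` parameter mentioned in the Elasticsearch troubleshooting section translates into an invocation like the following sketch (printed only; the release name is a placeholder):

```shell
# Enable the privileged initContainer that applies the required kernel
# settings (vm.max_map_count, file descriptors) before Elasticsearch boots.
CMD="helm install --name my-release bitnami/elasticsearch \
  --set sysctlImage.enabled=true"
echo "$CMD"
```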
If those values are not set in the underlying operating system, the ES containers fail to boot with ERROR messages. More information about these requirements can be found in the links below: - -- [File Descriptor requirements](https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html) -- [Virtual memory requirements](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html) - -You can use a **privileged** initContainer to changes those settings in the Kernel by enabling the `sysctlImage.enabled`: - -```console -$ helm install --name my-release \ - --set sysctlImage.enabled=true \ - bitnami/elasticsearch -``` - ## Upgrading ### To 3.0.0 diff --git a/bitnami/etcd/Chart.yaml b/bitnami/etcd/Chart.yaml index aeeb6e2f62..b114eee2d1 100644 --- a/bitnami/etcd/Chart.yaml +++ b/bitnami/etcd/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: etcd -version: 4.3.10 +version: 4.3.11 appVersion: 3.4.2 description: etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines keywords: diff --git a/bitnami/etcd/README.md b/bitnami/etcd/README.md index e8e65cbd24..3fa07d1f42 100644 --- a/bitnami/etcd/README.md +++ b/bitnami/etcd/README.md @@ -44,7 +44,7 @@ $ helm delete my-release The command removes all the Kubernetes components associated with the chart and deletes the release. -## Configuration +## Parameters The following tables lists the configurable parameters of the etcd chart and their default values. @@ -151,13 +151,17 @@ $ helm install --name my-release -f values.yaml bitnami/etcd > **Tip**: You can use the default [values.yaml](values.yaml) +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. 
This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. + ### Production configuration and horizontal scaling -This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. - -```console -$ helm install --name my-release -f ./values-production.yaml bitnami/etcd -``` +This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one. - Number of etcd nodes: ```diff @@ -195,27 +199,13 @@ $ helm install --name my-release -f ./values-production.yaml bitnami/etcd + metrics.enabled: true ``` -To horizontally scale this chart once it has been deployed: - -```console -$ helm upgrade my-release bitnami/etcd \ - -f ./values-production.yaml \ - --set statefulset.replicaCount=5 -``` - -> **Note**: Scaling the statefulset with `kubectl scale ...` command is highly discouraged. Use `helm upgrade ...` for horizontal scaling so you ensure all the environment variables used to configure the ectd cluster are properly updated. - -### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) - -It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. - -Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. 
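For example, the production values file can be applied at install time (the release name `my-release` is illustrative):

```console
$ helm install --name my-release -f ./values-production.yaml bitnami/etcd
```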
+To horizontally scale this chart once it has been deployed, you can upgrade the deployment using a new value for the `statefulset.replicaCount` parameter.

### Using custom configuration

In order to use custom configuration parameters, two options are available:

-- Using environment variables: etcd allows setting environment variables that map to configuration settings. In order to set extra environment variables, use the `envVarsConfigMap` value to point to a ConfigMap that contains them. Example:
+- Using environment variables: etcd allows setting environment variables that map to configuration settings. In order to set extra environment variables, use the `envVarsConfigMap` value to point to a ConfigMap (shown in the example below) that contains them. This ConfigMap can be created with `kubectl create -f /tmp/configurationEnvVars.yaml`. Then deploy the chart with the `envVarsConfigMap=etcd-env-vars` parameter:

```console
$ cat << EOF > /tmp/configurationEnvVars.yaml
@@ -228,70 +218,67 @@ data:
  ETCD_AUTO_COMPACTION_RETENTION: "0"
  ETCD_HEARTBEAT_INTERVAL: "150"
EOF
-
-$ kubectl create -f /tmp/configurationEnvVars.yaml
-$ helm install bitnami/etcd --set envVarsConfigMap=etcd-env-vars
```

-- Using a custom `etcd.conf.yml`: The etcd chart allows mounting a custom etcd.conf.yml file using the `configFileConfigMap` value. Example:
+- Using a custom `etcd.conf.yml`: The etcd chart allows mounting a custom `etcd.conf.yml` file as a ConfigMap (named, for example, `etcd-conf`) and deploying it using the `configFileConfigMap=etcd-conf` parameter.
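For example, assuming a local `etcd.conf.yml` file, the ConfigMap can be created and referenced at install time:

```console
$ kubectl create configmap etcd-conf --from-file=etcd.conf.yml
$ helm install bitnami/etcd --set configFileConfigMap=etcd-conf
```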
+
+### Enable security for etcd
+
+#### Configure RBAC
+
+In order to enable [Role-based access control for etcd](https://coreos.com/etcd/docs/latest/op-guide/authentication.html) you can set the following parameters:

```console
-$ kubectl create configmap etcd-conf --from-file=etcd.conf.yml
-$ helm install bitnami/etcd --set configFileConfigMap=etcd-conf
-```
-
-## Enable security for etcd
-
-### Configure RBAC
-
-In order to enable [Role-based access control for etcd](https://coreos.com/etcd/docs/latest/op-guide/authentication.html) you can run the following command:
-
-```console
-$ helm install --name my-release --set auth.rbac.enabled --set auth.rbac.rootPassword=YOUR-PASSWORD bitnami/etcd
+auth.rbac.enabled=true
+auth.rbac.rootPassword=YOUR-PASSWORD
```

The previous parameters will deploy etcd, creating a `root` user with its associated `root` role with access to everything. The rest of the users will use the `guest` role and won't have permissions to do anything.

-### Configure certificated for peer communication
+#### Configure certificates for peer communication

In order to enable secure transport between peer nodes deploy the helm chart with these options:

```console
-$ helm install --name my-release --set auth.peer.secureTransport=true --set auth.peer.useAutoTLS=true bitnami/etcd
+auth.peer.secureTransport=true
+auth.peer.useAutoTLS=true
```

-### Configure certificates for client communication
+#### Configure certificates for client communication

In order to enable secure transport between client and server you have to create a secret containing the cert and key files and the CA used to sign those client certificates.
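For instance, such a secret could be created as follows (the file paths are placeholders):

```console
$ kubectl create secret generic etcd-client-certs \
    --from-file=ca.crt=path/to/ca.crt \
    --from-file=cert.pem=path/to/cert.pem \
    --from-file=key.pem=path/to/key.pem
```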
-You can create that secret with this command:
+You can create that secret and deploy the helm chart with these options:

```console
-$ kubectl create secret generic etcd-client-certs --from-file=ca.crt=path/to/ca.crt --from-file=cert.pem=path/to/cert.pem --from-file=key.pem=path/to/key.pem
-```
-
-Once the secret is created, you can deploy the helm chart with these options:
-
-```console
-$ helm install --name my-release --set auth.client.secureTransport=true --set auth.client.enableAuthentication=true --set auth.client.existingSecret=etcd-client-certs bitnami/etcd
+auth.client.secureTransport=true
+auth.client.enableAuthentication=true
+auth.client.existingSecret=etcd-client-certs
```

> Ref: [etcd security model](https://coreos.com/etcd/docs/latest/op-guide/security.html)
>
> Ref: [Generate self-signed certificates for etcd](https://coreos.com/os/docs/latest/generate-self-signed-certificates.html)

-## Persistence and Disaster recovery
+### Disaster recovery

-### Persistence
-
-The [Bitnami etcd](https://github.com/bitnami/bitnami-docker-etcd) image stores the etcd data at the `/bitnami/etcd` path of the container. Persistent Volume Claims are used to keep the data across statefulsets. This is known to work in GCE, AWS, and Minikube. To enable persistence, deploy the helm chart with these options:
+You can enable auto disaster recovery by periodically snapshotting the keyspace. If the cluster permanently loses more than (N-1)/2 members, it tries to recover the cluster from a previous snapshot. Enable it using the following parameters:

```console
-$ helm install --name my-release bitnami/etcd \
-    --set persistence.enable=true \
-    --set persistence.size=8Gi
+persistence.enable=true
+disasterRecovery.enabled=true
+disasterRecovery.pvc.size=2Gi
+disasterRecovery.pvc.storageClassName=nfs
```

+> **Note**: Disaster recovery feature requires using volumes with ReadWriteMany access mode.
For instance, you can use the stable/nfs-server-provisioner chart to provide NFS PVCs.
+
+## Persistence
+
+The [Bitnami etcd](https://github.com/bitnami/bitnami-docker-etcd) image stores the etcd data at the `/bitnami/etcd` path of the container. Persistent Volume Claims are used to keep the data across statefulsets.
+
+By default, the chart mounts a [Persistent Volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) at this location. The volume is created using dynamic volume provisioning. See the [Parameters](#parameters) section to configure the PVC.
+
### Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.
For instance, you can use the stable/nfs-server-provisioner chart to provide NFS PVCs: - -```console -$ helm install --name nfs-server-provisioner stable/nfs-server-provisioner \ - --set persistence.enabled=true --set persistence.size=10Gi -``` - ## Upgrading ### To 3.0.0 diff --git a/bitnami/grafana/Chart.yaml b/bitnami/grafana/Chart.yaml index 7d735d8d6e..8289582adc 100644 --- a/bitnami/grafana/Chart.yaml +++ b/bitnami/grafana/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v1 name: grafana -version: 1.0.0 +version: 1.0.1 appVersion: 6.4.3 description: Grafana is an open source, feature rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus and InfluxDB. keywords: diff --git a/bitnami/grafana/README.md b/bitnami/grafana/README.md index 92909e2d53..6a96f393f4 100644 --- a/bitnami/grafana/README.md +++ b/bitnami/grafana/README.md @@ -45,7 +45,7 @@ $ helm delete my-release The command removes all the Kubernetes components associated with the chart and deletes the release. Use the option `--purge` to delete all persistent volumes too. -## Configuration +## Parameters The following tables lists the configurable parameters of the grafana chart and their default values. @@ -140,6 +140,29 @@ $ helm install --name my-release -f values.yaml bitnami/grafana > **Tip**: You can use the default [values.yaml](values.yaml) +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. 
+
+### Production configuration
+
+This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one.
+
+```console
+$ helm install --name my-release -f ./values-production.yaml bitnami/grafana
+```
+
+- Enable ingress controller
+
+```diff
+- ingress.enabled: false
++ ingress.enabled: true
+```
+
### Using custom configuration

Grafana supports multiple configuration files. Using Kubernetes you can mount a file using a ConfigMap. For example, to mount a custom `grafana.ini` file or `custom.ini` file you can create a ConfigMap like the following:

@@ -154,36 +177,20 @@ data:
  # Raw text of the file

-And now you need to pass the ConfigMap name, to the corresponding parameter:
-
-```console
-$ helm install bitnami/grafana --set config.useGrafanaIniFile=true,config.grafanaIniConfigMap=myconfig
-```
+And now you need to pass the ConfigMap name to the corresponding parameters: `config.useGrafanaIniFile=true` and `config.grafanaIniConfigMap=myconfig`.

To provide dashboards at deployment time, Grafana needs a dashboards provider and the dashboards themselves. A default provider is created if enabled, or you can mount your own provider using a ConfigMap, but keep in mind that the path to the dashboard folder must be `/opt/bitnami/grafana/dashboards`.

1. To create a dashboard, you need a datasource for it. The datasources must be created mounting a secret with all the datasource files in it. In this case, it is not a ConfigMap because the datasource could contain sensitive information.
2. To load the dashboards themselves you need to create a ConfigMap for each one containing the `json` file that defines the dashboard and set the array with the ConfigMap names into the `dashboardsConfigMaps` parameter.

Note the difference between the datasources and the dashboards creation.
For the datasources we can use just one secret with all of the files, while for the dashboards we need one ConfigMap per file.

-For example, after the creation of the dashboard and datasource ConfigMap in the same way that the explained for the `grafana.ini` file, execute the following to deploy Grafana with custom dashboards:
+For example, after the creation of the dashboard and datasource ConfigMap in the same way as explained for the `grafana.ini` file, use the following parameters to deploy Grafana with custom dashboards:

```console
-$ helm install bitnami/grafana --set "dashboardsProvider.enabled=true,datasources.secretName=datasource-secret,dashboardsConfigMaps[0].configMapName=mydashboard,dashboardsConfigMaps[0].fileName=mydashboard.json"
-```
-
-### Production configuration
-
-This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`.
-
-```console
-$ helm install --name my-release -f ./values-production.yaml bitnami/grafana
-```
-
-- Enable ingress controller
-
-```diff
-- ingress.enabled: false
-+ ingress.enabled: true
+dashboardsProvider.enabled=true
+datasources.secretName=datasource-secret
+dashboardsConfigMaps[0].configMapName=mydashboard
+dashboardsConfigMaps[0].fileName=mydashboard.json
```

### LDAP configuration

@@ -242,16 +249,12 @@ data:
      email = "email"
```

-Create the ConfigMap into the cluster:
+Create the ConfigMap in the cluster and deploy the Grafana Helm Chart using the existing ConfigMap and the following parameters:

```console
-$ kubectl create -f configmap.yaml
-```
-
-And deploy the Grafana Helm Chart using the existing ConfigMap:
-
-```console
-$ helm install bitnami/grafana --set ldap.enabled=true,ldap.configMapName=ldap-config,ldap.allowSignUp=true
+ldap.enabled=true
+ldap.configMapName=ldap-config
+ldap.allowSignUp=true
```

### Supporting HA (High Availability)

@@ -261,12 +264,6 @@ To configure the external database
provide a configuration file containing the [
More information about Grafana HA [here](https://grafana.com/docs/tutorials/ha_setup/)

-### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
-
-It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
-
-Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
-
## Persistence

The [Bitnami Grafana](https://github.com/bitnami/bitnami-docker-grafana) image stores the Grafana data and configurations at the `/opt/bitnami/grafana/data` path of the container.
diff --git a/bitnami/harbor/Chart.yaml b/bitnami/harbor/Chart.yaml
index 2681d3032c..51d48dbaa3 100644
--- a/bitnami/harbor/Chart.yaml
+++ b/bitnami/harbor/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v1
name: harbor
-version: 2.6.12
+version: 2.6.13
appVersion: 1.9.1
description: Harbor is an open source trusted cloud native registry project that stores, signs, and scans content
keywords:
diff --git a/bitnami/harbor/README.md b/bitnami/harbor/README.md
index 07f9f9cb21..b48223273f 100644
--- a/bitnami/harbor/README.md
+++ b/bitnami/harbor/README.md
@@ -49,69 +49,7 @@ $ helm delete --purge my-release
Additionally, if `persistence.resourcePolicy` is set to `keep`, you should manually delete the PVCs.

-## Downloading the chart
-
-Download Harbor helm chart
-
-```bash
-$ git clone https://github.com/bitnami/charts
-```
-
-Change directory to Harbor code
-
-```bash
-$ cd charts/bitnami/harbor
-```
-
-## Configuration
-
-### Configure the way how to expose Harbor service:
-
-- **Ingress**: The ingress controller must be installed in the Kubernetes cluster.
-  **Notes:** if the TLS is disabled, the port must be included in the command when pulling/pushing images.
Refer to issue [#5291](https://github.com/goharbor/harbor/issues/5291) for the detail. -- **ClusterIP**: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. -- **NodePort**: Exposes the service on each Node’s IP at a static port (the NodePort). You’ll be able to contact the NodePort service, from outside the cluster, by requesting `NodeIP:NodePort`. -- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer. - -### Configure the external URL: - -The external URL for Harbor core service is used to: - -1. populate the docker/helm commands showed on portal -2. populate the token service URL returned to docker/notary client - -Format: `protocol://domain[:port]`. Usually: - -- if expose the service via `Ingress`, the `domain` should be the value of `service.ingress.hosts.core` -- if expose the service via `ClusterIP`, the `domain` should be the value of `service.clusterIP.name` -- if expose the service via `NodePort`, the `domain` should be the IP address of one Kubernetes node -- if expose the service via `LoadBalancer`, set the `domain` as your own domain name and add a CNAME record to map the domain name to the one you got from the cloud provider - -If Harbor is deployed behind the proxy, set it as the URL of proxy. - -### Configure data persistence: - -- **Disable**: The data does not survive the termination of a pod. -- **Persistent Volume Claim(default)**: A default `StorageClass` is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in the `storageClass` or set `existingClaim` if you have already existing persistent volumes to use. -- **External Storage(only for images and charts)**: For images and charts, the external storages are supported: `azure`, `gcs`, `s3` `swift` and `oss`. - -### Configure the secrets: - -- **Secret keys**: Secret keys are used for secure communication between components. 
Fill `core.secret`, `jobservice.secret` and `registry.secret` to configure. -- **Certificates**: Used for token encryption/decryption. Fill `core.secretName` to configure. - -Secrets and certificates must be setup to avoid changes on every Helm upgrade (see: [#107](https://github.com/goharbor/harbor-helm/issues/107)). - -### Adjust permissions of persistent volume mountpoint - -As the images run as non-root by default, it is necessary to adjust the ownership of the persistent volumes so that the containers can write data into it. - -By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. -As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination. - -You can enable this initContainer by setting `volumePermissions.enabled` to `true`. - -### Configure the deployment options: +## Parameters The following table lists the configurable parameters of the Harbor chart and the default values. They can be configured in `values.yaml` or set via `--set` flag during installation. @@ -359,9 +297,18 @@ Alternatively, a YAML file that specifies the values for the above parameters ca ```console $ helm install --name my-release -f values.yaml bitnami/harbor ``` + +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. + +Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist. 
+
### Production configuration

-This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`:
+This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one.

- The way how to expose the service: `Ingress`, `ClusterIP`, `NodePort` or `LoadBalancer`:

```diff
@@ -393,11 +340,51 @@ This chart includes a `values-production.yaml` file where you can find some para
+ postgresql.replication.enabled: true
```

-### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
+### Configure how to expose the Harbor service:

-It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
+- **Ingress**: The ingress controller must be installed in the Kubernetes cluster.
+  **Notes:** if TLS is disabled, the port must be included in the command when pulling/pushing images. Refer to issue [#5291](https://github.com/goharbor/harbor/issues/5291) for details.
+- **ClusterIP**: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the NodePort). You’ll be able to contact the NodePort service from outside the cluster by requesting `NodeIP:NodePort`.
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.

-Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
+### Configure the external URL:

+The external URL for the Harbor core service is used to:

+1.
populate the docker/helm commands shown on the portal
+2. populate the token service URL returned to the docker/notary client
+
+Format: `protocol://domain[:port]`. Usually:
+
+- if you expose the service via `Ingress`, the `domain` should be the value of `service.ingress.hosts.core`
+- if you expose the service via `ClusterIP`, the `domain` should be the value of `service.clusterIP.name`
+- if you expose the service via `NodePort`, the `domain` should be the IP address of one Kubernetes node
+- if you expose the service via `LoadBalancer`, set the `domain` as your own domain name and add a CNAME record to map the domain name to the one you got from the cloud provider
+
+If Harbor is deployed behind a proxy, set it as the URL of the proxy.
+
+### Configure data persistence:
+
+- **Disable**: The data does not survive the termination of a pod.
+- **Persistent Volume Claim (default)**: A default `StorageClass` is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in the `storageClass` or set `existingClaim` if you have already existing persistent volumes to use.
+- **External Storage (only for images and charts)**: For images and charts, the following external storage backends are supported: `azure`, `gcs`, `s3`, `swift` and `oss`.
+
+### Configure the secrets:
+
+- **Secret keys**: Secret keys are used for secure communication between components. Fill `core.secret`, `jobservice.secret` and `registry.secret` to configure.
+- **Certificates**: Used for token encryption/decryption. Fill `core.secretName` to configure.
+
+Secrets and certificates must be set up to avoid changes on every Helm upgrade (see: [#107](https://github.com/goharbor/harbor-helm/issues/107)).
+
+### Adjust permissions of persistent volume mountpoint
+
+As the images run as non-root by default, it is necessary to adjust the ownership of the persistent volumes so that the containers can write data into them.
+ +By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. +As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination. + +You can enable this initContainer by setting `volumePermissions.enabled` to `true`. ## Upgrade