Merge branch 'master' into replaceGit

This commit is contained in:
Carlos Rodríguez Hernández
2018-10-25 15:40:41 +02:00
committed by GitHub
80 changed files with 723 additions and 478 deletions

View File

@@ -29,6 +29,7 @@ $ helm search bitnami
- [Parse](https://github.com/helm/charts/tree/master/stable/parse)
- [Phabricator](https://github.com/helm/charts/tree/master/stable/phabricator)
- [phpBB](https://github.com/helm/charts/tree/master/stable/phpbb)
- [PostgreSQL](https://github.com/helm/charts/tree/master/stable/postgresql)
- [PrestaShop](https://github.com/helm/charts/tree/master/stable/prestashop)
- [RabbitMQ](https://github.com/helm/charts/tree/master/stable/rabbitmq)
- [Redis](https://github.com/helm/charts/tree/master/stable/redis)
@@ -51,7 +52,6 @@ $ helm search bitnami
- [MySQL](https://github.com/bitnami/charts/tree/master/bitnami/mysql)
- [nginx](https://github.com/bitnami/charts/tree/master/bitnami/nginx)
- [NodeJS](https://github.com/bitnami/charts/tree/master/bitnami/node)
- [PostgreSQL](https://github.com/bitnami/charts/tree/master/bitnami/postgresql)
- [TensorFlow Inception](https://github.com/bitnami/charts/tree/master/bitnami/tensorflow-inception)
- [Tomcat](https://github.com/bitnami/charts/tree/master/bitnami/tomcat)
- [WildFly](https://github.com/bitnami/charts/tree/master/bitnami/wildfly)

View File

@@ -1,5 +1,5 @@
name: elasticsearch
version: 4.1.2
version: 4.1.3
appVersion: 6.4.2
description: A highly scalable open-source full-text search and analytics engine
keywords:

View File

@@ -51,106 +51,107 @@ The following table lists the configurable parameters of the Elasticsearch chart
| Parameter | Description | Default |
|---------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `image.registry` | Elasticsearch image registry | `docker.io` |
| `image.repository` | Elasticsearch image repository | `bitnami/elasticsearch` |
| `image.tag` | Elasticsearch image tag | `{VERSION}` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `image.pullSecrets` | Specify image pull secrets | `nil` |
| `name` | Elasticsearch cluster name | `elastic` |
| `config` | Elasticsearch node custom configuration | `` |
| `master.name` | Master-eligible node pod name | `master` |
| `master.replicas` | Desired number of Elasticsearch master-eligible nodes | `2` |
| `master.heapSize` | Master-eligible node heap size | `128m` |
| `master.antiAffinity` | Master-eligible node pod anti-affinity policy | `soft` |
| `master.resources` | CPU/Memory resource requests/limits for master-eligible nodes pods | `requests: { cpu: "25m", memory: "256Mi" }` |
| `master.livenessProbe.enabled` | Enable/disable the liveness probe (master-eligible nodes pod) | `true` |
| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (master-eligible nodes pod) | `90` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe (master-eligible nodes pod) | `10` |
| `master.livenessProbe.timeoutSeconds` | When the probe times out (master-eligible nodes pod) | `5` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (master-eligible nodes pod) | `1` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `master.readinessProbe.enabled` | Enable/disable the readiness probe (master-eligible nodes pod) | `true` |
| `master.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (master-eligible nodes pod) | `90` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe (master-eligible nodes pod) | `10` |
| `master.readinessProbe.timeoutSeconds` | When the probe times out (master-eligible nodes pod) | `5` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (master-eligible nodes pod) | `1` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001`
| `discovery.name` | Discover node pod name | `discovery` |
| `coordinating.name` | Coordinating-only node pod name | `coordinating-only` |
| `coordinating.replicas` | Desired number of Elasticsearch coordinating-only nodes | `2` |
| `coordinating.heapSize` | Coordinating-only node heap size | `128m` |
| `coordinating.antiAffinity` | Coordinating-only node pod anti-affinity policy | `soft` |
| `coordinating.service.type` | Coordinating-only node kubernetes service type | `ClusterIP` |
| `coordinating.service.port` | Elasticsearch REST API port | `9200` |
| `coordinating.resources` | CPU/Memory resource requests/limits for coordinating-only nodes pods | `requests: { cpu: "25m", memory: "256Mi" }` |
| `coordinating.livenessProbe.enabled` | Enable/disable the liveness probe (coordinating-only nodes pod) | `true` |
| `coordinating.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (coordinating-only nodes pod) | `90` |
| `coordinating.livenessProbe.periodSeconds` | How often to perform the probe (coordinating-only nodes pod) | `10` |
| `coordinating.livenessProbe.timeoutSeconds` | When the probe times out (coordinating-only nodes pod) | `5` |
| `coordinating.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) | `1` |
| `coordinating.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `coordinating.readinessProbe.enabled` | Enable/disable the readiness probe (coordinating-only nodes pod) | `true` |
| `coordinating.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (coordinating-only nodes pod) | `90` |
| `coordinating.readinessProbe.periodSeconds` | How often to perform the probe (coordinating-only nodes pod) | `10` |
| `coordinating.readinessProbe.timeoutSeconds` | When the probe times out (coordinating-only nodes pod) | `5` |
| `coordinating.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) | `1` |
| `coordinating.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `data.name` | Data node pod name | `data` |
| `data.replicas`                                   | Desired number of Elasticsearch data nodes                                                                                | `3`                                                                  |
| `data.heapSize` | Data node heap size | `1024m` |
| `data.antiAffinity` | Data pod anti-affinity policy | `soft` |
| `data.resources` | CPU/Memory resource requests/limits for data nodes | `requests: { cpu: "25m", memory: "1152Mi" }` |
| `data.persistence.enabled` | Enable persistence using a `PersistentVolumeClaim` | `true` |
| `data.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `data.persistence.storageClass` | Persistent Volume Storage Class | `` |
| `data.persistence.accessModes` | Persistent Volume Access Modes | `[ReadWriteOnce]` |
| `data.persistence.size` | Persistent Volume Size | `8Gi` |
| `data.livenessProbe.enabled` | Enable/disable the liveness probe (data nodes pod) | `true` |
| `data.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (data nodes pod) | `90` |
| `data.livenessProbe.periodSeconds` | How often to perform the probe (data nodes pod) | `10` |
| `data.livenessProbe.timeoutSeconds` | When the probe times out (data nodes pod) | `5` |
| `data.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | `1` |
| `data.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `data.readinessProbe.enabled` | Enable/disable the readiness probe (data nodes pod) | `true` |
| `data.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (data nodes pod) | `90` |
| `data.readinessProbe.periodSeconds` | How often to perform the probe (data nodes pod) | `10` |
| `data.readinessProbe.timeoutSeconds` | When the probe times out (data nodes pod) | `5` |
| `data.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | `1` |
| `data.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `ingest.enabled` | Enable ingest nodes | `false` |
| `ingest.name` | Ingest node pod name | `ingest` |
| `ingest.replicas` | Desired number of Elasticsearch ingest nodes | `2` |
| `ingest.heapSize` | Ingest node heap size | `128m` |
| `ingest.antiAffinity` | Ingest node pod anti-affinity policy | `soft` |
| `ingest.resources` | CPU/Memory resource requests/limits for ingest nodes pods | `requests: { cpu: "25m", memory: "256Mi" }` |
| `ingest.livenessProbe.enabled` | Enable/disable the liveness probe (ingest nodes pod) | `true` |
| `ingest.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (ingest nodes pod) | `90` |
| `ingest.livenessProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.livenessProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `ingest.readinessProbe.enabled` | Enable/disable the readiness probe (ingest nodes pod) | `true` |
| `ingest.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (ingest nodes pod) | `90` |
| `ingest.readinessProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.readinessProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `metrics.enabled` | Enable prometheus exporter | `false` |
| `metrics.name` | Metrics pod name | `metrics` |
| `metrics.image.registry` | Metrics exporter image registry | `docker.io` |
| `metrics.image.repository` | Metrics exporter image repository | `bitnami/elasticsearch-exporter` |
| `metrics.image.tag` | Metrics exporter image tag | `latest` |
| `metrics.image.pullPolicy` | Metrics exporter image pull policy | `Always` |
| `metrics.service.type` | Metrics exporter endpoint service type | `ClusterIP` |
| `metrics.resources`                               | Metrics exporter resource requests/limits                                                                                 | `requests: { cpu: "25m" }`                                           |
| `sysctlImage.registry` | Kernel settings modifier image registry | `docker.io` |
| `sysctlImage.repository` | Kernel settings modifier image repository | `busybox` |
| `sysctlImage.tag` | Kernel settings modifier image tag | `latest` |
| `sysctlImage.pullPolicy` | Kernel settings modifier image pull policy | `Always` |
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `image.registry` | Elasticsearch image registry | `docker.io` |
| `image.repository` | Elasticsearch image repository | `bitnami/elasticsearch` |
| `image.tag` | Elasticsearch image tag | `{VERSION}` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `image.pullSecrets` | Specify image pull secrets | `nil` |
| `name` | Elasticsearch cluster name | `elastic` |
| `config` | Elasticsearch node custom configuration | `` |
| `master.name` | Master-eligible node pod name | `master` |
| `master.replicas` | Desired number of Elasticsearch master-eligible nodes | `2` |
| `master.heapSize` | Master-eligible node heap size | `128m` |
| `master.antiAffinity` | Master-eligible node pod anti-affinity policy | `soft` |
| `master.resources` | CPU/Memory resource requests/limits for master-eligible nodes pods | `requests: { cpu: "25m", memory: "256Mi" }` |
| `master.livenessProbe.enabled` | Enable/disable the liveness probe (master-eligible nodes pod) | `true` |
| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (master-eligible nodes pod) | `90` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe (master-eligible nodes pod) | `10` |
| `master.livenessProbe.timeoutSeconds` | When the probe times out (master-eligible nodes pod) | `5` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (master-eligible nodes pod) | `1` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `master.readinessProbe.enabled` | Enable/disable the readiness probe (master-eligible nodes pod) | `true` |
| `master.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (master-eligible nodes pod) | `90` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe (master-eligible nodes pod) | `10` |
| `master.readinessProbe.timeoutSeconds` | When the probe times out (master-eligible nodes pod) | `5` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (master-eligible nodes pod) | `1` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `discovery.name` | Discover node pod name | `discovery` |
| `coordinating.name` | Coordinating-only node pod name | `coordinating-only` |
| `coordinating.replicas` | Desired number of Elasticsearch coordinating-only nodes | `2` |
| `coordinating.heapSize` | Coordinating-only node heap size | `128m` |
| `coordinating.antiAffinity` | Coordinating-only node pod anti-affinity policy | `soft` |
| `coordinating.service.type` | Coordinating-only node kubernetes service type | `ClusterIP` |
| `coordinating.service.port` | Elasticsearch REST API port | `9200` |
| `coordinating.resources` | CPU/Memory resource requests/limits for coordinating-only nodes pods | `requests: { cpu: "25m", memory: "256Mi" }` |
| `coordinating.livenessProbe.enabled` | Enable/disable the liveness probe (coordinating-only nodes pod) | `true` |
| `coordinating.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (coordinating-only nodes pod) | `90` |
| `coordinating.livenessProbe.periodSeconds` | How often to perform the probe (coordinating-only nodes pod) | `10` |
| `coordinating.livenessProbe.timeoutSeconds` | When the probe times out (coordinating-only nodes pod) | `5` |
| `coordinating.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) | `1` |
| `coordinating.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `coordinating.readinessProbe.enabled` | Enable/disable the readiness probe (coordinating-only nodes pod) | `true` |
| `coordinating.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (coordinating-only nodes pod) | `90` |
| `coordinating.readinessProbe.periodSeconds` | How often to perform the probe (coordinating-only nodes pod) | `10` |
| `coordinating.readinessProbe.timeoutSeconds` | When the probe times out (coordinating-only nodes pod) | `5` |
| `coordinating.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) | `1` |
| `coordinating.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `data.name` | Data node pod name | `data` |
| `data.replicas`                                   | Desired number of Elasticsearch data nodes                                                                                | `3`                                                                  |
| `data.heapSize` | Data node heap size | `1024m` |
| `data.antiAffinity` | Data pod anti-affinity policy | `soft` |
| `data.resources` | CPU/Memory resource requests/limits for data nodes | `requests: { cpu: "25m", memory: "1152Mi" }` |
| `data.persistence.enabled` | Enable persistence using a `PersistentVolumeClaim` | `true` |
| `data.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `data.persistence.storageClass` | Persistent Volume Storage Class | `` |
| `data.persistence.accessModes` | Persistent Volume Access Modes | `[ReadWriteOnce]` |
| `data.persistence.size` | Persistent Volume Size | `8Gi` |
| `data.livenessProbe.enabled` | Enable/disable the liveness probe (data nodes pod) | `true` |
| `data.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (data nodes pod) | `90` |
| `data.livenessProbe.periodSeconds` | How often to perform the probe (data nodes pod) | `10` |
| `data.livenessProbe.timeoutSeconds` | When the probe times out (data nodes pod) | `5` |
| `data.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | `1` |
| `data.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `data.readinessProbe.enabled` | Enable/disable the readiness probe (data nodes pod) | `true` |
| `data.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (data nodes pod) | `90` |
| `data.readinessProbe.periodSeconds` | How often to perform the probe (data nodes pod) | `10` |
| `data.readinessProbe.timeoutSeconds` | When the probe times out (data nodes pod) | `5` |
| `data.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | `1` |
| `data.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `ingest.enabled` | Enable ingest nodes | `false` |
| `ingest.name` | Ingest node pod name | `ingest` |
| `ingest.replicas` | Desired number of Elasticsearch ingest nodes | `2` |
| `ingest.heapSize` | Ingest node heap size | `128m` |
| `ingest.antiAffinity` | Ingest node pod anti-affinity policy | `soft` |
| `ingest.resources` | CPU/Memory resource requests/limits for ingest nodes pods | `requests: { cpu: "25m", memory: "256Mi" }` |
| `ingest.livenessProbe.enabled` | Enable/disable the liveness probe (ingest nodes pod) | `true` |
| `ingest.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (ingest nodes pod) | `90` |
| `ingest.livenessProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.livenessProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `ingest.readinessProbe.enabled` | Enable/disable the readiness probe (ingest nodes pod) | `true` |
| `ingest.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (ingest nodes pod) | `90` |
| `ingest.readinessProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.readinessProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
| `metrics.enabled` | Enable prometheus exporter | `false` |
| `metrics.name` | Metrics pod name | `metrics` |
| `metrics.image.registry` | Metrics exporter image registry | `docker.io` |
| `metrics.image.repository` | Metrics exporter image repository | `bitnami/elasticsearch-exporter` |
| `metrics.image.tag` | Metrics exporter image tag | `latest` |
| `metrics.image.pullPolicy` | Metrics exporter image pull policy | `Always` |
| `metrics.service.type` | Metrics exporter endpoint service type | `ClusterIP` |
| `metrics.resources`                               | Metrics exporter resource requests/limits                                                                                 | `requests: { cpu: "25m" }`                                           |
| `sysctlImage.enabled` | Enable kernel settings modifier image | `false` |
| `sysctlImage.registry` | Kernel settings modifier image registry | `docker.io` |
| `sysctlImage.repository` | Kernel settings modifier image repository | `bitnami/minideb` |
| `sysctlImage.tag` | Kernel settings modifier image tag | `latest` |
| `sysctlImage.pullPolicy` | Kernel settings modifier image pull policy | `Always` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
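A minimal sketch of such an invocation (the release name and the parameter values are illustrative, not defaults recommended by the chart):

```console
$ helm install --name my-release \
  --set master.replicas=3,data.heapSize=2048m \
  bitnami/elasticsearch
```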
@@ -176,6 +177,18 @@ The [Bitnami Elasticsearch](https://github.com/bitnami/bitnami-docker-elasticsea
By default, the chart mounts a [Persistent Volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) at this location. The volume is created using dynamic volume provisioning. See the [Configuration](#configuration) section to configure the PVC.
## Troubleshooting
Currently, Elasticsearch 5 requires some changes to the kernel of the host machine to work as expected. If those values are not set in the underlying operating system, the Elasticsearch containers fail to boot with error messages. You can enable the provided init container to set those parameters:
```console
$ helm install --name my-release \
--set sysctlImage.enabled=true \
bitnami/elasticsearch
```
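Alternatively, the same flag can be kept in a values file passed with `-f` (a minimal sketch; the file name `values-sysctl.yaml` is illustrative):

```yaml
# values-sysctl.yaml (hypothetical file name)
# Enables the privileged init container that sets vm.max_map_count
# and fs.file-max on the host before Elasticsearch starts
sysctlImage:
  enabled: true
```

```console
$ helm install --name my-release -f values-sysctl.yaml bitnami/elasticsearch
```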
## Upgrading
### To 3.0.0

View File

@@ -13,7 +13,7 @@ spec:
matchLabels:
app: {{ template "elasticsearch.name" . }}
release: "{{ .Release.Name }}"
role: "coordinating-only"
role: "coordinating-only"
replicas: {{ .Values.coordinating.replicas }}
template:
metadata:
@@ -55,13 +55,16 @@ spec:
- name: {{ . }}
{{- end}}
{{- end }}
{{- if .Values.sysctlImage.enabled }}
## Image that performs the sysctl operation to modify Kernel settings (needed sometimes to avoid boot errors)
initContainers:
- name: sysctl
image: {{ template "sysctl.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
command: ['sh', '-c', 'install_packages systemd && sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536']
securityContext:
privileged: true
{{- end }}
containers:
- name: {{ template "elasticsearch.coordinating.fullname" . }}
{{- if .Values.securityContext.enabled }}

View File

@@ -51,13 +51,16 @@ spec:
release: {{ .Release.Name | quote }}
role: "data"
{{- end }}
{{- if .Values.sysctlImage.enabled }}
## Image that performs the sysctl operation to modify Kernel settings (needed sometimes to avoid boot errors)
initContainers:
- name: sysctl
image: {{ template "sysctl.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
command: ['sh', '-c', 'install_packages systemd && sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536']
securityContext:
privileged: true
{{- end }}
containers:
- name: {{ template "elasticsearch.data.fullname" . }}
image: {{ template "elasticsearch.image" . }}

View File

@@ -14,7 +14,7 @@ spec:
matchLabels:
app: {{ template "elasticsearch.name" . }}
release: "{{ .Release.Name }}"
role: "ingest"
role: "ingest"
replicas: {{ .Values.ingest.replicas }}
template:
metadata:
@@ -56,13 +56,16 @@ spec:
- name: {{ . }}
{{- end}}
{{- end }}
{{- if .Values.sysctlImage.enabled }}
## Image that performs the sysctl operation to modify Kernel settings (needed sometimes to avoid boot errors)
initContainers:
- name: sysctl
image: {{ template "sysctl.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
command: ['sh', '-c', 'install_packages systemd && sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536']
securityContext:
privileged: true
{{- end }}
containers:
- name: {{ template "elasticsearch.ingest.fullname" . }}
image: {{ template "elasticsearch.image" . }}

View File

@@ -13,7 +13,7 @@ spec:
matchLabels:
app: {{ template "elasticsearch.name" . }}
release: "{{ .Release.Name }}"
role: "master"
role: "master"
replicas: {{ .Values.master.replicas }}
template:
metadata:
@@ -56,13 +56,16 @@ spec:
- name: {{ . }}
{{- end}}
{{- end }}
{{- if .Values.sysctlImage.enabled }}
## Image that performs the sysctl operation to modify Kernel settings (needed sometimes to avoid boot errors)
initContainers:
- name: sysctl
image: {{ template "sysctl.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
command: ['sh', '-c', 'install_packages systemd && sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536']
securityContext:
privileged: true
{{- end }}
containers:
- name: {{ template "elasticsearch.master.fullname" . }}
image: {{ template "elasticsearch.image" . }}

View File

@@ -64,9 +64,11 @@ master:
failureThreshold: 5
## Image that performs the sysctl operation
##
sysctlImage:
enabled: false
registry: docker.io
repository: busybox
repository: bitnami/minideb
tag: latest
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'

View File

@@ -24,9 +24,11 @@ image:
# - myRegistryKeySecretName
## Image that performs the sysctl operation
##
sysctlImage:
enabled: false
registry: docker.io
repository: busybox
repository: bitnami/minideb
tag: latest
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'

View File

@@ -1,6 +1,6 @@
name: mean
version: 4.2.1
appVersion: 3.6.4
version: 4.2.2
appVersion: 4.6.2
description: MEAN is a free and open-source JavaScript software stack for building dynamic web sites and web applications. The MEAN stack is MongoDB, Express.js, Angular, and Node.js. Because all components of the MEAN stack support programs written in JavaScript, MEAN applications can be written in one language for both server-side and client-side execution environments.
keywords:
- node

View File

@@ -1,9 +1,9 @@
dependencies:
- name: mongodb
repository: https://kubernetes-charts.storage.googleapis.com/
version: 4.5.0
version: 4.6.2
- name: bitnami-common
repository: https://charts.bitnami.com/bitnami
version: 0.0.3
digest: sha256:e08b8d1bb8197aa8fdc27536aaa1de2e7de210515a451ebe94949a3db55264dd
generated: 2018-10-16T08:37:10.583517+02:00
generated: 2018-10-25T11:06:24.877576+02:00

View File

@@ -10,7 +10,7 @@
image:
registry: docker.io
repository: bitnami/node
tag: 9.11.1-prod
tag: 8.12.0-prod
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images

View File

@@ -1,5 +1,5 @@
name: nginx-ingress-controller
version: 2.1.1
version: 2.1.2
appVersion: 0.20.0
description: Chart for the nginx Ingress controller
keywords:

View File

@@ -237,8 +237,8 @@ extraVolumes: []
extraInitContainers: []
## Containers, which are run before the app containers are started.
# - name: init-myservice
# image: busybox
# command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
# image: bitnami/minideb
# command: ['sh', '-c', 'install_packages dnsutils && until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
stats:
enabled: false

View File

@@ -1,6 +1,6 @@
name: node
version: 6.2.1
appVersion: 10.7.0
version: 6.2.2
appVersion: 8.12.0
description: Event-driven I/O server-side JavaScript environment based on V8
keywords:
- node

View File

@@ -1,9 +1,9 @@
dependencies:
- name: mongodb
repository: https://kubernetes-charts.storage.googleapis.com/
version: 4.5.0
version: 4.6.2
- name: bitnami-common
repository: https://charts.bitnami.com/bitnami
version: 0.0.3
digest: sha256:e08b8d1bb8197aa8fdc27536aaa1de2e7de210515a451ebe94949a3db55264dd
generated: 2018-10-16T08:36:36.201735+02:00
generated: 2018-10-25T10:09:00.707768+02:00

View File

@@ -10,7 +10,7 @@
image:
registry: docker.io
repository: bitnami/node
tag: 10.7.0-prod
tag: 8.12.0-prod
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images

View File

@@ -1 +0,0 @@
.git

View File

@@ -1,156 +0,0 @@
# PostgreSQL
[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
## TL;DR;
```console
$ helm install bitnami/postgresql
```
## Introduction
This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
## Prerequisites
- Kubernetes 1.4+ with Beta APIs enabled
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install --name my-release bitnami/postgresql
```
The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the PostgreSQL chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------------|---------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `image.registry` | PostgreSQL image registry | `docker.io` |
| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
| `image.tag` | PostgreSQL Image tag | `{VERSION}` |
| `image.pullPolicy` | PostgreSQL image pull policy | `Always` |
| `image.pullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug values should be set | `false` |
| `replication.enabled`      | Enable replication                        | `false`                                                   |
| `replication.user`         | Replication user                          | `repl_user`                                               |
| `replication.password`     | Replication user password                 | `repl_password`                                           |
| `replication.slaveReplicas`| Number of slave replicas                  | `1`                                                       |
| `postgresqlUsername` | PostgreSQL admin user | `postgres` |
| `postgresqlPassword` | PostgreSQL admin password | _random 10 character alphanumeric string_ |
| `postgresqlDatabase` | PostgreSQL database | `nil` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.port` | PostgreSQL port | `5432` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
| `persistence.accessMode` | PVC Access Mode for PostgreSQL volume | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `livenessProbe.enabled` | Enable livenessProbe | `true` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
| `livenessProbe.periodSeconds` | How often to perform the probe | 10 |
| `livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6 |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `readinessProbe.enabled` | Enable readinessProbe | `true` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 |
| `readinessProbe.periodSeconds` | How often to perform the probe | 10 |
| `readinessProbe.timeoutSeconds` | When the probe times out | 5 |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6 |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `metrics.enabled` | Start a prometheus exporter | `false` |
| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
| `metrics.service.annotations` | Additional annotations for the metrics service | `{}` |
| `metrics.service.loadBalancerIP` | loadBalancerIP if the metrics service type is `LoadBalancer` | `nil` |
| `metrics.image.registry` | PostgreSQL Exporter image registry | `docker.io` |
| `metrics.image.repository` | PostgreSQL Exporter image name | `wrouesnel/postgres_exporter` |
| `metrics.image.tag` | PostgreSQL Exporter image tag | `{VERSION}` |
| `metrics.image.pullPolicy` | PostgreSQL Exporter image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install --name my-release \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
```
The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release -f values.yaml bitnami/postgresql
```
> **Tip**: You can use the default [values.yaml](values.yaml)
### postgresql.conf file as configMap
Instead of using specific variables for the PostgreSQL configuration, this Helm chart also lets you customize the whole configuration file.
Add your custom file as `files/postgresql.conf` in your working directory. This file will be mounted as a ConfigMap into the containers and used to configure the PostgreSQL server.
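For example, a minimal `files/postgresql.conf` could be created like this (the settings shown are illustrative assumptions, not tuned recommendations):

```shell
# Create a custom postgresql.conf in the chart's working directory.
# The values below are illustrative examples only.
mkdir -p files
cat > files/postgresql.conf <<'EOF'
max_connections = 200
shared_buffers = 256MB
EOF
```

Installing the chart from this working directory then mounts the file as described above.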
## Initialize a fresh instance
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
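For example, a hypothetical init script (the file name is chosen for illustration) could be added like this:

```shell
# Place an init script where the chart picks it up as a ConfigMap.
# 01-init.sql is a hypothetical name.
mkdir -p files/docker-entrypoint-initdb.d
cat > files/docker-entrypoint-initdb.d/01-init.sql <<'EOF'
-- Example only: create an application schema on first boot
CREATE SCHEMA IF NOT EXISTS app;
EOF
```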
## Production and horizontal scaling
The following repo contains the recommended production settings for the PostgreSQL server in an alternative [values file](values-production.yaml). Please read the comments in `values-production.yaml` carefully to set up your environment.
To horizontally scale this chart, first download the [values-production.yaml](values-production.yaml) file to your local folder, then:
```console
$ helm install --name my-release -f ./values-production.yaml bitnami/postgresql
$ kubectl scale statefulset my-postgresql-slave --replicas=3
```
## Persistence
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
See the [Configuration](#configuration) section to configure the PVC or to disable persistence.
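For example, to request a larger volume from a specific StorageClass (`standard` is a hypothetical class name; use one available in your cluster):

```console
$ helm install --name my-release \
    --set persistence.storageClass=standard,persistence.size=20Gi \
    bitnami/postgresql
```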
## Upgrading
### To 3.0.0
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
Use the workaround below to upgrade from versions prior to 3.0.0. The following example assumes that the release name is `postgresql`:
```console
$ kubectl delete statefulset postgresql --cascade=false
$ kubectl delete statefulset postgresql-slave --cascade=false
```

View File

@@ -1 +0,0 @@
Copy here your postgresql.conf file to use it as a config map.

View File

@@ -1,19 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
type: {{ .Values.service.type }}
ports:
- name: postgresql
port: 5432
targetPort: postgresql
selector:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: master

View File

@@ -1,5 +1,5 @@
name: tensorflow-inception
version: 2.0.0
version: 3.0.0
appVersion: 1.10.1
description: Open-source software library for serving machine learning models
keywords:

View File

@@ -63,6 +63,7 @@ The following tables lists the configurable parameters of the TensorFlow Incepti
| Parameter | Description | Default |
| ------------------------------- | -------------------------------------- | ---------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `replicaCount` | desired number of pods | `1` |
| `server.image.registry` | TensorFlow Serving image registry | `docker.io` |
| `server.image.repository` | TensorFlow Serving Image name | `bitnami/tensorflow-serving` |
| `server.image.tag` | TensorFlow Serving Image tag | `{VERSION}` |

View File

@@ -12,7 +12,7 @@ spec:
matchLabels:
app: {{ template "fullname" . }}
release: "{{ .Release.Name }}"
replicas: 1
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
@@ -26,6 +26,27 @@ spec:
- name: {{ . }}
{{- end}}
{{- end }}
initContainers:
- name: seed
image: "{{ template "tensorflow-inception.client.image" . }}"
imagePullPolicy: {{ .Values.client.image.pullPolicy | quote }}
command:
- "/bin/sh"
- "-c"
- |
if [ -f /seed/.initialized ];then
echo "Already initialized. Skipping"
else
curl -o /seed/inception-v3-2016-03-01.tar.gz http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz
cd /seed/ && tar -xzf inception-v3-2016-03-01.tar.gz
rm inception-v3-2016-03-01.tar.gz
inception_saved_model --checkpoint_dir=/seed/inception-v3 --output_dir=/seed/
rm -rf inception-v3
touch /seed/.initialized
fi
volumeMounts:
- name: seed
mountPath: /seed
containers:
- name: {{ template "fullname" . }}
image: "{{ template "tensorflow-inception.server.image" . }}"
@@ -45,5 +66,4 @@ spec:
mountPath: "/bitnami/model-data"
volumes:
- name: seed
persistentVolumeClaim:
claimName: {{ template "fullname" . }}-seed-inception
emptyDir: {}

View File

@@ -1,41 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "fullname" . }}-seed-inception
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
template:
metadata:
name: seed-inception
spec:
containers:
{{- if .Values.client.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.client.image.pullSecrets }}
- name: {{ . }}
{{- end}}
{{- end }}
- name: seed
image: "{{ template "tensorflow-inception.client.image" . }}"
imagePullPolicy: {{ .Values.client.image.pullPolicy | quote }}
command:
- "/bin/sh"
- "-c"
- |
curl -o /seed/inception-v3-2016-03-01.tar.gz http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz
cd /seed/ && tar -xzf inception-v3-2016-03-01.tar.gz
rm inception-v3-2016-03-01.tar.gz
inception_saved_model --checkpoint_dir=/seed/inception-v3 --output_dir=/seed/
rm -rf inception-v3
volumeMounts:
- name: seed
mountPath: /seed
restartPolicy: Never
volumes:
- name: seed
persistentVolumeClaim:
claimName: {{ template "fullname" . }}-seed-inception

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ template "fullname" . }}-seed-inception
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi

View File

@@ -4,6 +4,8 @@
# global:
# imageRegistry:
replicaCount: 1
## TensorFlow Serving server image version
## ref: https://hub.docker.com/r/bitnami/tensorflow-serving/tags/
##

View File

@@ -1,5 +1,5 @@
name: mariadb
version: 5.2.1
version: 5.2.2
appVersion: 10.1.36
description: Fast, reliable, scalable, and easy to use open-source relational database system. MariaDB Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. Highly available MariaDB cluster.
keywords:

View File

@@ -167,7 +167,7 @@ It's necessary to set the `rootUser.password` parameter when upgrading for readi
$ helm upgrade my-release stable/mariadb --set rootUser.password=[ROOT_PASSWORD]
```
| Note: you need to substitue the placeholder _[ROOT_PASSWORD]_ with the value obtained in the installation notes.
| Note: you need to substitute the placeholder _[ROOT_PASSWORD]_ with the value obtained in the installation notes.
### To 5.0.0

View File

@@ -1,5 +1,5 @@
name: mongodb
version: 4.6.1
version: 4.6.2
appVersion: 4.0.3
description: NoSQL document-oriented database that stores JSON-like documents with dynamic schemas, simplifying the integration of data in content-driven applications.
keywords:

View File

@@ -63,6 +63,7 @@ The following table lists the configurable parameters of the MongoDB chart and t
| `mongodbExtraFlags` | MongoDB additional command line flags | [] |
| `service.annotations` | Kubernetes service annotations | `{}` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
| `service.nodePort` | Port to bind to for NodePort service type | `nil` |
| `port` | MongoDB service port | `27017` |
| `replicaSet.enabled` | Switch to enable/disable replica set configuration | `false` |

View File

@@ -78,6 +78,12 @@ spec:
{{- end }}
- name: MONGODB_EXTRA_FLAGS
value: {{ default "" .Values.mongodbExtraFlags | join " " }}
- name: MONGODB_ENABLE_IPV6
{{- if .Values.mongodbEnableIPv6 }}
value: "yes"
{{- else }}
value: "no"
{{- end }}
ports:
- name: mongodb
containerPort: 27017

View File

@@ -14,6 +14,9 @@ metadata:
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
ports:
- name: mongodb
port: 27017

View File

@@ -14,6 +14,9 @@ metadata:
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
ports:
- name: mongodb
port: 27017

View File

@@ -73,6 +73,7 @@ clusterDomain: cluster.local
service:
annotations: {}
type: ClusterIP
# clusterIP: None
port: 27017
## Specify the nodePort value for the LoadBalancer and NodePort service types.

View File

@@ -74,6 +74,7 @@ clusterDomain: cluster.local
service:
annotations: {}
type: ClusterIP
# clusterIP: None
port: 27017
## Specify the nodePort value for the LoadBalancer and NodePort service types.

View File

@@ -1 +1,2 @@
.git
OWNERS

View File

@@ -1,5 +1,5 @@
name: odoo
version: 3.2.2
version: 4.0.0
appVersion: 11.0.20181015
description: A suite of web based open source business apps.
home: https://www.odoo.com/

View File

@@ -1,6 +1,6 @@
dependencies:
- name: postgresql
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.19.0
digest: sha256:88ef0719267ade838b784ffd08d91a6728350516344d5cd7089502587c982ded
generated: 2018-10-16T08:49:00.660599+02:00
version: 2.1.0
digest: sha256:972c7085960fbe4a3f530f726f5a1cc6fe038f0ab84df632f6427c3a49f3f366
generated: 2018-10-24T11:56:43.864565+02:00

View File

@@ -1,4 +1,4 @@
dependencies:
- name: postgresql
version: 0.x.x
version: 2.x.x
repository: https://kubernetes-charts.storage.googleapis.com/

View File

@@ -38,7 +38,7 @@ spec:
valueFrom:
secretKeyRef:
name: {{ template "odoo.postgresql.fullname" . }}
key: postgres-password
key: postgresql-password
- name: ODOO_EMAIL
value: {{ .Values.odooEmail | quote }}
- name: ODOO_PASSWORD

View File

@@ -50,6 +50,8 @@ odooEmail: user@example.com
##
## PostgreSQL chart configuration
##
## https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml
##
postgresql:
## PostgreSQL password
## ref: https://hub.docker.com/_/postgres/

View File

@@ -1,6 +1,6 @@
name: phabricator
version: 3.2.2
appVersion: 2018.41.0
version: 3.2.3
appVersion: 2018.42.0
description: Collection of open source web applications that help software companies build better software.
keywords:
- phabricator

View File

@@ -1,6 +1,6 @@
dependencies:
- name: mariadb
repository: https://kubernetes-charts.storage.googleapis.com/
version: 5.2.0
version: 5.2.1
digest: sha256:e09c8ca7126923a30e39f442c3863b44684d4eb3f7b6dc869f0206da4463f416
generated: 2018-10-16T08:50:03.59136+02:00
generated: 2018-10-23T11:10:43.067362461Z

View File

@@ -10,7 +10,7 @@
image:
registry: docker.io
repository: bitnami/phabricator
tag: 2018.41.0
tag: 2018.42.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images

View File

@@ -1,6 +1,6 @@
name: phpmyadmin
version: 1.2.1
appVersion: 4.8.2
version: 1.2.2
appVersion: 4.8.3
description: phpMyAdmin is an mysql administration frontend
keywords:
- mariadb

View File

@@ -10,7 +10,7 @@
image:
registry: docker.io
repository: bitnami/phpmyadmin
tag: 4.8.2
tag: 4.8.3
## Specify a imagePullPolicy
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.

View File

@@ -0,0 +1,2 @@
.git
OWNERS

View File

@@ -1,18 +1,19 @@
name: postgresql
version: 3.1.1
version: 2.2.0
appVersion: 10.5.0
description: Chart for PostgreSQL
description: Chart for PostgreSQL, an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
keywords:
- postgresql
- postgres
- database
- sql
- replication
- cluster
home: http://www.postgresql.org
home: https://www.postgresql.org/
icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-110x117.png
sources:
- https://github.com/bitnami/bitnami-docker-postgresql
maintainers:
- name: Bitnami
email: containers@bitnami.com
engine: gotpl
icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-110x117.png

View File

@@ -0,0 +1,12 @@
approvers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- juan131
reviewers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- juan131

View File

@@ -0,0 +1,225 @@
# PostgreSQL
[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
## TL;DR;
```console
$ helm install stable/postgresql
```
## Introduction
This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
## Prerequisites
- Kubernetes 1.4+ with Beta APIs enabled
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install --name my-release stable/postgresql
```
The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the PostgreSQL chart and their default values.
| Parameter | Description | Default |
|--------------------------------------|----------------------------------------------------|---------------------------------------------------------- |
| `global.imageRegistry` | Global Docker Image registry | `nil` |
| `image.registry` | PostgreSQL Image registry | `docker.io` |
| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
| `image.tag` | PostgreSQL Image tag | `{VERSION}` |
| `image.pullPolicy` | PostgreSQL Image pull policy | `Always` |
| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug values should be set | `false` |
| `replication.enabled`                | Enable replication                                 | `false`                                                    |
| `replication.user` | Replication user | `repl_user` |
| `replication.password` | Replication user password | `repl_password` |
| `replication.slaveReplicas`          | Number of slave replicas                           | `1`                                                        |
| `postgresqlUsername` | PostgreSQL admin user | `postgres` |
| `postgresqlPassword` | PostgreSQL admin password | _random 10 character alphanumeric string_ |
| `postgresqlDatabase` | PostgreSQL database | `nil` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.port` | PostgreSQL port | `5432` |
| `service.nodePort` | Kubernetes Service nodePort | `nil` |
| `service.annotations`                | Annotations for PostgreSQL service                 | `{}`                                                       |
| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
| `persistence.accessMode` | PVC Access Mode for PostgreSQL volume | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `livenessProbe.enabled`              | Enable livenessProbe                               | `true`                                                     |
| `livenessProbe.initialDelaySeconds`  | Delay before liveness probe is initiated           | 30                                                         |
| `livenessProbe.periodSeconds`        | How often to perform the probe                     | 10                                                         |
| `livenessProbe.timeoutSeconds`       | When the probe times out                           | 5                                                          |
| `livenessProbe.failureThreshold`     | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6   |
| `livenessProbe.successThreshold`     | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `readinessProbe.enabled`             | Enable readinessProbe                              | `true`                                                     |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated          | 5                                                          |
| `readinessProbe.periodSeconds`       | How often to perform the probe                     | 10                                                         |
| `readinessProbe.timeoutSeconds`      | When the probe times out                           | 5                                                          |
| `readinessProbe.failureThreshold`    | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6   |
| `readinessProbe.successThreshold`    | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| `networkPolicy.enabled`              | Enable NetworkPolicy                               | `false`                                                    |
| `networkPolicy.allowExternal`        | Don't require client label for connections         | `true`                                                     |
| `metrics.enabled` | Start a prometheus exporter | `false` |
| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
| `metrics.service.annotations`        | Additional annotations for the metrics service     | `{}`                                                       |
| `metrics.service.loadBalancerIP`     | loadBalancerIP if the metrics service type is `LoadBalancer` | `nil`                                            |
| `metrics.image.registry`             | PostgreSQL Exporter image registry                 | `docker.io`                                                |
| `metrics.image.repository`           | PostgreSQL Exporter image name                     | `wrouesnel/postgres_exporter`                              |
| `metrics.image.tag`                  | PostgreSQL Exporter image tag                      | `{VERSION}`                                                |
| `metrics.image.pullPolicy`           | PostgreSQL Exporter image pull policy              | `IfNotPresent`                                             |
| `metrics.image.pullSecrets`          | Specify image pull secrets                         | `nil` (does not add image pull secrets to deployed pods)   |
| `extraEnv` | Any extra environment variables you would like to pass on to the pod | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install --name my-release \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
stable/postgresql
```
The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release -f values.yaml stable/postgresql
```
> **Tip**: You can use the default [values.yaml](values.yaml)
### postgresql.conf file as configMap
Instead of using specific variables for the PostgreSQL configuration, this Helm chart also lets you customize the whole configuration file.
Add your custom file as `files/postgresql.conf` in your working directory. This file will be mounted as a ConfigMap into the containers and used to configure the PostgreSQL server.
## Initialize a fresh instance
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
## Production and horizontal scaling
The following repo contains the recommended production settings for the PostgreSQL server in an alternative [values file](values-production.yaml). Please read the comments in `values-production.yaml` carefully to set up your environment.
To horizontally scale this chart, first download the [values-production.yaml](values-production.yaml) file to your local folder, then:
```console
$ helm install --name my-release -f ./values-production.yaml stable/postgresql
$ kubectl scale statefulset my-postgresql-slave --replicas=3
```
## Persistence
The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
See the [Configuration](#configuration) section to configure the PVC or to disable persistence.
## Metrics
The chart can optionally start a metrics exporter for [Prometheus](https://prometheus.io). The metrics endpoint (port 9187) is not exposed; metrics are expected to be collected from inside the Kubernetes cluster using a setup similar to the one described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml).
The exporter can also create custom metrics from additional SQL queries. See the chart's `values.yaml` for an example and consult the [exporter's documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details.
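As a sketch, a custom queries file for the exporter might look like the following. The metric name and query are illustrative assumptions, the format follows the exporter's documentation linked above, and how the file is wired into the chart depends on the chart's `values.yaml`:

```yaml
pg_database:
  query: "SELECT datname, pg_database_size(datname) AS size_bytes FROM pg_database"
  metrics:
    - datname:
        usage: "LABEL"
    - size_bytes:
        usage: "GAUGE"
        description: "Size of the database in bytes"
```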
## NetworkPolicy
To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.
For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace:
```console
$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"
```
With NetworkPolicy enabled, traffic will be limited to just port 5432.
For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL.
This label will be displayed in the output of a successful install.
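For example, to enable the policy and restrict access to labelled clients at install time:

```console
$ helm install --name my-release \
    --set networkPolicy.enabled=true,networkPolicy.allowExternal=false \
    stable/postgresql
```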
## Upgrade
In order to upgrade from the `0.X.X` branch to `1.X.X`, follow the steps below:
- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing PostgreSQL chart. You can find the instructions to obtain the password in NOTES.txt; the service name can be obtained by running
```console
$ kubectl get svc
```
- Install (not upgrade) the new version
```console
$ helm repo update
$ helm install --name my-release stable/postgresql
```
- Connect to the new pod (you can obtain the name by running `kubectl get pods`):
```console
$ kubectl exec -it NAME bash
```
- Once logged in, create a dump of the previous database using `pg_dump`, connecting to the previous PostgreSQL release:
```console
$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql
```
After running the above command you will be prompted for a password; enter the previous chart's password (`OLD_PASSWORD`).
This operation could take some time depending on the database size.
- Once you have the backup file, you can restore it with a command like the one below:
```console
$ psql -U postgres DATABASE_NAME < /tmp/backup.sql
```
In this case, you are accessing the local PostgreSQL instance, so the password should be the new one (you can find it in NOTES.txt).
If you want to restore the database and the database schema does not exist, first recreate it by following the steps below.
```console
$ psql -U postgres
postgres=# drop database DATABASE_NAME;
postgres=# create database DATABASE_NAME;
postgres=# create user USER_NAME;
postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD';
postgres=# grant all privileges on database DATABASE_NAME to USER_NAME;
postgres=# alter database DATABASE_NAME owner to USER_NAME;
```

View File

@@ -0,0 +1 @@
Copy your postgresql.conf and/or pg_hba.conf files here to use them as a ConfigMap.

View File

@@ -4,11 +4,11 @@
WARNING
By specifying "serviceType=LoadBalancer" and not specifying "postgresqlPassword"
you have most likely exposed the PostgreSQL service externally without any
you have most likely exposed the PostgreSQL service externally without any
authentication mechanism.
For security reasons, we strongly suggest that you switch to "ClusterIP" or
"NodePort". As alternative, you can also specify a valid password on the
"NodePort". As an alternative, you can also specify a valid password on the
"postgresqlPassword" parameter.
-------------------------------------------------------------------------------
@@ -27,7 +27,12 @@ To get the password for "{{ .Values.postgresqlUsername }}" run:
To connect to your database run the following command:
kubectl run {{ template "postgresql.fullname" . }}-client --rm --tty -i --image bitnami/postgresql --env="PGPASSWORD=$POSTGRESQL_PASSWORD" --command -- psql --host {{ template "postgresql.fullname" . }} -U {{ .Values.postgresqlUsername }}
kubectl run {{ template "postgresql.fullname" . }}-client --rm --tty -i --image bitnami/postgresql --env="PGPASSWORD=$POSTGRESQL_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
--labels="{{ template "postgresql.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "postgresql.fullname" . }} -U {{ .Values.postgresqlUsername }}
{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
Note: Since NetworkPolicy is enabled, only pods with label {{ template "postgresql.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster.
{{- end }}
To connect to your database from outside the cluster execute the following commands:
@@ -42,7 +47,7 @@ To connect to your database from outside the cluster execute the following comma
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "postgresql.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "postgresql.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "postgresql.fullname" . }} --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
{{ if .Values.postgresqlPassword }}PGPASSWORD={{ .Values.postgresqlPassword}} "{{- end }}psql --host $SERVICE_IP --port {{ .Values.service.port }} -U {{ .Values.postgresqlUsername }}
{{- else if contains "ClusterIP" .Values.service.type }}

View File

@@ -28,6 +28,17 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for networkpolicy.
*/}}
{{- define "postgresql.networkPolicy.apiVersion" -}}
{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}}
"extensions/v1beta1"
{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}}
"networking.k8s.io/v1"
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
@@ -58,6 +69,7 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- end -}}
{{- end -}}
{{/*
Return the proper PostgreSQL metrics image name
*/}}

View File

@@ -1,13 +1,18 @@
{{ if (.Files.Glob "files/postgresql.conf") }}
{{ if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "postgresql.fullname" . }}-configuration
labels:
app: "{{ template "postgresql.name" . }}"
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
{{- if (.Files.Glob "files/postgresql.conf") }}
{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }}
{{ end }}
{{- end }}
{{- if (.Files.Glob "files/pg_hba.conf") }}
{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }}
{{- end }}
{{ end }}

View File

@@ -3,9 +3,9 @@ kind: ConfigMap
metadata:
name: {{ template "postgresql.fullname" . }}-init-scripts
labels:
app: "{{ template "postgresql.name" . }}"
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
{{ (.Files.Glob "files/docker-entrypoint-initdb.d/*").AsConfig | indent 2 }}
{{ (.Files.Glob "files/docker-entrypoint-initdb.d/*").AsConfig | indent 2 }}

View File

@@ -6,13 +6,13 @@ metadata:
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
release: {{ .Release.Name | quote}}
heritage: {{ .Release.Service | quote }}
spec:
template:
metadata:
labels:
release: "{{ .Release.Name }}"
release: {{ .Release.Name | quote }}
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
role: metrics
@@ -43,8 +43,15 @@ spec:
image: {{ template "metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
- name: DATA_SOURCE_NAME
value: {{ printf "postgresql://%s:%s@%s:%d/?sslmode=disable" (.Values.postgresqlUsername) (.Values.postgresqlPassword) ( include "postgresql.fullname" . ) (int .Values.service.port) | quote }}
- name: DATA_SOURCE_URI
value: {{ printf "%s:%d/?sslmode=disable" ( include "postgresql.fullname" . ) (int .Values.service.port) | quote }}
- name: DATA_SOURCE_PASS
valueFrom:
secretKeyRef:
name: {{ template "postgresql.fullname" . }}
key: postgresql-password
- name: DATA_SOURCE_USER
value: {{ .Values.postgresqlUsername }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
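The change above splits the exporter's single `DATA_SOURCE_NAME` DSN into `DATA_SOURCE_URI`, `DATA_SOURCE_USER` and `DATA_SOURCE_PASS`, so the password can be read from the chart's Secret instead of being interpolated into the deployment. A sketch of how the pieces recombine into the old-style DSN (all values are placeholders, not chart output):

```shell
# Placeholder values standing in for the chart-rendered env vars.
DATA_SOURCE_USER=postgres
DATA_SOURCE_PASS=s3cret
DATA_SOURCE_URI="my-release-postgresql:5432/?sslmode=disable"

# The exporter combines them into the equivalent of the removed
# DATA_SOURCE_NAME value: postgresql://<user>:<pass>@<uri>
DSN="postgresql://${DATA_SOURCE_USER}:${DATA_SOURCE_PASS}@${DATA_SOURCE_URI}"
echo "$DSN"
```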

View File

@@ -6,16 +6,15 @@ metadata:
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
annotations:
{{ toYaml .Values.metrics.service.annotations | indent 4 }}
spec:
type: {{ .Values.metrics.service.type }}
{{ if eq .Values.metrics.service.type "LoadBalancer" -}} {{ if .Values.metrics.service.loadBalancerIP -}}
{{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }}
{{ end -}}
{{- end -}}
{{- end }}
ports:
- name: metrics
port: 9187
@@ -24,4 +23,4 @@ spec:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name }}
role: metrics
{{- end }}
{{- end }}

View File

@@ -0,0 +1,29 @@
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }}
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
podSelector:
matchLabels:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
ingress:
# Allow inbound connections
- ports:
- port: 5432
{{- if not .Values.networkPolicy.allowExternal }}
from:
- podSelector:
matchLabels:
{{ template "postgresql.fullname" . }}-client: "true"
{{- end }}
# Allow prometheus scrapes
- ports:
- port: 9187
{{- end }}
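When `networkPolicy.allowExternal` is false, the template above only admits traffic from pods carrying the rendered `<fullname>-client: "true"` label. A minimal sketch of such a client pod, assuming a hypothetical release named `my-release` (so the computed fullname is `my-release-postgresql`):

```yaml
# Hypothetical client pod; the label key must match the rendered
# "{{ template "postgresql.fullname" . }}-client" selector above.
apiVersion: v1
kind: Pod
metadata:
  name: psql-client
  labels:
    my-release-postgresql-client: "true"
spec:
  containers:
    - name: psql
      image: bitnami/postgresql
      command: ["sleep", "infinity"]
```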

View File

@@ -5,8 +5,8 @@ metadata:
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
type: Opaque
data:
{{ if .Values.postgresqlPassword }}

View File

@@ -20,9 +20,9 @@ spec:
role: slave
template:
metadata:
name: "{{ template "postgresql.fullname" . }}"
name: {{ template "postgresql.fullname" . }}
labels:
app: "{{ template "postgresql.name" . }}"
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
@@ -48,8 +48,8 @@ spec:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
containers:
- name: "{{ template "postgresql.fullname" . }}"
image: "{{ template "postgresql.image" . }}"
- name: {{ template "postgresql.fullname" . }}
image: {{ template "postgresql.image" . }}
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
resources:
{{ toYaml .Values.Resources | indent 10 }}
@@ -84,7 +84,7 @@ spec:
command:
- sh
- -c
- exec pg_isready -U {{ default "" .Values.postgresqlUsername | quote }} --host $POD_IP
- exec pg_isready -U {{ .Values.postgresqlUsername | quote }} --host $POD_IP
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
@@ -97,7 +97,7 @@ spec:
command:
- sh
- -c
- exec pg_isready -U {{ default "" .Values.postgresqlUsername | quote }} --host $POD_IP
- exec pg_isready -U {{ .Values.postgresqlUsername | quote }} --host $POD_IP
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
@@ -114,8 +114,13 @@ spec:
mountPath: /opt/bitnami/postgresql/conf/postgresql.conf
subPath: postgresql.conf
{{ end }}
{{ if (.Files.Glob "files/pg_hba.conf") }}
- name: postgresql-config
mountPath: /opt/bitnami/postgresql/conf/pg_hba.conf
subPath: pg_hba.conf
{{ end }}
volumes:
{{ if (.Files.Glob "files/postgresql.conf") }}
{{ if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") }}
- name: postgresql-config
configMap:
name: {{ template "postgresql.fullname" . }}-configuration
@@ -148,5 +153,3 @@ spec:
emptyDir: {}
{{- end }}
{{- end }}

View File

@@ -1,7 +1,7 @@
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: "{{ template "postgresql.master.fullname" . }}"
name: {{ template "postgresql.master.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
@@ -19,7 +19,7 @@ spec:
role: master
template:
metadata:
name: "{{ template "postgresql.fullname" . }}"
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
@@ -45,10 +45,13 @@ spec:
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
containers:
- name: "{{ template "postgresql.fullname" . }}"
image: "{{ template "postgresql.image" . }}"
- name: {{ template "postgresql.fullname" . }}
image: {{ template "postgresql.image" . }}
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
resources:
{{ toYaml .Values.Resources | indent 10 }}
@@ -75,6 +78,9 @@ spec:
value: {{ .Values.postgresqlDatabase | quote }}
- name: POD_IP
valueFrom: { fieldRef: { fieldPath: status.podIP } }
{{- if .Values.extraEnv }}
{{ toYaml .Values.extraEnv | indent 8 }}
{{- end }}
ports:
- name: postgresql
containerPort: {{ .Values.service.port }}
@@ -84,7 +90,7 @@ spec:
command:
- sh
- -c
- exec pg_isready -U {{ default "" .Values.postgresqlUsername | quote }} --host $POD_IP
- exec pg_isready -U {{ .Values.postgresqlUsername | quote }} --host $POD_IP
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
@@ -97,7 +103,7 @@ spec:
command:
- sh
- -c
- exec pg_isready -U {{ default "" .Values.postgresqlUsername | quote }} --host $POD_IP
- exec pg_isready -U {{ .Values.postgresqlUsername | quote }} --host $POD_IP
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
@@ -116,8 +122,13 @@ spec:
mountPath: /opt/bitnami/postgresql/conf/postgresql.conf
subPath: postgresql.conf
{{ end }}
{{ if (.Files.Glob "files/pg_hba.conf") }}
- name: postgresql-config
mountPath: /opt/bitnami/postgresql/conf/pg_hba.conf
subPath: pg_hba.conf
{{ end }}
volumes:
{{ if (.Files.Glob "files/postgresql.conf") }}
{{ if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") }}
- name: postgresql-config
configMap:
name: {{ template "postgresql.fullname" . }}-configuration
@@ -152,5 +163,3 @@ spec:
- name: data
emptyDir: {}
{{- end }}

View File

@@ -5,8 +5,8 @@ metadata:
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
type: ClusterIP
clusterIP: None
@@ -16,4 +16,4 @@ spec:
targetPort: postgresql
selector:
app: {{ template "postgresql.name" . }}
release: "{{ .Release.Name }}"
release: {{ .Release.Name | quote }}

View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "postgresql.fullname" . }}
labels:
app: {{ template "postgresql.name" . }}
chart: {{ template "postgresql.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
- name: postgresql
port: {{ .Values.service.port }}
targetPort: postgresql
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
app: {{ template "postgresql.name" . }}
release: {{ .Release.Name | quote }}
role: master
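The new service template honors `service.type`, `service.nodePort` (for NodePort/LoadBalancer types) and `service.loadBalancerIP`. For example, a hypothetical values override pinning a fixed NodePort:

```yaml
# Hypothetical values override; 31432 is an arbitrary port within
# the default NodePort range (30000-32767).
service:
  type: NodePort
  port: 5432
  nodePort: 31432
```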

View File

@@ -1,8 +1,8 @@
## Global Docker image registry
## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
##
# global:
# imageRegistry:
### Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
###
## global:
## imageRegistry:
## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
@@ -16,14 +16,14 @@ image:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistrKeySecretName
# - myRegistrKeySecretName
## Set to true if you would like to see extra information on logs
## It turns BASH and NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
@@ -57,13 +57,24 @@ postgresqlUsername: postgres
##
# postgresqlDatabase:
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
## PostgreSQL service configuration
service:
  ## PostgreSQL service type
type: ClusterIP
port: 5432
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
  ## Provide any additional annotations which may be required.
annotations: {}
## Set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
# loadBalancerIP:
## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -93,6 +104,18 @@ resources:
memory: 256Mi
cpu: 250m
networkPolicy:
## Enable creation of NetworkPolicy resources.
##
enabled: false
## The Policy model to apply. When set to false, only pods with the correct
## client label will have network access to the port PostgreSQL is listening
## on. When true, PostgreSQL will accept connections from any source
## (with the correct destination port).
##
allowExternal: true
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
@@ -112,16 +135,16 @@ readinessProbe:
successThreshold: 1
## Configure metrics exporter
##
##
metrics:
enabled: false
enabled: true
# resources: {}
# podAnnotations: {}
service:
type: ClusterIP
annotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "9187"
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9187"
loadBalancerIP:
image:
registry: docker.io
@@ -151,7 +174,7 @@ metrics:
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
@@ -159,3 +182,6 @@ metrics:
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
# Define custom environment variables to pass to the image here
extraEnv: {}
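The new `extraEnv` value is rendered verbatim under the container's `env:` key via `toYaml`, so each entry must be a valid Kubernetes EnvVar object. A sketch (the variable name here is illustrative only, not one the chart defines):

```yaml
# Hypothetical override: entries are inlined as-is into the
# container's env list by the template.
extraEnv:
  - name: EXTRA_DEBUG_FLAG
    value: "true"
```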

View File

@@ -1,8 +1,8 @@
## Global Docker image registry
## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
##
# global:
# imageRegistry:
### Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
###
## global:
## imageRegistry:
## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
@@ -16,14 +16,14 @@ image:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistrKeySecretName
# - myRegistrKeySecretName
## Set to true if you would like to see extra information on logs
## It turns BASH and NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
@@ -57,13 +57,29 @@ postgresqlUsername: postgres
##
# postgresqlDatabase:
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
## Optional duration in seconds the pod needs to terminate gracefully.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
# terminationGracePeriodSeconds: 30
## PostgreSQL service configuration
service:
  ## PostgreSQL service type
type: ClusterIP
port: 5432
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
  ## Provide any additional annotations which may be required.
annotations: {}
## Set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
# loadBalancerIP:
## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -93,6 +109,18 @@ resources:
memory: 256Mi
cpu: 250m
networkPolicy:
## Enable creation of NetworkPolicy resources.
##
enabled: false
## The Policy model to apply. When set to false, only pods with the correct
## client label will have network access to the port PostgreSQL is listening
## on. When true, PostgreSQL will accept connections from any source
## (with the correct destination port).
##
allowExternal: true
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
@@ -112,16 +140,16 @@ readinessProbe:
successThreshold: 1
## Configure metrics exporter
##
##
metrics:
enabled: false
# resources: {}
# podAnnotations: {}
service:
type: ClusterIP
annotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "9187"
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9187"
loadBalancerIP:
image:
registry: docker.io
@@ -151,7 +179,7 @@ metrics:
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
@@ -159,3 +187,6 @@ metrics:
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
# Define custom environment variables to pass to the image here
extraEnv: {}

View File

@@ -1 +1,2 @@
.git
OWNERS

View File

@@ -1,5 +1,5 @@
name: rabbitmq
version: 3.5.1
version: 3.5.3
appVersion: 3.7.8
description: Open source message broker software that implements the Advanced Message Queuing Protocol (AMQP)
keywords:
@@ -13,4 +13,6 @@ sources:
maintainers:
- name: Bitnami
email: containers@bitnami.com
- name: desaintmartin
email: cedric@desaintmartin.fr
engine: gotpl

View File

@@ -1,12 +1,14 @@
approvers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- desaintmartin
- juan131
- prydonius
- sameersbn
- tompizmor
reviewers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- desaintmartin
- juan131
- prydonius
- sameersbn
- tompizmor

View File

@@ -60,7 +60,7 @@ The following table lists the configurable parameters of the RabbitMQ chart and
| `rabbitmq.erlangCookie` | Erlang cookie | _random 32 character long alphanumeric string_ |
| `rabbitmq.amqpPort` | Amqp port | `5672` |
| `rabbitmq.distPort` | Erlang distribution server port | `25672` |
| `rabbitmq.nodePort` | Node port override, if serviceType NodePort | _random avaliable between 30000-32767_ |
| `rabbitmq.nodePort` | Node port override, if serviceType NodePort | _random available between 30000-32767_ |
| `rabbitmq.managerPort` | RabbitMQ Manager port | `15672` |
| `rabbitmq.diskFreeLimit` | Disk free limit | `"6GiB"` |
| `rabbitmq.plugins` | configuration file for plugins to enable | `[rabbitmq_management,rabbitmq_peer_discovery_k8s].` |

View File

@@ -1,5 +1,5 @@
name: redis
version: 4.2.2
version: 4.2.4
appVersion: 4.0.11
description: Open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
keywords:
@@ -13,4 +13,6 @@ sources:
maintainers:
- name: Bitnami
email: containers@bitnami.com
- name: desaintmartin
email: cedric@desaintmartin.fr
engine: gotpl

View File

@@ -1,12 +1,14 @@
approvers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- desaintmartin
- juan131
- prydonius
- sameersbn
- tompizmor
reviewers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- desaintmartin
- juan131
- prydonius
- sameersbn
- tompizmor

View File

@@ -58,7 +58,7 @@ This version removes the `chart` label from the `spec.selector.matchLabels`
which is immutable since `StatefulSet apps/v1beta2`. It has been inadvertently
added, causing any subsequent upgrade to fail. See https://github.com/helm/charts/issues/7726.
It also fixes https://github.com/helm/charts/issues/7726 where a deployment `extensions/v1beta1` can not be upgraded if `spec.selector` is not explicitely set.
It also fixes https://github.com/helm/charts/issues/7726 where a deployment `extensions/v1beta1` can not be upgraded if `spec.selector` is not explicitly set.
Finally, it fixes https://github.com/helm/charts/issues/7803 by removing mutable labels in `spec.VolumeClaimTemplate.metadata.labels` so that it is upgradable.

View File

@@ -1 +1,2 @@
.git
OWNERS

View File

@@ -1,5 +1,5 @@
name: redmine
version: 5.2.1
version: 6.0.1
appVersion: 3.4.6
description: A flexible project management web application.
keywords:

View File

@@ -1,9 +1,9 @@
dependencies:
- name: mariadb
repository: https://kubernetes-charts.storage.googleapis.com/
version: 5.2.0
version: 5.2.1
- name: postgresql
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.19.0
digest: sha256:bd7da903db69d89a8de155f6259a2ef20de455280360674b2955bb6515c13eee
generated: 2018-10-16T08:50:37.621182+02:00
version: 2.1.0
digest: sha256:0634de3cb0459ae2959df51ccac306fff4ae4618410bf6fae996ab085dbad62f
generated: 2018-10-24T11:56:52.143391+02:00

View File

@@ -4,6 +4,6 @@ dependencies:
repository: https://kubernetes-charts.storage.googleapis.com/
condition: databaseType.mariadb
- name: postgresql
version: 0.x.x
version: 2.x.x
repository: https://kubernetes-charts.storage.googleapis.com/
condition: databaseType.postgresql

View File

@@ -43,7 +43,7 @@ spec:
valueFrom:
secretKeyRef:
name: {{ template "redmine.postgresql.fullname" . }}
key: postgres-password
key: postgresql-password
{{- else }}
- name: REDMINE_DB_MYSQL
value: {{ template "redmine.mariadb.fullname" . }}

View File

@@ -114,6 +114,8 @@ mariadb:
##
## PostgreSQL chart configuration
##
## https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml
##
postgresql:
## PostgreSQL admin password
## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run