[bitnami/nats] New major version (#4515)

* [bitnami/nats] New major version

Signed-off-by: juan131 <juanariza@vmware.com>

* Fix linting issues

Signed-off-by: juan131 <juanariza@vmware.com>
Author: Juan Ariza Toledano <juanariza@vmware.com>
Date: 2020-11-30 11:06:46 +01:00
Committed by: GitHub
Parent: 3f6054c1c9
Commit: ab944a997b
21 changed files with 1409 additions and 1020 deletions

bitnami/nats/Chart.lock (new file)

@@ -0,0 +1,6 @@
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
version: 1.1.1
digest: sha256:2d81f65661ede4b27144fa09f73db18cab9025d174ae0ba4e1fc3a1a60a7ba8e
generated: "2020-11-26T19:51:23.764557+01:00"

bitnami/nats/Chart.yaml

@@ -2,6 +2,12 @@ annotations:
category: Infrastructure
apiVersion: v2
appVersion: 2.1.9
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
tags:
- bitnami-common
version: 1.x.x
description: An open-source, cloud-native messaging system
engine: gotpl
home: https://github.com/bitnami/charts/tree/master/bitnami/nats
@@ -18,4 +24,4 @@ name: nats
sources:
- https://github.com/bitnami/bitnami-docker-nats
- https://nats.io/
version: 5.0.0
version: 6.0.0

bitnami/nats/README.md

@@ -44,113 +44,160 @@ The command removes all the Kubernetes components associated with the chart and
## Parameters
The following table lists the configurable parameters of the NATS chart and their default values.
The following tables list the configurable parameters of the NATS chart and their default values per section/component:
### Global parameters
| Parameter | Description | Default |
|-----------------------------------------|------------------------------------------------------------|---------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
### Common parameters
| Parameter | Description | Default |
|-----------------------------------------|------------------------------------------------------------|---------------------------------------------------------|
| `nameOverride` | String to partially override common.names.fullname | `nil` |
| `fullnameOverride` | String to fully override common.names.fullname | `nil` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` (evaluated as a template) |
### NATS parameters
| Parameter | Description | Default |
|-----------------------------------------|------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `image.registry` | NATS image registry | `docker.io` |
| `image.repository` | NATS image name | `bitnami/nats` |
| `image.tag` | NATS image tag | `{TAG_NAME}` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `auth.enabled` | Switch to enable/disable client authentication | `true` |
| `auth.user` | Client authentication user | `nats_client` |
| `auth.password`                          | Client authentication password                               | `random alphanumeric string (10)`                         |
| `auth.token` | Client authentication token | `nil` |
| `clusterAuth.enabled` | Switch to enable/disable cluster authentication | `true` |
| `clusterAuth.user` | Cluster authentication user | `nats_cluster` |
| `clusterAuth.password`                   | Cluster authentication password                              | `random alphanumeric string (10)`                         |
| `clusterAuth.token` | Cluster authentication token | `nil` |
| `debug.enabled` | Switch to enable/disable debug on logging | `false` |
| `debug.trace` | Switch to enable/disable trace debug level on logging | `false` |
| `debug.logtime` | Switch to enable/disable logtime on logging | `false` |
| `maxConnections` | Max. number of client connections | `nil` |
| `maxControlLine` | Max. protocol control line | `nil` |
| `maxPayload` | Max. payload | `nil` |
| `writeDeadline` | Duration the server can block on a socket write to a client | `nil` |
| `natsFilename`                           | Base filename shared by the NATS binary, configuration file, and PID file                 | `nats-server`                                             |
| `command` | Override default container command (useful when using custom images) | `nil` |
| `args` | Override default container args (useful when using custom images) | `nil` |
| `extraFlags`                             | Extra flags to be passed to NATS                                                           | `{}`                                                      |
| `extraEnvVars` | Extra environment variables to be set on NATS container | `{}` |
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `nil` |
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `nil` |
### NATS deployment/statefulset parameters
| Parameter | Description | Default |
|-----------------------------------------|------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `resourceType`                           | NATS cluster resource type under Kubernetes (supported: `statefulset` or `deployment`)    | `statefulset`                                             |
| `replicaCount` | Number of NATS nodes | `1` |
| `schedulerName`                          | Name of an alternate scheduler                                                             | `nil`                                                     |
| `priorityClassName` | Name of pod priority class | `nil` |
| `podSecurityContext` | NATS pods' Security Context | Check `values.yaml` file |
| `updateStrategy` | Strategy to use to update Pods | Check `values.yaml` file |
| `containerSecurityContext` | NATS containers' Security Context | Check `values.yaml` file |
| `resources.limits` | The resources limits for the NATS container | `{}` |
| `resources.requests` | The requested resources for the NATS container | `{}` |
| `livenessProbe`                          | Liveness probe configuration for NATS                                                      | Check `values.yaml` file                                  |
| `readinessProbe` | Readiness probe configuration for NATS | Check `values.yaml` file |
| `customLivenessProbe` | Override default liveness probe | `nil` |
| `customReadinessProbe` | Override default readiness probe | `nil` |
| `podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`| `""` |
| `nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set. | `""` |
| `nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set. | `[]` |
| `affinity` | Affinity for pod assignment | `{}` (evaluated as a template) |
| `nodeSelector` | Node labels for pod assignment | `{}` (evaluated as a template) |
| `tolerations` | Tolerations for pod assignment | `[]` (evaluated as a template) |
| `podLabels` | Extra labels for NATS pods | `{}` (evaluated as a template) |
| `podAnnotations` | Annotations for NATS pods | `{}` (evaluated as a template) |
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for NATS container(s) | `[]` |
| `extraVolumes` | Optionally specify extra list of additional volumes for NATS pods | `[]` |
| `initContainers` | Add additional init containers to the NATS pods | `{}` (evaluated as a template) |
| `sidecars` | Add additional sidecar containers to the NATS pods | `{}` (evaluated as a template) |
### Exposure parameters
| Parameter | Description | Default |
|-----------------------------------------|------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `client.service.type` | Kubernetes Service type (NATS client) | `ClusterIP` |
| `client.service.port` | NATS client port | `4222` |
| `client.service.nodePort` | Port to bind to for NodePort service type (NATS client) | `nil` |
| `client.service.annotations`             | Annotations for NATS client service                                                        | `{}`                                                      |
| `client.service.loadBalancerIP` | loadBalancerIP if NATS client service type is `LoadBalancer` | `nil` |
| `cluster.service.type` | Kubernetes Service type (NATS cluster) | `ClusterIP` |
| `cluster.service.port` | NATS cluster port | `6222` |
| `cluster.service.nodePort` | Port to bind to for NodePort service type (NATS cluster) | `nil` |
| `cluster.service.annotations`            | Annotations for NATS cluster service                                                       | `{}`                                                      |
| `cluster.service.loadBalancerIP` | loadBalancerIP if NATS cluster service type is `LoadBalancer` | `nil` |
| `cluster.connectRetries` | Configure number of connect retries for implicit routes | `nil` |
| `monitoring.service.type` | Kubernetes Service type (NATS monitoring) | `ClusterIP` |
| `monitoring.service.port` | NATS monitoring port | `8222` |
| `monitoring.service.nodePort` | Port to bind to for NodePort service type (NATS monitoring) | `nil` |
| `monitoring.service.annotations`         | Annotations for NATS monitoring service                                                    | `{}`                                                      |
| `monitoring.service.loadBalancerIP` | loadBalancerIP if NATS monitoring service type is `LoadBalancer` | `nil` |
| `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.certManager` | Add annotations for cert-manager | `false` |
| `ingress.hostname` | Default hostname for the NATS monitoring ingress resource | `nats.local` |
| `ingress.tls` | Enable TLS configuration for the hostname defined at `ingress.hostname` parameter | `false` |
| `ingress.annotations` | Ingress annotations | `{}` (evaluated as a template) |
| `ingress.extraHosts[0].name` | Additional hostnames to be covered | `nil` |
| `ingress.extraHosts[0].path` | Additional hostnames to be covered | `nil` |
| `ingress.extraTls[0].hosts[0]` | TLS configuration for additional hostnames to be covered | `nil` |
| `ingress.extraTls[0].secretName` | TLS configuration for additional hostnames to be covered | `nil` |
| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `networkPolicy.enabled` | Enable the default NetworkPolicy policy | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.additionalRules` | Additional NetworkPolicy rules | `{}` (evaluated as a template) |
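For example, a minimal values sketch exposing the monitoring endpoint through an Ingress with TLS enabled (the annotation shown is illustrative and depends on the ingress controller in use):
```yaml
ingress:
  enabled: true
  hostname: nats.local
  tls: true
  annotations:
    kubernetes.io/ingress.class: nginx
```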
### Metrics parameters
| Parameter | Description | Default |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | NATS image registry | `docker.io` |
| `image.repository` | NATS Image name | `bitnami/nats` |
| `image.tag` | NATS Image tag | `{TAG_NAME}` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `nameOverride` | String to partially override nats.fullname template with a string (will prepend the release name) | `nil` |
| `fullnameOverride` | String to fully override nats.fullname template with a string | `nil` |
| `auth.enabled` | Switch to enable/disable client authentication | `true` |
| `auth.user` | Client authentication user | `nats_client` |
| `auth.password`                             | Client authentication password                                                                           | `random alphanumeric string (10)`                              |
| `auth.token` | Client authentication token | `nil` |
| `clusterAuth.enabled` | Switch to enable/disable cluster authentication | `true` |
| `clusterAuth.user` | Cluster authentication user | `nats_cluster` |
| `clusterAuth.password`                      | Cluster authentication password                                                                          | `random alphanumeric string (10)`                              |
| `clusterAuth.token` | Cluster authentication token | `nil` |
| `debug.enabled` | Switch to enable/disable debug on logging | `false` |
| `debug.trace` | Switch to enable/disable trace debug level on logging | `false` |
| `debug.logtime` | Switch to enable/disable logtime on logging | `false` |
| `maxConnections` | Max. number of client connections | `nil` |
| `maxControlLine` | Max. protocol control line | `nil` |
| `maxPayload` | Max. payload | `nil` |
| `writeDeadline` | Duration the server can block on a socket write to a client | `nil` |
| `replicaCount` | Number of NATS nodes | `1` |
| `resourceType`                              | NATS cluster resource type under Kubernetes (supported: `statefulset` or `deployment`)                  | `statefulset`                                                  |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `statefulset.updateStrategy` | Statefulsets Update strategy | `OnDelete` |
| `statefulset.rollingUpdatePartition` | Partition for Rolling Update strategy | `nil` |
| `podLabels`                                 | Additional labels to be added to pods                                                                    | `{}`                                                           |
| `priorityClassName` | Name of pod priority class | `nil` |
| `podAnnotations`                            | Annotations to be added to pods                                                                          | `{}`                                                           |
| `pdb.create` | If true, create a pod disruption budget for NATS pods | `false` |
| `pdb.minAvailable` | Minimum number / percentage of pods that should remain scheduled | `1` |
| `pdb.maxUnavailable` | Maximum number / percentage of pods that may be made unavailable | `""` |
| `nodeSelector` | Node labels for pod assignment | `nil` |
| `schedulerName`                             | Name of an alternate scheduler                                                                           | `nil`                                                          |
| `antiAffinity` | Anti-affinity for pod assignment | `soft` |
| `tolerations` | Toleration labels for pod assignment | `nil` |
| `resources`                                 | CPU/Memory resource requests/limits                                                                      | `{}`                                                           |
| `extraArgs` | Optional flags for NATS | `[]` |
| `natsFilename`                              | Base filename shared by the NATS binary, configuration file, and PID file                                | `nats-server`                                                  |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `5` |
| `readinessProbe.periodSeconds` | How often to perform the probe | `10` |
| `readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `client.service.type` | Kubernetes Service type (NATS client) | `ClusterIP` |
| `client.service.port` | NATS client port | `4222` |
| `client.service.nodePort` | Port to bind to for NodePort service type (NATS client) | `nil` |
| `client.service.annotations`                | Annotations for NATS client service                                                                      | `{}`                                                           |
| `client.service.loadBalancerIP` | loadBalancerIP if NATS client service type is `LoadBalancer` | `nil` |
| `cluster.service.type` | Kubernetes Service type (NATS cluster) | `ClusterIP` |
| `cluster.service.port` | NATS cluster port | `6222` |
| `cluster.service.nodePort` | Port to bind to for NodePort service type (NATS cluster) | `nil` |
| `cluster.service.annotations`               | Annotations for NATS cluster service                                                                     | `{}`                                                           |
| `cluster.service.loadBalancerIP` | loadBalancerIP if NATS cluster service type is `LoadBalancer` | `nil` |
| `cluster.connectRetries` | Configure number of connect retries for implicit routes | `nil` |
| `monitoring.service.type` | Kubernetes Service type (NATS monitoring) | `ClusterIP` |
| `monitoring.service.port` | NATS monitoring port | `8222` |
| `monitoring.service.nodePort` | Port to bind to for NodePort service type (NATS monitoring) | `nil` |
| `monitoring.service.annotations`            | Annotations for NATS monitoring service                                                                  | `{}`                                                           |
| `monitoring.service.loadBalancerIP` | loadBalancerIP if NATS monitoring service type is `LoadBalancer` | `nil` |
| `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.hosts[0].name` | Hostname for NATS monitoring | `nats.local` |
| `ingress.hosts[0].path` | Path within the url structure | `/` |
| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `nats.local-tls-secret` |
| `ingress.hosts[0].annotations` | Annotations for this host's ingress record | `[]` |
| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
| `networkPolicy.allowExternal` | Allow external connections | `true` |
| `metrics.enabled` | Enable Prometheus metrics via exporter side-car | `false` |
| `metrics.image.registry` | Prometheus metrics exporter image registry | `docker.io` |
| `metrics.image.repository` | Prometheus metrics exporter image name | `bitnami/nats-exporter` |
| `metrics.image.tag` | Prometheus metrics exporter image tag | `{TAG_NAME}` |
| `metrics.image.pullPolicy` | Prometheus metrics image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Prometheus metrics image pull secrets | `[]` (does not add image pull secrets to deployed pods) |
| `metrics.port` | Prometheus metrics exporter port | `7777` |
| `metrics.podAnnotations` | Prometheus metrics exporter annotations | `prometheus.io/scrape: "true"`, `prometheus.io/port: "7777"` |
| `metrics.resources`                         | Prometheus metrics exporter resource requests/limits                                                     | `{}`                                                           |
| `metrics.flags`                             | Flags to be passed to the Prometheus metrics exporter                                                    | Check `values.yaml` file                                       |
| `metrics.containerPort` | Prometheus metrics exporter port | `7777` |
| `metrics.resources`                         | Prometheus metrics exporter resource requests/limits                                                     | `{}`                                                           |
| `metrics.service.type` | Kubernetes service type (`ClusterIP`, `NodePort` or `LoadBalancer`) | `ClusterIP` |
| `metrics.service.port`                      | Prometheus metrics service port                                                                          | `7777`                                                         |
| `metrics.service.annotations` | Prometheus metrics exporter annotations | `prometheus.io/scrape: "true"`, `prometheus.io/port: "7777"` |
| `metrics.service.nodePort` | Kubernetes HTTP node port | `""` |
| `metrics.service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
| `metrics.service.loadBalancerSourceRanges`  | Addresses that are allowed when service is LoadBalancer                                                  | `[]`                                                           |
| `metrics.service.clusterIP` | Static clusterIP or None for headless services | `nil` |
| `sidecars` | Attach additional containers to the pod | `nil` |
| `metrics.serviceMonitor.enabled` | if `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `nil` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. | `nil` (Prometheus Operator default value) |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `nil` (Prometheus Operator default value) |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `nil` |
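For instance, a values sketch enabling the exporter side-car together with a ServiceMonitor (this assumes the Prometheus Operator is already installed; the namespace and selector labels are illustrative):
```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    selector:
      release: prometheus
```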
### Other parameters
| Parameter | Description | Default |
|-----------------------------------------|---------------------------------------------------------------------|---------------------------------------------------------|
| `pdb.create` | Enable/disable a Pod Disruption Budget creation | `false` |
| `pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `1` |
| `pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `nil` |
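As an illustration, a values sketch that creates a Pod Disruption Budget keeping at least one pod scheduled during voluntary disruptions:
```yaml
pdb:
  create: true
  minAvailable: 1
```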
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
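A minimal sketch (the release name and chosen parameters are illustrative):
```console
$ helm install my-release --set auth.enabled=true,auth.user=my-user bitnami/nats
```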
@@ -183,12 +230,14 @@ Bitnami will release a new chart updating its containers if a new version of the
This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one.
- Number of NATS nodes
```diff
- replicaCount: 1
+ replicaCount: 3
```
- Enable and set the max. number of client connections, protocol control line, payload and duration the server can block on a socket write to a client
```diff
- # maxConnections: 100
- # maxControlLine: 512
@@ -201,30 +250,35 @@ This chart includes a `values-production.yaml` file where you can find some para
```
- Enable NetworkPolicy:
```diff
- networkPolicy.enabled: false
+ networkPolicy.enabled: true
```
- Allow external connections:
- Disallow external connections:
```diff
- networkPolicy.allowExternal: true
+ networkPolicy.allowExternal: false
```
- Enable ingress controller resource:
```diff
- ingress.enabled: false
+ ingress.enabled: true
```
- Enable Prometheus metrics via exporter side-car:
```diff
- metrics.enabled: false
+ metrics.enabled: true
```
- Enable PodDisruptionBudget:
```diff
- pdb.create: false
+ pdb.create: true
@@ -232,20 +286,54 @@ This chart includes a `values-production.yaml` file where you can find some para
To horizontally scale this chart, you can use the `--replicas` flag to modify the number of nodes in your NATS replica set.
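For instance, the node count can also be changed through the chart's `replicaCount` parameter; a sketch assuming a release named `my-release`:
```console
$ helm upgrade my-release bitnami/nats --set replicaCount=3
```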
### Sidecars
### Adding extra environment variables
If you have a need for additional containers to run within the same pod as NATS (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the `extraEnvVars` property.
```yaml
extraEnvVars:
- name: LOG_LEVEL
value: DEBUG
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the `extraEnvVarsCM` or the `extraEnvVarsSecret` values.
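A sketch of the ConfigMap-based variant (the ConfigMap name and its contents are illustrative, and the object must already exist in the release namespace):
```yaml
extraEnvVarsCM: nats-extra-env
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-extra-env
data:
  LOG_LEVEL: DEBUG
```
Each key in the ConfigMap's `data` is injected into the NATS container as an environment variable.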
### Sidecars and Init Containers
If you have a need for additional containers to run within the same pod as the NATS app (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
```yaml
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.
```yaml
initContainers:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
### Deploying extra resources
There are cases where you may want to deploy extra objects, such as a ConfigMap containing your app's configuration, or an extra deployment with a microservice used by your app. To cover this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter.
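As a sketch, `extraDeploy` takes a list of complete manifests rendered as templates (this ConfigMap is illustrative):
```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nats-extra-config
    data:
      foo: bar
```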
### Setting Pod's affinity
This chart allows you to set your custom affinity using the `affinity` parameter. Find more information about Pod affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters.
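For example, a values sketch combining the presets (the node label key and values are illustrative):
```yaml
podAntiAffinityPreset: hard
nodeAffinityPreset:
  type: soft
  key: kubernetes.io/arch
  values:
    - amd64
```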
## Troubleshooting
Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
@@ -261,6 +349,18 @@ however, it is still possible to use the chart to deploy NATS version 1.x.x usin
helm install nats-v1 --set natsFilename=gnatsd --set image.tag=1.4.1 bitnami/nats
```
### To 6.0.0
- Some parameters were renamed or disappeared in favor of new ones in this major version. For instance:
- `securityContext.*` is deprecated in favor of `podSecurityContext` and `containerSecurityContext`.
- Ingress configuration was adapted to follow the Helm charts best practices.
- Chart labels were also adapted to follow the [Helm charts standard labels](https://helm.sh/docs/chart_best_practices/labels/#standard-labels).
- This version also introduces `bitnami/common`, a [library chart](https://helm.sh/docs/topics/library_charts/#helm), as a dependency. More documentation about this new utility can be found [here](https://github.com/bitnami/charts/tree/master/bitnami/common#bitnami-common-library-chart). Please make sure that you have updated the chart dependencies before executing any upgrade (see the sketch below).
Consequences:
- Backwards compatibility is not guaranteed.
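A sketch of that dependency refresh when upgrading from a local checkout of the chart (paths and release name are illustrative):
```console
$ helm dependency update ./bitnami/nats
$ helm upgrade my-release ./bitnami/nats
```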
### To 5.0.0
[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

bitnami/nats/templates/NOTES.txt

@@ -22,21 +22,31 @@
NATS can be accessed via port {{ .Values.client.service.port }} on the following DNS name from within your cluster:
{{ template "nats.fullname" . }}-client.{{ .Release.Namespace }}.svc.cluster.local
{{ template "common.names.fullname" . }}-client.{{ .Release.Namespace }}.svc.cluster.local
{{- if .Values.auth.enabled }}
To get the authentication credentials, run:
export NATS_USER=$(kubectl get cm --namespace {{ .Release.Namespace }} {{ template "nats.fullname" . }} -o jsonpath='{.data.*}' | grep -m 1 user | awk '{print $2}')
export NATS_PASS=$(kubectl get cm --namespace {{ .Release.Namespace }} {{ template "nats.fullname" . }} -o jsonpath='{.data.*}' | grep -m 1 password | awk '{print $2}')
export NATS_USER=$(kubectl get cm --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} -o jsonpath='{.data.*}' | grep -m 1 user | awk '{print $2}')
export NATS_PASS=$(kubectl get cm --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} -o jsonpath='{.data.*}' | grep -m 1 password | awk '{print $2}')
echo -e "Client credentials:\n\tUser: $NATS_USER\n\tPassword: $NATS_PASS"
{{- end }}
NATS monitoring service can be accessed via port {{ .Values.monitoring.service.port }} on the following DNS name from within your cluster:
{{ template "nats.fullname" . }}-monitoring.{{ .Release.Namespace }}.svc.cluster.local
{{ template "common.names.fullname" . }}-monitoring.{{ .Release.Namespace }}.svc.cluster.local
You can create a Golang pod to be used as a NATS client:
kubectl run {{ include "common.names.fullname" . }}-client --restart='Never' --image docker.io/bitnami/golang --namespace {{ .Release.Namespace }} --command -- sleep infinity
kubectl exec --tty -i {{ include "common.names.fullname" . }}-client --namespace {{ .Release.Namespace }} -- bash
go get github.com/nats-io/nats.go
cd $GOPATH/src/github.com/nats-io/nats.go/examples/nats-pub && go install && cd
cd $GOPATH/src/github.com/nats-io/nats.go/examples/nats-echo && go install && cd
nats-echo -s nats://{{ template "common.names.fullname" . }}-client.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.client.service.port }} SomeSubject
nats-pub -s nats://{{ template "common.names.fullname" . }}-client.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.client.service.port }} -reply Hi SomeSubject "Hi everyone"
To access the Monitoring svc from outside the cluster, follow the steps below:
@@ -45,11 +55,8 @@ To access the Monitoring svc from outside the cluster, follow the steps below:
1. Get the hostname indicated on the Ingress Rule and associate it to your cluster external IP:
export CLUSTER_IP=$(minikube ip) # On Minikube. Use: `kubectl cluster-info` on other K8s clusters
export HOSTNAME=$(kubectl get ingress --namespace {{ .Release.Namespace }} {{ template "nats.fullname" . }}-monitoring -o jsonpath='{.spec.rules[0].host}')
echo "Monitoring URL: http://$HOSTNAME/"
echo "$CLUSTER_IP $HOSTNAME" | sudo tee -a /etc/hosts
2. Open a browser and access the NATS monitoring UI by browsing to the Monitoring URL
echo "Monitoring URL: http{{ if .Values.ingress.tls }}s{{ end }}://{{ .Values.ingress.hostname }}"
echo "$CLUSTER_IP {{ .Values.ingress.hostname }}" | sudo tee -a /etc/hosts
{{- else }}
@@ -58,42 +65,37 @@ To access the Monitoring svc from outside the cluster, follow the steps below:
{{- if contains "NodePort" .Values.monitoring.service.type }}
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "nats.fullname" . }}-monitoring)
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ printf "%s-monitoring" (include "common.names.fullname" .) }})
echo "Monitoring URL: http://$NODE_IP:$NODE_PORT/"
{{- else if contains "LoadBalancer" .Values.monitoring.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "nats.fullname" . }}-monitoring'
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ printf "%s-monitoring" (include "common.names.fullname" .) }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "nats.fullname" . }}-monitoring --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ printf "%s-monitoring" (include "common.names.fullname" .) }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo "Monitoring URL: http://$SERVICE_IP/"
{{- else if contains "ClusterIP" .Values.monitoring.service.type }}
echo "Monitoring URL: http://127.0.0.1:{{ .Values.monitoring.service.port }}"
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "nats.fullname" . }}-monitoring {{ .Values.monitoring.service.port }}:{{ .Values.monitoring.service.port }}
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ printf "%s-monitoring" (include "common.names.fullname" .) }} {{ .Values.monitoring.service.port }}:{{ .Values.monitoring.service.port }}
{{- end }}
{{- end }}
2. Access NATS monitoring by opening the URL obtained in a browser.
{{- end }}
2. Open a browser and access the NATS monitoring UI by browsing to the Monitoring URL
{{- if .Values.metrics.enabled }}
3. Get the NATS Prometheus Metrics URL by running:
echo "Prometheus Metrics URL: http://127.0.0.1:{{ .Values.metrics.port }}/metrics"
kubectl port-forward --namespace {{ .Release.Namespace }} {{ template "nats.fullname" . }}-0 {{ .Values.metrics.port }}:{{ .Values.metrics.port }}
echo "Prometheus Metrics URL: http://127.0.0.1:{{ .Values.metrics.service.port }}/metrics"
kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ printf "%s-metrics" (include "common.names.fullname" .) }} {{ .Values.metrics.service.port }}:{{ .Values.metrics.service.port }}
4. Access NATS Prometheus metrics by opening the URL obtained in a browser.
{{- end }}
{{- include "nats.validateValues" . -}}
{{- if and (contains "bitnami/" .Values.image.repository) (not (.Values.image.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .Values.image.repository }}:{{ .Values.image.tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
{{- end }}
{{- include "nats.checkRollingTags" . -}}

bitnami/nats/templates/_helpers.tpl

@@ -1,54 +1,24 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nats.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "nats.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "nats.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Return the proper Nats image name
*/}}
{{- define "nats.image" -}}
{{- $registryName := .Values.image.registry -}}
{{- $repositoryName := .Values.image.repository -}}
{{- $tag := .Values.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }}
{{- end -}}
{{/*
Return the proper image name (for the metrics image)
*/}}
{{- define "nats.metrics.image" -}}
{{ include "common.images.image" (dict "imageRoot" .Values.metrics.image "global" .Values.global) }}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "nats.imagePullSecrets" -}}
{{- include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.metrics.image) "global" .Values.global) -}}
{{- end -}}
{{/*
@@ -71,72 +41,11 @@ Return the appropriate apiVersion for networkpolicy.
{{- end -}}
{{/*
Return the appropriate apiVersion for ingress.
Check if there are rolling tags in the images
*/}}
{{- define "ingress.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the proper image name (for the metrics image)
*/}}
{{- define "nats.metrics.image" -}}
{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "nats.imagePullSecrets" -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
Also, we cannot use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.metrics.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end -}}
{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.metrics.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end -}}
{{- define "nats.checkRollingTags" -}}
{{- include "common.warnings.rollingTag" .Values.image }}
{{- include "common.warnings.rollingTag" .Values.metrics.image }}
{{- end -}}
{{/*

bitnami/nats/templates (NATS client Service)

@@ -1,15 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nats.fullname" . }}-client
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.client.service.annotations }}
name: {{ printf "%s-client" (include "common.names.fullname" .) }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if or .Values.client.service.annotations .Values.commonAnnotations }}
annotations:
{{ toYaml .Values.client.service.annotations | indent 4 }}
{{- if .Values.client.service.annotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.client.service.annotations "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ .Values.client.service.type }}
@@ -23,6 +28,4 @@ spec:
{{- if and (eq .Values.client.service.type "NodePort") (not (empty .Values.client.service.nodePort)) }}
nodePort: {{ .Values.client.service.nodePort }}
{{- end }}
selector:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
selector: {{ include "common.labels.matchLabels" . | nindent 4 }}

bitnami/nats/templates (NATS cluster Service)

@@ -1,15 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nats.fullname" . }}-cluster
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.cluster.service.annotations }}
name: {{ printf "%s-cluster" (include "common.names.fullname" .) }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if or .Values.cluster.service.annotations .Values.commonAnnotations }}
annotations:
{{ toYaml .Values.cluster.service.annotations | indent 4 }}
{{- if .Values.cluster.service.annotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.cluster.service.annotations "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ .Values.cluster.service.type }}
@@ -23,6 +28,4 @@ spec:
{{- if and (eq .Values.cluster.service.type "NodePort") (not (empty .Values.cluster.service.nodePort)) }}
nodePort: {{ .Values.cluster.service.nodePort }}
{{- end }}
selector:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
selector: {{ include "common.labels.matchLabels" . | nindent 4 }}

bitnami/nats/templates (NATS ConfigMap)

@@ -3,12 +3,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
name: {{ template "nats.fullname" . }}
name: {{ template "common.names.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
data:
{{ .Values.natsFilename }}.conf: |-
listen: 0.0.0.0:{{ .Values.client.service.port }}
@@ -73,12 +76,12 @@ data:
routes = [
{{- if .Values.clusterAuth.enabled }}
{{- if .Values.clusterAuth.user }}
nats://{{ .Values.clusterAuth.user }}:{{ $clusterAuthPwd }}@{{ template "nats.fullname" . }}-cluster:{{ .Values.cluster.service.port }}
nats://{{ .Values.clusterAuth.user }}:{{ $clusterAuthPwd }}@{{ template "common.names.fullname" . }}-cluster:{{ .Values.cluster.service.port }}
{{- else if .Values.clusterAuth.token }}
nats://{{ .Values.clusterAuth.token }}@{{ template "nats.fullname" . }}-cluster:{{ .Values.cluster.service.port }}
nats://{{ .Values.clusterAuth.token }}@{{ template "common.names.fullname" . }}-cluster:{{ .Values.cluster.service.port }}
{{- end }}
{{- else }}
nats://{{ template "nats.fullname" . }}-cluster:{{ .Values.cluster.service.port }}
nats://{{ template "common.names.fullname" . }}-cluster:{{ .Values.cluster.service.port }}
{{- end }}
]

bitnami/nats/templates (NATS Deployment)

@@ -1,164 +1,159 @@
{{- if eq .Values.resourceType "deployment" }}
apiVersion: apps/v1
apiVersion: {{ include "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "nats.fullname" . }}
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
name: {{ template "common.names.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
rollingUpdate:
maxSurge: {{ .Values.deployment.maxSurge }}
maxUnavailable: {{ .Values.deployment.maxUnavailable }}
type: {{ .Values.deployment.updateType }}
strategy: {{- include "common.tplvalues.render" (dict "value" .Values.updateStrategy "context" $ ) | nindent 4 }}
selector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
template:
metadata:
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
labels: {{- include "common.labels.standard" . | nindent 8 }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 8 }}
{{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 8 }}
{{- end }}
{{- if or .Values.podAnnotations .Values.metrics.enabled }}
annotations:
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
{{- if .Values.metrics.podAnnotations }}
{{ toYaml .Values.metrics.podAnnotations | indent 8 }}
{{- end }}
{{- end }}
spec:
{{- include "nats.imagePullSecrets" . | indent 6 }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- if .Values.podAnnotations }}
{{- include "common.tplvalues.render" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
{{- end }}
{{- end }}
spec:
{{- include "nats.imagePullSecrets" . | nindent 6 }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.schedulerName }}
schedulerName: {{ .Values.schedulerName | quote }}
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
{{- if .Values.affinity }}
affinity: {{- include "common.tplvalues.render" ( dict "value" .Values.affinity "context" $) | nindent 8 }}
{{- else }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
{{- else if eq .Values.antiAffinity "soft" }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAffinityPreset "context" $) | nindent 10 }}
podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAntiAffinityPreset "context" $) | nindent 10 }}
nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.nodeAffinityPreset.type "key" .Values.nodeAffinityPreset.key "values" .Values.nodeAffinityPreset.values) | nindent 10 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector: {{- include "common.tplvalues.render" ( dict "value" .Values.nodeSelector "context" $) | nindent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.tolerations "context" .) | nindent 8 }}
{{- end }}
{{- if .Values.podSecurityContext.enabled }}
securityContext: {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 8 }}
{{- end }}
{{- if .Values.initContainers }}
initContainers: {{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
{{- end }}
containers:
- name: {{ template "nats.name" . }}
image: {{ template "nats.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- {{ .Values.natsFilename }}
args:
- -c
- /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
# to ensure nats could run with non-root user, we put the configuration
# file under `/opt/bitnami/nats/{{ .Values.natsFilename }}.conf`, please check the link below
# for the implementation inside Dockerfile:
# - https://github.com/bitnami/bitnami-docker-nats#configuration
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 8 }}
{{- end }}
ports:
- name: client
containerPort: {{ .Values.client.service.port }}
- name: cluster
containerPort: {{ .Values.cluster.service.port }}
- name: monitoring
containerPort: {{ .Values.monitoring.service.port }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /
port: monitoring
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /
port: monitoring
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
volumeMounts:
- name: config
mountPath: /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
subPath: {{ .Values.natsFilename }}.conf
{{- if .Values.sidecars }}
{{ toYaml .Values.sidecars | indent 6 }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
image: {{ template "nats.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
args: {{- toYaml .Values.metrics.args | nindent 10 }}
- "http://localhost:{{ .Values.monitoring.service.port }}"
ports:
- name: nats
image: {{ template "nats.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
{{- if .Values.command }}
{{- include "common.tplvalues.render" (dict "value" .Values.command "context" $) | nindent 12 }}
{{- else }}
- {{ .Values.natsFilename }}
{{- end }}
args:
{{- if .Values.args }}
{{- include "common.tplvalues.render" (dict "value" .Values.args "context" $) | nindent 12 }}
{{- else }}
- -c
- /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
# to ensure nats could run with non-root user, we put the configuration
# file under `/opt/bitnami/nats/{{ .Values.natsFilename }}.conf`, please check the link below
# for the implementation inside Dockerfile:
# - https://github.com/bitnami/bitnami-docker-nats#configuration
{{- range $key, $value := .Values.extraFlags }}
--{{ $key }}{{ if $value }}={{ $value }}{{ end }}
{{- end }}
{{- end }}
{{- if .Values.extraEnvVars }}
env: {{- include "common.tplvalues.render" (dict "value" .Values.extraEnvVars "context" $) | nindent 12 }}
{{- end }}
{{- if or .Values.extraEnvVarsCM .Values.extraEnvVarsSecret }}
envFrom:
{{- if .Values.extraEnvVarsCM }}
- configMapRef:
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsCM "context" $) }}
{{- end }}
{{- if .Values.extraEnvVarsSecret }}
- secretRef:
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsSecret "context" $) }}
{{- end }}
{{- end }}
ports:
- name: client
containerPort: {{ .Values.client.service.port }}
- name: cluster
containerPort: {{ .Values.cluster.service.port }}
- name: monitoring
containerPort: {{ .Values.monitoring.service.port }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.livenessProbe "enabled") "context" $) | nindent 12 }}
{{- else if .Values.customLivenessProbe }}
livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.readinessProbe "enabled") "context" $) | nindent 12 }}
{{- else if .Values.customReadinessProbe }}
readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
volumeMounts:
- name: config
mountPath: /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
subPath: {{ .Values.natsFilename }}.conf
{{- if .Values.extraVolumeMounts }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumeMounts "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
containerPort: {{ .Values.metrics.port }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 5
timeoutSeconds: 1
resources:
{{ toYaml .Values.metrics.resources | indent 10 }}
{{- end }}
image: {{ template "nats.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
args: {{- include "common.tplvalues.render" (dict "value" .Values.metrics.flags "context" $) | nindent 12 }}
- "http://localhost:{{ .Values.monitoring.service.port }}"
ports:
- name: metrics
containerPort: {{ .Values.metrics.containerPort }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 5
timeoutSeconds: 1
{{- if .Values.metrics.resources }}
resources: {{- toYaml .Values.metrics.resources | nindent 12 }}
{{- end }}
{{- end }}
{{- if .Values.sidecars }}
{{- include "common.tplvalues.render" ( dict "value" .Values.sidecars "context" $) | nindent 8 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "nats.fullname" . }}
- name: config
configMap:
name: {{ template "common.names.fullname" . }}
{{- if .Values.extraVolumes }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumes "context" $) | nindent 8 }}
{{- end }}
{{- end }}

bitnami/nats/templates (extra resources list, new file)

@@ -0,0 +1,4 @@
{{- range .Values.extraDeploy }}
---
{{ include "common.tplvalues.render" (dict "value" . "context" $) }}
{{- end }}

bitnami/nats/templates (headless Service)

@@ -1,22 +1,23 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nats.fullname" . }}-headless
labels:
app: {{ template "nats.name" . }}
chart: {{ template "nats.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
name: {{ printf "%s-headless" (include "common.names.fullname" .) }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
type: ClusterIP
clusterIP: None
ports:
- name: client
port: {{ .Values.client.service.port }}
targetPort: client
- name: cluster
port: {{ .Values.cluster.service.port }}
targetPort: cluster
selector:
app: {{ template "nats.name" . }}
release: {{ .Release.Name | quote }}
- name: client
port: {{ .Values.client.service.port }}
targetPort: client
- name: cluster
port: {{ .Values.cluster.service.port }}
targetPort: cluster
selector: {{ include "common.labels.matchLabels" . | nindent 4 }}

bitnami/nats/templates (monitoring Ingress)

@@ -1,36 +1,53 @@
{{- if .Values.ingress.enabled -}}
{{- range .Values.ingress.hosts }}
apiVersion: {{ include "ingress.apiVersion" $ }}
{{- if .Values.ingress.enabled }}
apiVersion: {{ include "common.capabilities.ingress.apiVersion" . }}
kind: Ingress
metadata:
name: {{ template "nats.fullname" $ }}-monitoring
labels:
app: "{{ template "nats.name" $ }}"
chart: "{{ template "nats.chart" $ }}"
release: {{ $.Release.Name | quote }}
heritage: {{ $.Release.Service | quote }}
name: {{ printf "%s-monitoring" (include "common.names.fullname" .) }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if or .Values.ingress.annotations .Values.commonAnnotations .Values.ingress.certManager }}
annotations:
{{- if .tls }}
ingress.kubernetes.io/secure-backends: "true"
{{- if .Values.ingress.certManager }}
kubernetes.io/tls-acme: "true"
{{- end }}
{{- range $key, $value := .annotations }}
{{ $key }}: {{ $value | quote }}
{{- if .Values.ingress.annotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.ingress.annotations "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
{{- end }}
spec:
rules:
{{- if .Values.ingress.hostname }}
- host: {{ .Values.ingress.hostname }}
http:
paths:
- path: /
backend:
serviceName: {{ printf "%s-monitoring" (include "common.names.fullname" .) }}
servicePort: monitoring
{{- end }}
{{- range .Values.ingress.extraHosts }}
- host: {{ .name }}
http:
paths:
- path: {{ default "/" .path }}
backend:
serviceName: {{ template "nats.fullname" $ }}-monitoring
servicePort: monitoring
{{- if .tls }}
- path: {{ default "/" .path }}
backend:
serviceName: {{ printf "%s-monitoring" (include "common.names.fullname" .) }}
servicePort: monitoring
{{- end }}
{{- if or .Values.ingress.tls .Values.ingress.extraTls }}
tls:
- hosts:
- {{ .name }}
secretName: {{ .tlsSecret }}
{{- end }}
---
{{- end }}
{{- if .Values.ingress.tls }}
- hosts:
- {{ .Values.ingress.hostname }}
secretName: {{ printf "%s-monitoring-tls" .Values.ingress.hostname }}
{{- end }}
{{- if .Values.ingress.extraTls }}
{{- include "common.tplvalues.render" ( dict "value" .Values.ingress.extraTls "context" $ ) | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}
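A minimal values sketch exercising the template above (hostname and annotation are illustrative); with `tls: true` and no `ingress.secrets`, the chart falls back to the self-signed certificate generated by the TLS secrets template:

ingress:
  enabled: true
  hostname: nats.local
  annotations:
    kubernetes.io/ingress.class: nginx
  tls: true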

View File

@@ -2,19 +2,21 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nats.fullname" . }}-metrics
name: {{ template "common.names.fullname" . }}-metrics
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "nats.name" . }}
chart: {{ template "nats.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
app.kubernetes.io/component: "metrics"
{{- if .Values.metrics.service.labels -}}
{{- toYaml .Values.metrics.service.labels | nindent 4 }}
{{- end -}}
{{- if .Values.metrics.service.annotations }}
annotations: {{- toYaml .Values.metrics.service.annotations | nindent 4 }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
app.kubernetes.io/component: metrics
{{- if or .Values.metrics.service.annotations .Values.commonAnnotations }}
annotations:
{{- if .Values.metrics.service.annotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.metrics.service.annotations "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ .Values.metrics.service.type }}
@@ -23,9 +25,7 @@ spec:
{{- end }}
ports:
- name: metrics
port: {{ .Values.metrics.port }}
port: {{ .Values.metrics.service.port }}
targetPort: metrics
selector:
app: {{ template "nats.name" . }}
release: {{ .Release.Name }}
selector: {{ include "common.labels.matchLabels" . | nindent 4 }}
{{- end }}

View File

@@ -1,15 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nats.fullname" . }}-monitoring
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.monitoring.service.annotations }}
name: {{ printf "%s-monitoring" (include "common.names.fullname" .) }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if or .Values.monitoring.service.annotations .Values.commonAnnotations }}
annotations:
{{ toYaml .Values.monitoring.service.annotations | indent 4 }}
{{- if .Values.monitoring.service.annotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.monitoring.service.annotations "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ .Values.monitoring.service.type }}
@@ -23,6 +28,4 @@ spec:
{{- if and (eq .Values.monitoring.service.type "NodePort") (not (empty .Values.monitoring.service.nodePort)) }}
nodePort: {{ .Values.monitoring.service.nodePort }}
{{- end }}
selector:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
selector: {{ include "common.labels.matchLabels" . | nindent 4 }}

View File

@@ -2,29 +2,30 @@
kind: NetworkPolicy
apiVersion: {{ template "networkPolicy.apiVersion" . }}
metadata:
name: {{ template "nats.fullname" . }}
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
name: {{ template "common.names.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
podSelector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
matchLabels: {{ include "common.labels.matchLabels" . | nindent 6 }}
ingress:
# Allow inbound connections
- ports:
- port: {{ .Values.client.service.port }}
- port: {{ .Values.client.service.port }}
{{- if not .Values.networkPolicy.allowExternal }}
from:
- podSelector:
matchLabels:
{{ template "nats.fullname" . }}-client: "true"
{{ template "common.names.fullname" . }}-client: "true"
{{- end }}
- ports:
- port: {{ .Values.cluster.service.port }}
- port: {{ .Values.cluster.service.port }}
- ports:
- port: {{ .Values.monitoring.service.port }}
- port: {{ .Values.monitoring.service.port }}
{{- end }}
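When `networkPolicy.allowExternal` is false, only pods carrying the generated client label can reach the client port. A sketch of such a consumer pod, assuming the release fullname renders as `my-release-nats` (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nats-consumer
  labels:
    my-release-nats-client: "true"
spec:
  containers:
    - name: client
      image: your-image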

View File

@@ -2,17 +2,18 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "nats.fullname" . }}
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
name: {{ template "common.names.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
matchLabels: {{ include "common.labels.matchLabels" . | nindent 6 }}
{{- if .Values.pdb.minAvailable }}
minAvailable: {{ .Values.pdb.minAvailable }}
{{- end }}
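A values sketch for the budget above (numbers are illustrative); note that the PodDisruptionBudget API accepts only one of `minAvailable`/`maxUnavailable` at a time:

pdb:
  create: true
  minAvailable: 2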

View File

@@ -2,32 +2,32 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "nats.fullname" . }}
name: {{ template "common.names.fullname" . }}
{{- if .Values.metrics.serviceMonitor.namespace }}
namespace: {{ .Values.metrics.serviceMonitor.namespace }}
{{- else }}
namespace: {{ .Release.Namespace }}
{{- end }}
labels:
app: {{ template "nats.name" . }}
chart: {{ template "nats.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- range $key, $value := .Values.metrics.serviceMonitor.selector }}
{{ $key }}: {{ $value | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor.selector }}
{{- include "common.tplvalues.render" ( dict "value" .Values.metrics.serviceMonitor.selector "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
endpoints:
- port: metrics
{{- if .Values.metrics.serviceMonitor.interval }}
interval: {{ .Values.metrics.serviceMonitor.interval }}
{{- end }}
- port: metrics
{{- if .Values.metrics.serviceMonitor.interval }}
interval: {{ .Values.metrics.serviceMonitor.interval }}
{{- end }}
selector:
matchLabels:
app: {{ template "nats.name" . }}
release: {{ .Release.Name }}
app.kubernetes.io/component: "metrics"
matchLabels: {{ include "common.labels.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: metrics
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
- {{ .Release.Namespace }}
{{- end -}}
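A values sketch enabling the ServiceMonitor above (namespace and interval are illustrative); `metrics.enabled` must also be true so the metrics Service exists for the selector to match:

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s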

View File

@@ -2,169 +2,159 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "nats.fullname" . }}
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
name: {{ template "common.names.fullname" . }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
serviceName: {{ template "nats.fullname" . }}-headless
replicas: {{ .Values.replicaCount }}
updateStrategy:
type: {{ .Values.statefulset.updateStrategy }}
{{- if (eq "Recreate" .Values.statefulset.updateStrategy) }}
rollingUpdate: null
{{- else }}
{{- if .Values.statefulset.rollingUpdatePartition }}
rollingUpdate:
partition: {{ .Values.statefulset.rollingUpdatePartition }}
{{- end }}
{{- end }}
serviceName: {{ printf "%s-headless" (include "common.names.fullname" .) }}
updateStrategy: {{- include "common.tplvalues.render" (dict "value" .Values.updateStrategy "context" $ ) | nindent 4 }}
selector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
template:
metadata:
labels:
app: "{{ template "nats.name" . }}"
chart: "{{ template "nats.chart" . }}"
release: {{ .Release.Name | quote }}
labels: {{- include "common.labels.standard" . | nindent 8 }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 8 }}
{{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 8 }}
{{- end }}
      {{- if or .Values.podAnnotations .Values.metrics.enabled }}
annotations:
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
{{- if .Values.metrics.podAnnotations }}
{{ toYaml .Values.metrics.podAnnotations | indent 8 }}
{{- end }}
{{- end }}
spec:
{{- include "nats.imagePullSecrets" . | indent 6 }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- if .Values.podAnnotations }}
{{- include "common.tplvalues.render" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
{{- end }}
{{- end }}
spec:
{{- include "nats.imagePullSecrets" . | nindent 6 }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.schedulerName }}
schedulerName: {{ .Values.schedulerName | quote }}
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
{{- if .Values.affinity }}
affinity: {{- include "common.tplvalues.render" ( dict "value" .Values.affinity "context" $) | nindent 8 }}
{{- else }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
{{- else if eq .Values.antiAffinity "soft" }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "nats.name" . }}"
release: {{ .Release.Name | quote }}
podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAffinityPreset "context" $) | nindent 10 }}
podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAntiAffinityPreset "context" $) | nindent 10 }}
nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.nodeAffinityPreset.type "key" .Values.nodeAffinityPreset.key "values" .Values.nodeAffinityPreset.values) | nindent 10 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector: {{- include "common.tplvalues.render" ( dict "value" .Values.nodeSelector "context" $) | nindent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.tolerations "context" .) | nindent 8 }}
{{- end }}
{{- if .Values.podSecurityContext.enabled }}
securityContext: {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 8 }}
{{- end }}
{{- if .Values.initContainers }}
initContainers: {{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
{{- end }}
containers:
- name: {{ template "nats.name" . }}
image: {{ template "nats.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- {{ .Values.natsFilename }}
args:
- -c
- /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
          # to ensure NATS can run as a non-root user, we put the configuration
          # file under `/opt/bitnami/nats/{{ .Values.natsFilename }}.conf`; please check the link below
          # for the implementation inside the Dockerfile:
          # - https://github.com/bitnami/bitnami-docker-nats#configuration
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 8 }}
{{- end }}
ports:
- name: client
containerPort: {{ .Values.client.service.port }}
- name: cluster
containerPort: {{ .Values.cluster.service.port }}
- name: monitoring
containerPort: {{ .Values.monitoring.service.port }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /
port: monitoring
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /
port: monitoring
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
volumeMounts:
- name: config
mountPath: /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
subPath: {{ .Values.natsFilename }}.conf
{{- if .Values.sidecars }}
{{ toYaml .Values.sidecars | indent 6 }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
image: {{ template "nats.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
args: {{- toYaml .Values.metrics.args | nindent 10 }}
- "http://localhost:{{ .Values.monitoring.service.port }}"
ports:
- name: nats
image: {{ template "nats.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
{{- if .Values.command }}
{{- include "common.tplvalues.render" (dict "value" .Values.command "context" $) | nindent 12 }}
{{- else }}
- {{ .Values.natsFilename }}
{{- end }}
args:
{{- if .Values.args }}
{{- include "common.tplvalues.render" (dict "value" .Values.args "context" $) | nindent 12 }}
{{- else }}
- -c
- /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
            # to ensure NATS can run as a non-root user, we put the configuration
            # file under `/opt/bitnami/nats/{{ .Values.natsFilename }}.conf`; please check the link below
            # for the implementation inside the Dockerfile:
            # - https://github.com/bitnami/bitnami-docker-nats#configuration
{{- range $key, $value := .Values.extraFlags }}
--{{ $key }}{{ if $value }}={{ $value }}{{ end }}
{{- end }}
{{- end }}
{{- if .Values.extraEnvVars }}
env: {{- include "common.tplvalues.render" (dict "value" .Values.extraEnvVars "context" $) | nindent 12 }}
{{- end }}
{{- if or .Values.extraEnvVarsCM .Values.extraEnvVarsSecret }}
envFrom:
{{- if .Values.extraEnvVarsCM }}
- configMapRef:
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsCM "context" $) }}
{{- end }}
{{- if .Values.extraEnvVarsSecret }}
- secretRef:
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsSecret "context" $) }}
{{- end }}
{{- end }}
ports:
- name: client
containerPort: {{ .Values.client.service.port }}
- name: cluster
containerPort: {{ .Values.cluster.service.port }}
- name: monitoring
containerPort: {{ .Values.monitoring.service.port }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.livenessProbe "enabled") "context" $) | nindent 12 }}
{{- else if .Values.customLivenessProbe }}
livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.readinessProbe "enabled") "context" $) | nindent 12 }}
{{- else if .Values.customReadinessProbe }}
readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
volumeMounts:
- name: config
mountPath: /opt/bitnami/nats/{{ .Values.natsFilename }}.conf
subPath: {{ .Values.natsFilename }}.conf
{{- if .Values.extraVolumeMounts }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumeMounts "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
containerPort: {{ .Values.metrics.port }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 5
timeoutSeconds: 1
resources:
{{ toYaml .Values.metrics.resources | indent 10 }}
{{- end }}
image: {{ template "nats.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
args: {{- include "common.tplvalues.render" (dict "value" .Values.metrics.flags "context" $) | nindent 12 }}
- "http://localhost:{{ .Values.monitoring.service.port }}"
ports:
- name: metrics
containerPort: {{ .Values.metrics.containerPort }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 5
timeoutSeconds: 1
{{- if .Values.metrics.resources }}
resources: {{- toYaml .Values.metrics.resources | nindent 12 }}
{{- end }}
{{- end }}
{{- if .Values.sidecars }}
{{- include "common.tplvalues.render" ( dict "value" .Values.sidecars "context" $) | nindent 8 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "nats.fullname" . }}
- name: config
configMap:
name: {{ template "common.names.fullname" . }}
{{- if .Values.extraVolumes }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumes "context" $) | nindent 8 }}
{{- end }}
{{- end }}
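Since the container template renders `extraVolumeMounts` alongside the pod-level `extraVolumes`, extra content can be wired in from values alone. A sketch, assuming a pre-created ConfigMap (names are illustrative):

extraVolumes:
  - name: extra-config
    configMap:
      name: my-extra-config
extraVolumeMounts:
  - name: extra-config
    mountPath: /opt/bitnami/nats/extra
    readOnly: true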

View File

@@ -1,18 +1,43 @@
{{- if .Values.ingress.enabled }}
{{- if .Values.ingress.secrets }}
{{- range .Values.ingress.secrets }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .name }}
labels:
app: "{{ template "nats.name" $ }}"
chart: "{{ template "nats.chart" $ }}"
release: {{ $.Release.Name | quote }}
heritage: {{ $.Release.Service | quote }}
namespace: {{ $.Release.Namespace }}
labels: {{- include "common.labels.standard" $ | nindent 4 }}
{{- if $.Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" $.Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if $.Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" $.Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
type: kubernetes.io/tls
data:
tls.crt: {{ .certificate | b64enc }}
tls.key: {{ .key | b64enc }}
---
{{- end }}
{{- else if and .Values.ingress.tls (not .Values.ingress.certManager) }}
{{- $ca := genCA "nats-ca" 365 }}
{{- $cert := genSignedCert .Values.ingress.hostname nil (list .Values.ingress.hostname) 365 $ca }}
apiVersion: v1
kind: Secret
metadata:
name: {{ printf "%s-monitoring-tls" .Values.ingress.hostname }}
namespace: {{ .Release.Namespace }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
type: kubernetes.io/tls
data:
tls.crt: {{ $cert.Cert | b64enc | quote }}
tls.key: {{ $cert.Key | b64enc | quote }}
ca.crt: {{ $ca.Cert | b64enc | quote }}
{{- end }}
{{- end }}
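To provide your own certificate instead of the generated self-signed one, the secret name must line up with the `<hostname>-monitoring-tls` name referenced by the ingress; a values sketch with the PEM contents elided:

ingress:
  enabled: true
  tls: true
  secrets:
    - name: nats.local-monitoring-tls
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...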

View File

@@ -19,126 +19,37 @@ image:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
# pullSecrets:
# - name: myRegistryKeySecretName
pullSecrets: []
## String to partially override nats.fullname template (will maintain the release name)
## String to partially override common.names.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override nats.fullname template
## String to fully override common.names.fullname template
##
# fullnameOverride:
## NATS replicas
replicaCount: 3
## NATS Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
## Add labels to all the deployed resources
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
commonLabels: {}
## NATS Node selector and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## Add annotations to all the deployed resources
##
# nodeSelector: {"beta.kubernetes.io/arch": "amd64"}
# tolerations: []
commonAnnotations: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
## Kubernetes Cluster Domain
##
# schedulerName:
clusterDomain: cluster.local
## Pods anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Extra objects to deploy (value evaluated as a template)
##
antiAffinity: soft
## Pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## Additional pod labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Pod disruption budget
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
## Specifies whether a Pod disruption budget should be created
##
create: true
## Minimum number / percentage of pods that should remain scheduled
##
minAvailable: 1
## Maximum number / percentage of pods that may be made unavailable
##
maxUnavailable: ""
## Pod Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## NATS cluster resource type under Kubernetes. Allowed values: statefulset (default) or deployment
## ref:
## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
resourceType: "statefulset"
## Update strategy for statefulset, can be set to RollingUpdate or OnDelete by default.
## https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
statefulset:
updateStrategy: OnDelete
## Partition update strategy
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
# rollingUpdatePartition:
## Update strategy for deployment, can be set to RollingUpdate or OnDelete by default.
## https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
deployment:
updateType: RollingUpdate
# maxSurge: 25%
# maxUnavailable: 25%
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
# cpu: 500m
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 256Mi
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
extraDeploy: []
## Client Authentication
## ref: https://github.com/nats-io/gnatsd#authentication
@@ -174,19 +85,224 @@ maxControlLine: 512
maxPayload: 65536
writeDeadline: "2s"
## Network Policy
## https://kubernetes.io/docs/concepts/services-networking/network-policies/
## NATS filenames:
## - For NATS 1.x versions, some filenames (binary, configuration file, pid file) use `gnatsd` as part of the name.
## - For NATS 2.x versions, those filenames use `nats-server` instead.
## In order to make the chart compatible with both NATS 1.x and 2.x, we have parametrized the following value
## to specify the proper filename according to the image version.
##
networkPolicy:
## Enable creation of NetworkPolicy resources.
enabled: true
natsFilename: nats-server
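## For instance, to deploy a NATS 1.x image you would set (illustrative):
## natsFilename: gnatsd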
## The Policy model to apply. When set to false, only pods with the correct
## client labels will have network access to the port NATS is listening
## on. When true, NATS will accept connections from any source
## (with the correct destination port).
## Command and args for running the container (set to default if not set). Use array form
##
command: []
args: []
## Extra flags to be passed to NATS
## Example:
## extraFlags:
## tls.insecure-skip-tls-verify: ""
## web.telemetry-path: "/metrics"
##
extraFlags: {}
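## As a sketch of how these render: each key becomes `--key` in the container args and,
## when a value is set, `--key=value`. E.g. `tls.insecure-skip-tls-verify: ""` above
## renders as `--tls.insecure-skip-tls-verify`, while `web.telemetry-path: "/metrics"`
## renders as `--web.telemetry-path=/metrics`.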
## An array to add extra env vars
## Example:
## extraEnvVars:
## - name: FOO
## value: "bar"
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
extraEnvVarsCM:
## Secret with extra environment variables
##
extraEnvVarsSecret:
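## A sketch, assuming a pre-created ConfigMap and Secret holding env vars (names are illustrative):
## extraEnvVarsCM: nats-extra-env
## extraEnvVarsSecret: nats-extra-env-secret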
## NATS cluster resource type under Kubernetes. Allowed values: statefulset (default) or deployment
## ref:
## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
##
resourceType: "statefulset"
## Number of NATS replicas to deploy
##
replicaCount: 3
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Pod Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## Strategy to use to update Pods
##
updateStrategy:
## StrategyType
## Can be set to RollingUpdate or OnDelete
##
allowExternal: false
type: RollingUpdate
## NATS pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
podSecurityContext:
enabled: false
## fsGroup: 1001
## NATS containers' SecurityContext
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
containerSecurityContext:
enabled: false
## runAsUser: 1001
## runAsNonRoot: true
## NATS resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases the chances the chart runs on environments with
  # limited resources, such as Minikube. If you do want to specify resources, uncomment the
  # following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 200m
# memory: 256Mi
requests: {}
# cpu: 200m
# memory: 256Mi
## NATS containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
##
livenessProbe:
enabled: true
httpGet:
path: /
port: monitoring
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
httpGet:
path: /
port: monitoring
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Custom Liveness probes for NATS
##
customLivenessProbe: {}
## Custom Readiness probes for NATS
##
customReadinessProbe: {}
## Pod extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Annotations for server pods.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAffinityPreset: ""
## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
## Node affinity type
## Allowed values: soft, hard
type: ""
## Node label key to match
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## Node label values to match
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Extra volumes to add to the deployment
##
extraVolumes: []
## Extra volume mounts to add to the container
##
extraVolumeMounts: []
## Add init containers to the NATS pods.
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## Add sidecars to the NATS pods.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## NATS svc used for client connections
## ref: https://github.com/nats-io/gnatsd#running
@@ -258,59 +374,109 @@ monitoring:
##
loadBalancerIP:
## Configure the ingress resource that allows you to access the
## NATS Monitoring. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
## Ingress configuration
##
ingress:
## Set to true to enable ingress record generation
##
enabled: true
# The list of hostnames to be covered with this ingress record.
# Most likely this will be just one host, but in the event more hosts are needed, this is an array
hosts:
- name: nats.local
## Set this to true in order to enable TLS on the ingress record
tls: false
## Set this to true in order to add the corresponding annotations for cert-manager
##
certManager: false
## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
tlsSecret: nats.local-tls
## When the ingress is enabled, a host pointing to this will be created
##
hostname: nats.local
## Ingress annotations done as key:value pairs
## If you're using kube-lego, you will want to add:
## kubernetes.io/tls-acme: true
##
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
##
## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: true
## Ingress annotations done as key:value pairs
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
##
## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
##
annotations: {}
## Enable TLS configuration for the hostname defined at ingress.hostname parameter
## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret, rely on cert-manager to create it, or
## let the chart create self-signed certificates for you
##
tls: false
## The list of additional hostnames to be covered with this ingress record.
## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
## Example:
## extraHosts:
## - name: nats.local
## path: /
##
extraHosts: []
## The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## Example:
## extraTls:
## - hosts:
## - nats.local
## secretName: nats.local-tls
##
extraTls: []
secrets:
## If you're providing your own certificates, please use this to add the certificates as secrets
## key and certificate should start with -----BEGIN CERTIFICATE----- or
## -----BEGIN RSA PRIVATE KEY-----
##
## name should line up with a tlsSecret set further up
## If you're using kube-lego, this is unneeded, as it will create the secret for you if it is not set
## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY-----
## name should line up with a secretName set further up
##
## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you
## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
# - name: nats.local-tls
# key:
# certificate:
##
## Example
## secrets:
## - name: nats.local-tls
## key: ""
## certificate: ""
##
secrets: []
# Optional additional arguments
extraArgs: []
## NATS filenames:
## - For NATS 1.x versions, some filenames (binary, configuration file, pid file) use `gnatsd` as part of the name.
## - For NATS 2.x versions, those filenames use `nats-server` instead.
## In order to make the chart compatible with both NATS 1.x and 2.x, we have parametrized the following value
## to specify the proper filename according to the image version.
## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
natsFilename: nats-server
networkPolicy:
## Enable creation of NetworkPolicy resources
##
enabled: true
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports NATS is listening
  ## on. When true, NATS will accept connections from any source
  ## (with the correct destination port).
##
allowExternal: false
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
## Example:
## additionalRules:
## - matchLabels:
## - role: frontend
## - matchExpressions:
## - key: role
## operator: In
## values:
## - frontend
##
additionalRules: {}
## NATS Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: true
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## Metrics / Prometheus NATS Exporter
##
@@ -331,20 +497,31 @@ metrics:
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
resources: {}
## Metrics exporter port
port: 7777
## Metrics exporter annotations
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "7777"
##
containerPort: 7777
## Metrics exporter flags
args:
##
flags:
- -connz
- -routez
- -subz
- -varz
# Enable this if you're using https://github.com/coreos/prometheus-operator
## Metrics service configuration
##
service:
type: ClusterIP
port: 7777
    ## Use loadBalancerIP to request a specific static IP,
    ## otherwise leave blank
# loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.service.port }}"
labels: {}
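    ## NOTE: the annotations above are themselves evaluated as a template, so with the
    ## default metrics.service.port the rendered annotation is prometheus.io/port: "7777"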
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
enabled: false
## Specify a namespace if needed
@@ -356,20 +533,3 @@ metrics:
## [Kube Prometheus Selector Label](https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#exporters)
selector:
prometheus: kube-prometheus
service:
type: ClusterIP
## Use serviceLoadBalancerIP to request a specific static IP,
## otherwise leave blank
# loadBalancerIP:
annotations: {}
labels: {}
sidecars:
## Add sidecars to the pod.
## e.g.
# - name: your-image-name
# image: your-image
# imagePullPolicy: Always
# ports:
# - name: portname
# containerPort: 1234

View File

@@ -19,126 +19,37 @@ image:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
# pullSecrets:
# - name: myRegistryKeySecretName
pullSecrets: []
## String to partially override nats.fullname template (will maintain the release name)
## String to partially override common.names.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override nats.fullname template
## String to fully override common.names.fullname template
##
# fullnameOverride:
## NATS replicas
replicaCount: 1
## NATS Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
## Add labels to all the deployed resources
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
commonLabels: {}
## NATS Node selector and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## Add annotations to all the deployed resources
##
# nodeSelector: {"beta.kubernetes.io/arch": "amd64"}
# tolerations: []
commonAnnotations: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
## Kubernetes Cluster Domain
##
# schedulerName:
clusterDomain: cluster.local
## Pods anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Extra objects to deploy (value evaluated as a template)
##
antiAffinity: soft
## Pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## Additional pod labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Pod disruption budget
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
## Specifies whether a Pod disruption budget should be created
##
create: false
## Minimum number / percentage of pods that should remain scheduled
##
minAvailable: 1
## Maximum number / percentage of pods that may be made unavailable
##
maxUnavailable: ""
## Pod Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## NATS cluster resource type under Kubernetes. Allowed values: statefulset (default) or deployment
## ref:
## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
resourceType: "statefulset"
## Update strategy for statefulset, can be set to RollingUpdate or OnDelete by default.
## https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
statefulset:
updateStrategy: OnDelete
## Partition update strategy
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
# rollingUpdatePartition:
## Update strategy for deployment, can be set to RollingUpdate or OnDelete by default.
## https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
deployment:
updateType: RollingUpdate
# maxSurge: 25%
# maxUnavailable: 25%
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
# cpu: 500m
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 256Mi
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
extraDeploy: []
## Client Authentication
## ref: https://github.com/nats-io/gnatsd#authentication
@@ -174,19 +85,224 @@ debug:
# maxPayload: 65536
# writeDeadline: "2s"
## Network Policy
## https://kubernetes.io/docs/concepts/services-networking/network-policies/
## NATS filenames:
## - For NATS 1.x versions, some filenames (binary, configuration file, pid file) use `gnatsd` as part of the name.
## - For NATS 2.x versions, those filenames use `nats-server` instead.
## In order to make the chart compatible with both NATS 1.x and 2.x, we have parametrized the following value
## to specify the proper filename according to the image version.
##
networkPolicy:
## Enable creation of NetworkPolicy resources.
enabled: false
natsFilename: nats-server
## The Policy model to apply. When set to false, only pods with the correct
## client labels will have network access to the port NATS is listening
## on. When true, NATS will accept connections from any source
## (with the correct destination port).
## Command and args for running the container (set to default if not set). Use array form
##
command: []
args: []
## Extra flags to be passed to NATS
## Example:
## extraFlags:
## tls.insecure-skip-tls-verify: ""
## web.telemetry-path: "/metrics"
##
extraFlags: {}
## An array to add extra env vars
## Example:
## extraEnvVars:
## - name: FOO
## value: "bar"
##
extraEnvVars: []
## ConfigMap with extra environment variables
##
extraEnvVarsCM:
## Secret with extra environment variables
##
extraEnvVarsSecret:
## NATS cluster resource type under Kubernetes. Allowed values: statefulset (default) or deployment
## ref:
## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
##
resourceType: "statefulset"
## Number of NATS replicas to deploy
##
replicaCount: 1
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Pod Priority Class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
# priorityClassName: ""
## Strategy to use to update Pods
##
updateStrategy:
## StrategyType
## Can be set to RollingUpdate or OnDelete
##
allowExternal: true
type: RollingUpdate
## NATS pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
##
podSecurityContext:
enabled: false
## fsGroup: 1001
## NATS containers' SecurityContext
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
containerSecurityContext:
enabled: false
## runAsUser: 1001
## runAsNonRoot: true
## NATS resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases the chances the chart runs on environments with
  # limited resources, such as Minikube. If you do want to specify resources, uncomment the
  # following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 200m
# memory: 256Mi
requests: {}
# cpu: 200m
# memory: 256Mi
## NATS containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
##
livenessProbe:
enabled: true
httpGet:
path: /
port: monitoring
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
httpGet:
path: /
port: monitoring
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## Custom Liveness probes for NATS
##
customLivenessProbe: {}
## Custom Readiness probes for NATS
##
customReadinessProbe: {}
## Pod extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Annotations for server pods.
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAffinityPreset: ""
## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
## Node affinity type
## Allowed values: soft, hard
type: ""
## Node label key to match
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## Node label values to match
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Extra volumes to add to the deployment
##
extraVolumes: []
## Extra volume mounts to add to the container
##
extraVolumeMounts: []
## Add init containers to the NATS pods.
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## Add sidecars to the NATS pods.
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## NATS svc used for client connections
## ref: https://github.com/nats-io/gnatsd#running
@@ -258,59 +374,109 @@ monitoring:
##
loadBalancerIP:
## Configure the ingress resource that allows you to access the
## NATS Monitoring. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
## Ingress configuration
##
ingress:
## Set to true to enable ingress record generation
##
enabled: false
# The list of hostnames to be covered with this ingress record.
# Most likely this will be just one host, but in the event more hosts are needed, this is an array
hosts:
- name: nats.local
## Set this to true in order to enable TLS on the ingress record
tls: false
## Set this to true in order to add the corresponding annotations for cert-manager
##
certManager: false
## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
tlsSecret: nats.local-tls
## When the ingress is enabled, a host pointing to this will be created
##
hostname: nats.local
## Ingress annotations done as key:value pairs
## If you're using kube-lego, you will want to add:
## kubernetes.io/tls-acme: true
##
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
##
## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: true
## Ingress annotations done as key:value pairs
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
##
## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
##
annotations: {}
## Enable TLS configuration for the hostname defined at ingress.hostname parameter
## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret, rely on cert-manager to create it, or
## let the chart create self-signed certificates for you
##
tls: false
## The list of additional hostnames to be covered with this ingress record.
## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
## Example:
## extraHosts:
## - name: nats.local
## path: /
##
extraHosts: []
## The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## Example:
## extraTls:
## - hosts:
## - nats.local
## secretName: nats.local-tls
##
extraTls: []
secrets:
## If you're providing your own certificates, please use this to add the certificates as secrets
## key and certificate should start with -----BEGIN CERTIFICATE----- or
## -----BEGIN RSA PRIVATE KEY-----
##
## name should line up with a tlsSecret set further up
## If you're using kube-lego, this is unneeded, as it will create the secret for you if it is not set
## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY-----
## name should line up with a secretName set further up
##
## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you
## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
# - name: nats.local-tls
# key:
# certificate:
##
## Example
## secrets:
## - name: nats.local-tls
## key: ""
## certificate: ""
##
secrets: []
# Optional additional arguments
extraArgs: []
## NATS filenames:
## - For NATS 1.x versions, some filenames (binary, configuration file, pid file) use `gnatsd` as part of the name.
## - For NATS 2.x versions, those filenames use `nats-server` instead.
## In order to make the chart compatible with both NATS 1.x and 2.x, we have parametrized the following value
## to specify the proper filename according to the image version.
## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
natsFilename: nats-server
networkPolicy:
## Enable creation of NetworkPolicy resources
##
enabled: false
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports NATS is listening
  ## on. When true, NATS will accept connections from any source
  ## (with the correct destination port).
##
allowExternal: true
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
## Example:
## additionalRules:
## - matchLabels:
## - role: frontend
## - matchExpressions:
## - key: role
## operator: In
## values:
## - frontend
##
additionalRules: {}
## NATS Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
create: false
## Min number of pods that must still be available after the eviction
##
minAvailable: 1
## Max number of pods that can be unavailable after the eviction
##
# maxUnavailable: 1
## Metrics / Prometheus NATS Exporter
##
@@ -331,20 +497,31 @@ metrics:
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
resources: {}
## Metrics exporter port
port: 7777
## Metrics exporter annotations
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "7777"
##
containerPort: 7777
## Metrics exporter flags
args:
##
flags:
- -connz
- -routez
- -subz
- -varz
# Enable this if you're using https://github.com/coreos/prometheus-operator
## Metrics service configuration
##
service:
type: ClusterIP
port: 7777
    ## Use loadBalancerIP to request a specific static IP,
    ## otherwise leave blank
# loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.service.port }}"
labels: {}
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
enabled: false
## Specify a namespace if needed
@@ -356,20 +533,3 @@ metrics:
## [Kube Prometheus Selector Label](https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#exporters)
selector:
prometheus: kube-prometheus
service:
type: ClusterIP
## Use serviceLoadBalancerIP to request a specific static IP,
## otherwise leave blank
# loadBalancerIP:
annotations: {}
labels: {}
sidecars:
## Add sidecars to the pod.
## e.g.
# - name: your-image-name
# image: your-image
# imagePullPolicy: Always
# ports:
# - name: portname
# containerPort: 1234