From 7c27166ad1bf969cd74ad9473ac93768bcb678a2 Mon Sep 17 00:00:00 2001 From: Sameer Naik Date: Tue, 26 Nov 2019 14:00:28 +0530 Subject: [PATCH] [bitnami/node-exporter] new chart Signed-off-by: Sameer Naik --- bitnami/node-exporter/.helmignore | 22 ++ bitnami/node-exporter/Chart.yaml | 18 ++ bitnami/node-exporter/README.md | 141 +++++++++++ bitnami/node-exporter/templates/NOTES.txt | 37 +++ bitnami/node-exporter/templates/_helpers.tpl | 178 ++++++++++++++ .../node-exporter/templates/daemonset.yaml | 113 +++++++++ .../templates/psp-clusterrole.yaml | 13 + .../templates/psp-clusterrolebinding.yaml | 15 ++ bitnami/node-exporter/templates/psp.yaml | 44 ++++ bitnami/node-exporter/templates/service.yaml | 34 +++ .../templates/serviceaccount.yaml | 7 + .../templates/servicemonitor.yaml | 30 +++ bitnami/node-exporter/values-production.yaml | 222 ++++++++++++++++++ bitnami/node-exporter/values.yaml | 222 ++++++++++++++++++ 14 files changed, 1096 insertions(+) create mode 100644 bitnami/node-exporter/.helmignore create mode 100644 bitnami/node-exporter/Chart.yaml create mode 100644 bitnami/node-exporter/README.md create mode 100644 bitnami/node-exporter/templates/NOTES.txt create mode 100644 bitnami/node-exporter/templates/_helpers.tpl create mode 100644 bitnami/node-exporter/templates/daemonset.yaml create mode 100644 bitnami/node-exporter/templates/psp-clusterrole.yaml create mode 100644 bitnami/node-exporter/templates/psp-clusterrolebinding.yaml create mode 100644 bitnami/node-exporter/templates/psp.yaml create mode 100644 bitnami/node-exporter/templates/service.yaml create mode 100644 bitnami/node-exporter/templates/serviceaccount.yaml create mode 100644 bitnami/node-exporter/templates/servicemonitor.yaml create mode 100644 bitnami/node-exporter/values-production.yaml create mode 100644 bitnami/node-exporter/values.yaml diff --git a/bitnami/node-exporter/.helmignore b/bitnami/node-exporter/.helmignore new file mode 100644 index 0000000000..50af031725 --- /dev/null +++ 
b/bitnami/node-exporter/.helmignore
@@ -0,0 +1,22 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
diff --git a/bitnami/node-exporter/Chart.yaml b/bitnami/node-exporter/Chart.yaml
new file mode 100644
index 0000000000..ecf1cfced8
--- /dev/null
+++ b/bitnami/node-exporter/Chart.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+appVersion: 0.18.1
+description: Prometheus exporter for hardware and OS metrics exposed by UNIX kernels, with pluggable metric collectors.
+name: node-exporter
+version: 0.1.0
+keywords:
+  - prometheus
+  - node-exporter
+  - monitoring
+home: https://prometheus.io/
+icon: https://bitnami.com/assets/stacks/node-exporter/img/node-exporter-stack-220x234.png
+sources:
+- https://github.com/bitnami/bitnami-docker-node-exporter
+- https://github.com/prometheus/node_exporter
+maintainers:
+- name: Bitnami
+  email: containers@bitnami.com
+engine: gotpl
diff --git a/bitnami/node-exporter/README.md b/bitnami/node-exporter/README.md
new file mode 100644
index 0000000000..48243991fa
--- /dev/null
+++ b/bitnami/node-exporter/README.md
@@ -0,0 +1,141 @@
+# Node Exporter
+
+[Node Exporter](https://github.com/prometheus/node_exporter) is a Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.
+
+## TL;DR
+
+```bash
+$ helm repo add bitnami https://charts.bitnami.com/bitnami
+$ helm install bitnami/node-exporter
+```
+
+## Introduction
+
+This chart bootstraps [Node Exporter](https://github.com/bitnami/bitnami-docker-node-exporter) on [Kubernetes](http://kubernetes.io) using the [Helm](https://helm.sh) package manager.
+ +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. + +## Prerequisites + +- Kubernetes 1.12+ +- Helm 2.11+ or Helm 3.0+ + +## Installing the Chart + +Add the `bitnami` charts repo to Helm: + +```bash +$ helm repo add bitnami https://charts.bitnami.com/bitnami +``` + +To install the chart with the release name `my-release`: + +```bash +$ helm install --name my-release bitnami/node-exporter +``` + +The command deploys Node Exporter on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation. + +## Uninstalling the Chart + +To uninstall/delete the `my-release` release: + +```bash +$ helm delete my-release +``` + +The command removes all the Kubernetes components associated with the chart and deletes the release. + +## Parameters + +The following table lists the configurable parameters of the Node Exporter chart and their default values. 
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
+| `global.storageClass` | Global storage class for dynamic provisioning | `nil` |
+| `global.labels` | Additional labels to apply to all resources | `{}` |
+| `nameOverride` | String to partially override the `node-exporter.name` template (the release name will be prepended) | `nil` |
+| `fullnameOverride` | String to fully override the `node-exporter.fullname` template | `nil` |
+| `rbac.create` | Whether to create & use RBAC resources or not | `true` |
+| `rbac.apiVersion` | Version of the RBAC API | `v1beta1` |
+| `rbac.pspEnabled` | Whether to create a PodSecurityPolicy | `true` |
+| `image.registry` | Node Exporter image registry | `docker.io` |
+| `image.repository` | Node Exporter image name | `bitnami/node-exporter` |
+| `image.tag` | Node Exporter image tag | `{TAG_NAME}` |
+| `image.pullPolicy` | Node Exporter image pull policy | `IfNotPresent` |
+| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
+| `extraArgs` | Additional command line arguments to pass to node-exporter | `{}` |
+| `extraVolumes` | Additional volumes for the node-exporter pods | `[]` |
+| `extraVolumeMounts` | Additional volumeMounts for the node-exporter container | `[]` |
+| `serviceAccount.create` | Specify whether to create a ServiceAccount for Node Exporter | `true` |
+| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the `node-exporter.fullname` template |
+| `securityContext.enabled` | Enable security context | `true` |
+| `securityContext.runAsUser` | User ID for the container | `1001` |
+| `securityContext.fsGroup` | Group ID for the container filesystem | `1001` |
+| `securityContext.runAsNonRoot` | Require the container to run as a non-root user | `true` |
+| `service.type` | Kubernetes service type | `ClusterIP` |
+| `service.port` | Node Exporter service port | `9100` |
+| `service.clusterIP` | Specific cluster IP when service type is `ClusterIP`. Use `None` for a headless service | `nil` |
+| `service.nodePort` | Kubernetes Service nodePort | `nil` |
+| `service.loadBalancerIP` | `loadBalancerIP` if service type is `LoadBalancer` | `nil` |
+| `service.loadBalancerSourceRanges` | Addresses that are allowed when service type is `LoadBalancer` | `[]` |
+| `service.annotations` | Additional annotations for the Node Exporter service | `{}` |
+| `service.labels` | Additional labels for the Node Exporter service | `{}` |
+| `updateStrategy` | The update strategy to apply to the DaemonSet | `{ "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "1" } }` |
+| `hostNetwork` | Expose the service to the host network | `true` |
+| `minReadySeconds` | `minReadySeconds` to avoid killing pods before we are ready | `0` |
+| `priorityClassName` | Priority class assigned to the Pods | `nil` |
+| `resources` | CPU/Memory resource requests/limits for the node-exporter container | `{}` |
+| `podLabels` | Pod labels | `{}` |
+| `podAnnotations` | Pod annotations | `{}` |
+| `nodeAffinity` | Node affinity (this value is evaluated as a template) | `{}` |
+| `podAntiAffinity` | Pod anti-affinity policy | `soft` |
+| `podAffinity` | Pod affinity, in addition to anti-affinity (this value is evaluated as a template) | `{}` |
+| `nodeSelector` | Node labels for pod assignment (this value is evaluated as a template) | `{}` |
+| `tolerations` | List of node taints to tolerate (this value is evaluated as a template) | `[]` |
+| `livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `120` |
+| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
+| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `6` |
+| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `readinessProbe.periodSeconds` | How often to perform the probe | `10` |
+| `readinessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `6` |
+| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `serviceMonitor.enabled` | Create a ServiceMonitor to monitor Node Exporter | `false` |
+| `serviceMonitor.namespace` | Namespace in which Prometheus is running | `nil` |
+| `serviceMonitor.interval` | Scrape interval (falls back to Prometheus' default when unset) | `nil` |
+| `serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `nil` |
+| `serviceMonitor.selector` | ServiceMonitor selector labels | `{}` |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, the following command sets the `minReadySeconds` of the Node Exporter Pods to `120` seconds.
+
+```bash
+$ helm install --name my-release --set minReadySeconds=120 bitnami/node-exporter
+```
+
+Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart.
For example,
+
+```bash
+$ helm install --name my-release -f values.yaml bitnami/node-exporter
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
+
+## Configuration and installation details
+
+### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
+
+It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
+
+Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
+
+## Upgrading
+
+```bash
+$ helm upgrade my-release bitnami/node-exporter
+```
diff --git a/bitnami/node-exporter/templates/NOTES.txt b/bitnami/node-exporter/templates/NOTES.txt
new file mode 100644
index 0000000000..9b3ee27651
--- /dev/null
+++ b/bitnami/node-exporter/templates/NOTES.txt
@@ -0,0 +1,37 @@
+** Please be patient while the chart is being deployed **
+
+Watch the Node Exporter DaemonSet status using the command:
+
+    kubectl get ds -w --namespace {{ .Release.Namespace }} -l app.kubernetes.io/name={{ template "node-exporter.name" . }},app.kubernetes.io/instance={{ .Release.Name }}
+
+Node Exporter can be accessed via port "{{ .Values.service.port }}" on the following DNS name from within your cluster:
+
+    {{ template "node-exporter.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
+
+To access Node Exporter from outside the cluster, execute the following commands:
+
+{{- if contains "LoadBalancer" .Values.service.type }}
+
+  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+        Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "node-exporter.fullname" . }}'
+
+{{- $port:=.Values.service.port | toString }}
+
+  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "node-exporter.fullname" .
}} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + echo "URL: http://$SERVICE_IP{{- if ne $port "80" }}:{{ .Values.service.port }}{{ end }}/" + +{{- else if contains "ClusterIP" .Values.service.type }} + + echo "URL: http://127.0.0.1:9100/" + kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "node-exporter.fullname" . }} 9100:{{ .Values.service.port }} + +{{- else if contains "NodePort" .Values.service.type }} + + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "node-exporter.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo "URL: http://$NODE_IP:$NODE_PORT/" + +{{- end }} + +{{- include "node-exporter.validateValues" . }} +{{- include "node-exporter.checkRollingTags" . }} diff --git a/bitnami/node-exporter/templates/_helpers.tpl b/bitnami/node-exporter/templates/_helpers.tpl new file mode 100644 index 0000000000..68f9c30e56 --- /dev/null +++ b/bitnami/node-exporter/templates/_helpers.tpl @@ -0,0 +1,178 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Renders a value that contains template. +Usage: +{{ include "node-exporter.tplValue" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "node-exporter.tplValue" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} + +{{/* +Return the appropriate apiVersion for PodSecurityPolicy. +*/}} +{{- define "podSecurityPolicy.apiVersion" -}} +{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}} +{{- print "policy/v1beta1" -}} +{{- else -}} +{{- print "extensions/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for Deployment. 
+*/}} +{{- define "deployment.apiVersion" -}} +{{- if semverCompare "<1.14-0" .Capabilities.KubeVersion.GitVersion -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Expand the name of the chart. +*/}} +{{- define "node-exporter.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "node-exporter.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "node-exporter.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Common labels +*/}} +{{- define "node-exporter.labels" -}} +app.kubernetes.io/name: {{ include "node-exporter.name" . }} +helm.sh/chart: {{ include "node-exporter.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- if .Values.global.labels }} +{{ toYaml .Values.global.labels }} +{{- end }} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "node-exporter.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} + {{ default (include "node-exporter.fullname" .) 
.Values.serviceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +matchLabels +*/}} +{{- define "node-exporter.matchLabels" -}} +app.kubernetes.io/name: {{ include "node-exporter.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names for Node Exporter image +*/}} +{{- define "node-exporter.imagePullSecrets" -}} +{{/* +Helm 2.11 supports the assignment of a value to a variable defined in a different scope, +but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic. +Also, we can not use a single if because lazy evaluation is not an option +*/}} +{{- if .Values.global }} +{{- if .Values.global.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.global.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- else if .Values.image.pullSecrets }} +imagePullSecrets: +{{- range .Values.image.pullSecrets }} + - name: {{ . }} +{{- end }} +{{- end -}} +{{- else if .Values.image.pullSecrets }} +imagePullSecrets: +{{- range .Values.image.pullSecrets }} + - name: {{ . }} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Return the proper Node Exporter image name +*/}} +{{- define "node-exporter.image" -}} +{{- $registryName := .Values.image.registry -}} +{{- $repositoryName := .Values.image.repository -}} +{{- $tag := .Values.image.tag | toString -}} +{{/* +Helm 2.11 supports the assignment of a value to a variable defined in a different scope, +but Helm 2.9 and 2.10 doesn't support it, so we need to implement this if-else logic. 
+Also, we can't use a single if because lazy evaluation is not an option +*/}} +{{- if .Values.global }} + {{- if .Values.global.imageRegistry }} + {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}} + {{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} + {{- end -}} +{{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Check if there are rolling tags in the images +*/}} +{{- define "node-exporter.checkRollingTags" -}} +{{- if and (contains "bitnami/" .Values.image.repository) (not (.Values.image.tag | toString | regexFind "-r\\d+$|sha256:")) }} +WARNING: Rolling tag detected ({{ .Values.image.repository }}:{{ .Values.image.tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment. ++info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/ +{{- end }} +{{- end -}} + +{{/* +Compile all warnings into a single message, and call fail. +*/}} +{{- define "node-exporter.validateValues" -}} +{{- $messages := list -}} +{{- $messages := without $messages "" -}} +{{- $message := join "\n" $messages -}} + +{{- if $message -}} +{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}} +{{- end -}} +{{- end -}} diff --git a/bitnami/node-exporter/templates/daemonset.yaml b/bitnami/node-exporter/templates/daemonset.yaml new file mode 100644 index 0000000000..6c91625179 --- /dev/null +++ b/bitnami/node-exporter/templates/daemonset.yaml @@ -0,0 +1,113 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: {{ template "node-exporter.fullname" . }} + labels: {{- include "node-exporter.labels" . | nindent 4 }} +spec: + selector: + matchLabels: {{- include "node-exporter.matchLabels" . 
| nindent 6 }} + updateStrategy: +{{ toYaml .Values.updateStrategy | indent 4 }} + minReadySeconds: {{ .Values.minReadySeconds }} + template: + metadata: + {{- if .Values.podAnnotations }} + annotations: +{{ toYaml .Values.podAnnotations | indent 8 }} + {{- end }} + labels: {{- include "node-exporter.labels" . | nindent 8 }} + {{- if .Values.podLabels }} +{{ toYaml .Values.podLabels | indent 8 }} + {{- end }} + spec: + serviceAccountName: {{ template "node-exporter.serviceAccountName" . }} +{{- include "node-exporter.imagePullSecrets" . | indent 6 }} + {{- if .Values.priorityClassName }} + priorityClassName: {{ .Values.priorityClassName }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: + runAsUser: {{ .Values.securityContext.runAsUser }} + fsGroup: {{ .Values.securityContext.fsGroup }} + runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }} + {{- end }} + containers: + - name: {{ template "node-exporter.name" . }} + image: {{ template "node-exporter.image" . 
}} + imagePullPolicy: {{ .Values.image.pullPolicy }} + args: + - --path.procfs=/host/proc + - --path.sysfs=/host/sys + - --web.listen-address=0.0.0.0:9100 + {{- range $key, $value := .Values.extraArgs }} + {{- if $value }} + - --{{ $key }}={{ $value }} + {{- else }} + - --{{ $key }} + {{- end }} + {{- end }} + ports: + - name: metrics + containerPort: 9100 + protocol: TCP + livenessProbe: + httpGet: + path: / + port: metrics +{{ toYaml .Values.livenessProbe | indent 12 }} + readinessProbe: + httpGet: + path: / + port: metrics +{{ toYaml .Values.readinessProbe | indent 12 }} + resources: {{- toYaml .Values.resources | nindent 12 }} + volumeMounts: + - name: proc + mountPath: /host/proc + readOnly: true + - name: sys + mountPath: /host/sys + readOnly: true + {{- if .Values.extraVolumeMounts }} +{{ toYaml .Values.extraVolumeMounts | indent 12}} + {{- end }} + hostNetwork: {{ .Values.hostNetwork }} + hostPID: true + {{- if .Values.nodeSelector }} + nodeSelector: {{- include "node-exporter.tplValue" (dict "value" .Values.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: {{- include "node-exporter.tplValue" (dict "value" .Values.tolerations "context" $) | nindent 8 }} + {{- end }} + affinity: + {{- if .Values.nodeAffinity }} + nodeAffinity: {{- include "node-exporter.tplValue" (dict "value" .Values.nodeAffinity "context" $) | nindent 10 }} + {{- end }} + {{- if eq .Values.podAntiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: "kubernetes.io/hostname" + labelSelector: + matchLabels: {{- include "node-exporter.matchLabels" . | nindent 16 }} + {{- else if eq .Values.podAntiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + topologyKey: "kubernetes.io/hostname" + labelSelector: + matchLabels: {{- include "node-exporter.matchLabels" . 
| nindent 18 }} + {{- end }} + {{- if .Values.podAffinity }} + podAffinity: {{- include "node-exporter.tplValue" (dict "value" .Values.podAffinity "context" $) | nindent 10 }} + {{- end }} + volumes: + - name: proc + hostPath: + path: /proc + - name: sys + hostPath: + path: /sys + {{- if .Values.extraVolumes }} +{{ toYaml .Values.extraVolumes | indent 8 }} + {{- end }} diff --git a/bitnami/node-exporter/templates/psp-clusterrole.yaml b/bitnami/node-exporter/templates/psp-clusterrole.yaml new file mode 100644 index 0000000000..f4f4b00e8d --- /dev/null +++ b/bitnami/node-exporter/templates/psp-clusterrole.yaml @@ -0,0 +1,13 @@ +{{- if and .Values.rbac.create .Values.rbac.pspEnabled }} +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "node-exporter.fullname" . }}-psp + labels: {{- include "node-exporter.labels" . | nindent 4 }} +rules: +- apiGroups: ['extensions'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ template "node-exporter.fullname" . }} +{{- end }} diff --git a/bitnami/node-exporter/templates/psp-clusterrolebinding.yaml b/bitnami/node-exporter/templates/psp-clusterrolebinding.yaml new file mode 100644 index 0000000000..62cd822b98 --- /dev/null +++ b/bitnami/node-exporter/templates/psp-clusterrolebinding.yaml @@ -0,0 +1,15 @@ +{{- if and .Values.rbac.create .Values.rbac.pspEnabled }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ template "node-exporter.fullname" . }}-psp + labels: {{- include "node-exporter.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "node-exporter.fullname" . }}-psp +subjects: + - kind: ServiceAccount + name: {{ template "node-exporter.serviceAccountName" . 
}}
+    namespace: {{ .Release.Namespace }}
+{{- end }}
diff --git a/bitnami/node-exporter/templates/psp.yaml b/bitnami/node-exporter/templates/psp.yaml
new file mode 100644
index 0000000000..404e8d5721
--- /dev/null
+++ b/bitnami/node-exporter/templates/psp.yaml
@@ -0,0 +1,44 @@
+{{- if and .Values.rbac.create .Values.rbac.pspEnabled }}
+apiVersion: {{ template "podSecurityPolicy.apiVersion" . }}
+kind: PodSecurityPolicy
+metadata:
+  name: {{ template "node-exporter.fullname" . }}
+  labels: {{- include "node-exporter.labels" . | nindent 4 }}
+spec:
+  privileged: false
+  allowPrivilegeEscalation: false
+  requiredDropCapabilities:
+    - ALL
+  volumes:
+    - 'configMap'
+    - 'emptyDir'
+    - 'projected'
+    - 'secret'
+    - 'downwardAPI'
+    - 'persistentVolumeClaim'
+    - 'hostPath'
+  hostNetwork: true
+  hostIPC: false
+  hostPID: true
+  hostPorts:
+    - min: 0
+      max: 65535
+  runAsUser:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1001
+        max: 1001
+  seLinux:
+    rule: 'RunAsAny'
+  supplementalGroups:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1001
+        max: 1001
+  fsGroup:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1001
+        max: 1001
+  readOnlyRootFilesystem: false
+{{- end }}
diff --git a/bitnami/node-exporter/templates/service.yaml b/bitnami/node-exporter/templates/service.yaml
new file mode 100644
index 0000000000..ba736e909d
--- /dev/null
+++ b/bitnami/node-exporter/templates/service.yaml
@@ -0,0 +1,34 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: {{ template "node-exporter.fullname" . }}
+  annotations:
+    prometheus.io/scrape: "true"
+    {{- with .Values.service.annotations }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+  labels: {{- include "node-exporter.labels" .
| nindent 4 }} + {{- if .Values.service.labels }} +{{ toYaml .Values.service.labels | indent 4 }} + {{- end }} +spec: + type: {{ .Values.service.type }} + {{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }} + loadBalancerIP: {{ .Values.service.loadBalancerIP }} + {{- end }} + {{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerSourceRanges }} + {{- with .Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- toYaml . | nindent 4 }} + {{- end }} + {{- end }} + {{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }} + clusterIP: {{ .Values.service.clusterIP }} + {{- end }} + ports: + - name: metrics + port: {{ .Values.service.port }} + targetPort: metrics + {{- if and .Values.service.nodePort (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + selector: {{- include "node-exporter.matchLabels" . | nindent 4 }} diff --git a/bitnami/node-exporter/templates/serviceaccount.yaml b/bitnami/node-exporter/templates/serviceaccount.yaml new file mode 100644 index 0000000000..8e60e27e52 --- /dev/null +++ b/bitnami/node-exporter/templates/serviceaccount.yaml @@ -0,0 +1,7 @@ +{{- if .Values.serviceAccount.create -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "node-exporter.serviceAccountName" . }} + labels: {{- include "node-exporter.labels" . | nindent 4 }} +{{- end }} diff --git a/bitnami/node-exporter/templates/servicemonitor.yaml b/bitnami/node-exporter/templates/servicemonitor.yaml new file mode 100644 index 0000000000..4efdcb83a4 --- /dev/null +++ b/bitnami/node-exporter/templates/servicemonitor.yaml @@ -0,0 +1,30 @@ +{{- if .Values.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ template "node-exporter.fullname" . 
}}
+  {{- if .Values.serviceMonitor.namespace }}
+  namespace: {{ .Values.serviceMonitor.namespace }}
+  {{- end }}
+  labels: {{- include "node-exporter.labels" . | nindent 4 }}
+    {{- range $key, $value := .Values.serviceMonitor.selector }}
+    {{ $key }}: {{ $value | quote }}
+    {{- end }}
+spec:
+  {{- if .Values.serviceMonitor.jobLabel }}
+  jobLabel: {{ .Values.serviceMonitor.jobLabel }}
+  {{- end }}
+  selector:
+    matchLabels: {{- include "node-exporter.matchLabels" . | nindent 6 }}
+  endpoints:
+    - port: metrics
+      {{- if .Values.serviceMonitor.interval }}
+      interval: {{ .Values.serviceMonitor.interval }}
+      {{- end }}
+      {{- if .Values.serviceMonitor.scrapeTimeout }}
+      scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
+      {{- end }}
+  namespaceSelector:
+    matchNames:
+      - {{ .Release.Namespace }}
+{{- end }}
diff --git a/bitnami/node-exporter/values-production.yaml b/bitnami/node-exporter/values-production.yaml
new file mode 100644
index 0000000000..d470ea946f
--- /dev/null
+++ b/bitnami/node-exporter/values-production.yaml
@@ -0,0 +1,222 @@
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Currently available global Docker image parameters: imageRegistry and imagePullSecrets
+##
+global:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+#   - myRegistryKeySecretName
+# storageClass: myStorageClass
+
+  labels: {}
+  # foo: bar
+
+## String to partially override node-exporter.fullname template (will maintain the release name)
+##
+# nameOverride:
+
+## String to fully override node-exporter.fullname template
+##
+# fullnameOverride:
+
+## Role Based Access
+## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
+##
+rbac:
+  create: true
+
+  ## RBAC API version
+  ##
+  apiVersion: v1beta1
+
+  ## PodSecurityPolicy
+  ##
+  pspEnabled: true
+
+## Bitnami Node Exporter image version
+## ref: https://hub.docker.com/r/bitnami/node-exporter/tags/
+##
+image:
registry: docker.io
+  repository: bitnami/node-exporter
+  tag: 0.18.1-debian-9-r172
+
+  ## Specify an imagePullPolicy
+  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
+  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+  ##
+  pullPolicy: IfNotPresent
+  ## Optionally specify an array of imagePullSecrets.
+  ## Secrets must be manually created in the namespace.
+  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+  ##
+  # pullSecrets:
+  #   - myRegistryKeySecretName
+
+## Additional command line arguments to pass to node-exporter
+extraArgs: {}
+  # collector.filesystem.ignored-mount-points: "^/(dev|proc|sys|var/lib/docker/.+)($|/)"
+  # collector.filesystem.ignored-fs-types: "^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$"
+
+## Additional volumes to the node-exporter pods
+extraVolumes: []
+# - name: copy-portal-skins
+#   emptyDir: {}
+
+## Additional volumeMounts to the node-exporter container
+extraVolumeMounts: []
+# - name: copy-portal-skins
+#   mountPath: /var/lib/lemonldap-ng/portal/skins
+
+## Service account for Node Exporter to use.
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+##
+serviceAccount:
+  ## Specifies whether a ServiceAccount should be created
+  ##
+  create: true
+  ## The name of the ServiceAccount to use.
+  ## If not set and create is true, a name is generated using the node-exporter.fullname template
+  # name:
+
+## SecurityContext configuration
+##
+securityContext:
+  enabled: true
+  runAsUser: 1001
+  fsGroup: 1001
+  ## Run the container as a non-root user (consumed by the DaemonSet template)
+  runAsNonRoot: true
+
+## Node Exporter Service
+##
+service:
+  ## Kubernetes service type and port number
+  ##
+  type: ClusterIP
+  port: 9100
+  # clusterIP: None
+
+  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
+  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
+  ##
+  # nodePort: 30080
+
+  ## Set the LoadBalancer service type to internal only.
+  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
+  ##
+  # loadBalancerIP:
+
+  ## Load Balancer sources
+  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
+  ##
+  # loadBalancerSourceRanges:
+  #   - 10.10.10.0/24
+
+  ## Provide any additional service annotations
+  ##
+  annotations: {}
+
+  ## Provide any additional service labels
+  ##
+  labels: {}
+
+## The update strategy to apply to the DaemonSet
+##
+updateStrategy:
+  type: RollingUpdate
+  rollingUpdate:
+    maxUnavailable: 1
+
+## Run the pods on the host network
+hostNetwork: true
+
+## minReadySeconds to avoid killing pods before they are ready
+##
+minReadySeconds: 0
+
+## Priority class assigned to the Pods
+##
+priorityClassName: ""
+
+## Configure resource requests and limits
+## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+##
+resources: {}
+
+## Pod labels
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+##
+podLabels: {}
+
+## Pod annotations
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+##
+podAnnotations: {}
+
+## Node Affinity. The value is evaluated as a template.
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
+##
+nodeAffinity: {}
+
+## Pod AntiAffinity
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+podAntiAffinity: soft
+
+## Pod Affinity. The value is evaluated as a template.
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+podAffinity: {}
+
+## Node labels for pod assignment
+## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+
+## Tolerations for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+##
+tolerations: []
+
+## Configure extra options for liveness and readiness probes
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+##
+livenessProbe:
+  initialDelaySeconds: 120
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
+
+readinessProbe:
+  initialDelaySeconds: 30
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
+
+## ServiceMonitor configuration
+##
+serviceMonitor:
+  enabled: false
+  ## Namespace in which Prometheus is running
+  ##
+  # namespace: monitoring
+
+  ## The name of the label on the target service to use as the job name in Prometheus
+  # jobLabel:
+
+  ## Interval at which metrics should be scraped.
+  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
+  ##
+  # interval: 10s
+
+  ## Timeout after which the scrape is ended
+  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
+  ##
+  # scrapeTimeout: 10s
+
+  ## ServiceMonitor selector labels
+  ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
+  ##
+  # selector:
+  #   prometheus: my-prometheus
diff --git a/bitnami/node-exporter/values.yaml b/bitnami/node-exporter/values.yaml
new file mode 100644
index 0000000000..d470ea946f
--- /dev/null
+++ b/bitnami/node-exporter/values.yaml
@@ -0,0 +1,222 @@
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Currently available global Docker image parameters: imageRegistry and imagePullSecrets
+##
+global:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+#   - myRegistryKeySecretName
+# storageClass: myStorageClass
+
+  labels: {}
+  # foo: bar
+
+## String to partially override node-exporter.fullname template (will maintain the release name)
+##
+# nameOverride:
+
+## String to fully override node-exporter.fullname template
+##
+# fullnameOverride:
+
+## Role-Based Access Control
+## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
+##
+rbac:
+  create: true
+
+  ## RBAC API version
+  ##
+  apiVersion: v1beta1
+
+  ## PodSecurityPolicy
+  ##
+  pspEnabled: true
+
+## Bitnami Node Exporter image version
+## ref: https://hub.docker.com/r/bitnami/node-exporter/tags/
+##
+image:
+  registry: docker.io
+  repository: bitnami/node-exporter
+  tag: 0.18.1-debian-9-r172
+
+  ## Specify an imagePullPolicy
+  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
+  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+  ##
+  pullPolicy: IfNotPresent
+  ## Optionally specify an array of imagePullSecrets.
+  ## Secrets must be manually created in the namespace.
+  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+  ##
+  # pullSecrets:
+  #   - myRegistryKeySecretName
+
+## Additional command-line arguments to pass to node-exporter
+extraArgs: {}
+  # collector.filesystem.ignored-mount-points: "^/(dev|proc|sys|var/lib/docker/.+)($|/)"
+  # collector.filesystem.ignored-fs-types: "^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$"
+
+## Additional volumes to add to the node-exporter pods
+extraVolumes: []
+# - name: copy-portal-skins
+#   emptyDir: {}
+
+## Additional volumeMounts to add to the node-exporter container
+extraVolumeMounts: []
+# - name: copy-portal-skins
+#   mountPath: /var/lib/lemonldap-ng/portal/skins
+
+## Service account for Node Exporter to use.
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+##
+serviceAccount:
+  ## Specifies whether a ServiceAccount should be created
+  ##
+  create: true
+  ## The name of the ServiceAccount to use.
+  ## If not set and create is true, a name is generated using the node-exporter.fullname template
+  # name:
+
+## SecurityContext configuration
+##
+securityContext:
+  enabled: true
+  runAsUser: 1001
+  fsGroup: 1001
+
+## Node Exporter Service
+##
+service:
+  ## Kubernetes service type and port number
+  ##
+  type: ClusterIP
+  port: 9100
+  # clusterIP: None
+
+  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
+  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
+  ##
+  # nodePort: 30080
+
+  ## Set the LoadBalancer service type to internal only.
+  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
+  ##
+  # loadBalancerIP:
+
+  ## Load Balancer sources
+  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
+  ##
+  # loadBalancerSourceRanges:
+  #   - 10.10.10.0/24
+
+  ## Provide any additional service annotations
+  ##
+  annotations: {}
+
+  ## Provide any additional service labels
+  ##
+  labels: {}
+
+## The update strategy to apply to the DaemonSet
+##
+updateStrategy:
+  type: RollingUpdate
+  rollingUpdate:
+    maxUnavailable: 1
+
+## Run the pods on the host network
+hostNetwork: true
+
+## minReadySeconds to avoid killing pods before they are ready
+##
+minReadySeconds: 0
+
+## Priority class assigned to the Pods
+##
+priorityClassName: ""
+
+## Configure resource requests and limits
+## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+##
+resources: {}
+
+## Pod labels
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+##
+podLabels: {}
+
+## Pod annotations
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+##
+podAnnotations: {}
+
+## Node Affinity. The value is evaluated as a template.
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
+##
+nodeAffinity: {}
+
+## Pod AntiAffinity
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+podAntiAffinity: soft
+
+## Pod Affinity. The value is evaluated as a template.
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+podAffinity: {}
+
+## Node labels for pod assignment
+## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+
+## Tolerations for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+##
+tolerations: []
+
+## Configure extra options for liveness and readiness probes
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+##
+livenessProbe:
+  initialDelaySeconds: 120
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
+
+readinessProbe:
+  initialDelaySeconds: 30
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
+
+## ServiceMonitor configuration
+##
+serviceMonitor:
+  enabled: false
+  ## Namespace in which Prometheus is running
+  ##
+  # namespace: monitoring
+
+  ## The name of the label on the target service to use as the job name in Prometheus
+  # jobLabel:
+
+  ## Interval at which metrics should be scraped.
+  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
+  ##
+  # interval: 10s
+
+  ## Timeout after which the scrape is ended
+  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
+  ##
+  # scrapeTimeout: 10s
+
+  ## ServiceMonitor selector labels
+  ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
+  ##
+  # selector:
+  #   prometheus: my-prometheus
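
For reviewers trying the chart out, a minimal values override exercising the new `serviceMonitor` block and the `extraArgs` map could look like the sketch below. The file name, namespace, selector label, and collector flag are illustrative assumptions, not defaults shipped by this chart:

```yaml
# my-values.yaml -- hypothetical override file; install with:
#   helm install bitnami/node-exporter -f my-values.yaml
serviceMonitor:
  enabled: true
  # Namespace where the Prometheus Operator is running (assumption)
  namespace: monitoring
  interval: 30s
  selector:
    prometheus: my-prometheus

# Each key/value is rendered by the DaemonSet template as a
# --key=value flag, e.g. --collector.textfile.directory=...
extraArgs:
  collector.textfile.directory: /var/lib/node_exporter
```

Note that enabling `serviceMonitor.enabled` only renders the ServiceMonitor resource; the Prometheus Operator CRDs must already be installed in the cluster for it to be accepted.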