[bitnami/thanos] Add prometheus alerts rules (#12873)

* [bitnami/thanos] Add prometheus alerts

Signed-off-by: yasin lachini <yasin.lachini@justeattakeaway.com>

* [bitnami/thanos] Change default prometheus alerts rules to false

Signed-off-by: yasin.lachiny <yasin.lachiny@gmail.com>

* [bitnami/thanos] add new variables to values.yml

Signed-off-by: yasin lachini <yasin.lachini@justeattakeaway.com>

* [bitnami/thanos] resolve conflict

Signed-off-by: yasin lachini <yasin.lachini@justeattakeaway.com>

* [bitnami/thanos] bump version

Signed-off-by: yasin lachini <yasin.lachini@justeattakeaway.com>

* [bitnami/thanos] add missing extra to values.yaml

Signed-off-by: yasin lachini <yasin.lachini@justeattakeaway.com>

Signed-off-by: yasin lachini <yasin.lachini@justeattakeaway.com>
Signed-off-by: yasin.lachiny <yasin.lachiny@gmail.com>
Co-authored-by: yasin.lachiny <yasin.lachiny@gmail.com>
This commit is contained in:
yasinlachiny
2022-11-16 11:05:45 +01:00
committed by GitHub
parent d68efac64f
commit 1fbb541238
11 changed files with 1285 additions and 17 deletions

View File

@@ -28,4 +28,4 @@ name: thanos
sources:
- https://github.com/bitnami/containers/tree/main/bitnami/thanos
- https://thanos.io
version: 11.5.10
version: 11.6.0

View File

@@ -1073,22 +1073,78 @@ Check the section [Integrate Thanos with Prometheus and Alertmanager](#integrate
### Metrics parameters
| Name | Description | Value |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------ | ------- |
| `metrics.enabled` | Enable the export of Prometheus metrics | `false` |
| `metrics.serviceMonitor.enabled` | Specify if a ServiceMonitor will be deployed for Prometheus Operator | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `metrics.serviceMonitor.labels` | Extra labels for the ServiceMonitor | `{}` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |
| `metrics.serviceMonitor.interval` | How frequently to scrape metrics | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.metricRelabelings` | Specify additional relabeling of metrics | `[]` |
| `metrics.serviceMonitor.relabelings` | Specify general relabeling | `[]` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.prometheusRule.enabled` | If `true`, creates a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true`) | `false` |
| `metrics.prometheusRule.namespace` | Namespace in which the PrometheusRule CRD is created | `""` |
| `metrics.prometheusRule.additionalLabels` | Additional labels for the prometheusRule | `{}` |
| `metrics.prometheusRule.groups` | Prometheus Rule Groups for Thanos components | `[]` |
| Name | Description | Value |
| --------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `metrics.enabled` | Enable the export of Prometheus metrics | `false` |
| `metrics.serviceMonitor.enabled` | Specify if a ServiceMonitor will be deployed for Prometheus Operator | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `metrics.serviceMonitor.labels` | Extra labels for the ServiceMonitor | `{}` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |
| `metrics.serviceMonitor.interval` | How frequently to scrape metrics | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.metricRelabelings` | Specify additional relabeling of metrics | `[]` |
| `metrics.serviceMonitor.relabelings` | Specify general relabeling | `[]` |
| `metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
| `metrics.prometheusRule.enabled` | If `true`, creates a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true`) | `false` |
| `metrics.prometheusRule.default.absent_rules` | Enable absent_rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.compaction` | Enable compaction rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.query` | Enable query rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.receive` | Enable receive rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.replicate` | Enable replicate rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.ruler` | Enable ruler rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.sidecar` | Enable sidecar rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.store_gateway` | Enable store_gateway rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`) | |
| `metrics.prometheusRule.default.create` | Create all the default Prometheus alert rules | `false` |
| `metrics.prometheusRule.default.disabled.ThanosCompactIsDown` | Disable ThanosCompactIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryIsDown` | Disable ThanosQueryIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveIsDown` | Disable ThanosReceiveIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleIsDown` | Disable ThanosRuleIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true | |
| `metrics.prometheusRule.default.disabled.ThanosSidecarIsDown` | Disable ThanosSidecarIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true | |
| `metrics.prometheusRule.default.disabled.ThanosStoreIsDown` | Disable ThanosStoreIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true | |
| `metrics.prometheusRule.default.disabled.ThanosCompactMultipleRunning` | Disable ThanosCompactMultipleRunning rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true | |
| `metrics.prometheusRule.default.disabled.ThanosCompactHalted` | Disable ThanosCompactHalted rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true | |
| `metrics.prometheusRule.default.disabled.ThanosCompactHighCompactionFailures` | Disable ThanosCompactHighCompactionFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true | |
| `metrics.prometheusRule.default.disabled.ThanosCompactBucketHighOperationFailures` | Disable ThanosCompactBucketHighOperationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true | |
| `metrics.prometheusRule.default.disabled.ThanosCompactHasNotRun` | Disable ThanosCompactHasNotRun rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryHttpRequestQueryErrorRateHigh` | Disable ThanosQueryHttpRequestQueryErrorRateHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryHttpRequestQueryRangeErrorRateHigh` | Disable ThanosQueryHttpRequestQueryRangeErrorRateHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryGrpcServerErrorRate` | Disable ThanosQueryGrpcServerErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryGrpcClientErrorRate` | Disable ThanosQueryGrpcClientErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryHighDNSFailures` | Disable ThanosQueryHighDNSFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryInstantLatencyHigh` | Disable ThanosQueryInstantLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryRangeLatencyHigh` | Disable ThanosQueryRangeLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosQueryOverload` | Disable ThanosQueryOverload rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveHttpRequestErrorRateHigh` | Disable ThanosReceiveHttpRequestErrorRateHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveHttpRequestLatencyHigh` | Disable ThanosReceiveHttpRequestLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveHighReplicationFailures` | Disable ThanosReceiveHighReplicationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveHighForwardRequestFailures` | Disable ThanosReceiveHighForwardRequestFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveHighHashringFileRefreshFailures` | Disable ThanosReceiveHighHashringFileRefreshFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveConfigReloadFailure` | Disable ThanosReceiveConfigReloadFailure rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveNoUpload` | Disable ThanosReceiveNoUpload rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosReceiveTrafficBelowThreshold` | Disable ThanosReceiveTrafficBelowThreshold rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true | |
| `metrics.prometheusRule.default.disabled.ThanosBucketReplicateErrorRate` | Disable ThanosBucketReplicateErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.replicate is true | |
| `metrics.prometheusRule.default.disabled.ThanosBucketReplicateRunLatency` | Disable ThanosBucketReplicateRunLatency rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.replicate is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleQueueIsDroppingAlerts` | Disable ThanosRuleQueueIsDroppingAlerts rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleSenderIsFailingAlerts` | Disable ThanosRuleSenderIsFailingAlerts rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleHighRuleEvaluationFailures` | Disable ThanosRuleHighRuleEvaluationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleHighRuleEvaluationWarnings` | Disable ThanosRuleHighRuleEvaluationWarnings rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleRuleEvaluationLatencyHigh` | Disable ThanosRuleRuleEvaluationLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleGrpcErrorRate` | Disable ThanosRuleGrpcErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleConfigReloadFailure` | Disable ThanosRuleConfigReloadFailure rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleQueryHighDNSFailures` | Disable ThanosRuleQueryHighDNSFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleAlertmanagerHighDNSFailures` | Disable ThanosRuleAlertmanagerHighDNSFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosRuleNoEvaluationFor10Intervals` | Disable ThanosRuleNoEvaluationFor10Intervals rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosNoRuleEvaluations` | Disable ThanosNoRuleEvaluations rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true | |
| `metrics.prometheusRule.default.disabled.ThanosSidecarBucketOperationsFailed` | Disable ThanosSidecarBucketOperationsFailed rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.sidecar is true | |
| `metrics.prometheusRule.default.disabled.ThanosSidecarNoConnectionToStartedPrometheus` | Disable ThanosSidecarNoConnectionToStartedPrometheus rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.sidecar is true | |
| `metrics.prometheusRule.default.disabled.ThanosStoreGrpcErrorRate` | Disable ThanosStoreGrpcErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true | |
| `metrics.prometheusRule.default.disabled.ThanosStoreSeriesGateLatencyHigh` | Disable ThanosStoreSeriesGateLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true | |
| `metrics.prometheusRule.default.disabled.ThanosStoreBucketHighOperationFailures` | Disable ThanosStoreBucketHighOperationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true | |
| `metrics.prometheusRule.default.disabled.ThanosStoreObjstoreOperationLatencyHigh` | Disable ThanosStoreObjstoreOperationLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true | |
| `metrics.prometheusRule.default.disabled` | Disable specific default Prometheus alert rules | `{}` |
| `metrics.prometheusRule.namespace` | Namespace in which the PrometheusRule CRD is created | `""` |
| `metrics.prometheusRule.additionalLabels` | Additional labels for the prometheusRule | `{}` |
| `metrics.prometheusRule.groups` | Prometheus Rule Groups for Thanos components | `[]` |
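
As a rough illustration of how these toggles combine, here is a minimal values sketch (key names are taken from the table above; the namespace and label values are assumptions for a typical Prometheus Operator setup) that renders every default alert group while suppressing one rule:

```yaml
metrics:
  enabled: true                       # required for any PrometheusRule to be rendered
  prometheusRule:
    namespace: monitoring             # assumed; defaults to the release namespace when empty
    additionalLabels:
      release: kube-prometheus-stack  # assumed label so a Prometheus Operator instance selects the rules
    default:
      create: true                    # render all default Thanos alert groups
      disabled:
        ThanosCompactHalted: true     # opt out of a single rule by name
```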
### Volume Permissions parameters

View File

@@ -0,0 +1,132 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.absent_rules ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-component-absent
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosCompactIsDown | default false) }}
- alert: ThanosCompactIsDown
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: ThanosCompact has disappeared. Prometheus target for the component cannot be discovered.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanoscompactisdown
summary: Thanos component has disappeared.
expr: |
absent(up{job=~".*thanos-compact.*"} == 1)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryIsDown | default false) }}
- alert: ThanosQueryIsDown
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: ThanosQuery has disappeared. Prometheus target for the component cannot be discovered.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryisdown
summary: Thanos component has disappeared.
expr: |
absent(up{job=~".*thanos-query.*"} == 1)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveIsDown | default false) }}
- alert: ThanosReceiveIsDown
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: ThanosReceive has disappeared. Prometheus target for the component cannot be discovered.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceiveisdown
summary: Thanos component has disappeared.
expr: |
absent(up{job=~".*thanos-receive.*"} == 1)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleIsDown | default false) }}
- alert: ThanosRuleIsDown
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: ThanosRule has disappeared. Prometheus target for the component cannot be discovered.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosruleisdown
summary: Thanos component has disappeared.
expr: |
absent(up{job=~".*thanos-rule.*"} == 1)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosSidecarIsDown | default false) }}
- alert: ThanosSidecarIsDown
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: ThanosSidecar has disappeared. Prometheus target for the component cannot be discovered.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanossidecarisdown
summary: Thanos component has disappeared.
expr: |
absent(up{job=~".*thanos-sidecar.*"} == 1)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosStoreIsDown | default false) }}
- alert: ThanosStoreIsDown
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: ThanosStore has disappeared. Prometheus target for the component cannot be discovered.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosstoreisdown
summary: Thanos component has disappeared.
expr: |
absent(up{job=~".*thanos-store.*"} == 1)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}
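
The guard at the top of this template only requires `metrics.enabled` together with either `metrics.prometheusRule.default.create` or `metrics.prometheusRule.default.absent_rules`, so the absence alerts can be rendered on their own. A hedged values sketch (rule and component names as defined above):

```yaml
metrics:
  enabled: true
  prometheusRule:
    default:
      create: false            # leave the per-component default groups off
      absent_rules: true       # render only the thanos-component-absent group
      disabled:
        ThanosRuleIsDown: true # e.g. skip this alert if the Ruler component is not deployed
```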

View File

@@ -0,0 +1,120 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.compaction ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-compact
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosCompactMultipleRunning | default false) }}
- alert: ThanosCompactMultipleRunning
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: No more than one Thanos Compact instance should be running at once. There are {{`{{`}} $value {{`}}`}} instances running.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanoscompactmultiplerunning
summary: Thanos Compact has multiple instances running.
expr: sum by (job) (up{job=~".*thanos-compact.*"}) > 1
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosCompactHalted | default false) }}
- alert: ThanosCompactHalted
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Compact {{`{{`}} $labels.job {{`}}`}} has failed to run and now is halted.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanoscompacthalted
summary: Thanos Compact has failed to run and is now halted.
expr: thanos_compact_halted{job=~".*thanos-compact.*"} == 1
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosCompactHighCompactionFailures | default false) }}
- alert: ThanosCompactHighCompactionFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Compact {{`{{`}} $labels.job {{`}}`}} is failing to execute {{`{{`}} $value | humanize {{`}}`}}% of compactions.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanoscompacthighcompactionfailures
summary: Thanos Compact is failing to execute compactions.
expr: |
(
sum by (job) (rate(thanos_compact_group_compactions_failures_total{job=~".*thanos-compact.*"}[5m]))
/
sum by (job) (rate(thanos_compact_group_compactions_total{job=~".*thanos-compact.*"}[5m]))
* 100 > 5
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosCompactBucketHighOperationFailures | default false) }}
- alert: ThanosCompactBucketHighOperationFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Compact {{`{{`}} $labels.job {{`}}`}} Bucket is failing to execute {{`{{`}} $value | humanize {{`}}`}}% of operations.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanoscompactbuckethighoperationfailures
summary: Thanos Compact Bucket is having a high number of operation failures.
expr: |
(
sum by (job) (rate(thanos_objstore_bucket_operation_failures_total{job=~".*thanos-compact.*"}[5m]))
/
sum by (job) (rate(thanos_objstore_bucket_operations_total{job=~".*thanos-compact.*"}[5m]))
* 100 > 5
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosCompactHasNotRun | default false) }}
- alert: ThanosCompactHasNotRun
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Compact {{`{{`}} $labels.job {{`}}`}} has not uploaded anything for 24 hours.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanoscompacthasnotrun
summary: Thanos Compact has not uploaded anything for last 24 hours.
expr: (time() - max by (job) (max_over_time(thanos_objstore_bucket_last_successful_upload_time{job=~".*thanos-compact.*"}[24h]))) / 60 / 60 > 24
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}
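
Alongside these generated defaults, the pre-existing `metrics.prometheusRule.groups` value still accepts hand-written rule groups. A hedged sketch (the alert name, threshold and duration below are illustrative, not part of the chart):

```yaml
metrics:
  enabled: true
  prometheusRule:
    enabled: true                     # renders the user-supplied groups below
    groups:
      - name: thanos-custom
        rules:
          - alert: ThanosCompactNoRecentUpload   # illustrative custom alert, not a chart default
            expr: time() - max(thanos_objstore_bucket_last_successful_upload_time{job=~".*thanos-compact.*"}) > 86400
            for: 1h
            labels:
              severity: warning
```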

View File

@@ -0,0 +1,199 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.query ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-query
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryHttpRequestQueryErrorRateHigh | default false) }}
- alert: ThanosQueryHttpRequestQueryErrorRateHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} is failing to handle {{`{{`}} $value | humanize {{`}}`}}% of "query" requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryhttprequestqueryerrorratehigh
summary: Thanos Query is failing to handle requests.
expr: |
(
sum by (job) (rate(http_requests_total{code=~"5..", job=~".*thanos-query.*", handler="query"}[5m]))
/
sum by (job) (rate(http_requests_total{job=~".*thanos-query.*", handler="query"}[5m]))
) * 100 > 5
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryHttpRequestQueryRangeErrorRateHigh | default false) }}
- alert: ThanosQueryHttpRequestQueryRangeErrorRateHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} is failing to handle {{`{{`}} $value | humanize {{`}}`}}% of "query_range" requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryhttprequestqueryrangeerrorratehigh
summary: Thanos Query is failing to handle requests.
expr: |
(
sum by (job) (rate(http_requests_total{code=~"5..", job=~".*thanos-query.*", handler="query_range"}[5m]))
/
sum by (job) (rate(http_requests_total{job=~".*thanos-query.*", handler="query_range"}[5m]))
) * 100 > 5
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryGrpcServerErrorRate | default false) }}
- alert: ThanosQueryGrpcServerErrorRate
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} is failing to handle {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosquerygrpcservererrorrate
summary: Thanos Query is failing to handle requests.
expr: |
(
sum by (job) (rate(grpc_server_handled_total{grpc_code=~"Unknown|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded", job=~".*thanos-query.*"}[5m]))
/
sum by (job) (rate(grpc_server_started_total{job=~".*thanos-query.*"}[5m]))
* 100 > 5
)
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryGrpcClientErrorRate | default false) }}
- alert: ThanosQueryGrpcClientErrorRate
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} is failing to send {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosquerygrpcclienterrorrate
summary: Thanos Query is failing to send requests.
expr: |
(
sum by (job) (rate(grpc_client_handled_total{grpc_code!="OK", job=~".*thanos-query.*"}[5m]))
/
sum by (job) (rate(grpc_client_started_total{job=~".*thanos-query.*"}[5m]))
) * 100 > 5
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryHighDNSFailures | default false) }}
- alert: ThanosQueryHighDNSFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} has {{`{{`}} $value | humanize {{`}}`}}% of failing DNS queries for store endpoints.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryhighdnsfailures
summary: Thanos Query is having high number of DNS failures.
expr: |
(
sum by (job) (rate(thanos_query_store_apis_dns_failures_total{job=~".*thanos-query.*"}[5m]))
/
sum by (job) (rate(thanos_query_store_apis_dns_lookups_total{job=~".*thanos-query.*"}[5m]))
) * 100 > 1
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryInstantLatencyHigh | default false) }}
- alert: ThanosQueryInstantLatencyHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} has a 99th percentile latency of {{`{{`}} $value {{`}}`}} seconds for instant queries.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryinstantlatencyhigh
summary: Thanos Query has high latency for queries.
expr: |
(
histogram_quantile(0.99, sum by (job, le) (rate(http_request_duration_seconds_bucket{job=~".*thanos-query.*", handler="query"}[5m]))) > 40
and
sum by (job) (rate(http_request_duration_seconds_bucket{job=~".*thanos-query.*", handler="query"}[5m])) > 0
)
for: 10m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryRangeLatencyHigh | default false) }}
- alert: ThanosQueryRangeLatencyHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} has a 99th percentile latency of {{`{{`}} $value {{`}}`}} seconds for range queries.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryrangelatencyhigh
summary: Thanos Query has high latency for queries.
expr: |
(
histogram_quantile(0.99, sum by (job, le) (rate(http_request_duration_seconds_bucket{job=~".*thanos-query.*", handler="query_range"}[5m]))) > 90
and
sum by (job) (rate(http_request_duration_seconds_count{job=~".*thanos-query.*", handler="query_range"}[5m])) > 0
)
for: 10m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosQueryOverload | default false) }}
- alert: ThanosQueryOverload
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Query {{`{{`}} $labels.job {{`}}`}} has been overloaded for more than 15 minutes. This may be a symptom of excessive simultaneous complex requests, low performance of the Prometheus API, or failures within these components. Assess the health of the Thanos Query instances, the connected Prometheus instances, look for potential senders of these requests and then contact support.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryoverload
summary: Thanos query reaches its maximum capacity serving concurrent requests.
expr: |
(
max_over_time(thanos_query_concurrent_gate_queries_max[5m]) - avg_over_time(thanos_query_concurrent_gate_queries_in_flight[5m]) < 1
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,204 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.receive ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-receive
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveHttpRequestErrorRateHigh | default false) }}
- alert: ThanosReceiveHttpRequestErrorRateHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.job {{`}}`}} is failing to handle {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivehttprequesterrorratehigh
summary: Thanos Receive is failing to handle requests.
expr: |
(
sum by (job) (rate(http_requests_total{code=~"5..", job=~".*thanos-receive.*", handler="receive"}[5m]))
/
sum by (job) (rate(http_requests_total{job=~".*thanos-receive.*", handler="receive"}[5m]))
) * 100 > 5
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveHttpRequestLatencyHigh | default false) }}
- alert: ThanosReceiveHttpRequestLatencyHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.job {{`}}`}} has a 99th percentile latency of {{`{{`}} $value {{`}}`}} seconds for requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivehttprequestlatencyhigh
summary: Thanos Receive has high HTTP requests latency.
expr: |
(
histogram_quantile(0.99, sum by (job, le) (rate(http_request_duration_seconds_bucket{job=~".*thanos-receive.*", handler="receive"}[5m]))) > 10
and
sum by (job) (rate(http_request_duration_seconds_count{job=~".*thanos-receive.*", handler="receive"}[5m])) > 0
)
for: 10m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveHighReplicationFailures | default false) }}
- alert: ThanosReceiveHighReplicationFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.job {{`}}`}} is failing to replicate {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivehighreplicationfailures
summary: Thanos Receive is having high number of replication failures.
expr: |
thanos_receive_replication_factor > 1
and
(
(
sum by (job) (rate(thanos_receive_replications_total{result="error", job=~".*thanos-receive.*"}[5m]))
/
sum by (job) (rate(thanos_receive_replications_total{job=~".*thanos-receive.*"}[5m]))
)
>
(
max by (job) (floor((thanos_receive_replication_factor{job=~".*thanos-receive.*"}+1) / 2))
/
max by (job) (thanos_receive_hashring_nodes{job=~".*thanos-receive.*"})
)
) * 100
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveHighForwardRequestFailures | default false) }}
- alert: ThanosReceiveHighForwardRequestFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.job {{`}}`}} is failing to forward {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivehighforwardrequestfailures
summary: Thanos Receive is failing to forward requests.
expr: |
(
sum by (job) (rate(thanos_receive_forward_requests_total{result="error", job=~".*thanos-receive.*"}[5m]))
/
sum by (job) (rate(thanos_receive_forward_requests_total{job=~".*thanos-receive.*"}[5m]))
) * 100 > 20
for: 5m
labels:
severity: info
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveHighHashringFileRefreshFailures | default false) }}
- alert: ThanosReceiveHighHashringFileRefreshFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.job {{`}}`}} is failing to refresh hashring file, {{`{{`}} $value | humanize {{`}}`}} of attempts failed.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivehighhashringfilerefreshfailures
summary: Thanos Receive is failing to refresh hashring file.
expr: |
(
sum by (job) (rate(thanos_receive_hashrings_file_errors_total{job=~".*thanos-receive.*"}[5m]))
/
sum by (job) (rate(thanos_receive_hashrings_file_refreshes_total{job=~".*thanos-receive.*"}[5m]))
> 0
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveConfigReloadFailure | default false) }}
- alert: ThanosReceiveConfigReloadFailure
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.job {{`}}`}} has not been able to reload hashring configurations.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceiveconfigreloadfailure
summary: Thanos Receive has not been able to reload configuration.
expr: avg by (job) (thanos_receive_config_last_reload_successful{job=~".*thanos-receive.*"}) != 1
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveNoUpload | default false) }}
- alert: ThanosReceiveNoUpload
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Receive {{`{{`}} $labels.instance {{`}}`}} has not uploaded latest data to object storage.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivenoupload
summary: Thanos Receive has not uploaded latest data to object storage.
expr: |
(up{job=~".*thanos-receive.*"} - 1)
+ on (job, instance) # filters to only alert on current instance last 3h
(sum by (job, instance) (increase(thanos_shipper_uploads_total{job=~".*thanos-receive.*"}[3h])) == 0)
for: 3h
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosReceiveTrafficBelowThreshold | default false) }}
- alert: ThanosReceiveTrafficBelowThreshold
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: At Thanos Receive {{`{{`}} $labels.job {{`}}`}} in {{`{{`}} $labels.namespace {{`}}`}}, the 1-hr average metrics ingestion rate is {{`{{`}} $value | humanize {{`}}`}}% of the 12-hr average ingestion rate.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosreceivetrafficbelowthreshold
summary: Thanos Receive is experiencing low avg. 1-hr ingestion rate relative to avg. 12-hr ingestion rate.
expr: |
(
avg_over_time(rate(http_requests_total{job=~".*thanos-receive.*", code=~"2..", handler="receive"}[5m])[1h:5m])
/
avg_over_time(rate(http_requests_total{job=~".*thanos-receive.*", code=~"2..", handler="receive"}[5m])[12h:5m])
) * 100 < 50
for: 1h
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}
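
Because each component's rules live in a separately guarded file, clusters that only deploy some Thanos components can enable groups selectively instead of using `create`. A hedged sketch (component keys taken from the Metrics parameters table):

```yaml
metrics:
  enabled: true
  prometheusRule:
    default:
      create: false
      compaction: true       # only groups for deployed components
      query: true
      store_gateway: true    # receive, ruler and replicate groups stay disabled
```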

View File

@@ -0,0 +1,68 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.replicate ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-bucket-replicate
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosBucketReplicateErrorRate | default false) }}
- alert: ThanosBucketReplicateErrorRate
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Replicate is failing to run, {{`{{`}} $value | humanize {{`}}`}}% of attempts failed.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosbucketreplicateerrorrate
summary: Thanos Replicate is failing to run.
expr: |
(
sum by (job) (rate(thanos_replicate_replication_runs_total{result="error", job=~".*thanos-bucket-replicate.*"}[5m]))
/ on (job) group_left
sum by (job) (rate(thanos_replicate_replication_runs_total{job=~".*thanos-bucket-replicate.*"}[5m]))
) * 100 >= 10
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosBucketReplicateRunLatency | default false) }}
- alert: ThanosBucketReplicateRunLatency
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Replicate {{`{{`}} $labels.job {{`}}`}} has a 99th percentile latency of {{`{{`}} $value {{`}}`}} seconds for the replicate operations.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosbucketreplicaterunlatency
summary: Thanos Replicate has a high latency for replicate operations.
expr: |
(
histogram_quantile(0.99, sum by (job) (rate(thanos_replicate_replication_run_duration_seconds_bucket{job=~".*thanos-bucket-replicate.*"}[5m]))) > 20
and
sum by (job) (rate(thanos_replicate_replication_run_duration_seconds_bucket{job=~".*thanos-bucket-replicate.*"}[5m])) > 0
)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,249 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.ruler ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-rule
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleQueueIsDroppingAlerts | default false) }}
- alert: ThanosRuleQueueIsDroppingAlerts
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} is failing to queue alerts.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulequeueisdroppingalerts
summary: Thanos Rule is failing to queue alerts.
expr: |
sum by (job, instance) (rate(thanos_alert_queue_alerts_dropped_total{job=~".*thanos-rule.*"}[5m])) > 0
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleSenderIsFailingAlerts | default false) }}
- alert: ThanosRuleSenderIsFailingAlerts
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} is failing to send alerts to alertmanager.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulesenderisfailingalerts
summary: Thanos Rule is failing to send alerts to alertmanager.
expr: |
sum by (job, instance) (rate(thanos_alert_sender_alerts_dropped_total{job=~".*thanos-rule.*"}[5m])) > 0
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleHighRuleEvaluationFailures | default false) }}
- alert: ThanosRuleHighRuleEvaluationFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} is failing to evaluate rules.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulehighruleevaluationfailures
summary: Thanos Rule is failing to evaluate rules.
expr: |
(
sum by (job, instance) (rate(prometheus_rule_evaluation_failures_total{job=~".*thanos-rule.*"}[5m]))
/
sum by (job, instance) (rate(prometheus_rule_evaluations_total{job=~".*thanos-rule.*"}[5m]))
* 100 > 5
)
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleHighRuleEvaluationWarnings | default false) }}
- alert: ThanosRuleHighRuleEvaluationWarnings
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} has high number of evaluation warnings.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulehighruleevaluationwarnings
summary: Thanos Rule has high number of evaluation warnings.
expr: |
sum by (job, instance) (rate(thanos_rule_evaluation_with_warnings_total{job=~".*thanos-rule.*"}[5m])) > 0
for: 15m
labels:
severity: info
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleRuleEvaluationLatencyHigh | default false) }}
- alert: ThanosRuleRuleEvaluationLatencyHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} has higher evaluation latency than interval for {{`{{`}} $labels.rule_group {{`}}`}}.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosruleruleevaluationlatencyhigh
summary: Thanos Rule has high rule evaluation latency.
expr: |
(
sum by (job, instance, rule_group) (prometheus_rule_group_last_duration_seconds{job=~".*thanos-rule.*"})
>
sum by (job, instance, rule_group) (prometheus_rule_group_interval_seconds{job=~".*thanos-rule.*"})
)
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleGrpcErrorRate | default false) }}
- alert: ThanosRuleGrpcErrorRate
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.job {{`}}`}} is failing to handle {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulegrpcerrorrate
summary: Thanos Rule is failing to handle gRPC requests.
expr: |
(
sum by (job, instance) (rate(grpc_server_handled_total{grpc_code=~"Unknown|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded", job=~".*thanos-rule.*"}[5m]))
/
sum by (job, instance) (rate(grpc_server_started_total{job=~".*thanos-rule.*"}[5m]))
* 100 > 5
)
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleConfigReloadFailure | default false) }}
- alert: ThanosRuleConfigReloadFailure
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.job {{`}}`}} has not been able to reload its configuration.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosruleconfigreloadfailure
summary: Thanos Rule has not been able to reload configuration.
expr: avg by (job, instance) (thanos_rule_config_last_reload_successful{job=~".*thanos-rule.*"}) != 1
for: 5m
labels:
severity: info
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleQueryHighDNSFailures | default false) }}
- alert: ThanosRuleQueryHighDNSFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.job {{`}}`}} has {{`{{`}} $value | humanize {{`}}`}}% of failing DNS queries for query endpoints.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulequeryhighdnsfailures
summary: Thanos Rule is experiencing a high number of DNS failures.
expr: |
(
sum by (job, instance) (rate(thanos_rule_query_apis_dns_failures_total{job=~".*thanos-rule.*"}[5m]))
/
sum by (job, instance) (rate(thanos_rule_query_apis_dns_lookups_total{job=~".*thanos-rule.*"}[5m]))
* 100 > 1
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleAlertmanagerHighDNSFailures | default false) }}
- alert: ThanosRuleAlertmanagerHighDNSFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} has {{`{{`}} $value | humanize {{`}}`}}% of failing DNS queries for Alertmanager endpoints.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulealertmanagerhighdnsfailures
summary: Thanos Rule is experiencing a high number of DNS failures.
expr: |
(
sum by (job, instance) (rate(thanos_rule_alertmanagers_dns_failures_total{job=~".*thanos-rule.*"}[5m]))
/
sum by (job, instance) (rate(thanos_rule_alertmanagers_dns_lookups_total{job=~".*thanos-rule.*"}[5m]))
* 100 > 1
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosRuleNoEvaluationFor10Intervals | default false) }}
- alert: ThanosRuleNoEvaluationFor10Intervals
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.job {{`}}`}} has rule groups that did not evaluate for at least 10x their expected interval.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosrulenoevaluationfor10intervals
summary: Thanos Rule has rule groups that did not evaluate for 10 intervals.
expr: |
time() - max by (job, instance, group) (prometheus_rule_group_last_evaluation_timestamp_seconds{job=~".*thanos-rule.*"})
>
10 * max by (job, instance, group) (prometheus_rule_group_interval_seconds{job=~".*thanos-rule.*"})
for: 5m
labels:
severity: info
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosNoRuleEvaluations | default false) }}
- alert: ThanosNoRuleEvaluations
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Rule {{`{{`}} $labels.instance {{`}}`}} did not perform any rule evaluations in the past 10 minutes.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosnoruleevaluations
summary: Thanos Rule did not perform any rule evaluations.
expr: |
sum by (job, instance) (rate(prometheus_rule_evaluations_total{job=~".*thanos-rule.*"}[5m])) <= 0
and
sum by (job, instance) (thanos_rule_loaded_rules{job=~".*thanos-rule.*"}) > 0
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,63 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.sidecar ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-sidecar
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosSidecarBucketOperationsFailed | default false) }}
- alert: ThanosSidecarBucketOperationsFailed
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Sidecar {{`{{`}} $labels.instance {{`}}`}} bucket operations are failing.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanossidecarbucketoperationsfailed
summary: Thanos Sidecar bucket operations are failing.
expr: |
sum by (job, instance) (rate(thanos_objstore_bucket_operation_failures_total{job=~".*thanos-sidecar.*"}[5m])) > 0
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosSidecarNoConnectionToStartedPrometheus | default false) }}
- alert: ThanosSidecarNoConnectionToStartedPrometheus
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Sidecar {{`{{`}} $labels.instance {{`}}`}} is unhealthy.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanossidecarnoconnectiontostartedprometheus
summary: Thanos Sidecar cannot access Prometheus, even though Prometheus seems healthy and has reloaded WAL.
expr: |
thanos_sidecar_prometheus_up{job=~".*thanos-sidecar.*"} == 0
AND on (namespace, pod)
prometheus_tsdb_data_replay_duration_seconds != 0
for: 5m
labels:
severity: critical
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,114 @@
{{- /*
Generated from https://github.com/thanos-io/thanos/blob/main/examples/alerts/alerts.md
*/ -}}
{{- if and .Values.metrics.enabled (or .Values.metrics.prometheusRule.default.create .Values.metrics.prometheusRule.default.store_gateway ) }}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ template "common.names.fullname" . }}
namespace: {{ default .Release.Namespace .Values.metrics.prometheusRule.namespace | quote }}
labels: {{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.metrics.prometheusRule.additionalLabels "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
groups:
- name: thanos-store
rules:
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosStoreGrpcErrorRate | default false) }}
- alert: ThanosStoreGrpcErrorRate
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Store {{`{{`}} $labels.job {{`}}`}} is failing to handle {{`{{`}} $value | humanize {{`}}`}}% of requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosstoregrpcerrorrate
summary: Thanos Store is failing to handle gRPC requests.
expr: |
(
sum by (job) (rate(grpc_server_handled_total{grpc_code=~"Unknown|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded", job=~".*thanos-store.*"}[5m]))
/
sum by (job) (rate(grpc_server_started_total{job=~".*thanos-store.*"}[5m]))
* 100 > 5
)
for: 5m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosStoreSeriesGateLatencyHigh | default false) }}
- alert: ThanosStoreSeriesGateLatencyHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Store {{`{{`}} $labels.job {{`}}`}} has a 99th percentile latency of {{`{{`}} $value {{`}}`}} seconds for store series gate requests.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosstoreseriesgatelatencyhigh
summary: Thanos Store has high latency for store series gate requests.
expr: |
(
histogram_quantile(0.99, sum by (job, le) (rate(thanos_bucket_store_series_gate_duration_seconds_bucket{job=~".*thanos-store.*"}[5m]))) > 2
and
sum by (job) (rate(thanos_bucket_store_series_gate_duration_seconds_count{job=~".*thanos-store.*"}[5m])) > 0
)
for: 10m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosStoreBucketHighOperationFailures | default false) }}
- alert: ThanosStoreBucketHighOperationFailures
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Store {{`{{`}} $labels.job {{`}}`}} Bucket is failing to execute {{`{{`}} $value | humanize {{`}}`}}% of operations.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosstorebuckethighoperationfailures
summary: Thanos Store Bucket is failing to execute operations.
expr: |
(
sum by (job) (rate(thanos_objstore_bucket_operation_failures_total{job=~".*thanos-store.*"}[5m]))
/
sum by (job) (rate(thanos_objstore_bucket_operations_total{job=~".*thanos-store.*"}[5m]))
* 100 > 5
)
for: 15m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- if not (.Values.metrics.prometheusRule.default.disabled.ThanosStoreObjstoreOperationLatencyHigh | default false) }}
- alert: ThanosStoreObjstoreOperationLatencyHigh
annotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 8 }}
{{- end }}
description: Thanos Store {{`{{`}} $labels.job {{`}}`}} Bucket has a 99th percentile latency of {{`{{`}} $value {{`}}`}} seconds for the bucket operations.
runbook_url: https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosstoreobjstoreoperationlatencyhigh
summary: Thanos Store has high latency for bucket operations.
expr: |
(
histogram_quantile(0.99, sum by (job, le) (rate(thanos_objstore_bucket_operation_duration_seconds_bucket{job=~".*thanos-store.*"}[5m]))) > 2
and
sum by (job) (rate(thanos_objstore_bucket_operation_duration_seconds_count{job=~".*thanos-store.*"}[5m])) > 0
)
for: 10m
labels:
severity: warning
{{- if .Values.metrics.prometheusRule.additionalLabels }}
{{ toYaml .Values.metrics.prometheusRule.additionalLabels | indent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -3920,6 +3920,69 @@ metrics:
## @param metrics.prometheusRule.enabled If `true`, creates a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true`)
##
enabled: false
## Configure prometheus rules
##
default:
## @extra metrics.prometheusRule.default.absent_rules Enable absent_rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.compaction Enable compaction rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.query Enable query rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.receive Enable receive rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.replicate Enable replicate rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.ruler Enable ruler rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.sidecar Enable sidecar rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
## @extra metrics.prometheusRule.default.store_gateway Enable store_gateway rules when metrics.prometheusRule.default.create is false (also requires `metrics.enabled` to be `true`)
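## e.g: (illustrative sketch only; with `create: false`, an override like the following would render just the sidecar and store_gateway rule groups, assuming `metrics.enabled` is also `true`)
## default:
##   sidecar: true
##   store_gateway: true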
## @param metrics.prometheusRule.default.create Create all default Prometheus alert rules
##
create: false
## @extra metrics.prometheusRule.default.disabled.ThanosCompactIsDown Disable ThanosCompactIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryIsDown Disable ThanosQueryIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveIsDown Disable ThanosReceiveIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleIsDown Disable ThanosRuleIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true
## @extra metrics.prometheusRule.default.disabled.ThanosSidecarIsDown Disable ThanosSidecarIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true
## @extra metrics.prometheusRule.default.disabled.ThanosStoreIsDown Disable ThanosStoreIsDown rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.absent_rules is true
## @extra metrics.prometheusRule.default.disabled.ThanosCompactMultipleRunning Disable ThanosCompactMultipleRunning rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true
## @extra metrics.prometheusRule.default.disabled.ThanosCompactHalted Disable ThanosCompactHalted rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true
## @extra metrics.prometheusRule.default.disabled.ThanosCompactHighCompactionFailures Disable ThanosCompactHighCompactionFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true
## @extra metrics.prometheusRule.default.disabled.ThanosCompactBucketHighOperationFailures Disable ThanosCompactBucketHighOperationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true
## @extra metrics.prometheusRule.default.disabled.ThanosCompactHasNotRun Disable ThanosCompactHasNotRun rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.compaction is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryHttpRequestQueryErrorRateHigh Disable ThanosQueryHttpRequestQueryErrorRateHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryHttpRequestQueryRangeErrorRateHigh Disable ThanosQueryHttpRequestQueryRangeErrorRateHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryGrpcServerErrorRate Disable ThanosQueryGrpcServerErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryGrpcClientErrorRate Disable ThanosQueryGrpcClientErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryHighDNSFailures Disable ThanosQueryHighDNSFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryInstantLatencyHigh Disable ThanosQueryInstantLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryRangeLatencyHigh Disable ThanosQueryRangeLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosQueryOverload Disable ThanosQueryOverload rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.query is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveHttpRequestErrorRateHigh Disable ThanosReceiveHttpRequestErrorRateHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveHttpRequestLatencyHigh Disable ThanosReceiveHttpRequestLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveHighReplicationFailures Disable ThanosReceiveHighReplicationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveHighForwardRequestFailures Disable ThanosReceiveHighForwardRequestFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveHighHashringFileRefreshFailures Disable ThanosReceiveHighHashringFileRefreshFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveConfigReloadFailure Disable ThanosReceiveConfigReloadFailure rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveNoUpload Disable ThanosReceiveNoUpload rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosReceiveTrafficBelowThreshold Disable ThanosReceiveTrafficBelowThreshold rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.receive is true
## @extra metrics.prometheusRule.default.disabled.ThanosBucketReplicateErrorRate Disable ThanosBucketReplicateErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.replicate is true
## @extra metrics.prometheusRule.default.disabled.ThanosBucketReplicateRunLatency Disable ThanosBucketReplicateRunLatency rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.replicate is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleQueueIsDroppingAlerts Disable ThanosRuleQueueIsDroppingAlerts rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleSenderIsFailingAlerts Disable ThanosRuleSenderIsFailingAlerts rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleHighRuleEvaluationFailures Disable ThanosRuleHighRuleEvaluationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleHighRuleEvaluationWarnings Disable ThanosRuleHighRuleEvaluationWarnings rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleRuleEvaluationLatencyHigh Disable ThanosRuleRuleEvaluationLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleGrpcErrorRate Disable ThanosRuleGrpcErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleConfigReloadFailure Disable ThanosRuleConfigReloadFailure rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleQueryHighDNSFailures Disable ThanosRuleQueryHighDNSFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleAlertmanagerHighDNSFailures Disable ThanosRuleAlertmanagerHighDNSFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosRuleNoEvaluationFor10Intervals Disable ThanosRuleNoEvaluationFor10Intervals rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosNoRuleEvaluations Disable ThanosNoRuleEvaluations rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.ruler is true
## @extra metrics.prometheusRule.default.disabled.ThanosSidecarBucketOperationsFailed Disable ThanosSidecarBucketOperationsFailed rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.sidecar is true
## @extra metrics.prometheusRule.default.disabled.ThanosSidecarNoConnectionToStartedPrometheus Disable ThanosSidecarNoConnectionToStartedPrometheus rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.sidecar is true
## @extra metrics.prometheusRule.default.disabled.ThanosStoreGrpcErrorRate Disable ThanosStoreGrpcErrorRate rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true
## @extra metrics.prometheusRule.default.disabled.ThanosStoreSeriesGateLatencyHigh Disable ThanosStoreSeriesGateLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true
## @extra metrics.prometheusRule.default.disabled.ThanosStoreBucketHighOperationFailures Disable ThanosStoreBucketHighOperationFailures rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true
## @extra metrics.prometheusRule.default.disabled.ThanosStoreObjstoreOperationLatencyHigh Disable ThanosStoreObjstoreOperationLatencyHigh rule when metrics.prometheusRule.default.create or metrics.prometheusRule.default.store_gateway is true
## @param metrics.prometheusRule.default.disabled Disable specific default Prometheus alert rules by name
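## e.g: (illustrative sketch only; keeps the generated rule groups but silences two specific alerts)
## disabled:
##   ThanosRuleNoEvaluationFor10Intervals: true
##   ThanosStoreSeriesGateLatencyHigh: true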
##
disabled: {}
## @param metrics.prometheusRule.namespace Namespace in which the PrometheusRule CRD is created
##
namespace: ""