mirror of
https://github.com/bitnami/charts.git
synced 2026-03-06 15:10:15 +08:00
bitnami/dataplatform-bp1: added annotation based discovery feature to wavefront and updated emitter/exporter images. (#8133)
* updated dataplatform-bp1
* removed charts folder
* removed kafka prefix in wavefront config
* updated README.md
* updated values.yaml
* updated emitter image repo
* updated comments in values.yaml file
* updated exporter deployment
* fixed comments standard
* [bitnami/dataplatform-bp1] updated components versions

Signed-off-by: Bitnami Containers <containers@bitnami.com>
Co-authored-by: Miguel Ángel Cabrera Miñagorri <mcabrera@vmware.com>
Co-authored-by: Francisco de Paz Galán <fdepaz@vmware.com>
Co-authored-by: Bitnami Containers <containers@bitnami.com>
@@ -1,21 +1,21 @@
 dependencies:
 - name: kafka
   repository: https://charts.bitnami.com/bitnami
-  version: 14.2.0
+  version: 14.4.1
 - name: spark
   repository: https://charts.bitnami.com/bitnami
-  version: 5.7.3
+  version: 5.7.10
 - name: solr
   repository: https://charts.bitnami.com/bitnami
-  version: 2.0.6
+  version: 2.1.3
 - name: zookeeper
   repository: https://charts.bitnami.com/bitnami
-  version: 7.4.5
+  version: 7.4.11
 - name: wavefront
   repository: https://charts.bitnami.com/bitnami
-  version: 3.1.12
+  version: 3.1.17
 - name: common
   repository: https://charts.bitnami.com/bitnami
-  version: 1.9.1
-digest: sha256:1d13222347eb823077412fd2c1ed493246f2ab48fd232b41eee8e057c7bdfab6
-generated: "2021-09-28T11:07:25.053174515Z"
+  version: 1.10.1
+digest: sha256:e022651345182378a657cc4498827deca4a648f2e4cc6f22f946e5bfc2ee4dc6
+generated: "2021-11-22T16:44:07.908476007Z"
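The lock file above is regenerated by Helm rather than edited by hand; assuming a local checkout of the charts repository, a dependency bump like this one is typically produced with:

```console
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm dependency update bitnami/dataplatform-bp1
```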
@@ -1,7 +1,7 @@
 annotations:
   category: Infrastructure
 apiVersion: v2
-appVersion: 0.0.11
+appVersion: 1.0.1
 dependencies:
 - condition: kafka.enabled
   name: kafka
@@ -59,4 +59,4 @@ sources:
 - https://github.com/bitnami/bitnami-docker-wavefront-proxy
 - https://github.com/wavefrontHQ/wavefront-collector-for-kubernetes
 - https://github.com/wavefrontHQ/wavefront-proxy
-version: 8.0.3
+version: 9.0.0
@@ -9,7 +9,7 @@ This Helm chart enables the fully automated Kubernetes deployment of such multi-
 - Apache Kafka – Data distribution bus with buffering capabilities
 - Apache Spark – In-memory data analytics
 - Solr – Data persistence and search
-- Data Platform Prometheus Exporter - Prometheus exporter that emits the health metrics of the data platform
+- Data Platform Signature State Controller – Kubernetes controller that emits data platform health and state metrics in Prometheus format

 These containerized stateful software stacks are deployed in multi-node cluster configurations, which are defined by the
 Helm chart blueprint for this data platform deployment, covering:
@@ -21,9 +21,7 @@ Helm chart blueprint for this data platform deployment, covering:

 In addition to the Pod resource optimizations, this blueprint is validated and tested to provide Kubernetes node count and sizing recommendations [(see Kubernetes Cluster Requirements)](#kubernetes-cluster-requirements) to facilitate cloud platform capacity planning. The goal is to optimize the number of required Kubernetes nodes in order to optimize server resource usage while ensuring runtime and resource diversity.

-The first release of this blueprint defines a small size data platform deployment, deployed on 3 Kubernetes application nodes with physically diverse underlying server infrastructure.
-
-Use cases for this small size data platform setup include: data and application evaluation, development, and functional testing.
+This blueprint, in its default configuration, deploys the data platform on a Kubernetes cluster with three worker nodes. Use cases for this data platform setup include: data and application evaluation, development, and functional testing.

 ## TL;DR
@@ -36,16 +34,14 @@ $ helm install my-release bitnami/dataplatform-bp1

 This chart bootstraps a Data Platform Blueprint-1 deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

-The "Small" size data platform in default configuration deploys the following:
+Once the chart is installed, the deployed data platform cluster comprises:
 1. Zookeeper with 3 nodes, used by both Kafka and Solr
 2. Kafka with 3 nodes, using the Zookeeper deployed above
 3. Solr with 2 nodes, using the Zookeeper deployed above
 4. Spark with 1 master and 2 worker nodes
 5. Data Platform Metrics emitter and Prometheus exporter

-The data platform can be optionally deployed with the Tanzu Observability framework. In that case, the Wavefront collectors will be set up as a DaemonSet to collect the Kubernetes cluster metrics and enable a runtime feed into the Tanzu Observability service. They will also be pre-configured to scrape the Prometheus endpoint that each application (Kafka/Spark/Solr) emits its metrics to.
-
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.
+The data platform can be optionally deployed with the Tanzu Observability framework. In that case, the Wavefront collectors will be set up as a DaemonSet to collect the Kubernetes cluster metrics and enable a runtime feed into the Tanzu Observability service.

 ## Prerequisites
@@ -118,6 +114,7 @@ The command removes all the Kubernetes components associated with the chart and
 | `dataplatform.exporter.image.tag` | dataplatform exporter image tag (immutable tags are recommended) | `0.0.11-scratch-r4` |
 | `dataplatform.exporter.image.pullPolicy` | dataplatform exporter image pull policy | `IfNotPresent` |
 | `dataplatform.exporter.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
+| `dataplatform.exporter.config` | Data Platform Metrics Configuration emitted in Prometheus format | `""` |
 | `dataplatform.exporter.livenessProbe.enabled` | Enable livenessProbe | `true` |
 | `dataplatform.exporter.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `10` |
 | `dataplatform.exporter.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `5` |
@@ -180,7 +177,7 @@ The command removes all the Kubernetes components associated with the chart and
 | `dataplatform.emitter.enabled` | Start Data Platform metrics emitter | `true` |
 | `dataplatform.emitter.image.registry` | Data Platform emitter image registry | `docker.io` |
 | `dataplatform.emitter.image.repository` | Data Platform emitter image repository | `bitnami/dataplatform-emitter` |
-| `dataplatform.emitter.image.tag` | Data Platform emitter image tag (immutable tags are recommended) | `0.0.10-scratch-r3` |
+| `dataplatform.emitter.image.tag` | Data Platform emitter image tag (immutable tags are recommended) | `1.0.1-scratch-r0` |
 | `dataplatform.emitter.image.pullPolicy` | Data Platform emitter image pull policy | `IfNotPresent` |
 | `dataplatform.emitter.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
 | `dataplatform.emitter.livenessProbe.enabled` | Enable livenessProbe | `true` |
@@ -275,6 +272,7 @@ The command removes all the Kubernetes components associated with the chart and
 | `kafka.metrics.jmx.resources.limits` | The resources limits for the container | `{}` |
 | `kafka.metrics.jmx.resources.requests` | JMX Exporter container resource requests | `{}` |
 | `kafka.metrics.jmx.service.port` | JMX Exporter Prometheus port | `5556` |
+| `kafka.metrics.jmx.service.annotations` | Exporter service annotation | `{}` |
 | `kafka.zookeeper.enabled` | Switch to enable or disable the Zookeeper helm chart | `false` |
 | `kafka.externalZookeeper.servers` | Server or list of external Zookeeper servers to use | `["{{ .Release.Name }}-zookeeper"]` |
@@ -295,6 +293,7 @@ The command removes all the Kubernetes components associated with the chart and
 | `solr.exporter.affinity.podAffinity` | Zookeeper pods Affinity rules for best possible resiliency (evaluated as a template) | `{}` |
 | `solr.exporter.resources.limits` | The resources limits for the container | `{}` |
 | `solr.exporter.resources.requests` | The requested resources for the container | `{}` |
+| `solr.exporter.service.annotations` | Exporter service annotations | `{}` |
 | `solr.zookeeper.enabled` | Enable Zookeeper deployment. Needed for Solr cloud. | `false` |
 | `solr.externalZookeeper.servers` | Servers for an already existing Zookeeper. | `["{{ .Release.Name }}-zookeeper"]` |
@@ -358,7 +357,7 @@ $ helm install my-release -f values.yaml bitnami/dataplatform-bp1

 In the default deployment, the Helm chart deploys the data platform with the [Metrics Emitter](https://hub.docker.com/r/bitnami/dataplatform-emitter) and [Prometheus Exporter](https://hub.docker.com/r/bitnami/dataplatform-exporter), which emit the health metrics of the data platform and can be integrated with your observability solution.

-In case you need to deploy the data platform with the [Tanzu Observability](https://docs.wavefront.com/kubernetes.html) framework for all the applications (Kafka/Spark/Solr) in the data platform, you can specify the 'enabled' parameter using the `--set <component>.metrics.enabled=true` argument to `helm install`. For Solr, the parameter is `solr.exporter.enabled=true`. For example,
+- To deploy the data platform with the Tanzu Observability framework and the Wavefront Collector's annotation-based discovery feature enabled for all the applications (Kafka/Spark/Solr) in the data platform, make sure that auto-discovery is enabled (`wavefront.collector.discovery.enabled=true`, which is the default) and specify the 'enabled' parameter using the `--set <component>.metrics.enabled=true` argument to `helm install`. For example:

 ```console
 $ helm install my-release bitnami/dataplatform-bp1 \
@@ -371,8 +370,29 @@ $ helm install my-release bitnami/dataplatform-bp1 \
   --set wavefront.wavefront.url=https://<YOUR_CLUSTER>.wavefront.com \
   --set wavefront.wavefront.token=<YOUR_API_TOKEN>
 ```
+> **NOTE**: When the annotation-based discovery feature is enabled in the Wavefront Collector, it scrapes metrics from all the pods that have the Prometheus annotations enabled.
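For reference, annotation-based discovery relies on the standard `prometheus.io` pod annotations; a minimal sketch of what the collector looks for on a pod (the port and path values here are illustrative):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9308"
    prometheus.io/path: "/metrics"
```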

-If you want to use an existing Wavefront deployment, edit the Wavefront Collector ConfigMap and add the following snippet under the discovery plugins. Once done, restart the Wavefront Collector DaemonSet.
+- To deploy the data platform with the Tanzu Observability framework without the annotation-based discovery feature of the Wavefront Collector for all the applications (Kafka/Spark/Solr) in the data platform, uncomment the config section of the wavefront deployment in the data platform values.yaml file and set the parameter to false using `--set wavefront.collector.discovery.enabled=false` with the `helm install` command. For example:

+```console
+$ helm install my-release bitnami/dataplatform-bp1 \
+  --set kafka.metrics.kafka.enabled=true \
+  --set kafka.metrics.jmx.enabled=true \
+  --set spark.metrics.enabled=true \
+  --set solr.exporter.enabled=true \
+  --set wavefront.enabled=true \
+  --set wavefront.collector.discovery.enabled=false \
+  --set wavefront.clusterName=<K8s-CLUSTER-NAME> \
+  --set wavefront.wavefront.url=https://<YOUR_CLUSTER>.wavefront.com \
+  --set wavefront.wavefront.token=<YOUR_API_TOKEN>
+```

 ### Using an existing Wavefront deployment

+- To enable the annotation-based discovery feature for an existing Wavefront deployment, make sure that auto-discovery (`enableDiscovery: true`) and annotation-based discovery (`discovery.disable_annotation_discovery: false`) are enabled in the Wavefront Collector ConfigMap. They are enabled by default.

+- To disable the annotation-based discovery feature, edit the Wavefront Collector ConfigMap and add the following snippet under the discovery plugins. Once done, restart the Wavefront Collector DaemonSet.

 ```console
 $ kubectl edit configmap wavefront-collector-config -n wavefront
@@ -393,7 +413,6 @@ Add the below config:
       port: 9308
      path: /metrics
      scheme: http
-      prefix: kafka.

 ## auto-discover jmx exporter
 - name: kafka-jmx-discovery
@@ -461,6 +480,10 @@ In order to render complete information about the deployment including all the s

 ## Upgrading

+### To 9.0.0

+This major version adds the auto-discovery feature to Wavefront and updates the exporter and emitter images to their newest versions.

 ### To 8.0.0

 This major version adds the data platform metrics emitter and Prometheus exporter to the chart, which emit health metrics of the data platform.
bitnami/dataplatform-bp1/templates/configmap.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: {{ include "dataplatform.exporter-name" . }}-configuration
+  labels: {{- include "common.labels.standard" . | nindent 4 }}
+    {{- if .Values.commonLabels }}
+    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
+    {{- end }}
+  {{- if .Values.commonAnnotations }}
+  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
+  {{- end }}
+data:
+  bp.json: |-
+    {{- .Values.dataplatform.exporter.config | nindent 4 }}
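For orientation, with default values the template above renders to roughly the following (a sketch: the actual `metadata.name` comes from the chart's `dataplatform.exporter-name` helper, the labels come from `common.labels.standard`, and the metrics array inside `bp.json` is elided here for brevity):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-dataplatform-bp1-exporter-configuration
  labels:
    app.kubernetes.io/managed-by: Helm
data:
  bp.json: |-
    {
      "blueprintName": "bp1",
      "metrics": []
    }
```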
@@ -78,8 +78,8 @@ spec:
           args: {{- include "common.tplvalues.render" (dict "value" .Values.dataplatform.exporter.args "context" $) | nindent 12 }}
           {{- end }}
           env:
-            - name: BP_NAME
-              value: {{ include "dataplatform.fullname" . }}
+            - name: METRIC_CONFIG_PATH
+              value: "/data/bp.json"
            - name: DP_URI
              value: http://{{ include "dataplatform.emitter-name" . }}:{{ .Values.dataplatform.emitter.service.ports.http }}
          {{- if or .Values.dataplatform.exporter.extraEnvVarsCM .Values.dataplatform.exporter.extraEnvVarsSecret }}
@@ -139,6 +139,9 @@ spec:
          startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.dataplatform.exporter.customStartupProbe "context" $) | nindent 12 }}
          {{- end }}
          volumeMounts:
+            - name: exporter-config
+              mountPath: /data/bp.json
+              subPath: bp.json
          {{- if .Values.dataplatform.exporter.extraVolumeMounts }}
          {{- include "common.tplvalues.render" (dict "value" .Values.dataplatform.exporter.extraVolumeMounts "context" $) | nindent 12 }}
          {{- end }}
@@ -146,6 +149,9 @@ spec:
        {{- include "common.tplvalues.render" ( dict "value" .Values.dataplatform.exporter.sidecars "context" $) | nindent 8 }}
        {{- end }}
      volumes:
+        - name: exporter-config
+          configMap:
+            name: {{ include "dataplatform.exporter-name" . }}-configuration
        {{- if .Values.dataplatform.exporter.extraVolumes }}
        {{- include "common.tplvalues.render" (dict "value" .Values.dataplatform.exporter.extraVolumes "context" $) | nindent 8 }}
        {{- end }}
@@ -18,6 +18,7 @@ rules:
   - statefulsets
   - pods
   - services
+  - secrets
   verbs:
   - get
   - list
@@ -51,3 +52,4 @@ rules:
   - list
   - watch
 {{- end -}}
@@ -68,7 +68,7 @@ dataplatform:
     image:
       registry: docker.io
       repository: bitnami/dataplatform-exporter
-      tag: 0.0.11-scratch-r4
+      tag: 1.0.1-scratch-r0
     ## Specify an imagePullPolicy
     ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
     ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -81,6 +81,88 @@ dataplatform:
     ##   - myRegistryKeySecretName
     ##
     pullSecrets: []
+    ## Configuration file passed to the exporter.
+    ## This exporter metrics configuration is used to emit only the health and state metrics configured below.
+    ## In the config below, the metrics key, name and gauge should not be changed.
+    ## @param dataplatform.exporter.config [string] Data Platform Metrics Configuration emitted in Prometheus format
+    ##
+    config: |
+      {
+        "blueprintName": "bp1",
+        "metrics": [
+          {
+            "name": "zookeeper_desired_nodes",
+            "type": "gauge",
+            "helpMessage": "Desired number of zookeeper nodes in the data platform",
+            "key": "zookeeper",
+            "dataComponent": "DesiredNodes"
+          },
+          {
+            "name": "zookeeper_available_nodes",
+            "type": "gauge",
+            "helpMessage": "Available number of zookeeper nodes in the data platform",
+            "key": "zookeeper",
+            "dataComponent": "AvailableNodes"
+          },
+          {
+            "name": "kafka_desired_nodes",
+            "type": "gauge",
+            "helpMessage": "Desired number of kafka nodes in the data platform",
+            "key": "kafka",
+            "dataComponent": "DesiredNodes"
+          },
+          {
+            "name": "kafka_available_nodes",
+            "type": "gauge",
+            "helpMessage": "Available number of kafka nodes in the data platform",
+            "key": "kafka",
+            "dataComponent": "AvailableNodes"
+          },
+          {
+            "name": "solr_desired_nodes",
+            "type": "gauge",
+            "helpMessage": "Desired number of solr nodes in the data platform",
+            "key": "solr",
+            "dataComponent": "DesiredNodes"
+          },
+          {
+            "name": "solr_available_nodes",
+            "type": "gauge",
+            "helpMessage": "Available number of solr nodes in the data platform",
+            "key": "solr",
+            "dataComponent": "AvailableNodes"
+          },
+          {
+            "name": "spark_master_desired_nodes",
+            "type": "gauge",
+            "helpMessage": "Desired number of spark master nodes in the data platform",
+            "key": "spark-master",
+            "dataComponent": "DesiredNodes"
+          },
+          {
+            "name": "spark_master_available_nodes",
+            "type": "gauge",
+            "helpMessage": "Available number of spark master nodes in the data platform",
+            "key": "spark-master",
+            "dataComponent": "AvailableNodes"
+          },
+          {
+            "name": "spark_worker_desired_nodes",
+            "type": "gauge",
+            "helpMessage": "Desired number of spark worker nodes in the data platform",
+            "key": "spark-worker",
+            "dataComponent": "DesiredNodes"
+          },
+          {
+            "name": "spark_worker_available_nodes",
+            "type": "gauge",
+            "helpMessage": "Available number of spark worker nodes in the data platform",
+            "key": "spark-worker",
+            "dataComponent": "AvailableNodes"
+          }
+        ]
+      }

     ## Configure extra options for liveness probe
     ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
     ## @param dataplatform.exporter.livenessProbe.enabled Enable livenessProbe
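The exporter consumes this bp.json configuration and serves the configured gauges in Prometheus text exposition format. As an illustrative sketch only (the function and variable names below are hypothetical, and the real exporter queries the cluster for the observed values itself), the rendering step looks roughly like:

```python
import json

def render_metrics(config_text, observed_values):
    """Render bp.json metric definitions as Prometheus exposition lines.

    observed_values maps metric name -> observed value; metrics without an
    observation are skipped. This is a sketch, not the real exporter.
    """
    config = json.loads(config_text)
    lines = []
    for metric in config["metrics"]:
        if metric["name"] not in observed_values:
            continue  # no observation for this metric yet
        lines.append(f'# HELP {metric["name"]} {metric["helpMessage"]}')
        lines.append(f'# TYPE {metric["name"]} {metric["type"]}')
        lines.append(f'{metric["name"]} {observed_values[metric["name"]]}')
    return "\n".join(lines)

# Two entries from the chart's default config, for demonstration.
BP_JSON = """
{
  "blueprintName": "bp1",
  "metrics": [
    {
      "name": "kafka_desired_nodes",
      "type": "gauge",
      "helpMessage": "Desired number of kafka nodes in the data platform",
      "key": "kafka",
      "dataComponent": "DesiredNodes"
    },
    {
      "name": "kafka_available_nodes",
      "type": "gauge",
      "helpMessage": "Available number of kafka nodes in the data platform",
      "key": "kafka",
      "dataComponent": "AvailableNodes"
    }
  ]
}
"""

exposition = render_metrics(BP_JSON, {"kafka_desired_nodes": 3,
                                      "kafka_available_nodes": 3})
print(exposition)
```

This is why the comment above warns not to change the metrics key, name, or gauge type: downstream dashboards and alerts key off those exact metric names.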
@@ -345,7 +427,7 @@ dataplatform:
     image:
       registry: docker.io
       repository: bitnami/dataplatform-emitter
-      tag: 0.0.10-scratch-r3
+      tag: 1.0.1-scratch-r0
     ## Specify an imagePullPolicy
     ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
     ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -753,6 +835,17 @@ kafka:
       ## @param kafka.metrics.jmx.service.port JMX Exporter Prometheus port
       ##
       port: 5556
+      ## Provide any additional annotations which may be required. This can be used to
+      ## set the LoadBalancer service type to internal only.
+      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
+      ## @param kafka.metrics.jmx.service.annotations [object] Exporter service annotation
+      ##
+      annotations:
+        prometheus.io/scrape: "true"
+        prometheus.io/port: "5556"
+        prometheus.io/path: "/metrics"
+        prometheus.io/prefix: "kafkajmx."

   ## @param kafka.zookeeper.enabled Switch to enable or disable the Zookeeper helm chart
   ##
   zookeeper:
@@ -849,6 +942,17 @@ solr:
       requests:
         cpu: 100m
         memory: 128Mi
+    service:
+      ## Provide any additional annotations which may be required. This can be used to
+      ## set the LoadBalancer service type to internal only.
+      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
+      ## @param solr.exporter.service.annotations [object] Exporter service annotations
+      ##
+      annotations:
+        prometheus.io/scrape: "true"
+        prometheus.io/path: "/metrics"
+        prometheus.io/port: "9983"

   ## @param solr.zookeeper.enabled Enable Zookeeper deployment. Needed for Solr cloud.
   ##
   zookeeper:
@@ -953,12 +1057,14 @@ spark:
       prometheus.io/scrape: "true"
       prometheus.io/path: "/metrics/"
       prometheus.io/port: "8080"
+      prometheus.io/prefix: "spark."
     ## @param spark.metrics.workerAnnotations [object] Annotations for enabling prometheus to access the metrics endpoint of the worker nodes
     ##
     workerAnnotations:
       prometheus.io/scrape: "true"
       prometheus.io/path: "/metrics/"
       prometheus.io/port: "8081"
+      prometheus.io/prefix: "spark."

 ## @section Tanzu Observability (Wavefront) chart parameters
 ##
@@ -1005,61 +1111,53 @@ wavefront:
       enableRuntimeConfigs: true
       ## @param wavefront.collector.discovery.config [array] Configuration for rules based auto-discovery
       ##
-      config:
-        ## auto-discover kafka-exporter
-        ##
-        - name: kafka-discovery
-          type: prometheus
-          selectors:
-            images:
-            - "*bitnami/kafka-exporter*"
-          port: 9308
-          path: /metrics
-          scheme: http
-          prefix: kafka.
-        ## auto-discover jmx exporter
-        ##
-        - name: kafka-jmx-discovery
-          type: prometheus
-          selectors:
-            images:
-            - "*bitnami/jmx-exporter*"
-          port: 5556
-          path: /metrics
-          scheme: http
-          prefix: kafkajmx.
-        ## auto-discover solr
-        ##
-        - name: solr-discovery
-          type: prometheus
-          selectors:
-            images:
-            - "*bitnami/solr*"
-          port: 9983
-          path: /metrics
-          scheme: http
-        ## auto-discover spark
-        ##
-        - name: spark-worker-discovery
-          type: prometheus
-          selectors:
-            images:
-            - "*bitnami/spark*"
-          port: 8081
-          path: /metrics/
-          scheme: http
-          prefix: spark.
-        ## auto-discover spark
-        ##
-        - name: spark-master-discovery
-          type: prometheus
-          selectors:
-            images:
-            - "*bitnami/spark*"
-          port: 8080
-          path: /metrics/
-          scheme: http
-          prefix: spark.
+      ## Example:
+      ## config:
+      ##   - name: kafka-discovery
+      ##     type: prometheus
+      ##     selectors:
+      ##       images:
+      ##       - "*bitnami/kafka-exporter*"
+      ##     port: 9308
+      ##     path: /metrics
+      ##     scheme: http
+      ##   - name: kafka-jmx-discovery
+      ##     type: prometheus
+      ##     selectors:
+      ##       images:
+      ##       - "*bitnami/jmx-exporter*"
+      ##     port: 5556
+      ##     path: /metrics
+      ##     scheme: http
+      ##     prefix: kafkajmx.
+      ##   - name: solr-discovery
+      ##     type: prometheus
+      ##     selectors:
+      ##       images:
+      ##       - "*bitnami/solr*"
+      ##     port: 9983
+      ##     path: /metrics
+      ##     scheme: http
+      ##   - name: spark-worker-discovery
+      ##     type: prometheus
+      ##     selectors:
+      ##       images:
+      ##       - "*bitnami/spark*"
+      ##     port: 8081
+      ##     path: /metrics/
+      ##     scheme: http
+      ##     prefix: spark.
+      ##   - name: spark-master-discovery
+      ##     type: prometheus
+      ##     selectors:
+      ##       images:
+      ##       - "*bitnami/spark*"
+      ##     port: 8080
+      ##     path: /metrics/
+      ##     scheme: http
+      ##     prefix: spark.
+      ##
+      config: []
   proxy:
     ## Wavefront Proxy resource requests and limits
     ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
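The rule-based discovery entries above select scrape targets by matching a container's image name against each rule's image globs. As a hedged sketch of that selection logic (the helper below is hypothetical and only approximates the collector's glob matching with Python's `fnmatch`; the rule data mirrors the chart's defaults):

```python
import fnmatch

# A few discovery rules mirroring the chart's defaults (subset).
DISCOVERY_RULES = [
    {"name": "kafka-discovery", "images": ["*bitnami/kafka-exporter*"],
     "port": 9308, "prefix": "kafka."},
    {"name": "kafka-jmx-discovery", "images": ["*bitnami/jmx-exporter*"],
     "port": 5556, "prefix": "kafkajmx."},
    {"name": "solr-discovery", "images": ["*bitnami/solr*"],
     "port": 9983, "prefix": None},
]

def matching_rules(image, rules=DISCOVERY_RULES):
    """Return names of discovery rules whose image globs match `image`."""
    return [rule["name"] for rule in rules
            if any(fnmatch.fnmatch(image, glob) for glob in rule["images"])]

print(matching_rules("docker.io/bitnami/kafka-exporter:1.4.2-debian-10-r0"))
```

Each matched rule tells the collector which port, path, and metric prefix to use when scraping that pod.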