mirror of
https://github.com/bitnami/charts.git
synced 2026-03-14 14:57:22 +08:00
* [bitnami/wavefront-prometheus-storage-adapter] Chart standarisation
* Apply suggested changes
* Fix deployment typo
* Update containerPort used in args
* Update README.md with readme-generator-for-helm
* [bitnami/wavefront-prometheus-storage-adapter] Update components versions

Signed-off-by: Francisco de Paz <fdepaz@vmware.com>
Signed-off-by: Bitnami Containers <containers@bitnami.com>
Co-authored-by: Bitnami Containers <containers@bitnami.com>
52 lines
2.7 KiB
Plaintext
CHART NAME: {{ .Chart.Name }}
CHART VERSION: {{ .Chart.Version }}
APP VERSION: {{ .Chart.AppVersion }}
** Please be patient while the chart is being deployed **
{{- if .Values.diagnosticMode.enabled }}
The chart has been deployed in diagnostic mode. All probes have been disabled and the command has been overwritten with:
command: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.command "context" $) | nindent 4 }}
args: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.args "context" $) | nindent 4 }}
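For reference, diagnostic mode is usually toggled through chart values like the following. This is a sketch of the common Bitnami convention (idle `sleep infinity` command, probes disabled); the chart's own values.yaml is authoritative for the actual defaults:

```yaml
# Hypothetical values excerpt: starts the pod idle so it can be exec'd into
# for debugging; the command/args below override the container entrypoint.
diagnosticMode:
  enabled: true
  command:
    - sleep
  args:
    - infinity
```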
Get the list of pods by executing:
kubectl get pods --namespace {{ .Release.Namespace }} -l app.kubernetes.io/instance={{ .Release.Name }}
Access the pod you want to debug by executing:
kubectl exec --namespace {{ .Release.Namespace }} -ti <NAME OF THE POD> -- bash
To replicate the container startup, execute this command:
/opt/bitnami/wavefront-prometheus-storage-adapter/bin/adapter
{{- else }}
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ template "common.names.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ coalesce .Values.service.ports.http .Values.service.port}}
{{- else if contains "ClusterIP" .Values.service.type }}
echo "The adapter is available at http://127.0.0.1:{{ coalesce .Values.service.ports.http .Values.service.port }}"
kubectl port-forward svc/{{ template "common.names.fullname" . }} {{ coalesce .Values.service.ports.http .Values.service.port}}:{{ coalesce .Values.service.ports.http .Values.service.port}} &
{{- end }}
2. Make sure that your running Prometheus instance has the following configuration in its prometheus.yml file:
remote_write:
- url: "http://{{ template "common.names.fullname" . }}:{{ coalesce .Values.service.ports.http .Values.service.port}}/receive"
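Once rendered, that line slots into a prometheus.yml roughly like the sketch below. The service name and port are illustrative placeholders standing in for the values the template produces at install time:

```yaml
# Illustrative prometheus.yml fragment; the hostname and port 1234 are
# assumptions standing in for the rendered service name and service port.
global:
  scrape_interval: 15s
remote_write:
  - url: "http://wavefront-prometheus-storage-adapter:1234/receive"
```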
{{- end }}
{{- include "common.warnings.rollingTag" .Values.image }}
{{- include "wfpsa.validateValues" . }}