
Bitnami package for Apache Airflow

Apache Airflow is a tool to express and execute workflows as directed acyclic graphs (DAGs). It includes utilities to schedule tasks, monitor task progress and handle task dependencies.

Overview of Apache Airflow

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/airflow

Looking to use Apache Airflow in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.

Introduction

This chart bootstraps an Apache Airflow deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/airflow

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys Apache Airflow on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Executors

Airflow supports different Executors and this Helm chart provides support for several of them. You can choose the executor you want to use by setting the executor parameter.

CeleryExecutor

The Celery executor (the default) uses a message queue system (Redis® in this case) to coordinate tasks between pre-configured workers.

KubernetesExecutor

The Kubernetes executor creates a new worker pod for every task instance using the pod_template.yaml that you can find at templates/config/configmap.yaml. This template can be overridden using worker.podTemplate. To enable KubernetesExecutor, set the following parameters:

executor=KubernetesExecutor
rbac.create=true
serviceAccount.create=true
redis.enabled=false

NOTE: Redis® does not need to be deployed when using KubernetesExecutor, so you can disable it by setting redis.enabled=false.
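
For example, a minimal installation command combining these parameters could look like this (placeholders as in the Installing the Chart section):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/airflow \
  --set executor=KubernetesExecutor \
  --set rbac.create=true \
  --set serviceAccount.create=true \
  --set redis.enabled=false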

CeleryKubernetesExecutor

The CeleryKubernetesExecutor (introduced in Airflow 2.0) is a combination of both the Celery and the Kubernetes executors. Tasks will be executed using Celery by default, but those tasks that require it can be executed in a Kubernetes pod using the 'kubernetes' queue.

The CeleryKubernetesExecutor has been deprecated starting with Airflow 3.0.0.

LocalExecutor

The Local executor runs tasks by spawning processes in the Scheduler pods. To enable LocalExecutor, set the following parameters:

executor=LocalExecutor
redis.enabled=false

LocalKubernetesExecutor

The LocalKubernetesExecutor (introduced in Airflow 2.3) is a combination of both the Local and the Kubernetes executors. Tasks will be executed in the scheduler by default, but those tasks that require it can be executed in a Kubernetes pod using the 'kubernetes' queue.

The LocalKubernetesExecutor has been deprecated starting with Airflow 3.0.0.

SequentialExecutor

This executor will only run one task instance at a time in the Scheduler pods. For production use cases, please use one of the other executors. To enable SequentialExecutor, set the following parameters:

executor=SequentialExecutor
redis.enabled=false

The SequentialExecutor has been deprecated starting with Airflow 3.0.0.

Update credentials

Bitnami charts configure credentials at first boot. Any further changes to secrets or credentials require manual intervention. Follow these instructions:

  • Update the user password following the upstream documentation
  • Update the password secret with the new values (replace the SECRET_NAME, PASSWORD, FERNET_KEY, SECRET_KEY and JWT_SECRET_KEY placeholders)
kubectl create secret generic SECRET_NAME --from-literal=airflow-password=PASSWORD --from-literal=airflow-fernet-key=FERNET_KEY --from-literal=airflow-secret-key=SECRET_KEY --from-literal=airflow-jwt-secret-key=JWT_SECRET_KEY --dry-run=client -o yaml | kubectl apply -f -

Airflow configuration file

By default, the Airflow configuration file is auto-generated based on the chart parameters you set. For instance, the executor parameter will be used to set the executor class under the [core] section.

You can also provide your own configuration by setting the configuration parameter. This parameter expects the configuration as a sections/keys/values dictionary in YAML format, which the chart then converts to .cfg format. For instance, using a configuration like the one below...

configuration:
  core:
    dags_folder: "/opt/bitnami/airflow/dags"

... the chart will translate it to the following configuration file:

[core]
dags_folder = "/opt/bitnami/airflow/dags"

As an alternative to providing the whole configuration, you can also extend the default configuration using the overrideConfiguration parameter. The values set in this parameter, which also expects YAML format, are merged with the default configuration (or with the values set in the configuration parameter), with overrideConfiguration taking precedence.
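
For example, the following sketch keeps the auto-generated configuration and only overrides a single key (load_examples is just an illustrative [core] option):

overrideConfiguration:
  core:
    load_examples: "False"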

Scaling worker pods

Sometimes, when using large workloads, a fixed number of worker pods may cause tasks to take a long time to execute. This chart provides two ways to scale worker pods.

  • If you are using KubernetesExecutor, worker pods are autoscaled by the Scheduler without any additional configuration.
  • If you are using CeleryExecutor (or CeleryKubernetesExecutor), you have to enable worker.autoscaling instead. To do so, set the following parameters. Autoscaling uses a default configuration that you can change using worker.autoscaling.replicas.* and worker.autoscaling.targets.* (see the sketch below).
worker.autoscaling.enabled=true
worker.resources.requests.cpu=200m
worker.resources.requests.memory=250Mi
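
As an illustrative values sketch (the replicas and targets subkeys shown here are assumptions; check the parameters table for the authoritative names):

worker:
  autoscaling:
    enabled: true
    replicas:
      min: 1
      max: 5
    targets:
      cpu: 80
      memory: 80
  resources:
    requests:
      cpu: 200m
      memory: 250Mi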

Generate a Fernet key

A Fernet key is required in order to encrypt passwords within connections. The Fernet key must be a base64-encoded 32-byte key.

Learn how to generate one here.
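
For example, one common way to generate a valid key, assuming the cryptography Python package is available:

python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"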

Generate a Secret key

The secret key is used to run your Flask app. It should be as random as possible.

Note: when running multiple Webserver instances, make sure all of them use the same secret key. Otherwise you may face the error "CSRF session token is missing".
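
For example, one possible way to generate a random secret key:

python -c "import secrets; print(secrets.token_hex(16))"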

Load DAG files

There are two different ways to load your custom DAG files into the Airflow chart. Both are compatible, so you can use them at the same time.

Option 1: Specify an existing config map

You can manually create a config map containing all your DAG files and then pass its name when deploying the Airflow chart. To do so, set the parameters below:

dags.enabled=true
dags.existingConfigmap=my-dags-configmap
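
For example, assuming your DAG files are in a local dags/ directory, you could create the config map with:

kubectl create configmap my-dags-configmap --from-file=dags/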

Option 2: Get your DAG files from a git repository

You can store all your DAG files in Git repositories and then clone them into the Airflow pods with an init container. The repositories will be periodically updated using a sidecar container. To do so, deploy Airflow with the following options:

Note: When Git synchronization is enabled, an init container and a sidecar container are added to all pods running Airflow, so that the scheduler, worker, and web components can access the DAGs when needed.

dags.enabled=true
dags.repositories[0].repository=https://github.com/USERNAME/REPOSITORY
dags.repositories[0].name=REPO-IDENTIFIER
dags.repositories[0].branch=master

If you use a private repository from GitHub, one option for cloning the files is to use a Personal Access Token as part of the URL: https://USERNAME:PERSONAL_ACCESS_TOKEN@github.com/USERNAME/REPOSITORY. Alternatively, you can clone the repository over SSH: set your private SSH key via the dags.sshKey parameter, or use an existing secret containing your private SSH key via the dags.existingSshKeySecret and dags.existingSshKeySecretKey parameters.
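
For example, a hypothetical installation using a local private key file (the repository URL and key path are placeholders):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/airflow \
  --set dags.enabled=true \
  --set "dags.repositories[0].repository=git@github.com:USERNAME/REPOSITORY.git" \
  --set "dags.repositories[0].name=REPO-IDENTIFIER" \
  --set "dags.repositories[0].branch=master" \
  --set-file dags.sshKey=path/to/id_rsa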

Loading Plugins

You can load plugins into the chart by specifying a Git repository containing the plugin files. The repository will be periodically updated using a sidecar container. To do so, deploy Airflow with the following options:

Note: When Git synchronization is enabled, an init container and a sidecar container are added to all pods running Airflow, so that the scheduler, worker, and web components can access the plugins when needed.

plugins.enabled=true
plugins.repositories[0].repository=https://github.com/teamclairvoyant/airflow-rest-api-plugin.git
plugins.repositories[0].branch=v1.0.9-branch
plugins.repositories[0].path=plugins

Install extra python packages

This chart allows you to mount volumes using extraVolumes and extraVolumeMounts in every component (web, scheduler, worker). Mounting a requirements.txt file at /bitnami/python/requirements.txt using these options will trigger pip install -r /bitnami/python/requirements.txt on container start.
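
For example, a minimal sketch assuming you first create a ConfigMap named my-requirements from a local requirements.txt file (kubectl create configmap my-requirements --from-file=requirements.txt):

extraVolumes:
  - name: requirements
    configMap:
      name: my-requirements
extraVolumeMounts:
  - name: requirements
    mountPath: /bitnami/python/requirements.txt
    subPath: requirements.txt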

Existing Secrets

You can use an existing secret to configure your Airflow auth, external Postgres, and external Redis® passwords:

postgresql.enabled=false
externalDatabase.host=my.external.postgres.host
externalDatabase.user=bn_airflow
externalDatabase.database=bitnami_airflow
externalDatabase.existingSecret=all-my-secrets
externalDatabase.existingSecretPasswordKey=postgresql-password

redis.enabled=false
externalRedis.host=my.external.redis.host
externalRedis.existingSecret=all-my-secrets
externalRedis.existingSecretPasswordKey=redis-password

auth.existingSecret=all-my-secrets

The expected secret resource looks as follows:

apiVersion: v1
kind: Secret
metadata:
  name: all-my-secrets
type: Opaque
data:
  airflow-password: "Smo1QTJLdGxXMg=="
  airflow-fernet-key: "YVRZeVJVWnlXbU4wY1dOalVrdE1SV3cxWWtKeFIzWkVRVTVrVjNaTFR6WT0="
  airflow-secret-key: "a25mQ1FHTUh3MnFRSk5KMEIyVVU2YmN0VGRyYTVXY08="
  postgresql-password: "cG9zdGdyZXMK"
  redis-password: "cmVkaXMK"

This is useful if you plan on using Bitnami's sealed secrets to manage your passwords.

Alternatively, you can also use a SQL connection string to connect to an external database. This can be done by:

  • Setting the externalDatabase.sqlConnection parameter:
postgresql.enabled=false
externalDatabase.sqlConnection=postgresql://user:password@host:port/dbname
  • Or via the externalDatabase.existingSecret and externalDatabase.existingSecretSqlConnectionKey parameters:
postgresql.enabled=false
externalDatabase.existingSecret=db-secret
externalDatabase.existingSecretSqlConnectionKey=sql-connection
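
For the second option, the referenced secret could be created, for example, with:

kubectl create secret generic db-secret --from-literal=sql-connection='postgresql://user:password@host:port/dbname'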

Database setup

By default, this chart sets up the database (initializing or migrating the schema) and creates the admin user using a K8s job that is created when the chart release is installed or upgraded, and deleted once it succeeds. This job uses Helm chart hooks, so it won't be deleted if you're using Helm exclusively for its rendering capabilities (e.g. when using ArgoCD or FluxCD).

Alternatively, you can disable this behavior by setting the setupDBJob.enabled parameter to false. In this case, the database setup and admin user creation will be done during the Webserver startup.

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured inside the resources value (check the parameters table). Setting requests is essential for production workloads, and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
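
For example, an illustrative values snippet for the webserver component (the numbers are placeholders to adapt to your workload):

web:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi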

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This will configure Airflow components to send StatsD metrics to the StatsD exporter that transforms them into Prometheus metrics. The StatsD exporter is deployed as a standalone deployment and service in the same namespace as the Airflow deployment.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart to get the necessary CRDs and the Prometheus Operator.
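
For example, to enable both the StatsD exporter and the ServiceMonitor:

metrics.enabled=true
metrics.serviceMonitor.enabled=true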

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.

Ingress

This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve your application.

To enable Ingress integration, set ingress.enabled to true.

The most common scenario is to have one host name mapped to the deployment. In this case, the ingress.hostname property can be used to set the host name. The ingress.tls parameter can be used to add the TLS configuration for this host. However, it is also possible to have more than one host. To facilitate this, the ingress.extraHosts parameter (if available) can be set with the host names specified as an array. The ingress.extraTLS parameter (if available) can also be used to add the TLS configuration for extra hosts.
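
For example, a minimal sketch mapping a single hypothetical host name with TLS enabled:

ingress:
  enabled: true
  hostname: airflow.example.com
  tls: true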

NOTE: For each host specified in the ingress.extraHosts parameter, it is necessary to set a name, path, and any annotations that the Ingress controller should know about. Not all annotations are supported by all Ingress controllers, but this annotation reference document lists the annotations supported by many popular Ingress controllers.

Adding the TLS parameter (where available) will cause the chart to generate HTTPS URLs, and the application will be available on port 443. The actual TLS secrets do not have to be generated by this chart. However, if TLS is enabled, the Ingress record will not work until the TLS secret exists.

Learn more about Ingress controllers.

Securing traffic using TLS

By default, this chart assumes TLS is managed by the Ingress Controller and terminates the TLS connection in the Ingress Controller. This can be done by setting ingress.enabled and ingress.tls parameters to true as explained in the section above. However, it is possible to configure TLS encryption for the Airflow Webserver directly by setting the web.tls.enabled parameter to true.

It is necessary to create a secret containing the TLS certificates and pass it to the chart via the web.tls.existingSecret parameter. The secret should contain the tls.crt and tls.key keys, holding the certificate and key files respectively. For example:

kubectl create secret generic web-tls-secret --from-file=./tls.crt --from-file=./tls.key

You can manually create the required TLS certificates or rely on the chart's auto-generation capabilities. The chart supports two different ways to auto-generate the required certificates:

  • Using Helm capabilities. Enable this feature by setting web.tls.autoGenerated.enabled to true and web.tls.autoGenerated.engine to helm.
  • Relying on CertManager (note that CertManager must be installed in your K8s cluster). Enable this feature by setting web.tls.autoGenerated.enabled to true and web.tls.autoGenerated.engine to cert-manager. You can also use an existing Issuer/ClusterIssuer to issue the TLS certificates by setting the web.tls.autoGenerated.certManager.existingIssuer and web.tls.autoGenerated.certManager.existingIssuerKind parameters, as shown in the example after this list.
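
For example, to rely on cert-manager with a hypothetical existing ClusterIssuer named my-issuer:

web.tls.enabled=true
web.tls.autoGenerated.enabled=true
web.tls.autoGenerated.engine=cert-manager
web.tls.autoGenerated.certManager.existingIssuer=my-issuer
web.tls.autoGenerated.certManager.existingIssuerKind=ClusterIssuer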

Sidecars

If additional containers are needed in the same pod as Apache Airflow (such as additional metrics or logging exporters), they can be defined using the sidecars parameter.

sidecars:
- name: your-image-name
  image: your-image
  imagePullPolicy: Always
  ports:
  - name: portname
    containerPort: 1234

If these sidecars export extra ports, extra port definitions can be added using the service.extraPorts parameter (where available), as shown in the example below:

service:
  extraPorts:
  - name: extraPort
    port: 11311
    targetPort: 11311

If additional init containers are needed in the same pod, they can be defined using the initContainers parameter. Here is an example:

initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Learn more about sidecar containers and init containers.

Setting Pod's affinity

This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod's affinity in the kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
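
For example, to spread Airflow webserver replicas across different nodes using a hard anti-affinity preset:

web.podAntiAffinityPreset=hard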

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Persistence

The Bitnami Airflow chart relies on the PostgreSQL chart for persistence. This means that Airflow itself does not persist anything.

Parameters

Global parameters

Name Description Value
global.imageRegistry Global Docker image registry ""
global.imagePullSecrets Global Docker registry secret names as an array []
global.defaultStorageClass Global default StorageClass for Persistent Volume(s) ""
global.security.allowInsecureImages Allows skipping image verification false
global.compatibility.openshift.adaptSecurityContext Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) auto
global.compatibility.omitEmptySeLinuxOptions If set to true, removes the seLinuxOptions from the securityContexts when it is set to an empty object false

Common parameters

Name Description Value
kubeVersion Override Kubernetes version ""
apiVersions Override Kubernetes API versions reported by .Capabilities []
nameOverride String to partially override common.names.name ""
fullnameOverride String to fully override common.names.fullname ""
namespaceOverride String to fully override common.names.namespace ""
commonLabels Labels to add to all deployed objects {}
commonAnnotations Annotations to add to all deployed objects {}
clusterDomain Kubernetes cluster domain name cluster.local
extraDeploy Array of extra objects to deploy with the release []
usePasswordFiles Mount credentials as files instead of using environment variables true
diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden) false
diagnosticMode.command Command to override all containers in the chart release ["sleep"]
diagnosticMode.args Args to override all containers in the chart release ["infinity"]

Airflow common parameters

Name Description Value
image.registry Airflow image registry REGISTRY_NAME
image.repository Airflow image repository REPOSITORY_NAME/airflow
image.digest Airflow image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag ""
image.pullPolicy Airflow image pull policy IfNotPresent
image.pullSecrets Airflow image pull secrets []
image.debug Enable image debug mode false
auth.username Username to access web UI user
auth.password Password to access web UI ""
auth.fernetKey Fernet key to secure connections ""
auth.secretKey Secret key to run your flask app ""
auth.jwtSecretKey JWT secret key to run your flask app ""
auth.existingSecret Name of an existing secret to use for Airflow credentials ""
executor Airflow executor. Allowed values: LocalExecutor, CeleryExecutor, KubernetesExecutor, SequentialExecutor (Airflow 2.x only), CeleryKubernetesExecutor (Airflow 2.x only), and LocalKubernetesExecutor (Airflow 2.x only) CeleryExecutor
loadExamples Switch to load some Airflow examples false
configuration Specify content for Airflow config file (auto-generated based on other parameters otherwise) {}
overrideConfiguration Airflow common configuration override. Values defined here take precedence over the ones defined at configuration {}
localSettings Specify content for Airflow local settings (airflow_local_settings.py) ""
existingConfigmap Name of an existing ConfigMap with the Airflow config file and, optionally, the local settings file ""
dags.enabled Enable loading DAGs from a ConfigMap or Git repositories false
dags.existingConfigmap Name of an existing ConfigMap with all the DAGs files you want to load in Airflow ""
dags.repositories Array of repositories from which to download DAG files []
dags.sshKey SSH Private key used to clone/sync DAGs from Git repositories (ignored if dags.existingSshKeySecret is set) ""
dags.existingSshKeySecret Name of a secret containing the SSH private key used to clone/sync DAGs from Git repositories ""
dags.existingSshKeySecretKey Key in the existing secret containing the SSH private key ""
plugins.enabled Enable loading plugins from Git repositories false
plugins.repositories Array of repositories from which to download plugins []
plugins.sshKey SSH Private key used to clone/sync plugins from Git repositories (ignored if plugins.existingSshKeySecret is set) ""
plugins.existingSshKeySecret Name of a secret containing the SSH private key used to clone/sync plugins from Git repositories ""
plugins.existingSshKeySecretKey Key in the existing secret containing the SSH private key ""
defaultInitContainers.prepareConfig.containerSecurityContext.enabled Enabled "prepare-config" init-containers' Security Context true
defaultInitContainers.prepareConfig.containerSecurityContext.seLinuxOptions Set SELinux options in "prepare-config" init-containers {}
defaultInitContainers.prepareConfig.containerSecurityContext.runAsUser Set runAsUser in "prepare-config" init-containers' Security Context 1001
defaultInitContainers.prepareConfig.containerSecurityContext.runAsGroup Set runAsGroup in "prepare-config" init-containers' Security Context 1001
defaultInitContainers.prepareConfig.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "prepare-config" init-containers' Security Context true
defaultInitContainers.prepareConfig.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "prepare-config" init-containers' Security Context true
defaultInitContainers.prepareConfig.containerSecurityContext.privileged Set privileged in "prepare-config" init-containers' Security Context false
defaultInitContainers.prepareConfig.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "prepare-config" init-containers' Security Context false
defaultInitContainers.prepareConfig.containerSecurityContext.capabilities.add List of capabilities to be added in "prepare-config" init-containers []
defaultInitContainers.prepareConfig.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "prepare-config" init-containers ["ALL"]
defaultInitContainers.prepareConfig.containerSecurityContext.seccompProfile.type Set seccomp profile in "prepare-config" init-containers RuntimeDefault
defaultInitContainers.prepareConfig.resourcesPreset Set Airflow "prepare-config" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.prepareConfig.resources is set (defaultInitContainers.prepareConfig.resources is recommended for production). nano
defaultInitContainers.prepareConfig.resources Set Airflow "prepare-config" init container requests and limits for different resources like CPU or memory (essential for production workloads) {}
defaultInitContainers.waitForDBMigrations.containerSecurityContext.enabled Enabled "wait-for-db-migrations" init-containers' Security Context true
defaultInitContainers.waitForDBMigrations.containerSecurityContext.seLinuxOptions Set SELinux options in "wait-for-db-migrations" init-containers {}
defaultInitContainers.waitForDBMigrations.containerSecurityContext.runAsUser Set runAsUser in "wait-for-db-migrations" init-containers' Security Context 1001
defaultInitContainers.waitForDBMigrations.containerSecurityContext.runAsGroup Set runAsGroup in "wait-for-db-migrations" init-containers' Security Context 1001
defaultInitContainers.waitForDBMigrations.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "wait-for-db-migrations" init-containers' Security Context true
defaultInitContainers.waitForDBMigrations.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "wait-for-db-migrations" init-containers' Security Context true
defaultInitContainers.waitForDBMigrations.containerSecurityContext.privileged Set privileged in "wait-for-db-migrations" init-containers' Security Context false
defaultInitContainers.waitForDBMigrations.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "wait-for-db-migrations" init-containers' Security Context false
defaultInitContainers.waitForDBMigrations.containerSecurityContext.capabilities.add List of capabilities to be added in "wait-for-db-migrations" init-containers []
defaultInitContainers.waitForDBMigrations.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "wait-for-db-migrations" init-containers ["ALL"]
defaultInitContainers.waitForDBMigrations.containerSecurityContext.seccompProfile.type Set seccomp profile in "wait-for-db-migrations" init-containers RuntimeDefault
defaultInitContainers.waitForDBMigrations.resourcesPreset Set Airflow "wait-for-db-migrations" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.waitForDBMigrations.resources is set (defaultInitContainers.waitForDBMigrations.resources is recommended for production). micro
defaultInitContainers.waitForDBMigrations.resources Set Airflow "wait-for-db-migrations" init container requests and limits for different resources like CPU or memory (essential for production workloads) {}
defaultInitContainers.loadDAGsPlugins.command Override cmd []
defaultInitContainers.loadDAGsPlugins.args Override args []
defaultInitContainers.loadDAGsPlugins.extraVolumeMounts Add extra volume mounts []
defaultInitContainers.loadDAGsPlugins.extraEnvVars Add extra environment variables []
defaultInitContainers.loadDAGsPlugins.extraEnvVarsCM ConfigMap with extra environment variables ""
defaultInitContainers.loadDAGsPlugins.extraEnvVarsSecret Secret with extra environment variables ""
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.enabled Enabled "load-dags-plugins" init-containers' Security Context true
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.seLinuxOptions Set SELinux options in "load-dags-plugins" init-containers {}
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.runAsUser Set runAsUser in "load-dags-plugins" init-containers' Security Context 1001
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.runAsGroup Set runAsGroup in "load-dags-plugins" init-containers' Security Context 1001
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "load-dags-plugins" init-containers' Security Context true
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "load-dags-plugins" init-containers' Security Context true
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.privileged Set privileged in "load-dags-plugins" init-containers' Security Context false
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "load-dags-plugins" init-containers' Security Context false
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.capabilities.add List of capabilities to be added in "load-dags-plugins" init-containers []
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "load-dags-plugins" init-containers ["ALL"]
defaultInitContainers.loadDAGsPlugins.containerSecurityContext.seccompProfile.type Set seccomp profile in "load-dags-plugins" init-containers RuntimeDefault
defaultInitContainers.loadDAGsPlugins.resourcesPreset Set Airflow "load-dags-plugins" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.loadDAGsPlugins.resources is set (defaultInitContainers.loadDAGsPlugins.resources is recommended for production). nano
defaultInitContainers.loadDAGsPlugins.resources Set Airflow "load-dags-plugins" init container requests and limits for different resources like CPU or memory (essential for production workloads) {}
defaultSidecars.syncDAGsPlugins.interval Interval in seconds to pull the git repository containing the DAGs and/or plugins 60
defaultSidecars.syncDAGsPlugins.command Override cmd []
defaultSidecars.syncDAGsPlugins.args Override args []
defaultSidecars.syncDAGsPlugins.extraVolumeMounts Add extra volume mounts []
defaultSidecars.syncDAGsPlugins.extraEnvVars Add extra environment variables []
defaultSidecars.syncDAGsPlugins.extraEnvVarsCM ConfigMap with extra environment variables ""
defaultSidecars.syncDAGsPlugins.extraEnvVarsSecret Secret with extra environment variables ""
defaultSidecars.syncDAGsPlugins.containerSecurityContext.enabled Enabled "sync-dags-plugins" sidecars' Security Context true
defaultSidecars.syncDAGsPlugins.containerSecurityContext.seLinuxOptions Set SELinux options in "sync-dags-plugins" sidecars {}
defaultSidecars.syncDAGsPlugins.containerSecurityContext.runAsUser Set runAsUser in "sync-dags-plugins" sidecars' Security Context 1001
defaultSidecars.syncDAGsPlugins.containerSecurityContext.runAsGroup Set runAsGroup in "sync-dags-plugins" sidecars' Security Context 1001
defaultSidecars.syncDAGsPlugins.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "sync-dags-plugins" sidecars' Security Context true
defaultSidecars.syncDAGsPlugins.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "sync-dags-plugins" sidecars' Security Context true
defaultSidecars.syncDAGsPlugins.containerSecurityContext.privileged Set privileged in "sync-dags-plugins" sidecars' Security Context false
defaultSidecars.syncDAGsPlugins.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "sync-dags-plugins" sidecars' Security Context false
defaultSidecars.syncDAGsPlugins.containerSecurityContext.capabilities.add List of capabilities to be added in "sync-dags-plugins" sidecars []
defaultSidecars.syncDAGsPlugins.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "sync-dags-plugins" sidecars ["ALL"]
defaultSidecars.syncDAGsPlugins.containerSecurityContext.seccompProfile.type Set seccomp profile in "sync-dags-plugins" sidecars RuntimeDefault
defaultSidecars.syncDAGsPlugins.resourcesPreset Set Airflow "sync-dags-plugins" sidecar resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultSidecars.syncDAGsPlugins.resources is set (defaultSidecars.syncDAGsPlugins.resources is recommended for production). nano
defaultSidecars.syncDAGsPlugins.resources Set Airflow "sync-dags-plugins" sidecar requests and limits for different resources like CPU or memory (essential for production workloads) {}
extraEnvVars Add extra environment variables for all the Airflow pods []
extraEnvVarsCM ConfigMap with extra environment variables for all the Airflow pods ""
extraEnvVarsSecret Secret with extra environment variables for all the Airflow pods ""
extraEnvVarsSecrets List of secrets with extra environment variables for all the Airflow pods []
sidecars Add additional sidecar containers to all the Airflow pods []
initContainers Add additional init containers to all the Airflow pods []
extraVolumeMounts Optionally specify extra list of additional volumeMounts for all the Airflow pods []
extraVolumes Optionally specify extra list of additional volumes for all the Airflow pods []

Airflow webserver parameters

Name Description Value
web.baseUrl URL used to access the Airflow webserver ""
web.configuration Specify content for webserver_config.py (auto-generated based on other env. vars otherwise) ""
web.extraConfiguration Specify extra content to be appended to default webserver_config.py (ignored if web.configuration or web.existingConfigmap are set) ""
web.existingConfigmap Name of an existing config map containing the Airflow webserver config file ""
web.tls.enabled Enable TLS configuration for Airflow webserver false
web.tls.autoGenerated.enabled Enable automatic generation of TLS certificates true
web.tls.autoGenerated.engine Mechanism to generate the certificates (allowed values: helm, cert-manager) helm
web.tls.autoGenerated.certManager.existingIssuer The name of an existing Issuer to use for generating the certificates (only for cert-manager engine) ""
web.tls.autoGenerated.certManager.existingIssuerKind Existing Issuer kind, defaults to Issuer (only for cert-manager engine) ""
web.tls.autoGenerated.certManager.keyAlgorithm Key algorithm for the certificates (only for cert-manager engine) RSA
web.tls.autoGenerated.certManager.keySize Key size for the certificates (only for cert-manager engine) 2048
web.tls.autoGenerated.certManager.duration Duration for the certificates (only for cert-manager engine) 2160h
web.tls.autoGenerated.certManager.renewBefore Renewal period for the certificates (only for cert-manager engine) 360h
web.tls.ca CA certificate for TLS. Ignored if web.tls.existingSecret is set ""
web.tls.cert TLS certificate for Airflow webserver. Ignored if web.tls.existingSecret is set ""
web.tls.key TLS key for Airflow webserver. Ignored if web.tls.existingSecret is set ""
web.tls.existingSecret The name of an existing Secret containing the Airflow webserver certificates for TLS ""
web.command Override default container command (useful when using custom images) []
web.args Override default container args (useful when using custom images) []
web.extraEnvVars Array with extra environment variables to add to Airflow webserver pods []
web.extraEnvVarsCM ConfigMap containing extra environment variables for Airflow webserver pods ""
web.extraEnvVarsSecret Secret containing extra environment variables (in case of sensitive data) for Airflow webserver pods ""
web.extraEnvVarsSecrets List of secrets with extra environment variables for Airflow webserver pods []
web.containerPorts.http Airflow webserver HTTP container port 8080
web.replicaCount Number of Airflow webserver replicas 1
web.livenessProbe.enabled Enable livenessProbe on Airflow webserver containers true
web.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 180
web.livenessProbe.periodSeconds Period seconds for livenessProbe 20
web.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
web.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
web.livenessProbe.successThreshold Success threshold for livenessProbe 1
web.readinessProbe.enabled Enable readinessProbe on Airflow webserver containers true
web.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
web.readinessProbe.periodSeconds Period seconds for readinessProbe 10
web.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
web.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
web.readinessProbe.successThreshold Success threshold for readinessProbe 1
web.startupProbe.enabled Enable startupProbe on Airflow webserver containers false
web.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 60
web.startupProbe.periodSeconds Period seconds for startupProbe 10
web.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1
web.startupProbe.failureThreshold Failure threshold for startupProbe 15
web.startupProbe.successThreshold Success threshold for startupProbe 1
web.customLivenessProbe Custom livenessProbe that overrides the default one {}
web.customReadinessProbe Custom readinessProbe that overrides the default one {}
web.customStartupProbe Custom startupProbe that overrides the default one {}
web.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if web.resources is set (web.resources is recommended for production). medium
web.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
web.podSecurityContext.enabled Enabled Airflow webserver pods' Security Context true
web.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
web.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
web.podSecurityContext.supplementalGroups Set filesystem extra groups []
web.podSecurityContext.fsGroup Set Airflow webserver pod's Security Context fsGroup 1001
web.containerSecurityContext.enabled Enabled Airflow webserver containers' Security Context true
web.containerSecurityContext.seLinuxOptions Set SELinux options in container {}
web.containerSecurityContext.runAsUser Set Airflow webserver containers' Security Context runAsUser 1001
web.containerSecurityContext.runAsGroup Set Airflow webserver containers' Security Context runAsGroup 1001
web.containerSecurityContext.runAsNonRoot Set Airflow webserver containers' Security Context runAsNonRoot true
web.containerSecurityContext.privileged Set web container's Security Context privileged false
web.containerSecurityContext.allowPrivilegeEscalation Set web container's Security Context allowPrivilegeEscalation false
web.containerSecurityContext.readOnlyRootFilesystem Set web container's Security Context readOnlyRootFilesystem true
web.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
web.containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile RuntimeDefault
web.lifecycleHooks for the Airflow webserver container(s) to automate configuration before or after startup {}
web.automountServiceAccountToken Mount Service Account token in pod false
web.hostAliases Deployment pod host aliases []
web.podLabels Add extra labels to the Airflow webserver pods {}
web.podAnnotations Add extra annotations to the Airflow webserver pods {}
web.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
web.affinity Affinity for Airflow webserver pods assignment (evaluated as a template) {}
web.nodeAffinityPreset.key Node label key to match. Ignored if web.affinity is set. ""
web.nodeAffinityPreset.type Node affinity preset type. Ignored if web.affinity is set. Allowed values: soft or hard ""
web.nodeAffinityPreset.values Node label values to match. Ignored if web.affinity is set. []
web.nodeSelector Node labels for Airflow webserver pods assignment {}
web.podAffinityPreset Pod affinity preset. Ignored if web.affinity is set. Allowed values: soft or hard. ""
web.podAntiAffinityPreset Pod anti-affinity preset. Ignored if web.affinity is set. Allowed values: soft or hard. soft
web.tolerations Tolerations for Airflow webserver pods assignment []
web.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
web.priorityClassName Priority Class Name ""
web.schedulerName Use an alternate scheduler, e.g. "stork". ""
web.terminationGracePeriodSeconds Seconds Airflow webserver pod needs to terminate gracefully ""
web.updateStrategy.type Airflow webserver deployment strategy type RollingUpdate
web.updateStrategy.rollingUpdate Airflow webserver deployment rolling update configuration parameters {}
web.sidecars Add additional sidecar containers to the Airflow webserver pods []
web.initContainers Add additional init containers to the Airflow webserver pods []
web.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Airflow webserver pods []
web.extraVolumes Optionally specify extra list of additional volumes for the Airflow webserver pods []
web.pdb.create Deploy a pdb object for the Airflow webserver pods true
web.pdb.minAvailable Minimum number/percentage of available Airflow webserver replicas ""
web.pdb.maxUnavailable Maximum number/percentage of unavailable Airflow webserver replicas ""
web.autoscaling.vpa.enabled Enable VPA for Airflow webserver false
web.autoscaling.vpa.annotations Annotations for VPA resource {}
web.autoscaling.vpa.controlledResources List of resources that the VPA can control. Defaults to cpu and memory []
web.autoscaling.vpa.maxAllowed VPA max allowed resources for the pod {}
web.autoscaling.vpa.minAllowed VPA min allowed resources for the pod {}
web.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
web.autoscaling.hpa.enabled Enable HPA for Airflow webserver false
web.autoscaling.hpa.minReplicas Minimum number of replicas ""
web.autoscaling.hpa.maxReplicas Maximum number of replicas ""
web.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
web.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
web.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
web.networkPolicy.allowExternal Don't require client label for connections true
web.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
web.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
web.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
web.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
web.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}

Airflow scheduler parameters

Name Description Value
scheduler.replicaCount Number of scheduler replicas 1
scheduler.command Override cmd []
scheduler.args Override args []
scheduler.extraEnvVars Add extra environment variables []
scheduler.extraEnvVarsCM ConfigMap with extra environment variables ""
scheduler.extraEnvVarsSecret Secret with extra environment variables ""
scheduler.extraEnvVarsSecrets List of secrets with extra environment variables for Airflow scheduler pods []
scheduler.livenessProbe.enabled Enable livenessProbe on Airflow scheduler containers true
scheduler.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 180
scheduler.livenessProbe.periodSeconds Period seconds for livenessProbe 20
scheduler.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 15
scheduler.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
scheduler.livenessProbe.successThreshold Success threshold for livenessProbe 1
scheduler.readinessProbe.enabled Enable readinessProbe on Airflow scheduler containers true
scheduler.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
scheduler.readinessProbe.periodSeconds Period seconds for readinessProbe 10
scheduler.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 15
scheduler.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
scheduler.readinessProbe.successThreshold Success threshold for readinessProbe 1
scheduler.startupProbe.enabled Enable startupProbe on Airflow scheduler containers false
scheduler.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 60
scheduler.startupProbe.periodSeconds Period seconds for startupProbe 10
scheduler.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1
scheduler.startupProbe.failureThreshold Failure threshold for startupProbe 15
scheduler.startupProbe.successThreshold Success threshold for startupProbe 1
scheduler.customLivenessProbe Custom livenessProbe that overrides the default one {}
scheduler.customReadinessProbe Custom readinessProbe that overrides the default one {}
scheduler.customStartupProbe Custom startupProbe that overrides the default one {}
scheduler.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if scheduler.resources is set (scheduler.resources is recommended for production). small
scheduler.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
scheduler.podSecurityContext.enabled Enabled Airflow scheduler pods' Security Context true
scheduler.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
scheduler.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
scheduler.podSecurityContext.supplementalGroups Set filesystem extra groups []
scheduler.podSecurityContext.fsGroup Set Airflow scheduler pod's Security Context fsGroup 1001
scheduler.containerSecurityContext.enabled Enabled Airflow scheduler containers' Security Context true
scheduler.containerSecurityContext.seLinuxOptions Set SELinux options in container {}
scheduler.containerSecurityContext.runAsUser Set Airflow scheduler containers' Security Context runAsUser 1001
scheduler.containerSecurityContext.runAsGroup Set Airflow scheduler containers' Security Context runAsGroup 1001
scheduler.containerSecurityContext.runAsNonRoot Set Airflow scheduler containers' Security Context runAsNonRoot true
scheduler.containerSecurityContext.privileged Set scheduler container's Security Context privileged false
scheduler.containerSecurityContext.allowPrivilegeEscalation Set scheduler container's Security Context allowPrivilegeEscalation false
scheduler.containerSecurityContext.readOnlyRootFilesystem Set scheduler container's Security Context readOnlyRootFilesystem true
scheduler.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
scheduler.containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile RuntimeDefault
scheduler.lifecycleHooks for the Airflow scheduler container(s) to automate configuration before or after startup {}
scheduler.automountServiceAccountToken Mount Service Account token in pod false
scheduler.hostAliases Deployment pod host aliases []
scheduler.podLabels Add extra labels to the Airflow scheduler pods {}
scheduler.podAnnotations Add extra annotations to the Airflow scheduler pods {}
scheduler.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
scheduler.affinity Affinity for Airflow scheduler pods assignment (evaluated as a template) {}
scheduler.nodeAffinityPreset.key Node label key to match. Ignored if scheduler.affinity is set. ""
scheduler.nodeAffinityPreset.type Node affinity preset type. Ignored if scheduler.affinity is set. Allowed values: soft or hard ""
scheduler.nodeAffinityPreset.values Node label values to match. Ignored if scheduler.affinity is set. []
scheduler.nodeSelector Node labels for Airflow scheduler pods assignment {}
scheduler.podAffinityPreset Pod affinity preset. Ignored if scheduler.affinity is set. Allowed values: soft or hard. ""
scheduler.podAntiAffinityPreset Pod anti-affinity preset. Ignored if scheduler.affinity is set. Allowed values: soft or hard. soft
scheduler.tolerations Tolerations for Airflow scheduler pods assignment []
scheduler.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
scheduler.priorityClassName Priority Class Name ""
scheduler.schedulerName Use an alternate scheduler, e.g. "stork". ""
scheduler.terminationGracePeriodSeconds Seconds Airflow scheduler pod needs to terminate gracefully ""
scheduler.updateStrategy.type Airflow scheduler deployment strategy type RollingUpdate
scheduler.updateStrategy.rollingUpdate Airflow scheduler deployment rolling update configuration parameters {}
scheduler.sidecars Add additional sidecar containers to the Airflow scheduler pods []
scheduler.initContainers Add additional init containers to the Airflow scheduler pods []
scheduler.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Airflow scheduler pods []
scheduler.extraVolumes Optionally specify extra list of additional volumes for the Airflow scheduler pods []
scheduler.pdb.create Deploy a pdb object for the Airflow scheduler pods true
scheduler.pdb.minAvailable Minimum number/percentage of available Airflow scheduler replicas ""
scheduler.pdb.maxUnavailable Maximum number/percentage of unavailable Airflow scheduler replicas ""
scheduler.autoscaling.vpa.enabled Enable VPA for Airflow scheduler false
scheduler.autoscaling.vpa.annotations Annotations for VPA resource {}
scheduler.autoscaling.vpa.controlledResources List of resources that the VPA can control. Defaults to cpu and memory []
scheduler.autoscaling.vpa.maxAllowed VPA max allowed resources for the pod {}
scheduler.autoscaling.vpa.minAllowed VPA min allowed resources for the pod {}
scheduler.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
scheduler.autoscaling.hpa.enabled Enable HPA for Airflow scheduler false
scheduler.autoscaling.hpa.minReplicas Minimum number of replicas ""
scheduler.autoscaling.hpa.maxReplicas Maximum number of replicas ""
scheduler.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
scheduler.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
scheduler.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
scheduler.networkPolicy.allowExternal Don't require client label for connections true
scheduler.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
scheduler.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
scheduler.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
scheduler.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
scheduler.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}

Airflow Dag Processor parameters

Name Description Value
dagProcessor.enabled Run Airflow Dag Processor Manager as a standalone component true
dagProcessor.replicaCount Number of Airflow Dag Processor replicas 1
dagProcessor.command Override default Airflow Dag Processor cmd []
dagProcessor.args Override default Airflow Dag Processor args []
dagProcessor.extraEnvVars Add extra environment variables to Airflow Dag Processor containers []
dagProcessor.extraEnvVarsCM ConfigMap with extra environment variables ""
dagProcessor.extraEnvVarsSecret Secret with extra environment variables ""
dagProcessor.livenessProbe.enabled Enable livenessProbe on Airflow Dag Processor containers true
dagProcessor.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 180
dagProcessor.livenessProbe.periodSeconds Period seconds for livenessProbe 20
dagProcessor.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 15
dagProcessor.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
dagProcessor.livenessProbe.successThreshold Success threshold for livenessProbe 1
dagProcessor.readinessProbe.enabled Enable readinessProbe on Airflow Dag Processor containers true
dagProcessor.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
dagProcessor.readinessProbe.periodSeconds Period seconds for readinessProbe 10
dagProcessor.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 15
dagProcessor.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
dagProcessor.readinessProbe.successThreshold Success threshold for readinessProbe 1
dagProcessor.startupProbe.enabled Enable startupProbe on Airflow Dag Processor containers false
dagProcessor.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 60
dagProcessor.startupProbe.periodSeconds Period seconds for startupProbe 10
dagProcessor.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1
dagProcessor.startupProbe.failureThreshold Failure threshold for startupProbe 15
dagProcessor.startupProbe.successThreshold Success threshold for startupProbe 1
dagProcessor.customLivenessProbe Custom livenessProbe that overrides the default one {}
dagProcessor.customReadinessProbe Custom readinessProbe that overrides the default one {}
dagProcessor.customStartupProbe Custom startupProbe that overrides the default one {}
dagProcessor.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if dagProcessor.resources is set (dagProcessor.resources is recommended for production). small
dagProcessor.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
dagProcessor.podSecurityContext.enabled Enabled Airflow Dag Processor pods' Security Context true
dagProcessor.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
dagProcessor.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
dagProcessor.podSecurityContext.supplementalGroups Set filesystem extra groups []
dagProcessor.podSecurityContext.fsGroup Set Airflow Dag Processor pod's Security Context fsGroup 1001
dagProcessor.containerSecurityContext.enabled Enabled Airflow Dag Processor containers' Security Context true
dagProcessor.containerSecurityContext.seLinuxOptions Set SELinux options in container {}
dagProcessor.containerSecurityContext.runAsUser Set Airflow Dag Processor containers' Security Context runAsUser 1001
dagProcessor.containerSecurityContext.runAsGroup Set Airflow Dag Processor containers' Security Context runAsGroup 1001
dagProcessor.containerSecurityContext.runAsNonRoot Set Airflow Dag Processor containers' Security Context runAsNonRoot true
dagProcessor.containerSecurityContext.privileged Set Airflow Dag Processor container's Security Context privileged false
dagProcessor.containerSecurityContext.allowPrivilegeEscalation Set Airflow Dag Processor container's Security Context allowPrivilegeEscalation false
dagProcessor.containerSecurityContext.readOnlyRootFilesystem Set Airflow Dag Processor container's Security Context readOnlyRootFilesystem true
dagProcessor.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
dagProcessor.containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile RuntimeDefault
dagProcessor.lifecycleHooks for the Airflow Dag Processor containers to automate configuration before or after startup {}
dagProcessor.automountServiceAccountToken Mount Service Account token in pod false
dagProcessor.hostAliases Deployment pod host aliases []
dagProcessor.podLabels Add extra labels to the Airflow Dag Processor pods {}
dagProcessor.podAnnotations Add extra annotations to the Airflow Dag Processor pods {}
dagProcessor.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
dagProcessor.affinity Affinity for Airflow Dag Processor pods assignment (evaluated as a template) {}
dagProcessor.nodeAffinityPreset.key Node label key to match. Ignored if dagProcessor.affinity is set. ""
dagProcessor.nodeAffinityPreset.type Node affinity preset type. Ignored if dagProcessor.affinity is set. Allowed values: soft or hard ""
dagProcessor.nodeAffinityPreset.values Node label values to match. Ignored if dagProcessor.affinity is set. []
dagProcessor.nodeSelector Node labels for Airflow Dag Processor pods assignment {}
dagProcessor.podAffinityPreset Pod affinity preset. Ignored if dagProcessor.affinity is set. Allowed values: soft or hard. ""
dagProcessor.podAntiAffinityPreset Pod anti-affinity preset. Ignored if dagProcessor.affinity is set. Allowed values: soft or hard. soft
dagProcessor.tolerations Tolerations for Airflow Dag Processor pods assignment []
dagProcessor.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
dagProcessor.priorityClassName Priority Class Name ""
dagProcessor.schedulerName Use an alternate K8s scheduler, e.g. "stork". ""
dagProcessor.terminationGracePeriodSeconds Seconds Airflow Dag Processor pod needs to terminate gracefully ""
dagProcessor.updateStrategy.type Airflow Dag Processor deployment strategy type RollingUpdate
dagProcessor.updateStrategy.rollingUpdate Airflow Dag Processor deployment rolling update configuration parameters {}
dagProcessor.sidecars Add additional sidecar containers to the Airflow Dag Processor pods []
dagProcessor.initContainers Add additional init containers to the Airflow Dag Processor pods []
dagProcessor.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Airflow Dag Processor containers []
dagProcessor.extraVolumes Optionally specify extra list of additional volumes for the Airflow Dag Processor pods []
dagProcessor.pdb.create Deploy a pdb object for the Airflow Dag Processor pods true
dagProcessor.pdb.minAvailable Minimum number/percentage of available Airflow Dag Processor replicas ""
dagProcessor.pdb.maxUnavailable Maximum number/percentage of unavailable Airflow Dag Processor replicas ""
dagProcessor.autoscaling.vpa.enabled Enable VPA for Airflow Dag Processor false
dagProcessor.autoscaling.vpa.annotations Annotations for VPA resource {}
dagProcessor.autoscaling.vpa.controlledResources List of resources that the VPA can control. Defaults to cpu and memory []
dagProcessor.autoscaling.vpa.maxAllowed VPA max allowed resources for the pod {}
dagProcessor.autoscaling.vpa.minAllowed VPA min allowed resources for the pod {}
dagProcessor.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
dagProcessor.autoscaling.hpa.enabled Enable HPA for Airflow Dag Processor false
dagProcessor.autoscaling.hpa.minReplicas Minimum number of replicas ""
dagProcessor.autoscaling.hpa.maxReplicas Maximum number of replicas ""
dagProcessor.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
dagProcessor.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
dagProcessor.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
dagProcessor.networkPolicy.allowExternal Don't require client label for connections true
dagProcessor.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
dagProcessor.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
dagProcessor.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
dagProcessor.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
dagProcessor.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
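
For illustration, the values excerpt below (a sketch; the numbers are placeholders, not recommendations) enables the Dag Processor HPA and sets explicit resources instead of a preset:

dagProcessor:
  resourcesPreset: none   # ignored once resources is set
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
  autoscaling:
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 80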

Airflow Triggerer parameters

Name Description Value
triggerer.enabled Run Airflow Triggerer as a standalone component true
triggerer.defaultCapacity How many triggers a single Triggerer can run at once 1000
triggerer.replicaCount Number of Airflow Triggerer replicas 1
triggerer.command Override default Airflow Triggerer cmd []
triggerer.args Override default Airflow Triggerer args []
triggerer.extraEnvVars Add extra environment variables to Airflow Triggerer containers []
triggerer.extraEnvVarsCM ConfigMap with extra environment variables ""
triggerer.extraEnvVarsSecret Secret with extra environment variables ""
triggerer.containerPorts.logs Airflow Triggerer logs container port 8794
triggerer.livenessProbe.enabled Enable livenessProbe on Airflow Triggerer containers true
triggerer.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 180
triggerer.livenessProbe.periodSeconds Period seconds for livenessProbe 20
triggerer.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 15
triggerer.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
triggerer.livenessProbe.successThreshold Success threshold for livenessProbe 1
triggerer.readinessProbe.enabled Enable readinessProbe on Airflow Triggerer containers true
triggerer.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
triggerer.readinessProbe.periodSeconds Period seconds for readinessProbe 10
triggerer.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 15
triggerer.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
triggerer.readinessProbe.successThreshold Success threshold for readinessProbe 1
triggerer.startupProbe.enabled Enable startupProbe on Airflow Triggerer containers false
triggerer.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 60
triggerer.startupProbe.periodSeconds Period seconds for startupProbe 10
triggerer.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1
triggerer.startupProbe.failureThreshold Failure threshold for startupProbe 15
triggerer.startupProbe.successThreshold Success threshold for startupProbe 1
triggerer.customLivenessProbe Custom livenessProbe that overrides the default one {}
triggerer.customReadinessProbe Custom readinessProbe that overrides the default one {}
triggerer.customStartupProbe Custom startupProbe that overrides the default one {}
triggerer.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if triggerer.resources is set (triggerer.resources is recommended for production). small
triggerer.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
triggerer.podSecurityContext.enabled Enable Airflow Triggerer pods' Security Context true
triggerer.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
triggerer.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
triggerer.podSecurityContext.supplementalGroups Set filesystem extra groups []
triggerer.podSecurityContext.fsGroup Set Airflow Triggerer pod's Security Context fsGroup 1001
triggerer.containerSecurityContext.enabled Enable Airflow Triggerer containers' Security Context true
triggerer.containerSecurityContext.seLinuxOptions Set SELinux options in container {}
triggerer.containerSecurityContext.runAsUser Set Airflow Triggerer containers' Security Context runAsUser 1001
triggerer.containerSecurityContext.runAsGroup Set Airflow Triggerer containers' Security Context runAsGroup 1001
triggerer.containerSecurityContext.runAsNonRoot Set Airflow Triggerer containers' Security Context runAsNonRoot true
triggerer.containerSecurityContext.privileged Set Airflow Triggerer container's Security Context privileged false
triggerer.containerSecurityContext.allowPrivilegeEscalation Set Airflow Triggerer container's Security Context allowPrivilegeEscalation false
triggerer.containerSecurityContext.readOnlyRootFilesystem Set Airflow Triggerer container's Security Context readOnlyRootFilesystem true
triggerer.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
triggerer.containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile RuntimeDefault
triggerer.lifecycleHooks Lifecycle hooks for the Airflow Triggerer containers to automate configuration before or after startup {}
triggerer.automountServiceAccountToken Mount Service Account token in pod false
triggerer.hostAliases Deployment pod host aliases []
triggerer.podLabels Add extra labels to the Airflow Triggerer pods {}
triggerer.podAnnotations Add extra annotations to the Airflow Triggerer pods {}
triggerer.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
triggerer.affinity Affinity for Airflow Triggerer pods assignment (evaluated as a template) {}
triggerer.nodeAffinityPreset.key Node label key to match. Ignored if triggerer.affinity is set. ""
triggerer.nodeAffinityPreset.type Node affinity preset type. Ignored if triggerer.affinity is set. Allowed values: soft or hard ""
triggerer.nodeAffinityPreset.values Node label values to match. Ignored if triggerer.affinity is set. []
triggerer.nodeSelector Node labels for Airflow Triggerer pods assignment {}
triggerer.podAffinityPreset Pod affinity preset. Ignored if triggerer.affinity is set. Allowed values: soft or hard. ""
triggerer.podAntiAffinityPreset Pod anti-affinity preset. Ignored if triggerer.affinity is set. Allowed values: soft or hard. soft
triggerer.tolerations Tolerations for Airflow Triggerer pods assignment []
triggerer.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
triggerer.priorityClassName Priority Class Name ""
triggerer.schedulerName Use an alternate K8s scheduler, e.g. "stork". ""
triggerer.terminationGracePeriodSeconds Seconds Airflow Triggerer pod needs to terminate gracefully ""
triggerer.podManagementPolicy Pod management policy for the Airflow Triggerer statefulset OrderedReady
triggerer.updateStrategy.type Airflow Triggerer statefulset strategy type RollingUpdate
triggerer.updateStrategy.rollingUpdate Airflow Triggerer statefulset rolling update configuration parameters {}
triggerer.sidecars Add additional sidecar containers to the Airflow Triggerer pods []
triggerer.initContainers Add additional init containers to the Airflow Triggerer pods []
triggerer.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Airflow Triggerer containers []
triggerer.extraVolumes Optionally specify extra list of additional volumes for the Airflow Triggerer pods []
triggerer.pdb.create Deploy a pdb object for the Airflow Triggerer pods true
triggerer.pdb.minAvailable Minimum number/percentage of available Airflow Triggerer replicas ""
triggerer.pdb.maxUnavailable Maximum number/percentage of unavailable Airflow Triggerer replicas ""
triggerer.autoscaling.vpa.enabled Enable VPA for Airflow Triggerer false
triggerer.autoscaling.vpa.annotations Annotations for VPA resource {}
triggerer.autoscaling.vpa.controlledResources List of resources that the VPA can control. Defaults to cpu and memory []
triggerer.autoscaling.vpa.maxAllowed VPA max allowed resources for the pod {}
triggerer.autoscaling.vpa.minAllowed VPA min allowed resources for the pod {}
triggerer.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
triggerer.autoscaling.hpa.enabled Enable HPA false
triggerer.autoscaling.hpa.minReplicas Minimum number of replicas ""
triggerer.autoscaling.hpa.maxReplicas Maximum number of replicas ""
triggerer.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
triggerer.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
triggerer.persistence.enabled Enable logs persistence using Persistent Volume Claims true
triggerer.persistence.storageClass Storage class of backing PVC ""
triggerer.persistence.annotations Additional Persistent Volume Claim annotations {}
triggerer.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
triggerer.persistence.size Size of logs volume 8Gi
triggerer.persistence.selector Selector to match an existing Persistent Volume for the Airflow Triggerer logs PVC {}
triggerer.persistence.dataSource Custom PVC data source {}
triggerer.persistence.existingClaim The name of an existing PVC to use for persistence (only if triggerer.replicaCount=1) ""
triggerer.persistentVolumeClaimRetentionPolicy.enabled Controls if and how PVCs are deleted during the lifecycle of a StatefulSet false
triggerer.persistentVolumeClaimRetentionPolicy.whenScaled Volume retention behavior when the replica count of the StatefulSet is reduced Retain
triggerer.persistentVolumeClaimRetentionPolicy.whenDeleted Volume retention behavior that applies when the StatefulSet is deleted Retain
triggerer.service.type Airflow Triggerer service type ClusterIP
triggerer.service.ports.logs Airflow Triggerer service logs port 8794
triggerer.service.nodePorts.logs Node port for Airflow Triggerer service logs ""
triggerer.service.clusterIP Airflow Triggerer service Cluster IP ""
triggerer.service.loadBalancerIP Airflow Triggerer service Load Balancer IP ""
triggerer.service.loadBalancerSourceRanges Airflow Triggerer service Load Balancer sources []
triggerer.service.externalTrafficPolicy Airflow Triggerer service external traffic policy Cluster
triggerer.service.annotations Additional custom annotations for Airflow Triggerer service {}
triggerer.service.extraPorts Extra ports to expose in Airflow Triggerer service (normally used with the triggerer.sidecars value) []
triggerer.service.sessionAffinity Control where client requests go, to the same pod or round-robin None
triggerer.service.sessionAffinityConfig Additional settings for the sessionAffinity {}
triggerer.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
triggerer.networkPolicy.allowExternal Don't require client label for connections true
triggerer.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
triggerer.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
triggerer.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
triggerer.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
triggerer.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
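
For example, a minimal values sketch (the figures are placeholders) that runs two Triggerer replicas with a higher trigger capacity and a larger logs volume:

triggerer:
  replicaCount: 2
  defaultCapacity: 2000
  persistence:
    enabled: true
    size: 16Gi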

Airflow worker parameters

Name Description Value
worker.command Override default container command (useful when using custom images) []
worker.args Override default container args (useful when using custom images) []
worker.extraEnvVars Array with extra environment variables to add Airflow worker pods []
worker.extraEnvVarsCM ConfigMap containing extra environment variables for Airflow worker pods ""
worker.extraEnvVarsSecret Secret containing extra environment variables (in case of sensitive data) for Airflow worker pods ""
worker.extraEnvVarsSecrets List of secrets with extra environment variables for Airflow worker pods []
worker.containerPorts.http Airflow worker HTTP container port 8793
worker.replicaCount Number of Airflow worker replicas 1
worker.livenessProbe.enabled Enable livenessProbe on Airflow worker containers true
worker.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 180
worker.livenessProbe.periodSeconds Period seconds for livenessProbe 20
worker.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
worker.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
worker.livenessProbe.successThreshold Success threshold for livenessProbe 1
worker.readinessProbe.enabled Enable readinessProbe on Airflow worker containers true
worker.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
worker.readinessProbe.periodSeconds Period seconds for readinessProbe 10
worker.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
worker.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
worker.readinessProbe.successThreshold Success threshold for readinessProbe 1
worker.startupProbe.enabled Enable startupProbe on Airflow worker containers false
worker.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 60
worker.startupProbe.periodSeconds Period seconds for startupProbe 10
worker.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1
worker.startupProbe.failureThreshold Failure threshold for startupProbe 15
worker.startupProbe.successThreshold Success threshold for startupProbe 1
worker.customLivenessProbe Custom livenessProbe that overrides the default one {}
worker.customReadinessProbe Custom readinessProbe that overrides the default one {}
worker.customStartupProbe Custom startupProbe that overrides the default one {}
worker.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if worker.resources is set (worker.resources is recommended for production). large
worker.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
worker.podSecurityContext.enabled Enable Airflow worker pods' Security Context true
worker.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
worker.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
worker.podSecurityContext.supplementalGroups Set filesystem extra groups []
worker.podSecurityContext.fsGroup Set Airflow worker pod's Security Context fsGroup 1001
worker.containerSecurityContext.enabled Enable Airflow worker containers' Security Context true
worker.containerSecurityContext.seLinuxOptions Set SELinux options in container {}
worker.containerSecurityContext.runAsUser Set Airflow worker containers' Security Context runAsUser 1001
worker.containerSecurityContext.runAsGroup Set Airflow worker containers' Security Context runAsGroup 1001
worker.containerSecurityContext.runAsNonRoot Set Airflow worker containers' Security Context runAsNonRoot true
worker.containerSecurityContext.privileged Set worker container's Security Context privileged false
worker.containerSecurityContext.allowPrivilegeEscalation Set worker container's Security Context allowPrivilegeEscalation false
worker.containerSecurityContext.readOnlyRootFilesystem Set worker container's Security Context readOnlyRootFilesystem true
worker.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
worker.containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile RuntimeDefault
worker.lifecycleHooks Lifecycle hooks for the Airflow worker container(s) to automate configuration before or after startup {}
worker.automountServiceAccountToken Mount Service Account token in pod false
worker.hostAliases Deployment pod host aliases []
worker.podLabels Add extra labels to the Airflow worker pods {}
worker.podAnnotations Add extra annotations to the Airflow worker pods {}
worker.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
worker.affinity Affinity for Airflow worker pods assignment (evaluated as a template) {}
worker.nodeAffinityPreset.key Node label key to match. Ignored if worker.affinity is set. ""
worker.nodeAffinityPreset.type Node affinity preset type. Ignored if worker.affinity is set. Allowed values: soft or hard ""
worker.nodeAffinityPreset.values Node label values to match. Ignored if worker.affinity is set. []
worker.nodeSelector Node labels for Airflow worker pods assignment {}
worker.podAffinityPreset Pod affinity preset. Ignored if worker.affinity is set. Allowed values: soft or hard. ""
worker.podAntiAffinityPreset Pod anti-affinity preset. Ignored if worker.affinity is set. Allowed values: soft or hard. soft
worker.tolerations Tolerations for Airflow worker pods assignment []
worker.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
worker.priorityClassName Priority Class Name ""
worker.schedulerName Use an alternate scheduler, e.g. "stork". ""
worker.terminationGracePeriodSeconds Seconds Airflow worker pod needs to terminate gracefully ""
worker.podManagementPolicy Pod management policy for the worker statefulset OrderedReady
worker.updateStrategy.type Airflow worker statefulset strategy type RollingUpdate
worker.updateStrategy.rollingUpdate Airflow worker statefulset rolling update configuration parameters {}
worker.sidecars Add additional sidecar containers to the Airflow worker pods []
worker.initContainers Add additional init containers to the Airflow worker pods []
worker.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Airflow worker pods []
worker.extraVolumes Optionally specify extra list of additional volumes for the Airflow worker pods []
worker.extraVolumeClaimTemplates Optionally specify extra list of volumesClaimTemplates for the Airflow worker statefulset []
worker.podTemplate Template to replace the default one used when executor=KubernetesExecutor to create Airflow worker pods {}
worker.pdb.create Deploy a pdb object for the Airflow worker pods true
worker.pdb.minAvailable Minimum number/percentage of available Airflow worker replicas ""
worker.pdb.maxUnavailable Maximum number/percentage of unavailable Airflow worker replicas ""
worker.autoscaling.enabled DEPRECATED: use worker.autoscaling.hpa.enabled instead false
worker.autoscaling.minReplicas DEPRECATED: use worker.autoscaling.hpa.minReplicas instead ""
worker.autoscaling.maxReplicas DEPRECATED: use worker.autoscaling.hpa.maxReplicas instead ""
worker.autoscaling.targetMemory DEPRECATED: use worker.autoscaling.hpa.targetMemory instead ""
worker.autoscaling.targetCPU DEPRECATED: use worker.autoscaling.hpa.targetCPU instead ""
worker.autoscaling.vpa.enabled Enable VPA for Airflow Worker false
worker.autoscaling.vpa.annotations Annotations for VPA resource {}
worker.autoscaling.vpa.controlledResources List of resources that the VPA can control. Defaults to cpu and memory []
worker.autoscaling.vpa.maxAllowed VPA max allowed resources for the pod {}
worker.autoscaling.vpa.minAllowed VPA min allowed resources for the pod {}
worker.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy Auto
worker.autoscaling.hpa.enabled Enable HPA for Airflow Worker false
worker.autoscaling.hpa.minReplicas Minimum number of replicas 1
worker.autoscaling.hpa.maxReplicas Maximum number of replicas 3
worker.autoscaling.hpa.targetCPU Target CPU utilization percentage 80
worker.autoscaling.hpa.targetMemory Target Memory utilization percentage 80
worker.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
worker.networkPolicy.allowExternal Don't require client label for connections true
worker.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
worker.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
worker.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
worker.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
worker.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
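
Since the flat worker.autoscaling.* keys listed above are deprecated, new deployments should use the nested worker.autoscaling.hpa.* form. A minimal sketch (the thresholds are placeholders):

worker:
  autoscaling:
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      targetCPU: 80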

Airflow "setup-db" K8s Job parameters

Name Description Value
setupDBJob.enabled Enable setting up the Airflow database using a K8s job (otherwise it's done by the Webserver on startup) true
setupDBJob.backoffLimit Set the backoff limit of the job 10
setupDBJob.command Override default container command on "setup-db" job's containers []
setupDBJob.args Override default container args on "setup-db" job's containers []
setupDBJob.containerSecurityContext.enabled Enable "setup-db" job's containers' Security Context true
setupDBJob.containerSecurityContext.seLinuxOptions Set SELinux options in "setup-db" job's containers {}
setupDBJob.containerSecurityContext.runAsUser Set runAsUser in "setup-db" job's containers' Security Context 1001
setupDBJob.containerSecurityContext.runAsGroup Set runAsGroup in "setup-db" job's containers' Security Context 1001
setupDBJob.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "setup-db" job's containers' Security Context true
setupDBJob.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "setup-db" job's containers' Security Context true
setupDBJob.containerSecurityContext.privileged Set privileged in "setup-db" job's containers' Security Context false
setupDBJob.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "setup-db" job's containers' Security Context false
setupDBJob.containerSecurityContext.capabilities.add List of capabilities to be added in "setup-db" job's containers []
setupDBJob.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "setup-db" job's containers ["ALL"]
setupDBJob.containerSecurityContext.seccompProfile.type Set seccomp profile in "setup-db" job's containers RuntimeDefault
setupDBJob.podSecurityContext.enabled Enable "setup-db" job's pods' Security Context true
setupDBJob.podSecurityContext.fsGroupChangePolicy Set fsGroupChangePolicy in "setup-db" job's pods' Security Context Always
setupDBJob.podSecurityContext.sysctls List of sysctls to allow in "setup-db" job's pods' Security Context []
setupDBJob.podSecurityContext.supplementalGroups List of supplemental groups to add to "setup-db" job's pods' Security Context []
setupDBJob.podSecurityContext.fsGroup Set fsGroup in "setup-db" job's pods' Security Context 1001
setupDBJob.extraEnvVars Array containing extra env vars to configure the Airflow "setup-db" job's container []
setupDBJob.extraEnvVarsCM ConfigMap containing extra env vars to configure the Airflow "setup-db" job's container ""
setupDBJob.extraEnvVarsSecret Secret containing extra env vars to configure the Airflow "setup-db" job's container (in case of sensitive data) ""
setupDBJob.resourcesPreset Set Airflow "setup-db" job's container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if setupDBJob.resources is set (setupDBJob.resources is recommended for production). small
setupDBJob.resources Set Airflow "setup-db" job's container requests and limits for different resources like CPU or memory (essential for production workloads) {}
setupDBJob.automountServiceAccountToken Mount Service Account token in Airflow "setup-db" job's pods false
setupDBJob.hostAliases Add deployment host aliases []
setupDBJob.annotations Add annotations to the Airflow "setup-db" job {}
setupDBJob.podLabels Additional pod labels for Airflow "setup-db" job {}
setupDBJob.podAnnotations Additional pod annotations for Airflow "setup-db" job {}
setupDBJob.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
setupDBJob.affinity Affinity for Airflow setup-db pods assignment (evaluated as a template) {}
setupDBJob.nodeAffinityPreset.key Node label key to match. Ignored if setupDBJob.affinity is set. ""
setupDBJob.nodeAffinityPreset.type Node affinity preset type. Ignored if setupDBJob.affinity is set. Allowed values: soft or hard ""
setupDBJob.nodeAffinityPreset.values Node label values to match. Ignored if setupDBJob.affinity is set. []
setupDBJob.nodeSelector Node labels for Airflow setup-db pods assignment {}
setupDBJob.podAffinityPreset Pod affinity preset. Ignored if setupDBJob.affinity is set. Allowed values: soft or hard. ""
setupDBJob.podAntiAffinityPreset Pod anti-affinity preset. Ignored if setupDBJob.affinity is set. Allowed values: soft or hard. soft
setupDBJob.tolerations Tolerations for Airflow setup-db pods assignment []
setupDBJob.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
setupDBJob.priorityClassName Priority Class Name ""
setupDBJob.schedulerName Use an alternate scheduler, e.g. "stork". ""
setupDBJob.terminationGracePeriodSeconds Seconds Airflow setup-db pod needs to terminate gracefully ""
setupDBJob.extraVolumes Optionally specify extra list of additional volumes for Airflow "setup-db" job's pods []
setupDBJob.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Airflow "setup-db" job's containers []
setupDBJob.initContainers Add additional init containers to the Airflow "setup-db" job's pods []
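
For instance, if you prefer the Webserver to initialize the database on startup rather than running this Job, you could disable it (a one-line sketch to combine with your other values):

setupDBJob:
  enabled: false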

Airflow ldap parameters

Name Description Value
ldap.enabled Enable LDAP authentication false
ldap.uri Server URI, e.g. ldap://ldap_server:389 ldap://ldap_server:389
ldap.basedn Base of the search, e.g. ou=example,o=org. dc=example,dc=org
ldap.searchAttribute If doing an indirect bind to LDAP, this is the field that matches the username when searching for the account to bind to cn
ldap.binddn DN of the account used to search in the LDAP server. cn=admin,dc=example,dc=org
ldap.bindpw Bind Password ""
ldap.existingSecret Name of an existing secret containing the LDAP bind password ""
ldap.userRegistration Set to True to enable user self-registration True
ldap.userRegistrationRole Set the role name to assign when a user self-registers. This role must already exist. Mandatory when using ldap.userRegistration Public
ldap.rolesMapping Mapping from LDAP DN to a list of roles { "cn=All,ou=Groups,dc=example,dc=org": ["User"], "cn=Admins,ou=Groups,dc=example,dc=org": ["Admin"], }
ldap.rolesSyncAtLogin Replace all of the user's roles on each login, or only on registration True
ldap.tls.enabled Enable TLS/SSL for LDAP; you must include the CA file. false
ldap.tls.allowSelfSigned Allow the use of self-signed certificates true
ldap.tls.certificatesSecret Name of the existing secret containing the certificate CA file that will be used by the LDAP client ""
ldap.tls.certificatesMountPath Where LDAP certificates are mounted. /opt/bitnami/airflow/conf/certs
ldap.tls.CAFilename LDAP CA cert filename ""
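
Putting several of these parameters together, a hedged LDAP example (the server, DNs and secret name are placeholders for your environment):

ldap:
  enabled: true
  uri: "ldap://openldap.example.org:389"   # placeholder server
  basedn: "dc=example,dc=org"
  binddn: "cn=admin,dc=example,dc=org"
  existingSecret: "airflow-ldap-bind"      # hypothetical secret holding the bind password
  userRegistration: "True"
  userRegistrationRole: "Public"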

Traffic Exposure Parameters

Name Description Value
service.type Airflow service type ClusterIP
service.ports.http Airflow service HTTP port 8080
service.nodePorts.http Node port for HTTP ""
service.sessionAffinity Control where client requests go, to the same pod or round-robin None
service.sessionAffinityConfig Additional settings for the sessionAffinity {}
service.clusterIP Airflow service Cluster IP ""
service.loadBalancerIP Airflow service Load Balancer IP ""
service.loadBalancerSourceRanges Airflow service Load Balancer sources []
service.externalTrafficPolicy Airflow service external traffic policy Cluster
service.annotations Additional custom annotations for Airflow service {}
service.extraPorts Extra port to expose on Airflow service []
ingress.enabled Enable ingress record generation for Airflow false
ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
ingress.pathType Ingress path type ImplementationSpecific
ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
ingress.hostname Default host for the ingress record airflow.local
ingress.path Default path for the ingress record /
ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
ingress.tls Enable TLS configuration for the host defined at ingress.hostname parameter false
ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
ingress.secrets Custom TLS certificates as secrets []
ingress.extraRules Additional rules to be covered with this ingress record []
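
For example, a minimal sketch exposing the web UI through an Ingress with a Helm-generated self-signed certificate (the hostname and class are placeholders):

ingress:
  enabled: true
  ingressClassName: "nginx"        # placeholder IngressClass
  hostname: airflow.example.com    # placeholder host
  tls: true
  selfSigned: true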

Other Parameters

Name Description Value
serviceAccount.create Enable creation of ServiceAccount for Airflow pods true
serviceAccount.name The name of the ServiceAccount to use. ""
serviceAccount.automountServiceAccountToken Allow auto-mounting of the ServiceAccount token on the created ServiceAccount false
serviceAccount.annotations Additional custom annotations for the ServiceAccount {}
rbac.create Create Role and RoleBinding false
rbac.rules Custom RBAC rules to set []
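
For instance, to create both a dedicated ServiceAccount and the RBAC objects for the Airflow pods (a minimal sketch):

serviceAccount:
  create: true
rbac:
  create: true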

StatsD metrics parameters

Name Description Value
metrics.enabled Enable a StatsD exporter that collects StatsD metrics from Airflow components and expose them as Prometheus metrics false
metrics.image.registry StatsD exporter image registry REGISTRY_NAME
metrics.image.repository StatsD exporter image repository REPOSITORY_NAME/statsd-exporter
metrics.image.digest StatsD exporter image digest in the form sha256:aa.... Please note that this parameter, if set, will override the tag ""
metrics.image.pullPolicy StatsD exporter image pull policy IfNotPresent
metrics.image.pullSecrets StatsD exporter image pull secrets []
metrics.configuration Specify content for StatsD exporter's mappings.yml ""
metrics.existingConfigmap Name of an existing config map containing the StatsD exporter's mappings.yml ""
metrics.containerPorts.ingest StatsD exporter ingest container port (used for the metrics ingestion from Airflow components) 9125
metrics.containerPorts.metrics StatsD exporter metrics container port (used to expose Prometheus metrics) 9102
metrics.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production). nano
metrics.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
metrics.podSecurityContext.enabled Enable security context for the pods true
metrics.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
metrics.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
metrics.podSecurityContext.supplementalGroups Set filesystem extra groups []
metrics.podSecurityContext.fsGroup Set StatsD exporter pod's Security Context fsGroup 1001
metrics.containerSecurityContext.enabled Enable StatsD exporter containers' Security Context true
metrics.containerSecurityContext.seLinuxOptions Set SELinux options in container {}
metrics.containerSecurityContext.runAsUser Set StatsD exporter containers' Security Context runAsUser 1001
metrics.containerSecurityContext.runAsGroup Set StatsD exporter containers' Security Context runAsGroup 1001
metrics.containerSecurityContext.runAsNonRoot Set StatsD exporter containers' Security Context runAsNonRoot true
metrics.containerSecurityContext.privileged Set StatsD exporter containers' Security Context privileged false
metrics.containerSecurityContext.allowPrivilegeEscalation Set StatsD exporter containers' Security Context allowPrivilegeEscalation false
metrics.containerSecurityContext.readOnlyRootFilesystem Set StatsD exporter containers' Security Context readOnlyRootFilesystem true
metrics.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
metrics.containerSecurityContext.seccompProfile.type Set containers' Security Context seccomp profile RuntimeDefault
metrics.livenessProbe.enabled Enable livenessProbe on StatsD exporter containers true
metrics.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 180
metrics.livenessProbe.periodSeconds Period seconds for livenessProbe 20
metrics.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 5
metrics.livenessProbe.failureThreshold Failure threshold for livenessProbe 6
metrics.livenessProbe.successThreshold Success threshold for livenessProbe 1
metrics.readinessProbe.enabled Enable readinessProbe on StatsD exporter containers true
metrics.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 30
metrics.readinessProbe.periodSeconds Period seconds for readinessProbe 10
metrics.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 5
metrics.readinessProbe.failureThreshold Failure threshold for readinessProbe 6
metrics.readinessProbe.successThreshold Success threshold for readinessProbe 1
metrics.startupProbe.enabled Enable startupProbe on StatsD exporter containers false
metrics.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 60
metrics.startupProbe.periodSeconds Period seconds for startupProbe 10
metrics.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1
metrics.startupProbe.failureThreshold Failure threshold for startupProbe 15
metrics.startupProbe.successThreshold Success threshold for startupProbe 1
metrics.customLivenessProbe Custom livenessProbe that overrides the default one {}
metrics.customReadinessProbe Custom readinessProbe that overrides the default one {}
metrics.customStartupProbe Custom startupProbe that overrides the default one {}
metrics.lifecycleHooks Lifecycle hooks for the StatsD exporter containers to automate configuration before or after startup {}
metrics.automountServiceAccountToken Mount Service Account token in pod false
metrics.hostAliases StatsD exporter pods host aliases []
metrics.podLabels Extra labels for StatsD exporter pods {}
metrics.podAnnotations Extra annotations for StatsD exporter pods {}
metrics.topologyKey Override common lib default topology key. If empty - "kubernetes.io/hostname" is used ""
metrics.podAffinityPreset Pod affinity preset. Ignored if metrics.affinity is set. Allowed values: soft or hard ""
metrics.podAntiAffinityPreset Pod anti-affinity preset. Ignored if metrics.affinity is set. Allowed values: soft or hard soft
metrics.nodeAffinityPreset.type Node affinity preset type. Ignored if metrics.affinity is set. Allowed values: soft or hard ""
metrics.nodeAffinityPreset.key Node label key to match. Ignored if metrics.affinity is set. ""
metrics.nodeAffinityPreset.values Node label values to match. Ignored if metrics.affinity is set. []
metrics.affinity Affinity for StatsD exporter pods assignment {}
metrics.nodeSelector Node labels for StatsD exporter pods assignment {}
metrics.priorityClassName StatsD exporter pods' priorityClassName ""
metrics.tolerations Tolerations for pod assignment []
metrics.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
metrics.schedulerName Name of the k8s scheduler (other than default) for StatsD exporter ""
metrics.terminationGracePeriodSeconds Seconds StatsD exporter pod needs to terminate gracefully ""
metrics.extraVolumes Optionally specify extra list of additional volumes for the StatsD exporter pods []
metrics.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the StatsD exporter containers []
metrics.service.ports.ingest StatsD exporter ingest service port (used for the metrics ingestion from Airflow components) 9125
metrics.service.ports.metrics StatsD exporter metrics service port (used to expose Prometheus metrics) 9102
metrics.service.clusterIP Static clusterIP or None for headless services ""
metrics.service.sessionAffinity Control where client requests go, to the same pod or round-robin None
metrics.service.annotations Annotations for the StatsD metrics service {}
metrics.serviceMonitor.enabled If true, creates a Prometheus Operator ServiceMonitor (requires metrics.enabled to be true) false
metrics.serviceMonitor.namespace Namespace in which Prometheus is running ""
metrics.serviceMonitor.interval Interval at which metrics should be scraped ""
metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended ""
metrics.serviceMonitor.labels Additional labels that can be used so ServiceMonitor will be discovered by Prometheus {}
metrics.serviceMonitor.selector Prometheus instance selector labels {}
metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping []
metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion []
metrics.serviceMonitor.honorLabels Specify honorLabels parameter to add the scrape endpoint false
metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in Prometheus. ""
metrics.networkPolicy.enabled Specifies whether a NetworkPolicy should be created true
metrics.networkPolicy.allowExternal Don't require client label for connections true
metrics.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
metrics.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
metrics.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
metrics.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
metrics.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
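
A hedged example that enables the StatsD exporter together with a Prometheus Operator ServiceMonitor (the namespace and label are placeholders for your Prometheus installation):

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: "monitoring"    # placeholder: namespace where Prometheus runs
    labels:
      release: prometheus      # placeholder: label your Prometheus instance selects on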

Airflow database parameters

Name Description Value
postgresql.enabled Switch to enable or disable the PostgreSQL helm chart true
postgresql.auth.enablePostgresUser Assign a password to the "postgres" admin user. Otherwise, remote access will be blocked for this user true
postgresql.auth.username Name for a custom user to create bn_airflow
postgresql.auth.password Password for the custom user to create ""
postgresql.auth.database Name for a custom database to create bitnami_airflow
postgresql.auth.existingSecret Name of existing secret to use for PostgreSQL credentials ""
postgresql.architecture PostgreSQL architecture (standalone or replication) standalone
postgresql.primary.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if primary.resources is set (primary.resources is recommended for production). nano
postgresql.primary.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
externalDatabase.host Database host (ignored if externalDatabase.sqlConnection is set) localhost
externalDatabase.port Database port number (ignored if externalDatabase.sqlConnection is set) 5432
externalDatabase.user Non-root username for Airflow (ignored if externalDatabase.sqlConnection is set) bn_airflow
externalDatabase.password Password for the non-root username for Airflow (ignored if externalDatabase.sqlConnection or externalDatabase.existingSecret are set) ""
externalDatabase.database Airflow database name (ignored if externalDatabase.sqlConnection is set) bitnami_airflow
externalDatabase.sqlConnection SQL connection string ""
externalDatabase.existingSecret Name of an existing secret resource containing the database credentials ""
externalDatabase.existingSecretPasswordKey Name of an existing secret key containing the database credentials (ignored if externalDatabase.existingSecretSqlConnectionKey is set) ""
externalDatabase.existingSecretSqlConnectionKey Name of an existing secret key containing the SQL connection string ""
redis.enabled Switch to enable or disable the Redis® helm chart true
redis.auth.enabled Enable password authentication true
redis.auth.password Redis® password ""
redis.auth.existingSecret The name of an existing secret with Redis® credentials ""
redis.architecture Redis® architecture. Allowed values: standalone or replication standalone
redis.master.service.ports.redis Redis® port 6379
redis.master.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if master.resources is set (master.resources is recommended for production). nano
redis.master.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
externalRedis.host Redis® host localhost
externalRedis.port Redis® port number 6379
externalRedis.username Redis® username ""
externalRedis.password Redis® password ""
externalRedis.existingSecret Name of an existing secret resource containing the Redis® credentials ""
externalRedis.existingSecretPasswordKey Name of an existing secret key containing the Redis® credentials ""
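
For example, to point the chart at externally managed services instead of the bundled subcharts (the hosts and secret names are placeholders):

postgresql:
  enabled: false
externalDatabase:
  host: "postgres.example.com"     # placeholder host
  port: 5432
  user: bn_airflow
  database: bitnami_airflow
  existingSecret: "airflow-db"     # hypothetical secret with the database password
redis:
  enabled: false
externalRedis:
  host: "redis.example.com"        # placeholder host
  port: 6379
  existingSecret: "airflow-redis"  # hypothetical secret with the Redis® credentials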

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
               --set auth.username=my-user \
               --set auth.password=my-password \
               --set auth.fernetKey=my-fernet-key \
               --set auth.secretKey=my-secret-key \
               oci://REGISTRY_NAME/REPOSITORY_NAME/airflow

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the credentials to access the Airflow web UI.

NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/airflow

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Tip: You can use the default values.yaml
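
As an illustration, a values.yaml equivalent to the --set flags shown above might look like this (the credential values remain placeholders):

auth:
  username: my-user
  password: my-password
  fernetKey: my-fernet-key
  secretKey: my-secret-key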

Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.

Upgrading

To 24.0.0

This major updates the Redis® subchart to its newest major, 21.0.0, which updates Redis® from 7.4 to 8.0. Here you can find more information about the changes introduced in that version. No major issues are expected during the upgrade.

To 23.0.0

This major release adds support for Airflow 3.x.y series. Additionally, previous Airflow 2.x.y series can be deployed by setting the corresponding image parameters. The chart logic will detect which image version you are using, and it will generate the required Airflow configuration and Kubernetes objects.

We recommend the following procedure to upgrade from the 22.x.y chart series to 23.x.y, upgrading at the same time from the Airflow 2.x.y series to 3.x.y:

  • Upgrade your release (maintaining Airflow 2.x.y series):
helm upgrade airflow oci://REGISTRY_NAME/REPOSITORY_NAME/airflow --set image.tag=2
  • Follow the recommended steps for database backup and DAG files verification available in the official "Upgrading to Airflow 3" guide.

  • Upgrade your release now using the default Airflow 3.x.y series:

helm upgrade airflow oci://REGISTRY_NAME/REPOSITORY_NAME/airflow

To 22.4.0

This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. Find more details in the related GitHub issue.
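
If you rely on images that cannot be verified, the opt-out can be expressed in your values like this (a sketch):

global:
  security:
    allowInsecureImages: true   # disables the image verification described above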

To 22.2.0

This minor version no longer expects custom Airflow configuration (set via the configuration parameter) to be provided as a string. Instead, it expects a dictionary with the configuration sections/keys/values. Find more info in the section above.
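
For instance, where you previously passed a raw airflow.cfg string, you now pass a map of sections, keys and values; the section and keys below are only illustrative:

configuration:
  core:                      # illustrative airflow.cfg section
    load_examples: "False"
  webserver:                 # illustrative airflow.cfg section
    expose_config: "True"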

To 22.0.0

This major version replaces the Airflow prometheus exporter, which exposed metrics based on data retrieved from the database, with a new approach: Airflow components are configured to send StatsD metrics to a StatsD exporter, which transforms them into Prometheus metrics. Find more information about this approach in the Apache Airflow official documentation.

No upgrade issues are expected when upgrading from 21.x.x, but existing dashboards and alerts based on the previous metrics should be adapted to the new ones.

To 21.0.0

This major version uses a single container image (bitnami/airflow by default) to run every Airflow component (Webserver, Scheduler and Worker), so the bitnami/airflow-scheduler and bitnami/airflow-worker images are no longer necessary. Operations to load custom DAGs and plugins via init containers also use this same image, so bitnami/git and bitnami/os-shell are no longer necessary either. These changes imply several simplifications in the chart values:

  • New image.* parameters are introduced to configure the container image used to run the Airflow components.
  • web.image.*, scheduler.image.* and worker.image.* parameters are removed.
  • dags.image.* and git.image.* parameters are removed.

Some other simplifications are introduced around adding custom DAGs and plugins:

  • dags.* and git.dags.* parameters are merged into a single dags.* parameter.
  • git.plugins.* parameters are renamed to plugins.*.
  • git.clone.* and git.sync.* parameters are now available under defaultInitContainers.loadDAGsPlugins.* and defaultSidecars.syncDAGsPlugins.*, respectively.

No upgrade issues are expected when upgrading from 20.x.x if the DAGs- and plugins-related parameters are adapted as described above.
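
As a quick reference, the renames above can be summarized as follows (this mapping only restates the list; check the chart's values.yaml for the exact sub-keys):

# 20.x value                 ->  21.x equivalent
# web.image.*                ->  image.*
# scheduler.image.*          ->  image.*
# worker.image.*             ->  image.*
# dags.* / git.dags.*        ->  dags.*
# git.plugins.*              ->  plugins.*
# git.clone.*                ->  defaultInitContainers.loadDAGsPlugins.*
# git.sync.*                 ->  defaultSidecars.syncDAGsPlugins.*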

To 20.0.0

This major updates the PostgreSQL subchart to its newest major, 16.0.0, which uses PostgreSQL 17.x. Follow the official instructions to upgrade to 17.x.

To 19.0.0

This major updates the Redis® subchart to its newest major, 20.0.0. Here you can find more information about the changes introduced in that version.

To 18.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; for production, set resources adapted to your use case).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
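
In that case, a sketch restoring the previous defaults for a single component (shown here for the worker; repeat for the other components you use):

worker:
  containerSecurityContext:
    runAsGroup: 0                     # pre-18.0.0 default
    readOnlyRootFilesystem: false     # pre-18.0.0 default
  resourcesPreset: none               # pre-18.0.0 default
global:
  compatibility:
    openshift:
      adaptSecurityContext: disabled  # pre-18.0.0 default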

To 17.0.0

This major release bumps the PostgreSQL chart version to 14.x.x; no major issues are expected during the upgrade.

To 16.0.0

This major updates the PostgreSQL subchart to its newest major, 13.0.0. Here you can find more information about the changes introduced in that version.

To 15.0.0

This major updates the Redis® subchart to its newest major, 18.0.0. Here you can find more information about the changes introduced in that version.

NOTE: Due to an error in our release process, Redis®' chart versions higher or equal than 17.15.4 already use Redis® 7.2 by default.

To 14.0.0

This major updates the PostgreSQL subchart to its newest major, 12.0.0. Here you can find more information about the changes introduced in that version.

To 13.0.0

This major updates the Redis® subchart to its newest major, 17.0.0, which updates Redis® from version 6.2 to 7.0.

To 12.0.0

This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository. Additionally, it updates the PostgreSQL & Redis® subcharts to their newest majors, 11.x.x and 16.x.x respectively, which contain similar changes.

  • auth.forcePassword parameter is deprecated. The new version uses Helm's lookup functionalities and forcing passwords isn't required anymore.
  • config and configurationConfigMap have been renamed to configuration and existingConfigmap, respectively.
  • dags.configMap and web.configMap have been renamed to dags.existingConfigmap and web.existingConfigmap, respectively.
  • web.containerPort and worker.port have been regrouped under the web.containerPorts and worker.containerPorts maps, respectively.
  • web.podDisruptionBudget, scheduler.podDisruptionBudget and worker.podDisruptionBudget maps have been renamed to web.pdb, scheduler.pdb and worker.pdb, respectively.
  • worker.autoscaling.replicas.min, worker.autoscaling.replicas.max, worker.autoscaling.targets.cpu and worker.autoscaling.targets.memory have been renamed to worker.autoscaling.minReplicas, worker.autoscaling.maxReplicas, worker.autoscaling.targetCPU and worker.autoscaling.targetMemory, respectively.
  • service.port and service.httpsPort have been regrouped under the service.ports map.
  • ingress map is completely redefined.
  • metrics.service.port has been regrouped under the metrics.service.ports map.
  • Support for Network Policies is dropped; it will be properly added back in the future.
  • The secret keys airflow-fernetKey and airflow-secretKey were renamed to airflow-fernet-key and airflow-secret-key, respectively.

How to upgrade to version 12.0.0

To upgrade to 12.0.0 from 11.x, reuse the PVC(s) used to hold the data in your previous release. To do so, follow the instructions below (the following example assumes that the release name is airflow and the release namespace is default):

NOTE: Please create a backup of your database before running any of these actions.

  1. Obtain the credentials and the names of the PVCs used to hold the data on your current release:
        export AIRFLOW_PASSWORD=$(kubectl get secret --namespace default airflow -o jsonpath="{.data.airflow-password}" | base64 --decode)
        export AIRFLOW_FERNET_KEY=$(kubectl get secret --namespace default airflow -o jsonpath="{.data.airflow-fernetKey}" | base64 --decode)
        export AIRFLOW_SECRET_KEY=$(kubectl get secret --namespace default airflow -o jsonpath="{.data.airflow-secretKey}" | base64 --decode)
        export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default airflow-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
        export REDIS_PASSWORD=$(kubectl get secret --namespace default airflow-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
        export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=airflow,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
  2. Delete the Airflow worker & PostgreSQL statefulsets (notice the option --cascade=false) and secrets:
        kubectl delete statefulsets.apps --cascade=false airflow-postgresql
        kubectl delete statefulsets.apps --cascade=false airflow-worker
        kubectl delete secret postgresql --namespace default
        kubectl delete secret airflow --namespace default
  3. Upgrade your release using the same PostgreSQL version:
        CURRENT_PG_VERSION=$(kubectl exec airflow-postgresql-0 -- bash -c 'echo $BITNAMI_IMAGE_VERSION')
        helm upgrade airflow bitnami/airflow \
          --set loadExamples=true \
          --set web.baseUrl=http://127.0.0.1:8080 \
          --set auth.password=$AIRFLOW_PASSWORD \
          --set auth.fernetKey=$AIRFLOW_FERNET_KEY \
          --set auth.secretKey=$AIRFLOW_SECRET_KEY \
          --set postgresql.image.tag=$CURRENT_PG_VERSION \
          --set postgresql.auth.password=$POSTGRESQL_PASSWORD \
          --set postgresql.persistence.existingClaim=$POSTGRESQL_PVC \
          --set redis.password=$REDIS_PASSWORD \
          --set redis.cluster.enabled=true
  4. Delete the existing Airflow worker & PostgreSQL pods so the new statefulsets create new ones:
        kubectl delete pod airflow-postgresql-0
        kubectl delete pod airflow-worker-0

To 11.0.0

This major updates the Redis® subchart to its newest major, 15.0.0. Here you can find more information about the specific changes.

To 10.0.0

This major updates the Redis® subchart to its newest major, 14.0.0, which contains breaking changes. For more information on this subchart's major version and the steps needed to migrate your data from your previous release, please refer to the Redis® upgrade notes.

To 7.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the changes required for this Helm Chart to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL. The following changes were introduced in this version:

  • Previous versions of this Helm Chart used apiVersion: v1 (installable by both Helm 2 and 3); this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • Dependency information was moved from requirements.yaml to Chart.yaml
  • After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all Bitnami Helm Charts.
  • Several parameters were renamed or disappeared in favor of new ones on this major version:
    • The image objects have been moved to their corresponding component objects, e.g. workerImage is now located at worker.image.
    • The airflow prefix has been removed. Therefore, parameters prefixed with airflow are now at root level, e.g. airflow.loadExamples is now loadExamples and airflow.worker.resources is now worker.resources (see the parameter mapping sketch after this list).
    • Parameters related to the git features have been completely refactored:
      • They have been regrouped under the git map.
      • airflow.cloneDagsFromGit no longer exists; instead, you must use git.dags. git.dags.repositories has been introduced to add support for multiple repositories.
      • airflow.clonePluginsFromGit no longer exists; instead, you must use git.plugins. airflow.clonePluginsFromGit.repository, airflow.clonePluginsFromGit.branch and airflow.clonePluginsFromGit.path have been removed in favour of git.plugins.repositories.
    • Liveness and readiness probes have been separated by component: airflow.livenessProbe.* and airflow.readinessProbe.* have been removed in favour of web.livenessProbe, worker.livenessProbe, web.readinessProbe and worker.readinessProbe.
    • airflow.baseUrl has been moved to web.baseUrl.
    • The security context has been migrated to the Bitnami standard: securityContext has been divided into podSecurityContext, which defines the fsGroup for all the containers in the pod, and containerSecurityContext, which defines the user ID that runs the main containers.
    • ./files/dags/*.py files will no longer be included in the deployment.
  • Additionally, this version updates the PostgreSQL & Redis subcharts to their newest majors, 10.x.x and 11.x.x respectively, which contain similar changes.
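
To illustrate the renames above, here is a minimal old-to-new parameter mapping (a sketch limited to the examples mentioned in this list; check your own values against the chart's values.yaml):

        # 6.x parameter                 # 7.0.0 parameter
        airflow.loadExamples        ->  loadExamples
        airflow.baseUrl             ->  web.baseUrl
        airflow.worker.resources    ->  worker.resources
        workerImage                 ->  worker.image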

Considerations when upgrading to this version

  • Upgrading to this version using Helm v2 is not supported, as this version no longer supports Helm v2.
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3 (see the sketch below).
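
A minimal sketch of that migration using the official helm-2to3 plugin follows (it assumes the release was installed with Helm v2 under the name airflow; refer to the Helm documentation for the full procedure):

        helm plugin install https://github.com/helm/helm-2to3
        helm 2to3 convert airflow --dry-run
        helm 2to3 convert airflow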

How to upgrade to version 7.0.0

To upgrade to 7.0.0 from 6.x, reuse the PVC(s) used to hold the data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is airflow and the release namespace is default):

NOTE: Please create a backup of your database before running any of these actions.

  1. Obtain the credentials and the names of the PVCs used to hold the data on your current release:
        export AIRFLOW_PASSWORD=$(kubectl get secret --namespace default airflow -o jsonpath="{.data.airflow-password}" | base64 --decode)
        export AIRFLOW_FERNET_KEY=$(kubectl get secret --namespace default airflow -o jsonpath="{.data.airflow-fernetKey}" | base64 --decode)
        export AIRFLOW_SECRET_KEY=$(kubectl get secret --namespace default airflow -o jsonpath="{.data.airflow-secretKey}" | base64 --decode)
        export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default airflow-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
        export REDIS_PASSWORD=$(kubectl get secret --namespace default airflow-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
        export POSTGRESQL_PVC=$(kubectl get pvc --namespace default -l app.kubernetes.io/instance=airflow,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}")
  2. Delete the Airflow worker & PostgreSQL statefulsets (notice the option --cascade=false):
        kubectl delete statefulsets.apps --cascade=false airflow-postgresql
        kubectl delete statefulsets.apps --cascade=false airflow-worker
  3. Upgrade your release:

NOTE: Please remember to migrate all the values to their new paths following the notes above, e.g. airflow.loadExamples -> loadExamples or airflow.baseUrl=http://127.0.0.1:8080 -> web.baseUrl=http://127.0.0.1:8080.

        helm upgrade airflow bitnami/airflow \
          --set loadExamples=true \
          --set web.baseUrl=http://127.0.0.1:8080 \
          --set auth.password=$AIRFLOW_PASSWORD \
          --set auth.fernetKey=$AIRFLOW_FERNET_KEY \
          --set auth.secretKey=$AIRFLOW_SECRET_KEY \
          --set postgresql.postgresqlPassword=$POSTGRESQL_PASSWORD \
          --set postgresql.persistence.existingClaim=$POSTGRESQL_PVC \
          --set redis.password=$REDIS_PASSWORD \
          --set redis.cluster.enabled=true
  4. Delete the existing Airflow worker & PostgreSQL pods so the new statefulsets create new ones:
        kubectl delete pod airflow-postgresql-0
        kubectl delete pod airflow-worker-0
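
After the upgrade, you can double-check that all the values were migrated to their new paths by inspecting the user-supplied values of the release:

        helm get values airflow --namespace default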

License

Copyright © 2025 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.