
Kubernetes

Overview

Check out our Kubernetes Chart Repository on GitHub and our published Helm Charts.

Quick-start

helm repo add flagsmith https://flagsmith.github.io/flagsmith-charts/
helm install -n flagsmith --create-namespace flagsmith flagsmith/flagsmith
kubectl -n flagsmith port-forward svc/flagsmith-frontend 8080:8080

Then view http://localhost:8080 in a browser. This installs the chart with its default options, in a new namespace named flagsmith.

Refer to the chart's default values.yaml file to learn which values are expected by the chart. You can use it as a reference for building your own values file:

wget https://raw.githubusercontent.com/Flagsmith/flagsmith-charts/main/charts/flagsmith/values.yaml
helm install -n flagsmith --create-namespace flagsmith flagsmith/flagsmith -f values.yaml

We suggest using this quick-start approach only when running the platform locally, and recommend reading the Helm docs on installation, upgrades and values for further information.

Configuration

Ingress configuration

The above is a quick way of gaining access to Flagsmith, but in many cases you will need to configure ingress to work with an ingress controller.

Port forwarding

In a terminal, run:

kubectl -n [flagsmith-namespace] port-forward svc/[flagsmith-release-name]-frontend 8080:8080

Then access http://localhost:8080 in a browser.

In a cluster that has an ingress controller, using the frontend proxy

In this configuration, API requests are proxied by the frontend. This is simpler to configure, but introduces some latency.

Set the following values for flagsmith, with changes as needed to accommodate your ingress controller, and any associated DNS changes.

Eg in the charts/flagsmith/values.yaml file:

ingress:
  frontend:
    enabled: true
    hosts:
      - host: flagsmith.[MYDOMAIN]
        paths:
          - /

Then, once any out-of-cluster DNS or CDN changes have been applied, access https://flagsmith.[MYDOMAIN] in a browser.

In a cluster that has an ingress controller, using separate ingresses for frontend and api

Set the following values for flagsmith, with changes as needed to accommodate your ingress controller and any associated DNS changes. Also, set the FLAGSMITH_API_URL environment variable so that the URL is reachable from a browser accessing the frontend.

Eg in the charts/flagsmith/values.yaml file:

ingress:
  frontend:
    enabled: true
    hosts:
      - host: flagsmith.[MYDOMAIN]
        paths:
          - /
  api:
    enabled: true
    hosts:
      - host: flagsmith.[MYDOMAIN]
        paths:
          - /api/
          - /health/

frontend:
  extraEnv:
    FLAGSMITH_API_URL: 'https://flagsmith.[MYDOMAIN]/api/v1/'

Then, once any out-of-cluster DNS or CDN changes have been applied, access https://flagsmith.[MYDOMAIN] in a browser.

Minikube ingress

(See https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/ for more details.)

If using minikube, enable ingress with minikube addons enable ingress.

Then set the following values for flagsmith in the charts/flagsmith/values.yaml file:

ingress:
  frontend:
    enabled: true
    hosts:
      - host: flagsmith.local
        paths:
          - /

Apply these values. This will create two ingress resources.

Run minikube ip. Add this IP and flagsmith.local to your /etc/hosts, eg:

192.168.99.99 flagsmith.local

Then access http://flagsmith.local in a browser.

Provided Database configuration

By default, the chart creates its own PostgreSQL server within the cluster, referencing https://github.com/helm/charts/tree/master/stable/postgresql for the service.

caution

We recommend running an externally managed database in production, either by deploying your own Postgres instance in your cluster, or using a service like AWS RDS.

You can provide configuration options to the PostgreSQL database by modifying the values. For example, the following changes max_connections in the charts/flagsmith/values.yaml file:

postgresql:
  enabled: true

  postgresqlConfiguration:
    max_connections: '200' # override the default max_connections of 100

External Database configuration

To connect the Flagsmith API to an external PostgreSQL server, set the values under databaseExternal, eg in the charts/flagsmith/values.yaml file:

postgresql:
  enabled: false # turn off the chart-managed postgres

databaseExternal:
  enabled: true
  # Can specify the full URL
  url: 'postgres://myuser:mypass@myhost:5432/mydbname'
  # Or can specify each part (url takes precedence if set)
  type: postgres
  host: myhost
  port: 5432
  database: mydbname
  username: myuser
  password: mypass
  # Or can specify a pre-existing k8s secret containing the database URL
  urlFromExistingSecret:
    enabled: true
    name: my-precreated-db-config
    key: DB_URL

Environment variables

caution

It's important to define a secretKey value in your helm chart when running in Kubernetes. Use a password manager to generate a random hash and set this so that all the API nodes are running with an identical DJANGO_SECRET_KEY.

If you are using our Helm charts and don't provide a secretKey, one will be generated for you and shared across the running pods, but this will change upon redeployment, which you probably don't want to happen.
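For example, a minimal sketch of pinning the secret key in the charts/flagsmith/values.yaml file (the key value and the secret name/key below are placeholders; generate and store your own):

api:
  secretKey: 'replace-with-a-long-random-string' # placeholder; generate your own

  # Alternatively, reference a pre-existing Kubernetes secret:
  # secretKeyFromExistingSecret:
  #   enabled: true
  #   name: my-precreated-api-secrets # hypothetical secret name
  #   key: DJANGO_SECRET_KEY          # hypothetical key within that secret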

The chart handles most of the required environment variables, but see the API readme for all available configuration options. These can be set using api.extraEnv, eg in the charts/flagsmith/values.yaml file:

api:
  extraEnv:
    LOG_LEVEL: DEBUG

Resource allocation

By default, no resource limits or requests are set.
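If you want to set your own, the api.resources and frontend.resources values accept a standard Kubernetes resources block, eg in the charts/flagsmith/values.yaml file (the figures below are illustrative placeholders, not recommendations):

api:
  resources:
    requests:
      cpu: 300m      # illustrative placeholder
      memory: 512Mi  # illustrative placeholder
    limits:
      memory: 512Mi  # illustrative placeholder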

TODO: recommend some defaults

Replicas

By default, 1 replica of each of the frontend and api is used.
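To run more, set the replicacount values, eg in the charts/flagsmith/values.yaml file (the counts below are illustrative):

api:
  replicacount: 2

frontend:
  replicacount: 2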

TODO: recommend some defaults.

TODO: consider some autoscaling options.

TODO: create a pod-disruption-budget

Deployment strategy

For each of the deployments, you can set deploymentStrategy. By default this is unset, which gives you the default Kubernetes behaviour; set it to an object to adjust the strategy. See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy.

Eg in the charts/flagsmith/values.yaml file:

api:
  deploymentStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: '50%'

PgBouncer

By default, Flagsmith connects directly to the database, whether in-cluster or external. You can enable PgBouncer with pgbouncer.enabled: true so that Flagsmith connects to PgBouncer, and PgBouncer connects to the database.
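Eg in the charts/flagsmith/values.yaml file:

pgbouncer:
  enabled: true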

All-in-one Docker image

The Docker image at https://hub.docker.com/r/flagsmith/flagsmith/ contains both the API and the frontend. To make use of this, set the following values:

api:
  image:
    repository: flagsmith/flagsmith # or some other repository hosting the combined image
    tag: 2.14 # or some other tag that exists in that repository
  separateApiAndFrontend: false

This switches off the Kubernetes deployment for the frontend. The frontend ingress and service are retained, but all requests are handled by the API deployment.

InfluxDB

By default, Flagsmith uses Postgres to store time series data. You can alternatively use InfluxDB to track:

  • SDK API traffic
  • SDK Flag Evaluations

You need to perform some additional steps to configure InfluxDB.
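As a partial sketch, the influxdbExternal values listed in the Chart Values table below let you point Flagsmith at an InfluxDB instance that is not managed by this chart; the URL, organization, bucket and token shown are placeholders:

influxdb2:
  enabled: false # disable the chart-managed InfluxDB

influxdbExternal:
  enabled: true
  url: 'http://my-influxdb:8086' # placeholder
  organization: my-org           # placeholder
  bucket: my-bucket              # placeholder
  token: my-influx-token         # placeholder; tokenFromExistingSecret is also available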

Task Processor

The task processor itself is documented here. See the table below for the values to set to configure the task processor using the helm chart.
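For example, a minimal sketch enabling it in the charts/flagsmith/values.yaml file (the tuning numbers are illustrative):

taskProcessor:
  enabled: true
  replicacount: 1
  numThreads: 5        # illustrative; passed as --numthreads
  sleepIntervalMs: 500 # illustrative; passed as --sleepintervalms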

Chart Values

The following table lists the configurable parameters of the chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| api.image.repository | docker image repository for flagsmith api | flagsmith/flagsmith-api |
| api.image.tag | docker image tag for flagsmith api | appVersion |
| api.image.imagePullPolicy | | IfNotPresent |
| api.image.imagePullSecrets | | [] |
| api.separateApiAndFrontend | Set to false if using flagsmith/flagsmith image for the api | true |
| api.replicacount | number of replicas for the flagsmith api, null to unset | 1 |
| api.deploymentStrategy | See "Deployment strategy" above | |
| api.resources | resources per pod for the flagsmith api | {} |
| api.podLabels | additional labels to apply to pods for the flagsmith api | {} |
| api.extraEnv | extra environment variables to set for the flagsmith api | {} |
| api.secretKey | See secretKey docs above | null |
| api.secretKeyFromExistingSecret.enabled | Set to true to use a secret key stored in an existing k8s secret | false |
| api.secretKeyFromExistingSecret.name | The name of the secret key k8s secret | null |
| api.secretKeyFromExistingSecret.key | The key of the secret key in the k8s secret | null |
| api.nodeSelector | | {} |
| api.tolerations | | [] |
| api.affinity | | {} |
| api.podSecurityContext | | {} |
| api.defaultPodSecurityContext.enabled | whether to use the default security context | true |
| api.livenessProbe.failureThreshold | | 5 |
| api.livenessProbe.initialDelaySeconds | | 10 |
| api.livenessProbe.periodSeconds | | 10 |
| api.livenessProbe.successThreshold | | 1 |
| api.livenessProbe.timeoutSeconds | | 2 |
| api.readinessProbe.failureThreshold | | 10 |
| api.readinessProbe.initialDelaySeconds | | 10 |
| api.readinessProbe.periodSeconds | | 10 |
| api.readinessProbe.successThreshold | | 1 |
| api.readinessProbe.timeoutSeconds | | 2 |
| api.dbWaiter.image.repository | | willwill/wait-for-it |
| api.dbWaiter.image.tag | | latest |
| api.dbWaiter.image.imagePullPolicy | | IfNotPresent |
| api.dbWaiter.timeoutSeconds | Time before init container will retry | 30 |
| frontend.enabled | Whether the flagsmith frontend is enabled | true |
| frontend.image.repository | docker image repository for flagsmith frontend | flagsmith/flagsmith-frontend |
| frontend.image.tag | docker image tag for flagsmith frontend | appVersion |
| frontend.image.imagePullPolicy | | IfNotPresent |
| frontend.image.imagePullSecrets | | [] |
| frontend.replicacount | number of replicas for the flagsmith frontend, null to unset | 1 |
| frontend.deploymentStrategy | See "Deployment strategy" above | |
| frontend.resources | resources per pod for the flagsmith frontend | {} |
| frontend.apiProxy.enabled | proxy API requests to the API service within the cluster | true |
| frontend.extraEnv | extra environment variables to set for the flagsmith frontend | {} |
| frontend.nodeSelector | | {} |
| frontend.tolerations | | [] |
| frontend.affinity | | {} |
| api.podSecurityContext | | {} |
| api.defaultPodSecurityContext.enabled | whether to use the default security context | true |
| frontend.livenessProbe.failureThreshold | | 20 |
| frontend.livenessProbe.initialDelaySeconds | | 20 |
| frontend.livenessProbe.periodSeconds | | 10 |
| frontend.livenessProbe.successThreshold | | 1 |
| frontend.livenessProbe.timeoutSeconds | | 10 |
| frontend.readinessProbe.failureThreshold | | 20 |
| frontend.readinessProbe.initialDelaySeconds | | 20 |
| frontend.readinessProbe.periodSeconds | | 10 |
| frontend.readinessProbe.successThreshold | | 1 |
| frontend.readinessProbe.timeoutSeconds | | 10 |
| taskProcessor.image.repository | | (same as for api.image) |
| taskProcessor.image.tag | | (same as for api.image) |
| taskProcessor.image.imagePullPolicy | | (same as for api.image) |
| taskProcessor.image.imagePullSecrets | | (same as for api.image) |
| taskProcessor.enabled | Whether to run the task processor | false |
| taskProcessor.replicacount | | 1 |
| taskProcessor.sleepIntervalMs | Passed as --sleepintervalms to the task processor | |
| taskProcessor.numThreads | Passed as --numthreads to the task processor | |
| taskProcessor.gracePeriodMs | Passed as --graceperiodms to the task processor | |
| taskProcessor.queuePopSize | Passed as --queuepopsize to the task processor | |
| taskProcessor.livenessProbe.failureThreshold | | 5 |
| taskProcessor.livenessProbe.initialDelaySeconds | | 5 |
| taskProcessor.livenessProbe.periodSeconds | | 10 |
| taskProcessor.livenessProbe.successThreshold | | 1 |
| taskProcessor.livenessProbe.timeoutSeconds | | 2 |
| taskProcessor.readinessProbe.failureThreshold | | 10 |
| taskProcessor.readinessProbe.initialDelaySeconds | | 1 |
| taskProcessor.readinessProbe.periodSeconds | | 10 |
| taskProcessor.readinessProbe.successThreshold | | 1 |
| taskProcessor.readinessProbe.timeoutSeconds | | 2 |
| taskProcessor.podAnnotations | | {} |
| taskProcessor.resources | | {} |
| taskProcessor.podLabels | | {} |
| taskProcessor.nodeSelector | | {} |
| taskProcessor.tolerations | | [] |
| taskProcessor.affinity | | {} |
| taskProcessor.podSecurityContext | | {} |
| taskProcessor.defaultPodSecurityContext.enabled | whether to use the default security context | true |
| postgresql.enabled | if true, creates in-cluster PostgreSQL database | true |
| postgresql.serviceAccount.enabled | creates a serviceaccount for the postgres pod | true |
| nameOverride | | flagsmith-postgres |
| postgresqlDatabase | | flagsmith |
| postgresqlUsername | | postgres |
| postgresqlPassword | | flagsmith |
| databaseExternal.enabled | use an external database. Specify database URL, or all parts. | false |
| databaseExternal.url | See https://github.com/kennethreitz/dj-database-url#url-schema | |
| databaseExternal.type | Note: Only postgres supported by default images. | postgres |
| databaseExternal.port | | 5432 |
| databaseExternal.database | Name of the database within the server | |
| databaseExternal.username | | |
| databaseExternal.password | | |
| databaseExternal.urlFromExistingSecret.enabled | Reference an existing secret containing the database URL | |
| databaseExternal.urlFromExistingSecret.name | Name of referenced secret | |
| databaseExternal.urlFromExistingSecret.key | Key within the referenced secret to use | |
| influxdb2.enabled | | true |
| influxdb2.nameOverride | | influxdb |
| influxdb2.image.repository | docker image repository for influxdb | quay.io/influxdb/influxdb |
| influxdb2.image.tag | docker image tag for influxdb | v2.0.2 |
| influxdb2.image.imagePullPolicy | | IfNotPresent |
| influxdb2.image.imagePullSecrets | | [] |
| influxdb2.adminUser.organization | | influxdata |
| influxdb2.adminUser.bucket | | default |
| influxdb2.adminUser.user | | admin |
| influxdb2.adminUser.password | | randomly generated |
| influxdb2.adminUser.token | | randomly generated |
| influxdb2.persistence.enabled | | false |
| influxdb.resources | resources per pod for the influxdb | {} |
| influxdb.nodeSelector | | {} |
| influxdb.tolerations | | [] |
| influxdb.affinity | | {} |
| influxdbExternal.enabled | Use an InfluxDB not managed by this chart | false |
| influxdbExternal.url | | |
| influxdbExternal.bucket | | |
| influxdbExternal.organization | | |
| influxdbExternal.token | | |
| influxdbExternal.tokenFromExistingSecret.enabled | Use reference to a k8s secret not managed by this chart | false |
| influxdbExternal.tokenFromExistingSecret.name | Referenced secret name | |
| influxdbExternal.tokenFromExistingSecret.key | Key within the referenced secret to use | |
| pgbouncer.enabled | | false |
| pgbouncer.image.repository | | bitnami/pgbouncer |
| pgbouncer.image.tag | | 1.16.0 |
| pgbouncer.image.imagePullPolicy | | IfNotPresent |
| pgbouncer.image.imagePullSecrets | | [] |
| pgbouncer.replicaCount | number of replicas for pgbouncer, null to unset | 1 |
| pgbouncer.deploymentStrategy | See "Deployment strategy" above | |
| pgbouncer.podAnnotations | | {} |
| pgbouncer.resources | | {} |
| pgbouncer.podLabels | | {} |
| pgbouncer.extraEnv | | {} |
| pgbouncer.nodeSelector | | {} |
| pgbouncer.tolerations | | [] |
| pgbouncer.affinity | | {} |
| pgbouncer.podSecurityContext | | {} |
| pgbouncer.securityContext | | {} |
| pgbouncer.defaultSecurityContext.enabled | | true |
| pgbouncer.defaultSecurityContext | | {} |
| pgbouncer.livenessProbe.failureThreshold | | 5 |
| pgbouncer.livenessProbe.initialDelaySeconds | | 5 |
| pgbouncer.livenessProbe.periodSeconds | | 10 |
| pgbouncer.livenessProbe.successThreshold | | 1 |
| pgbouncer.livenessProbe.timeoutSeconds | | 2 |
| pgbouncer.readinessProbe.failureThreshold | | 10 |
| pgbouncer.readinessProbe.initialDelaySeconds | | 1 |
| pgbouncer.readinessProbe.periodSeconds | | 10 |
| pgbouncer.readinessProbe.successThreshold | | 1 |
| pgbouncer.readinessProbe.timeoutSeconds | | 2 |
| service.influxdb.externalPort | | 8080 |
| service.api.type | | ClusterIP |
| service.api.port | | 8000 |
| service.frontend.type | | ClusterIP |
| service.frontend.port | | 8080 |
| ingress.frontend.enabled | | false |
| ingress.frontend.ingressClassName | | |
| ingress.frontend.annotations | | {} |
| ingress.frontend.hosts[].host | | chart-example.local |
| ingress.frontend.hosts[].paths | | [] |
| ingress.frontend.tls | | [] |
| ingress.api.enabled | | false |
| ingress.api.ingressClassName | | |
| ingress.api.annotations | | {} |
| ingress.api.hosts[].host | | chart-example.local |
| ingress.api.hosts[].paths | | [] |
| ingress.api.tls | | [] |
| api.statsd.enabled | Enable statsd metric reporting from gunicorn. | false |
| api.statsd.host | Host URL to receive statsd metrics | null |
| api.statsd.hostFromNodeIp | Set as true to use the node IP as the statsd host instead | false |
| api.statsd.port | Host port to receive statsd metrics | 8125 |
| api.statsd.prefix | Prefix to add to metric ids | flagsmith.api |

Key upgrade notes

  • 0.20.0: upgrades the bundled in-cluster Postgres. This upgrade makes no effort to preserve data in the bundled in-cluster Postgres if it is in use. It also renames the bundled in-cluster Postgres to include dev-postgresql in its name, to signify that it exists only so the chart can be deployed self-contained, and that this Postgres instance is treated as disposable. Any Flagsmith installation whose data is not disposable should use an externally managed database.

Development and contributing

Requirements

helm version > 3.0.2

To run locally

You can test and run the application locally on OSX using minikube like this:

# Install Docker for Desktop and then:

brew install minikube
minikube start --memory 8192 --cpus 4
helm install flagsmith --debug ./flagsmith
minikube dashboard

Test chart installation

Install Chart without building a package:

helm install flagsmith --debug ./flagsmith

Run the template and check that the Kubernetes resources are created:

helm template flagsmith flagsmith --debug -f flagsmith/values.yaml

Build chart package

To build the chart package, run:

helm package ./flagsmith