Using ConfigSeeder in a multi-cluster setup

In our article Using the Kubernetes Connector we made a recommendation about how the Kubernetes Connector should be set up. The goal of this article is to demonstrate how ConfigSeeder and the Kubernetes Connector can be used in a setup with multiple Kubernetes clusters.

Christian Cavegn

6 mins read

Find out how ConfigSeeder and Kubernetes Connector can be used in a multi-cluster setup.

Scenario

Our imaginary customer works with the following three environments:

  • DEV (Used by the developers for software engineering work)
  • UAT (Test environment used for integration & acceptance tests)
  • PRD (Production environment)

Of course, there are different ways to map these environments to Kubernetes clusters. For example, you can use one dedicated cluster per environment, run all environments on one cluster, or do something in between.

Most companies working with Kubernetes don’t run only one Kubernetes Cluster. Normally, there are at least two clusters, one for running all the test environments and one for running production. If the company also does Kubernetes engineering work, there will probably be an additional dev cluster.

Our imaginary customer has decided to work with the following three clusters:

  • eng (Engineering Cluster)
  • test (Cluster running all the test environments)
  • prod (Cluster running production)

In this article, we assume that the DEV and UAT environments run on the test cluster, while PRD runs on its own prod cluster.

Recommended Setup

For scenarios without special requirements, we recommend a setup with the following guidelines:

  • Firstly, use one ConfigSeeder Management installation in production. As a result, only one ConfigSeeder is responsible for managing the configuration data for all environments.
  • Secondly, set up a set of Kubernetes Connectors responsible for managing the API Keys (also see Using the Kubernetes Connector).
  • In addition, use a second ConfigSeeder Management installation only for testing changes in the ConfigSeeder setup.

These recommendations lead to a setup as shown in the following sketch:

ConfigSeeder and Kubernetes Connector in a multi-cluster setup

Characteristics and responsibilities

  • ConfigSeeder in the prod cluster:
    The ConfigSeeder Management instance deployed in the prod cluster is used by all other ConfigSeeder components and other applications. As a result, you will have a single point of truth regarding configuration data.
  • ConfigSeeder in the test cluster:
    Because all ConfigSeeder components and other applications access the ConfigSeeder Management installed in the prod cluster, there is no ConfigSeeder Management installation in the test cluster.
  • Infrastructure Kubernetes Connectors:
    As mentioned before, all Kubernetes Connectors point to the ConfigSeeder Management instance deployed on the prod cluster. There should be one Kubernetes Connector per environment; also have a look at our blog article Using the Kubernetes Connector.
  • ConfigSeeder in the eng cluster:
    The ConfigSeeder setup deployed in the eng cluster should be similar to the one used in the prod & test clusters. As mentioned before, the deployed components are only used for testing purposes (test an upgrade in the eng cluster before you upgrade test and prod).
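
As an illustration, a Kubernetes Connector deployed in the test cluster could point at the production ConfigSeeder Management instance through its Helm values. The structure below is hypothetical (URL, key names, and Secret name are placeholders); consult the chart's values.yaml for the actual keys:

```yaml
# Hypothetical Helm values for a Kubernetes Connector in the test
# cluster -- the actual chart keys may differ.
configSeeder:
  # All connectors, in all clusters, point at the single production
  # ConfigSeeder Management instance:
  url: https://configseeder.example.com
  # One connector per environment (a second connector would use UAT):
  environment: DEV
apiKey:
  # Name of the manually created Secret holding the API Key:
  existingSecret: configseeder-apikey-dev
```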

Requirements

  • Connectivity:
    The setup shown above requires that all Kubernetes Connectors in all clusters are able to access the ConfigSeeder Management installed on the prod cluster.
  • Kubernetes Access Permission:
    As described in Using the Kubernetes Connector, the Kubernetes Connectors require permissions to manage ConfigMaps and/or Secrets in different Namespaces.
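
As a sketch, these permissions can be granted with a standard Kubernetes Role and RoleBinding per target Namespace. All names below are illustrative:

```yaml
# Grant the Kubernetes Connector the right to manage ConfigMaps and
# Secrets in one application Namespace (repeat per Namespace):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configseeder-connector
  namespace: my-app-dev
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "create", "update", "patch"]
---
# Bind the Role to the connector's ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configseeder-connector
  namespace: my-app-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configseeder-connector
subjects:
  - kind: ServiceAccount
    name: kubernetes-connector
    namespace: configseeder
```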

Tasks

Manually created API Keys

The installation steps described later on require you to manually create API Keys. The Kubernetes Connectors will use these API Keys to access the ConfigSeeder, also see Using the Kubernetes Connector with Helm 3 – Part 1. You can create the required API Keys in different ways, for instance:

  1. Create one API Key
    • All Kubernetes Connectors in all clusters use the same key (deployed to all clusters)
    • Requires access to all environments
    • Requires access to all configuration groups
    • Only one API Key has to be manually managed
  2. Create one API Key per Cluster
    • All Kubernetes Connectors deployed to one cluster use one key
    • Requires access to all environments relevant for the cluster
    • Requires access to all configuration groups
    • Only one API Key per cluster has to be manually managed
  3. Create one API Key for each Kubernetes Connector
    • A single Kubernetes Connector uses one key
    • Requires access to one environment
    • Requires access to all configuration groups
    • Multiple API Keys have to be manually managed

The keys used in #1 and #2 grant access to a lot of configuration data. In addition, in #1 the key is deployed to multiple clusters. Especially if the security level of your test clusters is lower than that of the prod cluster, there is a higher probability that the key leaks to somebody who is not authorized to access all data. Therefore, the risk of granting somebody access to configuration data they shouldn't be able to see is just too high in #1 and #2.

For this reason, we recommend that you use API Keys with as few permissions as possible. To sum up, we recommend using one API Key per Kubernetes Connector (#3).
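
For example, the manually created API Key for the DEV connector could be stored in a Secret like the following (the Secret name and data key are illustrative; the chart may expect different names):

```yaml
# One manually created Secret per Kubernetes Connector, referenced by
# the connector's Helm values:
apiVersion: v1
kind: Secret
metadata:
  name: configseeder-apikey-dev
  namespace: configseeder
type: Opaque
stringData:
  # Paste the API Key created in ConfigSeeder Management here:
  apiKey: "<API Key for the DEV environment>"
```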

Test setup

  1. Set up the test installation of ConfigSeeder
    • Install ConfigSeeder Management using our Helm Charts
    • Add your license. If you don’t have one yet, contact us for a trial license.
    • Prepare API Key & store it manually in a Secret
    • Install the Kubernetes Connector using our Helm Charts
  2. Play around with ConfigSeeder
    • Let the Kubernetes Connectors create API Keys
    • Set up your applications (and/or ConfigSeeder extensions) to retrieve configuration data from ConfigSeeder.

Production setup

  1. Set up the production ConfigSeeder Management
    • Install ConfigSeeder Management using our Helm Charts
    • Add your license
  2. Set up an Infrastructure Kubernetes Connector per environment
    • Prepare API Key & store it manually in a Secret
    • Install the Kubernetes Connector using our Helm Chart
    • Grant the Kubernetes Connector permissions to create ConfigMaps and Secrets in the required Namespaces
    • See Using the Kubernetes Connector with Helm 3 – Part 1 for more detailed installation instructions
    • Repeat for all environments
  3. Set up ConfigSeeder
    • Let the Kubernetes Connectors create API Keys
    • Set up your applications (and/or ConfigSeeder extensions) to retrieve configuration data from ConfigSeeder.
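
Once a Kubernetes Connector maintains a ConfigMap for an application, the application can consume it with standard Kubernetes means, for example via envFrom. The names below are illustrative:

```yaml
# Sketch of a Deployment consuming a ConfigMap kept in sync with
# ConfigSeeder by the Kubernetes Connector:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app-prd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          envFrom:
            # All keys of the ConfigMap become environment variables:
            - configMapRef:
                name: my-app-config
```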

Conclusion

To sum up, ConfigSeeder works well in an environment with multiple Kubernetes clusters. We recommend

  • firstly, and most importantly, a setup with one ConfigSeeder installation for production use (holding the configuration data of all environments) and another one for testing (ConfigSeeder version upgrades, adding additional components, trying out new features, …).
  • secondly, the use of Infrastructure Kubernetes Connectors to provide the API Keys for all other components and applications accessing ConfigSeeder. However, the API Keys for these Kubernetes Connectors must be provided manually (a chicken-and-egg problem).

With this setup, you get a single point of truth regarding configuration data for all applications deployed in all your Kubernetes Clusters. If you have any questions regarding the described setup, please don’t hesitate to contact us.