ConfigSeeder® Release 2.27 improves the security of default values and also makes comparing values easier. Full UTF-8 support has been added for names and descriptions.
Based on customer feedback, ConfigSeeder® now displays the differences between two values more clearly. This can be seen in the Compare view as well as in the Restore view. The user can choose between an inline and a side-by-side view.
Restore view:
Support Unicode titles and descriptions
The international ConfigSeeder community requested support for Unicode characters in order to be able to write descriptions in the local language. It is now possible to use characters from the UTF-8 character set for names and descriptions.
Secure default value
Previously, there was no way to prevent malicious editing of values that were not assigned to any environment, which meant that values effective on a productive environment could be altered by an unprivileged user. With the new release, Configuration Values that are not explicitly assigned to an Environment are implicitly assigned to the default environment. Initially, the write mode is set to derived, but it can be changed to write to protect the value from being edited without the required privilege.
ConfigSeeder® now enables the recording of change descriptions for every configuration change. This helps the team to better understand why certain configuration changes were made.
Global labels are not provided in the labels filter #1497
Force update of velocity engine due to vulnerabilities #1498
ConfigSeeder 2.25 – Embrace Labels & DB Connector
ConfigSeeder® Release 2.25.0 brings a bunch of interesting new features, most notably the switch from a single free-text context field to support for Labels.
Previously, ConfigSeeder® supported the context field as a simple way to classify configurations with a free text value. This could be used, for example, to restrict values to a specific cluster. With the introduction of labels, a new and very powerful instrument is available for assigning and classifying values.
It is also now possible to control permissions using labels. For example, certain values can only be changed by a certain group of people.
Labels can also be grouped by using two colons. This not only improves readability but also avoids having to enter values twice.
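As a small illustration, consider the following labels (the names are invented for this example):

```
cluster::alpha
cluster::beta
region::eu-west
```

Here, cluster::alpha and cluster::beta both belong to the same group cluster.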
Hide non-editable or non-readable values
An important principle in ConfigSeeder® is that it must be visible which values have changed without revealing the actual content. Previously, all values were always displayed, even if the user could not see the current value. The filters now offer the option of displaying only the readable or editable values and hiding the others.
DB Connector (preview)
A new connector is born: ConfigSeeder® DB Connector. It allows any configuration value to be synchronized to a database table or to trigger a stored procedure. This feature is available as a preview and can be licensed for free for one year.
ConfigSeeder® Release 2.24.1 is a maintenance release that updates all libraries to the latest versions and fixes security issues. Nevertheless, there are some noticeable improvements:
GitOps Support
Besides JSON output, the GitConnector now also supports YAML output files for Kubernetes resources.
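As an illustration, a minimal sketch of the kind of Kubernetes YAML file the GitConnector can write to a repository (resource name, keys, and values are invented for this example):

```yaml
# sketch of a generated ConfigMap; all names and values are illustrative
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: my-app
data:
  database.url: jdbc:postgresql://db:5432/app
  feature.enabled: "true"
```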
SQL Server 2019 Support
ConfigSeeder® installations running on SQL Server 2019 have not been able to update due to license issues with the DB migration framework. This is solved with Release 2.24.0. Please have a look at the changelog and follow the steps.
All Changes (ConfigSeeder® Management) 2.21.x until 2.24.1
ConfigSeeder® has supported templating since version 2.0, but the solution was limited in that only variables could be used within templates. With release 2.20, ConfigSeeder® supports templating based on Handlebars and thus allows not only the use of variables but also of conditions and functions. Depending on values or filter criteria, the configuration files can therefore be structured differently. For more information, take a look at chapter 4.3.3.2, Preview for File: Generated & ConfigMap: Template, of the manual, where all available functions are documented.
Support of templating using Handlebars and ConfigSeeder® Kubernetes Connector or ConfigSeeder® OS Connector.
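As an illustration, a hedged sketch of what a Handlebars-based template for a generated configuration file can look like; the variable names (databaseUrl, cacheEnabled) are invented and not taken from the manual:

```handlebars
{{! sketch only: variable names are illustrative }}
database:
  url: {{databaseUrl}}
{{#if cacheEnabled}}
cache:
  ttlSeconds: 300
{{/if}}
```

Depending on the value of cacheEnabled, the generated file contains the cache section or not.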
GitOps Support
Some companies rely heavily on GitOps. This has the advantage that all configurations are stored audit-proof and versioned, but brings the disadvantages that credentials must be stored encrypted, that only a restricted group of people can work with Git, and that the reusability of configuration values is made more difficult. The ConfigSeeder® GitConnector, introduced with Release 2.19, brings together the advantages of both worlds: it still allows central management of configuration values, but enables them to be automatically synchronized into the correct Git repository. The ConfigSeeder® GitConnector also supports Kubernetes ConfigMaps, Secrets, and Sealed Secrets.
Easily manage Configurations, ConfigMaps, and Secrets in Git.
Support for ImagePullSecrets
In Kubernetes, ImagePullSecrets are used to download Docker images from secured repositories. These can now also be created and managed with ConfigSeeder®.
Easily manage Kubernetes ImagePullSecret.
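For context, an ImagePullSecret in Kubernetes is a Secret of type kubernetes.io/dockerconfigjson. A minimal sketch of such a Secret (name, namespace, and content are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials   # illustrative name
  namespace: my-app            # illustrative namespace
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker registry credentials (elided)
  .dockerconfigjson: <base64-encoded credentials>
```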
Keystore Assemblies with multiple private keys
Previously, only one private key could be stored in a Keystore. This restriction has been lifted and now multiple private keys can be stored in a Keystore.
Add multiple private keys to a keystore.
All Changes (ConfigSeeder® Management) since version 2.18
With this release, it is even easier to verify how the keystore (PKCS12/JKS) is assembled. Regardless of whether the various certificates are outsourced to a separate configuration group or contained within the assembly, the preview shows the certificates and the private key.
A tree-based preview shows all certificates and private keys contained in the keystore
Certificate Management: Notification if a certificate is nearing the end of its lifetime
Unfortunately, this still happens far too often: certificates expire without being detected in time, making it impossible to establish a connection. ConfigSeeder® notifies you in time when stored certificates are about to expire and need to be replaced. For more details, check our Configuration Documentation.
Lookup Plugin for Ansible has reached 1.0.0
Easily retrieve configuration data used by Ansible Playbooks from ConfigSeeder. Compared with storing configuration data directly in Ansible, using the ConfigSeeder Ansible Lookup Plugin has the following advantages (a usage sketch follows the list):
While Ansible Vault is a secure way to store secrets directly in the Ansible files, it is quite cumbersome to work with: values must be manually encrypted (and decrypted whenever an encrypted value has to be verified), and the vault key must be distributed manually if multiple administrators need to work with it. Secrets retrieved from ConfigSeeder® are also stored encrypted, but can easily be managed using the Web UI.
Configuration data stored in the Ansible files can only be used by Ansible playbooks. Configuration data stored in ConfigSeeder® can also be used for other purposes (access with clients, the Kubernetes Connector, and/or the OS Connector).
To see which configuration values are used by Ansible, one normally has to look into the source control system storing the Ansible files. With ConfigSeeder®, the configuration data can easily be accessed via the Web UI.
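A hedged sketch of what such a lookup can look like in a playbook; the plugin name configseeder and the key database.url are assumptions for illustration, not the plugin's documented interface:

```yaml
# minimal sketch; plugin name and key are assumptions
- hosts: app-servers
  tasks:
    - name: Use a configuration value retrieved from ConfigSeeder
      ansible.builtin.debug:
        msg: "Database URL is {{ lookup('configseeder', 'database.url') }}"
```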
Configuration Value: Warning column missing space #1269
Configuration Value Restore: Broken restore dialog for assemblies without attributes #1275
API Key: Renew not possible when only the validity field has changed #1276
Set up the Kubernetes Connector with a self-managed API Key
In this article, we show how the ConfigSeeder Kubernetes Connector can renew its own API Key (a self-managed API Key), which reduces tedious manual work.
The Infrastructure Kubernetes Connectors are responsible for providing the API Keys for all other Components (including other Kubernetes Connectors).
The Application Kubernetes Connectors are responsible for providing the ConfigMaps & Secrets used by the Applications.
The reason for this recommendation is that with this setup, only the API Keys for the Infrastructure Kubernetes Connectors have to be created manually. The API Keys required by the Application Kubernetes Connectors can be managed by the Infrastructure Kubernetes Connectors.
This setup has some advantages and some drawbacks. One major drawback is that the Infrastructure Kubernetes Connectors require an API Key with far-reaching permissions (normally all configuration groups and one environment) as well as permission to manage Secrets in potentially a lot of namespaces of other applications.
This article describes how to use the Kubernetes Connector with a self-managed API Key – meaning a setup in which the Kubernetes Connectors are able to replace and renew the API Keys they need to access ConfigSeeder®. When all Application Kubernetes Connectors are able to manage their own API Keys, the need for Infrastructure Kubernetes Connectors diminishes.
Problem statement
This article addresses the following situation:
The Kubernetes Connectors require an API Key to access ConfigSeeder®
All API Keys have a finite lifetime and therefore must be replaced regularly
Creating and renewing API Keys and storing them in Secrets is a troublesome task and should be automated
The solution proposed in the Blog Article Series Using the Kubernetes Connector is quite complex and not always required.
Kubernetes Connectors with self-managed API Keys
The following conditions must be met for the Kubernetes Connector to manage an API key stored in a secret:
The Kubernetes Connector must be allowed to execute CRUD operations on the Secret holding the API Key
The Secret must be annotated with the correct Annotations so the Kubernetes Connector knows it is allowed to manage the Secret
An assembly of type Secret: API Key that matches the API Key must exist
The Kubernetes Connector must be configured to process this assembly
If these requirements are fulfilled, the Kubernetes Connector will be able to manage its own API Key.
Solution Overview
In the first step, the User creates the API Key and provisions it on the Kubernetes Cluster.
The User then creates an Assembly that describes the API Key created in the first step.
Once the Kubernetes Connector is installed, it will replace and manage the Secret holding the API Key based on the Assembly created in the second step.
In the following section, the setup is explained in detail.
Setup
Install a Kubernetes Connector that self-manages its API Key by following these instructions:
Create the Configuration Group self-managed-apikey and the Environment TEST
Create an API Key to be used by the Kubernetes Connector
Choose Type Kubernetes Connector
Grant Access to self-managed-apikey and TEST
Choose a short Lifetime of 2 days; this API Key will be replaced by the Kubernetes Connector
Choose the name Self Managed API Key-API Key (The Kubernetes connector uses the pattern <Application>-<Name> for naming API Keys)
Save the API Key in a file called apiKey-test.txt
Open the Configuration Group self-managed-apikey and create an Assembly of type Secret: API Key pointing to the Secret; choose the same configuration groups, secret name & namespace
Create the Secret containing the API Key and add the required annotations:
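A minimal sketch of such a Secret, with name and namespace taken from the log output below; the annotation key and the data key are illustrative placeholders, not the documented names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kubernetes-connector-apikey
  namespace: self-managed-apikey-test
  annotations:
    # placeholder; check the ConfigSeeder documentation for the actual
    # annotation keys the Kubernetes Connector expects
    configseeder.com/managed: "true"
data:
  # base64-encoded content of apiKey-test.txt; the data key is illustrative
  apiKey: <base64-encoded API Key>
```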
Wait until the CronJob has been scheduled and verify the result.
First of all, there should be a new API Key visible in ConfigSeeder®. The new API Key was created by the Kubernetes Connector and is therefore marked as generated.
Secondly, the logs should contain the following information:
time="2020-11-04T06:35:16Z" level=info msg="Received ApiKey for secret" assemblyApplication="API Key" assemblyId=f6f5ff57-250d-4837-8036-8c68b9e2052a assemblyName="Self Managed API Key" assemblyType=K8S_SECRET_API_KEY objectName=self-managed-apikey-test/kubernetes-connector-apikey objectType=Secret
time="2020-11-04T06:35:16Z" level=info msg="API Key for secret must be recreated (apiKeyTypeChanged: false, applicationChanged: false, configGroupsChanged: false, environmentChanged: false, lifetimeEnding: true)" assemblyApplication="API Key" assemblyId=f6f5ff57-250d-4837-8036-8c68b9e2052a assemblyName="Self Managed API Key" assemblyType=K8S_SECRET_API_KEY objectName=self-managed-apikey-test/kubernetes-connector-apikey objectType=Secret
time="2020-11-04T06:35:16Z" level=info msg="Updated API Key Secret for Application" assemblyApplication="API Key" assemblyId=f6f5ff57-250d-4837-8036-8c68b9e2052a assemblyName="Self Managed API Key" assemblyType=K8S_SECRET_API_KEY objectName=self-managed-apikey-test/kubernetes-connector-apikey objectType=Secret
The important points are that the API Key was recreated because its lifetime was ending and that the Secret was updated.
Characteristics
Advantages:
The API Key only has to be provided once; afterwards it is managed automatically
No Infrastructure Kubernetes Connectors are required
This also works for the API Keys of the Infrastructure Kubernetes Connectors
Limitations:
It’s easy to reach a situation in which an API Key must again be provided manually, for example by:
Deleting the Namespace or the Secret holding the API Key
Disabling or deleting the Assembly describing the API Key, which causes the Kubernetes Connector to delete its own API Key
If access to additional configuration groups is required, the API Key must be renewed manually (it is not possible to create an API Key with more permissions than the current API Key)
Please be aware that it is currently not possible to define an Assembly of type Secret: API Key that grants access to all configuration groups. As a result, a Kubernetes Connector that manages its own API Key cannot have access to all configuration groups. We plan to lift this restriction in the future.
Conclusion
The presented use of the Kubernetes Connector reduces the number of manual tasks required for managing a ConfigSeeder setup. If the mentioned limitations can be dealt with, you should definitely consider letting the Kubernetes Connectors renew their own API Keys.
ConfigSeeder 2.16 improves your Configuration Management
We are pleased to announce version 2.16. With this and the last release, we have mainly improved Kubernetes ConfigMap and Secret previews, but also reduced the number of bugs and improved the UI handling.
Preview for Kubernetes Key/Value ConfigMaps and Secrets
Kubernetes ConfigMap Preview
Until now, there was no preview for the Kubernetes Key/Value ConfigMaps and Secrets, and therefore it was not immediately clear which values would be available to the application as a result. As usual, the preview supports the different filter criteria. Values that are marked as secured in the configuration group are also shown secured in the preview.
Statistics for Clients
Client Statistics
Especially when many applications obtain values from ConfigSeeder®, it is not immediately obvious which versions of the clients are currently in use and whether there are still older versions that need to be updated. This is where the new Client Statistics come in, showing the different instances per client type and version.
Statistics for API Keys
API Key Statistics
When using many API Keys, it is easy to lose track of which API Keys are still in use. The API Key page now optionally shows the usage of each API Key within the last month.
Highlight replaced values
Highlight of replaced values
The templating preview has gained a function to highlight replaced values. This improves clarity even in more complex documents. The function is also available for Kubernetes ConfigMap templates.
All Changes (ConfigSeeder® Management)
Added
Release 2.16.0
Configuration Groups: Add copy-icon to group-keys #630
Multiline Dialog: Show number of found entries #853
In our article Using the Kubernetes Connector, we made a recommendation about how the Kubernetes Connector should be set up. The goal of this article is to demonstrate how ConfigSeeder and the Kubernetes Connector can be used in a setup with multiple Kubernetes Clusters.
Find out how ConfigSeeder and Kubernetes Connector can be used in a multi-cluster setup.
Scenario
Our imaginary customer works with the following three environments:
DEV (Used by the developers for software engineering work)
UAT (Test environment used for integration & acceptance tests)
PRD (Production environment)
Of course, there are different ways to map these environments to Kubernetes clusters. For example, you can use one dedicated cluster per environment, run all environments on one cluster, or do something in between.
Most companies working with Kubernetes don’t run only one Kubernetes Cluster. Normally, there are at least two clusters, one for running all the test environments and one for running production. If the company also does Kubernetes engineering work, there will probably be an additional dev cluster.
Our imaginary customer has decided to work with the following three clusters:
eng (Engineering Cluster)
test (Cluster running all the test environments)
prod (Cluster running production)
In this article, we assume that the environments DEV and UAT will be run on the test cluster, PRD will have its own prod cluster.
Recommended Setup
For scenarios without special requirements, we recommend a setup with the following guidelines:
Firstly, use one ConfigSeeder Management installation in production. As a result, only one ConfigSeeder is responsible for managing the configuration data for all environments.
Secondly, set up a set of Kubernetes Connectors responsible for managing the API Keys (also see Using the Kubernetes Connector).
In addition, use a second ConfigSeeder Management installation only for testing changes in the ConfigSeeder set up.
These recommendations lead to a setup as shown in the following sketch:
Characteristics and responsibilities
ConfigSeeder in the prod cluster: The ConfigSeeder Management instance deployed in the prod cluster is used by all other ConfigSeeder components and other applications. As a result, you will have a single point of truth regarding configuration data.
ConfigSeeder in the test cluster: Because all ConfigSeeder components and other applications access the ConfigSeeder Management installed in the prod cluster, there is no ConfigSeeder Management installation in the test cluster.
Infrastructure Kubernetes Connectors: As mentioned before, all Kubernetes Connectors point to the ConfigSeeder Management instance deployed on the prod cluster. There should be one Kubernetes Connector per environment; also have a look at our blog article Using the Kubernetes Connector.
ConfigSeeder in the eng cluster: The ConfigSeeder setup deployed in the eng cluster should be similar to the one used in the prod & test clusters. As mentioned before, the components deployed there are only used for testing purposes (test an upgrade in the eng cluster before you upgrade prod and test).
Requirements
Connectivity: The setup shown above requires that all Kubernetes Connectors from all clusters are able to access the ConfigSeeder Management installed on the prod cluster.
Kubernetes Access Permission: As also described in Using the Kubernetes Connector, the Kubernetes Connectors require permissions to manage ConfigMaps and/or Secrets in different Namespaces.
Tasks
Manually created API Keys
The installation steps described later on require you to manually create API Keys. The Kubernetes Connectors will use these API Keys to access ConfigSeeder; also see Using the Kubernetes Connector with Helm 3 – Part 1. You can create the required API Keys in different ways, for instance:
Create one API Key
All Kubernetes Connectors in all clusters use the same key (deployed to all clusters)
Requires access to all environments
Requires access to all configuration groups
Only one API Key has to be manually managed
Create one API Key per Cluster
All Kubernetes Connectors deployed to one cluster use one key
Requires access to all environments relevant for the cluster
Requires access to all configuration groups
Only one API Key per cluster has to be manually managed
Create one API Key for each Kubernetes Connector
A single Kubernetes Connector uses one key
Requires access to one environment
Requires access to all configuration groups
Multiple API Keys have to be manually managed
The keys used in #1 and #2 grant access to a lot of configuration data. In addition, in #1 the key is deployed to multiple clusters. Especially if the security level of your test clusters is lower than that of the prod cluster, there is a higher probability that the key leaks to somebody who is not authorized to access all data. The risk of granting somebody access to configuration data they shouldn’t be able to see is therefore just too high in #1 and #2.
For this reason, we recommend that you use API Keys with as few permissions as possible. To sum up, we recommend using one API Key per Kubernetes Connector (#3).
Test setup
Setup the test-installation of ConfigSeeder
Install ConfigSeeder Management using our Helm Charts
Add your license. If you don’t have one yet, contact us for a trial license.
Prepare API Key & store it manually in a Secret
Install the Kubernetes Connector using our Helm Charts
Play around with ConfigSeeder
Let the Kubernetes Connectors create API Keys
Setup your applications (and/or ConfigSeeder extensions) to retrieve configuration data from ConfigSeeder.
Production setup
Setup the productive ConfigSeeder Management
Install ConfigSeeder Management using our Helm Charts
Add your license
Setup an Infrastructure Kubernetes Connector per Environment
Prepare API Key & store it manually in a Secret
Install the Kubernetes Connector using our Helm Chart
Grant the Kubernetes Connector permissions to create ConfigMaps and Secrets in the required Namespaces (see the RBAC sketch after this list)
Setup your applications (and/or ConfigSeeder extensions) to retrieve configuration data from ConfigSeeder.
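As a sketch of the permissions mentioned in the Kubernetes Connector step above, a namespaced Role could look like the following (names and namespace are illustrative; it still has to be bound to the connector's ServiceAccount with a RoleBinding):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configseeder-connector   # illustrative name
  namespace: my-app              # illustrative target namespace
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```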
Conclusion
To sum up, ConfigSeeder works well in an environment with multiple Kubernetes clusters. We recommend
firstly, and most importantly, a setup with one ConfigSeeder installation used for production (holding the configuration data of all environments) and another one for testing (ConfigSeeder version upgrades, adding additional components, trying out new features, …).
secondly, the use of Infrastructure Kubernetes Connectors to provide the API Keys for all other components and applications accessing ConfigSeeder. However, the API Keys for these Kubernetes Connectors must be provided manually (chicken-egg problem).
With this setup, you get a single point of truth regarding configuration data for all applications deployed in all your Kubernetes Clusters. If you have any questions regarding the described setup, please don’t hesitate to contact us.