Mapping Config to MAS Deployments
MAS GitOps uses a combination of ArgoCD Application Sets and the App of Apps pattern to generate a tree of ArgoCD Applications that install and manage MAS instances in Target Clusters, based on the configuration files in the Config Repository.
The tree of Applications and Application Sets looks like this:
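A simplified sketch of the hierarchy (reconstructed from the structure described in this section; the account, cluster, and instance names follow the examples used below):

Account Root Application (root.dev)
└── Cluster Root Application Set
    ├── Cluster Root Application (cluster.cluster1)
    │   ├── Operator Catalog Application (operator-catalog.cluster1)
    │   ├── ... other cluster-level Applications
    │   └── Instance Root Application Set
    │       └── Instance Root Application (instance.cluster1.instance1)
    │           ├── MAS Suite Application
    │           └── ... other instance-level Applications
    └── Cluster Root Application (cluster.cluster2)
        └── ...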
The following describes how this tree is generated.
The Account Root Application
The tree begins with the Account Root Application, which is created directly on the cluster running ArgoCD. It serves as the "entrypoint" to the MAS GitOps Helm Charts and is where several key pieces of global configuration are provided.
The manifest for the Account Root Application in our example is shown in the snippet below. The account ID, source repo, and config (aka "generator") repo are configured here.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root.dev
  namespace: openshift-gitops
spec:
  destination:
    namespace: openshift-gitops
    server: 'https://kubernetes.default.svc'
  project: "mas"
  source:
    path: root-applications/ibm-mas-account-root
    repoURL: https://github.com/ibm-mas/gitops
    targetRevision: master
    helm:
      values: |
        account:
          id: dev
        source:
          repo_url: "https://github.com/ibm-mas/gitops"
          revision: "mas"
        generator:
          repo_url: "https://github.com/me/my-config-repo"
          revision: "main"
        argo:
          namespace: "openshift-gitops"
The Account Root Application establishes the Cluster Root Application Set.
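To make that relationship concrete, the following is a minimal sketch of the ApplicationSet resource the ibm-mas-account-root chart renders (the metadata name here is hypothetical; the actual generators and template sections are excerpted in detail below):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  # hypothetical name; the chart determines the actual naming convention
  name: cluster-appset.dev
  namespace: openshift-gitops
spec:
  generators:
    # merged Git File Generators, one per cluster-level config file
    # (see the generator snippet below)
  template:
    # renders one Cluster Root Application per cluster
    # (see the template snippet below)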
The Cluster Root Application Set
The Cluster Root Application Set generates a set of Cluster Root Applications based on the configuration in the Config Repository.
The Cluster Root Application Set employs an ArgoCD Merge Generator with a list of ArgoCD Git File Generators. The Git File Generators monitor for named YAML configuration files at the cluster level in the Config Repository and the Merge Generator combines each of these files into a single YAML object per MAS cluster.
A simplified and abridged snippet showing the Merge and Git File generators from the Cluster Root Application Set template is shown below:
spec:
  ...
  generators:
    - merge:
        mergeKeys:
          - 'merge-key'
        generators:
          - git:
              files:
                - path: "{{ .Values.account.id }}/*/ibm-mas-cluster-base.yaml"
          - git:
              files:
                - path: "{{ .Values.account.id }}/*/ibm-operator-catalog.yaml"
          ...
To illustrate, the following shows an example Config Repository that defines a dev account containing configuration for two Target Clusters (cluster1 and cluster2). These are the files that the Git File Generators above are looking for.
├── dev
│   ├── cluster1
│   │   ├── ibm-mas-cluster-base.yaml
│   │   └── ibm-operator-catalog.yaml
│   └── cluster2
│       ├── ibm-mas-cluster-base.yaml
│       └── ibm-operator-catalog.yaml
Now let's take a look at the contents of these files:
├── dev
│   ├── cluster1
│   │   ├───────────────────────────────────────
│   │   ├── ibm-mas-cluster-base.yaml
│   │   ├───────────────────────────────────────
│   │   │     merge-key: "dev/cluster1"
│   │   │     account:
│   │   │       id: dev
│   │   │     cluster:
│   │   │       id: cluster1
│   │   │       url: https://api.cluster1.cakv.p3.openshiftapps.com:443
│   │   │
│   │   ├───────────────────────────────────────
│   │   ├── ibm-operator-catalog.yaml
│   │   ├───────────────────────────────────────
│   │   │     merge-key: "dev/cluster1"
│   │   │     ibm_operator_catalog:
│   │   │       mas_catalog_version: v8-240430-amd64
│   │   │
│   └── cluster2
│       ├───────────────────────────────────────
│       ├── ibm-mas-cluster-base.yaml
│       ├───────────────────────────────────────
│       │     merge-key: "dev/cluster2"
│       │     account:
│       │       id: dev
│       │     cluster:
│       │       id: cluster2
│       │       url: https://api.cluster2.jsig.p3.openshiftapps.com:443
│       │
│       ├───────────────────────────────────────
│       ├── ibm-operator-catalog.yaml
│       ├───────────────────────────────────────
│       │     merge-key: "dev/cluster2"
│       │     ibm_operator_catalog:
│       │       mas_catalog_version: v8-240405-amd64
All of the files contain a merge-key which includes the account ID and the cluster ID (e.g. dev/cluster1). This is used by the Merge generator to group together configuration into per-cluster YAML objects.
The ibm-mas-cluster-base.yaml file contains global configuration for the cluster, including the account.id, the cluster.id, and the cluster.url, which determines the Target Cluster that ArgoCD will deploy resources to.
The other YAML configuration files (such as ibm-operator-catalog.yaml shown above) each represent one type of cluster-level resource that we wish to install on the Target Cluster.
Given the config above, the Cluster Root Application Set generates two YAML objects:
merge-key: "dev/cluster1"
account:
id: dev
cluster:
id: cluster1
url: https://api.cluster1.cakv.p3.openshiftapps.com:443
ibm_operator_catalog:
mas_catalog_version: v8-240430-amd64
merge-key: "dev/cluster2"
account:
id: dev
cluster:
id: cluster2
url: https://api.cluster2.jsig.p3.openshiftapps.com:443
ibm_operator_catalog:
mas_catalog_version: v8-240405-amd64
The generated YAML objects are used to render the template defined in the Cluster Root Application Set to generate Cluster Root Applications in the Management Cluster.
- Go Template expressions are used to inject cluster-specific configuration from the cluster's YAML object into the template (e.g. {{.cluster.id}}).
- Global configuration that applies to all clusters is passed down from the Helm values used to render the Cluster Root Application Set template (e.g. {{ .Values.source.repo_url }}).
A simplified and abridged snippet of the Cluster Root Application Set template is shown below, followed by a breakdown of the purpose of each section:
template:
  metadata:
    name: "cluster.{{ `{{.cluster.id}}` }}"
    ...
  spec:
    source:
      path: root-applications/ibm-mas-cluster-root
      helm:
        values: "{{ `{{ toYaml . }}` }}"
        parameters:
          - name: "source.repo_url"
            value: "{{ .Values.source.repo_url }}"
          - name: "argo.namespace"
            value: "{{ .Values.argo.namespace }}"
    destination:
      server: 'https://kubernetes.default.svc'
      namespace: {{ .Values.argo.namespace }}
What are the backticks for?
Since the Cluster Root Application Set is itself a Helm template (rendered by the Account Root Application), we need to tell Helm to not attempt to parse the Go Template expressions, treating them as literals instead. This is achieved by wrapping the Go Template expressions in backticks. The expressions in the snippet above will be rendered by Helm as "cluster.{{.cluster.id}}" and "{{ toYaml . }}".
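To see the effect, consider how one expression passes through the two rendering stages (a sketch using the example values from this section):

# In the chart source (the Helm template for the Application Set):
name: "cluster.{{ `{{.cluster.id}}` }}"

# After Helm renders the Account Root Application's chart, the Go Template
# expression survives as a literal for the ApplicationSet controller:
name: "cluster.{{.cluster.id}}"

# After the ApplicationSet controller substitutes the generated parameters
# (e.g. cluster.id: cluster1 from the merged YAML object):
name: "cluster.cluster1"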
The Cluster Root Applications are named according to their ID:
template:
  metadata:
    name: "cluster.{{ `{{.cluster.id}}` }}"
Cluster Root Applications render the Cluster Root Chart:
source:
  path: root-applications/ibm-mas-cluster-root
The entire cluster's YAML object is passed in as Helm values to the Cluster Root Chart:
helm:
  values: "{{ `{{ toYaml . }}` }}"
Additional global configuration parameters (such as details of the Source Repository and the namespace where ArgoCD is running) set on the Account Root Application are passed down as additional Helm parameters:
parameters:
  - name: "source.repo_url"
    value: "{{ .Values.source.repo_url }}"
  - name: "argo.namespace"
    value: "{{ .Values.argo.namespace }}"
Cluster Root Applications are created in the ArgoCD namespace on the Management Cluster:
destination:
  server: 'https://kubernetes.default.svc'
  namespace: {{ .Values.argo.namespace }}
Given the config above, two Cluster Root Applications are generated:
kind: Application
metadata:
  name: cluster.cluster1
spec:
  source:
    path: root-applications/ibm-mas-cluster-root
    helm:
      values: |-
        merge-key: dev/cluster1
        account:
          id: dev
        cluster:
          id: cluster1
          url: https://api.cluster1.cakv.p3.openshiftapps.com:443
        ibm_operator_catalog:
          mas_catalog_version: v8-240430-amd64
      parameters:
        - name: source.repo_url
          value: "https://github.com/..."
        - name: argo.namespace
          value: "openshift-gitops"
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: openshift-gitops
kind: Application
metadata:
  name: cluster.cluster2
spec:
  source:
    path: root-applications/ibm-mas-cluster-root
    helm:
      values: |-
        merge-key: dev/cluster2
        account:
          id: dev
        cluster:
          id: cluster2
          url: https://api.cluster2.jsig.p3.openshiftapps.com:443
        ibm_operator_catalog:
          mas_catalog_version: v8-240405-amd64
      parameters:
        - name: source.repo_url
          value: "https://github.com/..."
        - name: argo.namespace
          value: "openshift-gitops"
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: openshift-gitops
The Cluster Root Application
Cluster Root Applications render the Cluster Root Chart into the ArgoCD namespace of the Management Cluster.
The Cluster Root Chart contains templates to conditionally render ArgoCD Applications that deploy cluster-wide resources to Target Clusters once the configuration for those resources is present in the Config Repository.
Application-specific configuration is held under a unique top-level field. For example, the ibm_operator_catalog field in our example above holds all configuration for the 000-ibm-operator-catalog chart. The 000-ibm-operator-catalog-app template that renders this chart is guarded by:

{{- if not (empty .Values.ibm_operator_catalog) }}
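A sketch of how such a guard wraps an entire Application template (the body shown here is abridged for illustration; the full template is excerpted below):

{{- if not (empty .Values.ibm_operator_catalog) }}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: operator-catalog.{{ .Values.cluster.id }}
# ... the rest of the Application spec ...
{{- end }}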
Continuing with our example, because ibm_operator_catalog is present in the Helm values for both Cluster Root Applications, both will render the 000-ibm-operator-catalog-app template into the respective Target Cluster.
A simplified and abridged snippet of the 000-ibm-operator-catalog-app template is shown below, followed by a breakdown of the purpose of each section:
kind: Application
metadata:
  name: operator-catalog.{{ .Values.cluster.id }}
spec:
  source:
    path: cluster-applications/000-ibm-operator-catalog
    plugin:
      name: argocd-vault-plugin-helm
      env:
        - name: HELM_VALUES
          value: |
            mas_catalog_version: "{{ .Values.ibm_operator_catalog.mas_catalog_version }}"
  destination:
    server: {{ .Values.cluster.url }}
The template generates an Operator Catalog Application named according to its type (operator-catalog) and the cluster ID:

kind: Application
metadata:
  name: operator-catalog.{{ .Values.cluster.id }}
The Operator Catalog Application renders the 000-ibm-operator-catalog chart:
spec:
  source:
    path: cluster-applications/000-ibm-operator-catalog
Values are mapped from those in the Cluster Root Application manifest into the form expected by the 000-ibm-operator-catalog chart:

plugin:
  name: argocd-vault-plugin-helm
  env:
    - name: HELM_VALUES
      value: |
        mas_catalog_version: "{{ .Values.ibm_operator_catalog.mas_catalog_version }}"
Info
Some of these values (not shown here) will be inline-path placeholders referencing secrets in the Secrets Vault, so we pass the values in via the AVP plugin source (rather than the helm source).
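For illustration, a sketch of what such a placeholder looks like in the HELM_VALUES passed to the plugin (the ibm_entitlement_key name and the vault path below are hypothetical examples, not taken from the charts):

env:
  - name: HELM_VALUES
    value: |
      # argocd-vault-plugin resolves inline-path placeholders at render time;
      # both the key name and the vault path here are hypothetical
      ibm_entitlement_key: "<path:secret/data/dev/cluster1#ibm_entitlement_key>"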
Finally, the resources in the 000-ibm-operator-catalog chart should be created on the Target Cluster in order to install the IBM operator catalog there:
destination:
  server: {{ .Values.cluster.url }}
For our example configuration, two Operator Catalog Applications will be generated:
kind: Application
metadata:
  name: operator-catalog.cluster1
spec:
  destination:
    server: https://api.cluster1.cakv.p3.openshiftapps.com:443
  source:
    path: cluster-applications/000-ibm-operator-catalog
    plugin:
      name: argocd-vault-plugin-helm
      env:
        - name: HELM_VALUES
          value: |
            mas_catalog_version: "v8-240430-amd64"

kind: Application
metadata:
  name: operator-catalog.cluster2
spec:
  destination:
    server: https://api.cluster2.jsig.p3.openshiftapps.com:443
  source:
    path: cluster-applications/000-ibm-operator-catalog
    plugin:
      name: argocd-vault-plugin-helm
      env:
        - name: HELM_VALUES
          value: |
            mas_catalog_version: "v8-240405-amd64"
The other Application templates in the Cluster Root Chart (e.g. 010-ibm-redhat-cert-manager-app.yaml, 020-ibm-dro-app.yaml and so on) all follow this pattern and work in a similar way.
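For example, a DRO Application template following this pattern might look like the sketch below (the ibm_dro values key, the Application name, and the chart path are assumptions for illustration, not taken from the chart source):

{{- if not (empty .Values.ibm_dro) }}  # hypothetical values key
kind: Application
metadata:
  name: dro.{{ .Values.cluster.id }}  # hypothetical naming convention
spec:
  source:
    path: cluster-applications/020-ibm-dro  # assumed path
  destination:
    server: {{ .Values.cluster.url }}
{{- end }}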
The Cluster Root Chart also includes the Instance Root Application Set template which generates a new Instance Root Application Set for each cluster.
The Instance Root Application Set
The Instance Root Application Set generates a set of Instance Root Applications based on the configuration in the Config Repository. It follows the same pattern as the Cluster Root Application Set as described above.
The key differences are:
- The merge-keys in the instance-level configuration YAML files also contain a MAS instance ID, e.g. dev/cluster1/instance1.
- The generated Instance Root Applications source the ibm-mas-instance-root Chart.
- The Git File Generators look for a different set of named YAML files at the instance level in the Config Repository.
A simplified and abridged snippet showing the Merge and Git File generators from the Instance Root Application Set template is shown below:
spec:
  ...
  generators:
    - merge:
        mergeKeys:
          - 'merge-key'
        generators:
          - git:
              files:
                - path: "{{ .Values.account.id }}/{{ .Values.cluster.id }}/*/ibm-mas-instance-base.yaml"
          - git:
              files:
                - path: "{{ .Values.account.id }}/{{ .Values.cluster.id }}/*/ibm-mas-suite.yaml"
Continuing with our example, let's add some additional instance-level config files to the Config Repository (only showing cluster1 this time for brevity). These are the files that the Git File Generators above are looking for.
├── dev
│   ├── cluster1
│   │   ├── ibm-mas-cluster-base.yaml
│   │   ├── ibm-operator-catalog.yaml
│   │   └── instance1
│   │       ├── ibm-mas-instance-base.yaml
│   │       └── ibm-mas-suite.yaml
Now let's take a look at the contents of the new instance-level files:
├── dev
│   ├── cluster1
│   │   ├── ibm-mas-cluster-base.yaml
│   │   ├── ibm-operator-catalog.yaml
│   │   └── instance1
│   │       ├───────────────────────────────────────
│   │       ├── ibm-mas-instance-base.yaml
│   │       ├───────────────────────────────────────
│   │       │     merge-key: "dev/cluster1/instance1"
│   │       │     account:
│   │       │       id: dev
│   │       │     cluster:
│   │       │       id: cluster1
│   │       │       url: https://api.cluster1.cakv.p3.openshiftapps.com:443
│   │       │     instance:
│   │       │       id: instance1
│   │       │
│   │       ├───────────────────────────────────────
│   │       ├── ibm-mas-suite.yaml
│   │       ├───────────────────────────────────────
│   │       │     merge-key: "dev/cluster1/instance1"
│   │       │     ibm_mas_suite:
│   │       │       mas_channel: "8.11.x"
...
As with the cluster-level config, all files contain the merge-key, but this time it also includes the MAS instance ID. This is used by the Merge generator to group together configuration into per-instance YAML objects for each Target Cluster.
The ibm-mas-instance-base.yaml file contains global configuration for the instance on the Target Cluster, including the account.id, the cluster.id, the cluster.url, and the instance.id.
The other YAML configuration files (such as ibm-mas-suite.yaml shown above) each represent one type of instance-level resource that we wish to install on the Target Cluster.
Given the config above, the following YAML object is generated:

merge-key: "dev/cluster1/instance1"
account:
  id: dev
cluster:
  id: cluster1
  url: https://api.cluster1.cakv.p3.openshiftapps.com:443
instance:
  id: instance1
ibm_mas_suite:
  mas_channel: "8.11.x"
Following the same pattern used in the Cluster Root Application Set described above, the YAML object is used to render the Instance Root Application Set template, generating an Instance Root Application:
kind: Application
metadata:
  name: instance.cluster1.instance1
spec:
  source:
    path: root-applications/ibm-mas-instance-root
    helm:
      values: |-
        merge-key: dev/cluster1/instance1
        account:
          id: dev
        cluster:
          id: cluster1
          url: https://api.cluster1.cakv.p3.openshiftapps.com:443
        instance:
          id: instance1
        ibm_mas_suite:
          mas_channel: "8.11.x"
      parameters:
        - name: source.repo_url
          value: "https://github.com/..."
        - name: argo.namespace
          value: "openshift-gitops"
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: openshift-gitops
The Instance Root Application
Instance Root Applications render the Instance Root Chart into the ArgoCD namespace of the Management Cluster.
The Instance Root Chart contains templates to conditionally render ArgoCD Applications that deploy MAS instances to Target Clusters once the configuration for the ArgoCD Application is present in the Config Repository.
It follows the same pattern as the Cluster Root Application described above; specific applications are enabled once their configuration is pushed to the Config Repository. For instance, the 130-ibm-mas-suite-app.yaml template generates an Application that deploys the MAS Suite CR to the target cluster once configuration under the ibm_mas_suite key is present.
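Following the same guard pattern shown for the cluster-level templates, a sketch (the Application name below is an assumption for illustration):

{{- if not (empty .Values.ibm_mas_suite) }}
kind: Application
metadata:
  # hypothetical name, following the <type>.<cluster>.<instance> convention
  name: suite.{{ .Values.cluster.id }}.{{ .Values.instance.id }}
spec:
  destination:
    server: {{ .Values.cluster.url }}
# ...
{{- end }}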
Some special templates are capable of generating multiple applications:
- 120-db2-databases-app.yaml
- 130-ibm-mas-suite-configs-app.yaml
- 200-ibm-mas-workspaces.yaml
- 510-550-ibm-mas-masapp-configs
These are used when there can be more than one instance of the type of resource that these Applications are responsible for managing.
For example, MAS instances may require more than one DB2 Database. To accommodate this, we make use of the Helm range control structure to iterate over a list in YAML configuration files in the Config Repository. For instance, the ibm-db2u-databases.yaml configuration file contains:
ibm_db2u_databases:
  - mas_application_id: iot
    db2_memory_limits: 12Gi
    ...
  - mas_application_id: manage
    db2_memory_limits: 16Gi
    db2_database_db_config:
      CHNGPGS_THRESH: '40'
    ...
  ...
The 120-db2-databases-app.yaml template iterates over this list to generate multiple DB2 Database Applications configured as needed:
{{- range $i, $value := .Values.ibm_db2u_databases }}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: "db2-db.{{ $.Values.cluster.id }}.{{ $.Values.instance.id }}.{{ $value.mas_application_id }}"
...
{{- end}}
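Given the ibm_db2u_databases list above and our example instance, this loop renders two Applications (a sketch showing only the generated names):

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: "db2-db.cluster1.instance1.iot"
# ...
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: "db2-db.cluster1.instance1.manage"
# ...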
Why not use ApplicationSets here?
We encountered some limitations when using ApplicationSets for this purpose. For instance, Applications generated by ApplicationSets do not participate in ArgoCD sync waves with other Applications, so we would have no way of ensuring that resources are configured in the correct order. By using the Helm range control structure we generate "normal" Applications that do not suffer from this limitation. This means, for instance, that we can ensure that DB2 Databases are configured before attempting to provide the corresponding JDBC configuration to MAS.
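Ordering between these generated Applications relies on ArgoCD's standard sync-wave annotation; a sketch (the wave number here is illustrative, not the chart's actual value):

metadata:
  annotations:
    # illustrative wave number: lower waves sync first, so the DB2 Database
    # Application can complete before the JDBC configuration is applied
    argocd.argoproj.io/sync-wave: "120"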