# Installation

## Usage

```
mas install [OPTIONS]
```
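As a sketch, a non-interactive install combines options from the tables below. Every value here is an illustrative placeholder (instance ID, workspace, channel, and storage classes are assumptions, not defaults); the assembled command is printed rather than executed so it can be copied and adapted:

```shell
# Illustrative sketch only -- all values below are placeholders, not defaults.
# The command is printed rather than executed; paste the output (or pipe it
# to sh) once you have substituted your own values.
CMD="mas install \
  --mas-catalog-version v9-260326-amd64 \
  --mas-instance-id inst1 \
  --mas-workspace-id masdev \
  --mas-workspace-name Development \
  --mas-channel 9.1.x \
  --ibm-entitlement-key \${IBM_ENTITLEMENT_KEY} \
  --license-file ~/entitlement.lic \
  --contact-email jane.doe@example.com \
  --contact-firstname Jane --contact-lastname Doe \
  --storage-class-rwo ibmc-block-gold \
  --storage-class-rwx ibmc-file-gold-gid \
  --accept-license --no-confirm"
echo "$CMD"
```

Note the use of `--accept-license` and `--no-confirm` (see the *More* section below) to suppress interactive prompts.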
## MAS Catalog Selection & Entitlement

Configure which IBM Maximo Operator Catalog to install and provide your IBM entitlement key for access to container images.

| Option | Type | Default | Description |
|---|---|---|---|
| -c, --mas-catalog-version | string | - | IBM Maximo Operator Catalog to install |
| --mas-catalog-digest | string | - | IBM Maximo Operator Catalog digest; only required when installing development catalog sources |
| --ibm-entitlement-key | string | - | IBM entitlement key |
## Basic Configuration

Core configuration options for your MAS instance including instance ID, workspace settings, subscription channels, and user settings.

| Option | Type | Default | Description |
|---|---|---|---|
| -i, --mas-instance-id | string | - | MAS Instance ID |
| -w, --mas-workspace-id | string | - | MAS Workspace ID |
| -W, --mas-workspace-name | string | - | MAS Workspace Name |
| --mas-channel | string | - | Subscription channel for the Core Platform |
| --aiservice-instance-id | string | - | AI Service Instance ID |
| --allow-special-chars | flag | false | Allow special characters in user usernames/IDs |
## Advanced Configuration

Advanced configuration options for MAS including DNS providers, certificates, domain settings, and IPv6 support.

| Option | Type | Default | Description |
|---|---|---|---|
| --superuser-username | string | - | Superuser username |
| --superuser-password | string | - | Superuser password |
| --additional-configs | string | - | Path to a directory containing additional configuration files to be applied |
| --pod-templates | string | - | Path to a directory containing custom podTemplates configuration files to be applied |
| --non-prod | flag | false | Install MAS in non-production mode |
| --disable-ca-trust | flag | - | Disable built-in trust of well-known CAs |
| --routing | {path, subdomain} | - | Configure MAS with path or subdomain routing |
| --configure-ingress | flag | false | Automatically configure the IngressController to allow InterNamespaceAllowed for path-based routing |
| --ingress-controller-name | string | - | Name of the IngressController to use for path-based routing (default: 'default') |
| --manual-certificates | string | - | Path to a directory containing the certificates to be applied |
| --domain | string | - | Configure MAS with a custom domain |
| --disable-walkme | flag | - | Disable the MAS guided tour |
| --disable-feature-usage | flag | - | Disable feature adoption metrics reporting |
| --disable-deployment-progression | flag | - | Disable deployment progression metrics reporting |
| --disable-usability-metrics | flag | - | Disable usability metrics reporting |
| --dns-provider | {cloudflare, cis, route53} | - | Enable automatic DNS management (see the DNS integration options) |
| --ocp-ingress | string | - | Override the Ingress domain |
| --mas-cluster-issuer | string | - | Name of the ClusterIssuer that MAS should use to issue certificates |
| --enable-ipv6 | flag | - | Configure MAS to run with IPv6. Before setting this option, make sure your cluster is configured for IPv6 |
## DNS Integration - CIS

Configuration options for the IBM Cloud Internet Services (CIS) DNS provider.

| Option | Type | Default | Description |
|---|---|---|---|
| --cis-email | string | - | Required when the DNS provider is CIS and you want to use a Let's Encrypt ClusterIssuer |
| --cis-apikey | string | - | Required when the DNS provider is CIS |
| --cis-crn | string | - | Required when the DNS provider is CIS |
| --cis-subdomain | string | - | Optionally set up the MAS instance as a subdomain under a multi-tenant CIS DNS record |
## DNS Integration - Cloudflare

Configuration options for the Cloudflare DNS provider, including API credentials, zone, and subdomain settings.

| Option | Type | Default | Description |
|---|---|---|---|
| --cloudflare-email | string | - | Required when the DNS provider is Cloudflare |
| --cloudflare-apitoken | string | - | Required when the DNS provider is Cloudflare |
| --cloudflare-zone | string | - | Required when the DNS provider is Cloudflare |
| --cloudflare-subdomain | string | - | Required when the DNS provider is Cloudflare |
## Storage

Configure storage classes for ReadWriteOnce (RWO) and ReadWriteMany (RWX) volumes, and pipeline storage settings.

| Option | Type | Default | Description |
|---|---|---|---|
| --storage-class-rwo | string | - | ReadWriteOnce (RWO) storage class (e.g. ibmc-block-gold) |
| --storage-class-rwx | string | - | ReadWriteMany (RWX) storage class (e.g. ibmc-file-gold-gid) |
| --storage-pipeline | string | - | Install pipeline storage class (e.g. ibmc-file-gold-gid) |
| --storage-accessmode | {ReadWriteMany, ReadWriteOnce} | - | Install pipeline storage class access mode |
## IBM Suite License Service

Configure IBM Suite License Service (SLS) including license file location, namespace, and subscription channel.

| Option | Type | Default | Description |
|---|---|---|---|
| --license-file | string | - | Path to the MAS license file |
| --sls-namespace | string | ibm-sls | Customize the SLS install namespace |
| --dedicated-sls | flag | false | Set the SLS namespace to `mas-<instanceid>-sls` |
| --sls-channel | string | - | Customize the SLS channel when in development mode |
## IBM Data Reporting Operator

Configure IBM Data Reporting Operator (DRO) with contact information and namespace settings for usage data collection.

| Option | Type | Default | Description |
|---|---|---|---|
| --contact-email, --uds-email | string | - | Contact e-mail address |
| --contact-firstname, --uds-firstname | string | - | Contact first name |
| --contact-lastname, --uds-lastname | string | - | Contact last name |
| --dro-namespace | string | - | Namespace for DRO |
## MongoDB Community Operator

Configure the namespace for the MongoDB Community Operator deployment.

| Option | Type | Default | Description |
|---|---|---|---|
| --mongodb-namespace | string | - | Namespace for the MongoDB Community Operator |
## OCP Configuration

OpenShift Container Platform specific configuration, including ingress certificate settings.

| Option | Type | Default | Description |
|---|---|---|---|
| --ocp-ingress-tls-secret-name | string | - | Name of the secret holding the cluster's ingress certificates |
## Grafana

Configure Grafana installation, namespace, and storage size.

| Option | Type | Default | Description |
|---|---|---|---|
| --skip-grafana-install | flag | false | Skip Grafana installation |
| --grafana-v5-namespace | string | grafana5 | Customize the Grafana namespace |
| --grafana-instance-storage-size | string | 10Gi | Customize the Grafana storage size |
## MAS Applications

Configure subscription channels for MAS applications including Assist, IoT, Monitor, Optimizer, Predict, and Visual Inspection.

| Option | Type | Default | Description |
|---|---|---|---|
| --assist-channel | string | - | Subscription channel for Maximo Assist |
| --iot-channel | string | - | Subscription channel for Maximo IoT |
| --monitor-channel | string | - | Subscription channel for Maximo Monitor |
| --manage-channel | string | - | Subscription channel for Maximo Manage |
| --predict-channel | string | - | Subscription channel for Maximo Predict |
| --visualinspection-channel | string | - | Subscription channel for Maximo Visual Inspection |
| --optimizer-channel | string | - | Subscription channel for Maximo Optimizer |
| --optimizer-plan | {full, limited} | - | Install plan for Maximo Optimizer |
| --facilities-channel | string | - | Subscription channel for Maximo Real Estate and Facilities |
| --aiservice-channel | string | - | Subscription channel for Maximo AI Service |
## Maximo Location Services for Esri (arcgis)

| Option | Type | Default | Description |
|---|---|---|---|
| --arcgis-channel | string | - | Subscription channel for IBM Maximo Location Services for Esri. Only applicable if installing Manage with Spatial or Facilities |
## Advanced Settings - Manage

| Option | Type | Default | Description |
|---|---|---|---|
| --manage-server-bundle-size | {dev, snojms, small, jms} | - | Set the Manage server bundle size configuration |
| --manage-jms | flag | - | Set JMS configuration |
| --manage-persistent-volumes | flag | - | |
| --manage-jdbc | {system, workspace-application} | - | |
| --manage-demodata | flag | - | |
| --manage-components | string | "" | Set the Manage components to be installed (e.g. 'base=latest,health=latest,civil=latest') |
| --manage-health-wsl | flag | - | Bind Watson Studio to Manage; a system-level WatsonStudioCfg is expected to be applied in the cluster |
| --manage-customization-archive-name | string | - | Manage archive name |
| --manage-customization-archive-url | string | - | Manage archive URL |
| --manage-customization-archive-username | string | - | Manage archive username (HTTP basic auth) |
| --manage-customization-archive-password | string | - | Manage archive password (HTTP basic auth) |
| --manage-db-tablespace | string | - | Database tablespace name for the Manage installation. Default is 'MAXDATA' |
| --manage-db-indexspace | string | - | Database indexspace name for the Manage installation. Default is 'MAXINDEX' |
| --manage-db-schema | string | - | Database schema name for the Manage installation. Default is 'maximo' |
| --manage-crypto-key | string | - | Customize the Manage database encryption keys |
| --manage-cryptox-key | string | - | Customize the Manage database encryption keys |
| --manage-old-crypto-key | string | - | Customize the Manage database encryption keys |
| --manage-old-cryptox-key | string | - | Customize the Manage database encryption keys |
| --manage-encryption-secret-name | string | - | Name of the Manage database encryption secret |
| --manage-base-language | string | - | Manage base language to be installed. Default is `EN` (English) |
| --manage-secondary-languages | string | - | Comma-separated list of Manage secondary languages to be installed (e.g. 'JA,DE,AR') |
| --manage-server-timezone | string | - | Manage server timezone. Default is `GMT` |
| --manage-upgrade-type | {regularUpgrade, onlineUpgrade} | regularUpgrade | Set the Manage upgrade type |
| --manage-attachments-provider | {filestorage, ibm, aws} | - | Storage provider type for Maximo Manage attachments |
| --manage-attachments-mode | {cr, db} | - | How attachment properties will be configured in Manage |
| --manage-aiservice-instance-id | string | - | AI Service Instance ID to bind with Manage |
| --manage-aiservice-tenant-id | string | - | AI Service Tenant ID to bind with Manage |
## Advanced Settings - Facilities

Advanced configuration for Maximo Real Estate and Facilities including deployment size, image pull policy, routes timeout, Liberty extensions, vault secrets, workflow agents, connection pool size, and storage settings.

| Option | Type | Default | Description |
|---|---|---|---|
| --facilities-app-om-upgrade-mode | {manual, load-only, automatic} | - | Sets the Application Object Migration mode |
| --facilities-size | {small, medium, large} | - | Size of the Facilities deployment |
| --facilities-pull-policy | {IfNotPresent, Always} | - | Image pull policy for Facilities |
| --facilities-routes-timeout | string | 600s | Timeout for Facilities routes |
| --facilities-xml-extension | string | - | Secret name containing Liberty server extensions |
| --facilities-vault-secret | string | - | Secret name containing the AES encryption password |
| --facilities-dwfagent | string | - | List of dedicated workflow agents |
| --facilities-maxconnpoolsize | int | 200 | Maximum database connection pool size |
| --facilities-log-storage-class | string | - | Storage class for Facilities logs |
| --facilities-log-storage-mode | string | - | Storage mode for Facilities logs |
| --facilities-log-storage-size | string | - | Storage size for Facilities logs |
| --facilities-userfiles-storage-class | string | - | Storage class for Facilities user files |
| --facilities-userfiles-storage-mode | string | - | Storage mode for Facilities user files |
| --facilities-userfiles-storage-size | string | - | Storage size for Facilities user files |
## Open Data Hub

| Option | Type | Default | Description |
|---|---|---|---|
| --odh-model-deployment-type | string | raw | Model deployment type for ODH |
## Red Hat OpenShift AI

| Option | Type | Default | Description |
|---|---|---|---|
| --rhoai-model-deployment-type | string | raw | Model deployment type for Red Hat OpenShift AI |
| --rhoai | flag | - | Temporary flag to install Red Hat OpenShift AI instead of Open Data Hub |
## S3 Storage

Configure S3-compatible object storage for AI Service, including Minio installation or external S3 connection details (host, port, SSL, credentials, bucket, and region).

| Option | Type | Default | Description |
|---|---|---|---|
| --install-minio | flag | - | Install Minio and configure it as the S3 provider for AI Service |
| --minio-root-user | string | - | Root user for Minio |
| --minio-root-password | string | - | Password for the Minio root user |
| --s3-host | string | - | Hostname or IP address of the S3 storage service |
| --s3-port | string | - | Port number for the S3 storage service |
| --s3-ssl | string | - | Enable or disable SSL for the S3 connection (true/false) |
| --s3-accesskey | string | - | Access key for authenticating with the S3 storage service |
| --s3-secretkey | string | - | Secret key for authenticating with the S3 storage service |
| --s3-region | string | - | Region for the S3 storage service |
| --s3-bucket-prefix | string | - | Bucket prefix configured with the S3 storage service |
| --s3-tenants-bucket | string | km-tenants | Name of the S3 bucket for tenant storage |
| --s3-templates-bucket | string | km-templates | Name of the S3 bucket for template storage |
## Watsonx

Configure IBM watsonx integration for AI Service, including API key, instance ID, project ID, and service URL.

| Option | Type | Default | Description |
|---|---|---|---|
| --watsonxai-apikey | string | - | API key for watsonx |
| --watsonxai-url | string | - | URL endpoint for watsonx |
| --watsonxai-project-id | string | - | Project ID for watsonx |
| --watsonx-action | string | - | Action to perform with watsonx (install/remove) |
| --watsonxai-ca-crt | string | - | CA certificate for watsonx AI (PEM format; optional, only if using self-signed certs) |
| --watsonxai-deployment-id | string | - | watsonx deployment ID |
| --watsonxai-space-id | string | - | watsonx space ID |
| --watsonxai-instance-id | string | - | watsonx instance ID |
| --watsonxai-username | string | - | watsonx username |
| --watsonxai-version | string | - | watsonx version |
| --watsonxai-onprem | string | - | Indicates that watsonx is deployed on premises |
## Maximo AI Service Tenant

| Option | Type | Default | Description |
|---|---|---|---|
| --tenant-entitlement-type | string | standard | Entitlement type for the AI Service tenant |
| --tenant-entitlement-start-date | string | - | Entitlement start date for the AI Service tenant |
| --tenant-entitlement-end-date | string | - | Entitlement end date for the AI Service tenant |
| --rsl-url | string | - | RSL URL |
| --rsl-org-id | string | - | Organization ID for RSL |
| --rsl-token | string | - | Token for RSL |
| --rsl-ca-crt | string | - | CA certificate for the RSL API (PEM format; optional, only if using self-signed certs) |
## Maximo AI Service

Maximo AI Service configuration, such as the certificate issuer and environment type.

| Option | Type | Default | Description |
|---|---|---|---|
| --environment-type | string | non-production | Environment type |
| --aiservice-certificate-issuer | string | - | Name of the Issuer that AI Service should use to issue certificates |
## IBM Cloud Pak for Data

Configure IBM Cloud Pak for Data applications, including Cognos Analytics, Watson Studio, Watson Machine Learning, and the Spark Analytics Engine.

| Option | Type | Default | Description |
|---|---|---|---|
| --cp4d-version | string | - | Product version of CP4D to use |
| --cp4d-install-cognos | flag | - | Add Cognos Analytics as part of Cloud Pak for Data |
| --cp4d-install-ws | flag | - | Add Watson Studio as part of Cloud Pak for Data |
| --cp4d-install-wml | flag | - | Add Watson Machine Learning as part of Cloud Pak for Data |
| --cp4d-install-ae | flag | - | Add the Spark Analytics Engine as part of Cloud Pak for Data |
## IBM Db2 Universal Operator

Configure IBM Db2 instances including namespace, channel, installation options for system/manage/facilities databases, database type, timezone, affinity, tolerations, resource limits, and storage capacity.

| Option | Type | Default | Description |
|---|---|---|---|
| --db2-namespace | string | - | Change the namespace where Db2u instances will be created |
| --db2-channel | string | - | Subscription channel for Db2u |
| --db2-system | flag | - | Install a shared Db2u instance for MAS (required by IoT & Monitor, supported by Manage) |
| --db2-manage | flag | - | Install a dedicated Db2u instance for Maximo Manage (supported by Manage) |
| --db2-facilities | flag | - | Install a dedicated Db2u instance for Maximo Real Estate and Facilities (supported by Facilities) |
| --db2-type | {db2wh, db2oltp} | - | Type of the Manage dedicated Db2u instance (default: db2wh) |
| --db2-timezone | string | - | Timezone for the Db2 instance |
| --db2-affinity-key | string | - | Set a node label to declare affinity to |
| --db2-affinity-value | string | - | Set the value of the node label to affine with |
| --db2-tolerate-key | string | - | Set a node taint to tolerate |
| --db2-tolerate-value | string | - | Set the value of the taint to tolerate |
| --db2-tolerate-effect | {NoSchedule, PreferNoSchedule, NoExecute} | - | Taint effect to tolerate |
| --db2-cpu-requests | string | - | Customize the Db2 CPU request |
| --db2-cpu-limits | string | - | Customize the Db2 CPU limit |
| --db2-memory-requests | string | - | Customize the Db2 memory request |
| --db2-memory-limits | string | - | Customize the Db2 memory limit |
| --db2-backup-storage | string | - | Db2 backup storage capacity |
| --db2-data-storage | string | - | Db2 data storage capacity |
| --db2-logs-storage | string | - | Db2 logs storage capacity |
| --db2-meta-storage | string | - | Db2 metadata storage capacity |
| --db2-temp-storage | string | - | Db2 temporary storage capacity |
| --db2u-kind | string | - | Db2 resource kind in the cluster |
## ECK Integration

Configure Elastic Cloud on Kubernetes (ECK) integration for logging and monitoring capabilities.

| Option | Type | Default | Description |
|---|---|---|---|
| --eck | flag | - | Enable the ECK integration |
| --eck-enable-logstash | flag | false | Enable Logstash as part of the ECK integration |
| --eck-remote-es-hosts | string | - | Remote Elasticsearch hosts |
| --eck-remote-es-username | string | - | Username for the remote Elasticsearch cluster |
| --eck-remote-es-password | string | - | Password for the remote Elasticsearch cluster |
## Kafka - Common

Common Kafka configuration options, including provider selection (Strimzi, Red Hat AMQ Streams, IBM Event Streams, or AWS MSK) and authentication credentials.

| Option | Type | Default | Description |
|---|---|---|---|
| --kafka-provider | {strimzi, redhat, ibm, aws} | - | Kafka provider: redhat (Red Hat AMQ Streams), strimzi, ibm (IBM Event Streams), or aws (AWS MSK) |
| --kafka-username | string | - | Kafka instance username (applicable for the redhat, strimzi, or aws providers) |
| --kafka-password | string | - | Kafka instance password (applicable for the redhat, strimzi, or aws providers) |
| --kafka-namespace | string | - | Set the Kafka namespace. Only applicable if installing `redhat` (Red Hat AMQ Streams) or `strimzi` |
## Kafka - Strimzi and AMQ Streams

Configuration options specific to Strimzi and Red Hat AMQ Streams Kafka deployments.

| Option | Type | Default | Description |
|---|---|---|---|
| --kafka-version | string | - | Set the version of the Kafka cluster that the Strimzi or AMQ Streams operator will create |
## Kafka - AWS MSK

Configuration options for Amazon Managed Streaming for Apache Kafka (MSK), including instance type, node count, volume size, CIDR subnets for availability zones, and ingress/egress settings.

| Option | Type | Default | Description |
|---|---|---|---|
| --msk-instance-type | string | - | Set the MSK instance type |
| --msk-instance-nodes | string | - | Set the total number of MSK instance nodes |
| --msk-instance-volume-size | string | - | Set the storage/volume size for the MSK instance |
| --msk-cidr-az1 | string | - | Set the CIDR subnet for availability zone 1 for the MSK instance |
| --msk-cidr-az2 | string | - | Set the CIDR subnet for availability zone 2 for the MSK instance |
| --msk-cidr-az3 | string | - | Set the CIDR subnet for availability zone 3 for the MSK instance |
| --msk-cidr-egress | string | - | Set the CIDR for egress connectivity |
| --msk-cidr-ingress | string | - | Set the CIDR for ingress connectivity |
## Kafka - Event Streams

Configuration options for IBM Event Streams, including resource group, instance name, and location.

| Option | Type | Default | Description |
|---|---|---|---|
| --eventstreams-resource-group | string | - | Set the IBM Cloud resource group to target for Event Streams instance provisioning |
| --eventstreams-instance-name | string | - | Set the IBM Event Streams instance name |
| --eventstreams-instance-location | string | - | Set the IBM Event Streams instance location |
## Cloud Object Storage

| Option | Type | Default | Description |
|---|---|---|---|
| --cos | {ibm, ocs} | - | Set the cloud object storage provider. Supported options are `ibm` and `ocs` |
| --cos-resourcegroup | string | - | When using IBM COS, set the resource group where the instance will run |
| --cos-apikey | string | - | When using IBM COS, set the privileged COS API key for IBM Cloud |
| --cos-instance-name | string | - | When using IBM COS, set the COS instance name to be used/created |
| --cos-bucket-name | string | - | When using IBM COS, set the COS bucket name to be used/created |
## Cloud Providers

Configure cloud provider settings, including AWS region, availability zones, and IBM Cloud API key.

| Option | Type | Default | Description |
|---|---|---|---|
| --ibmcloud-apikey | string | - | Set the IBM Cloud API key |
| --aws-region | string | - | Set the target AWS region for the MSK instance |
| --aws-access-key-id | string | - | Set the AWS access key ID for the target AWS account |
| --secret-access-key | string | - | Set the AWS secret access key for the target AWS account |
| --aws-vpc-id | string | - | Set the target Virtual Private Cloud ID for the MSK instance |
## Integrated Approval Workflow

Configure approval checkpoints during installation for the Core Platform and for each MAS application workspace (Assist, IoT, Manage, Monitor, Optimizer, Predict, Visual Inspection, Facilities, and AI Service). Each option accepts a value in the format `MAX_RETRIES:RETRY_DELAY:IGNORE_FAILURE`.

| Option | Type | Default | Description |
|---|---|---|---|
| --approval-core | string | "" | Require approval after the Core Platform has been configured |
| --approval-assist | string | "" | Require approval after the Maximo Assist workspace has been configured |
| --approval-iot | string | "" | Require approval after the Maximo IoT workspace has been configured |
| --approval-manage | string | "" | Require approval after the Maximo Manage workspace has been configured |
| --approval-monitor | string | "" | Require approval after the Maximo Monitor workspace has been configured |
| --approval-optimizer | string | "" | Require approval after the Maximo Optimizer workspace has been configured |
| --approval-predict | string | "" | Require approval after the Maximo Predict workspace has been configured |
| --approval-visualinspection | string | "" | Require approval after the Maximo Visual Inspection workspace has been configured |
| --approval-facilities | string | "" | Require approval after the Maximo Real Estate and Facilities workspace has been configured |
| --approval-aiservice | string | "" | Require approval after the AI Service has been configured |
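The approval options take a value of the form `MAX_RETRIES:RETRY_DELAY:IGNORE_FAILURE`. As a hedged illustration of that shape (the helper below is our own, not part of the mas CLI, and the field semantics described in the comment are our reading of the field names):

```shell
# Hypothetical helper (not part of the mas CLI): check that an approval
# value matches the MAX_RETRIES:RETRY_DELAY:IGNORE_FAILURE shape.
# Our reading of the fields: "50:300:false" would mean check for approval
# up to 50 times, 300 seconds apart, and do not ignore a failed approval.
is_valid_approval() {
  echo "$1" | grep -Eq '^[0-9]+:[0-9]+:(true|false)$'
}

is_valid_approval "50:300:false" && echo "ok"
is_valid_approval "50:300" || echo "missing IGNORE_FAILURE field"
```

A value such as `50:300:false` would then be passed directly, e.g. `--approval-core "50:300:false"`.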
## More

Additional options including advanced/simplified interactive-mode toggles, license acceptance, development mode, Artifactory credentials, pre-check skip, confirmation prompts, image pull policy, and a custom service account.

| Option | Type | Default | Description |
|---|---|---|---|
| --artifactory-username | string | - | Username for access to development builds on Artifactory |
| --artifactory-token | string | - | API token for access to development builds on Artifactory |
| --advanced | flag | false | Show advanced install options (in interactive mode) |
| --simplified | flag | false | Don't show advanced install options (in interactive mode) |
| --accept-license | flag | false | Accept all license terms without prompting |
| --dev-mode | flag | false | Configure the installation for development mode |
| --skip-pre-check | flag | false | Disable the 'pre-install-check' at the start of the install pipeline |
| --no-confirm | flag | false | Launch the installation without prompting for confirmation |
| --image-pull-policy | {IfNotPresent, Always} | - | Image pull policy for the Tekton pipeline |
| --service-account | string | - | Custom service account for the install pipeline (disables default 'pipeline' service account creation) |
## Preparation

### IBM Entitlement Key

Access the Container Software Library using your IBMid to obtain your entitlement key.
### MAS License File

Access the IBM License Key Center and, on the Get Keys menu, select IBM AppPoint Suites. Select IBM MAXIMO APPLICATION SUITE AppPOINT LIC and, on the next page, fill in the information as follows:
| Field | Content |
|---|---|
| Number of Keys | How many AppPoints to assign to the license file |
| Host ID Type | Set to Ethernet Address |
| Host ID | Enter any 12-digit hexadecimal string |
| Hostname | Set to the hostname of your OCP instance (any value is accepted) |
| Port | Set to 27000 |
The other values can be left at their defaults. Finally, click Generate and download the license file to your home directory as `entitlement.lic`.
Note

For more information about how to access the IBM License Key Center, review the getting started documentation available from the IBM support website.
### OpenShift Container Platform

You should already have a target OpenShift cluster ready to install Maximo Application Suite into. If you do not, refer to the OpenShift Container Platform installation overview. The CLI also supports OpenShift provisioning on many hyperscaler providers.

IBM Maximo Application Suite is designed to run on a continuously evolving OpenShift platform. Red Hat regularly updates its operator catalogs (including the Community Operators catalog that provides components such as Grafana), and these updates can sometimes introduce breaking changes. To ensure stable, reproducible installations, it is essential to align the versions of OpenShift, the Red Hat operator catalogs, the IBM Maximo operator catalog, and the MAS CLI.
#### Supporting Older Versions of MAS

When newer Red Hat operator catalogs are used with older MAS versions, incompatibilities can occur. A common example is issues with Grafana (sourced from the Red Hat Community Operators catalog). The Maximo team delivers compatibility fixes and workarounds for these kinds of issues in the monthly updates, but those fixes cannot be made available in older catalogs because the catalogs are immutable.
#### Understanding the Challenge: An Analogy

If you wanted to freeze the versions of the apps running on your mobile device, you would also have to turn off auto-updates for the operating system; you would not expect year-old versions of those apps to function with a modern version of the operating system. The same applies to Red Hat OpenShift and Maximo Application Suite.
Customers who wish to maintain an older MAS version for an extended period must apply the same versioning discipline to the entire underlying platform. Mixing an older Maximo operator catalog with newer Red Hat operator catalogs is not supported and will lead to unexpected behavior.
If your organization's policies require extended stability windows (e.g., 6–12 months without updates), you must lock all components at specific, immutable versions. This approach is supported but carries increased security exposure over time.
#### When to Use Static Versioning
Use this approach if:
- Your organization requires 6-12+ month stability windows between updates
- You need guaranteed reproducibility for compliance, audit, or regulatory purposes
- You have strict change control processes that prevent frequent updates
- You can accept delayed security updates and increased security risk over time
- You have resources to manage and document version pinning
Do NOT use this approach if:
- You need the latest security patches and bug fixes
- Your compliance requirements mandate current security updates
- You lack resources to maintain detailed version documentation
- You cannot accept increased security exposure
#### Creating a Fully Reproducible, Static Installation

To guarantee an identical installation months or years later, you must lock all four of the following elements:

| Component | Example | How to Pin | Notes |
|---|---|---|---|
| OpenShift version | 4.18.3 | Disable cluster auto-updates | A specific patch version is required |
| Red Hat catalogs | sha256:xxxxx | Use the digest in the CatalogSource | All Red Hat catalogs must be pinned |
| IBM MAS catalog | v9-260326-amd64 | Specify an exact catalog version | Use an immutable catalog version |
| MAS CLI version | 19.5.0 | Use a specific image tag | Document the exact version used |
Critical Requirement
All four elements must be pinned together. Pinning only some components creates an unstable, unsupported configuration. If any one element is allowed to vary, you do not have a static, reproducible environment.
#### Step-by-Step Guidance

##### Step 1: Extract Catalog Digests

For each Red Hat CatalogSource you want to pin, retrieve the current digest:

```shell
for CATALOG_NAME in "community-operators" "redhat-operators" "certified-operators"; do
  oc get pods -n openshift-marketplace -l olm.catalogSource=${CATALOG_NAME} \
    -o jsonpath='{.items[0].status.containerStatuses[0].imageID}' && echo
done
```

This will produce output similar to the following:

```
registry.redhat.io/redhat/community-operator-index@sha256:7e2eca1a...
registry.redhat.io/redhat/redhat-operator-index@sha256:17e179ef...
registry.redhat.io/redhat/certified-operator-index@sha256:1df4aaf5...
```
##### Step 2: Update CatalogSource Resources

For each CatalogSource, update the resource to use the digest and remove automatic polling.

Before (dynamic):

```yaml
spec:
  image: registry.redhat.io/redhat/community-operator-index:v4.18
  updateStrategy:
    registryPoll:
      interval: 10m
```

After (static):

```yaml
spec:
  image: registry.redhat.io/redhat/community-operator-index@sha256:xxxxxxxx
  # updateStrategy section completely removed -- no automatic polling
```

You can update the CatalogSource using:

```shell
oc edit catalogsource community-operators -n openshift-marketplace
```
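After editing, it is worth verifying that the spec really references a digest and no longer polls. A minimal sketch of that check, run here against an inline sample spec (in practice you would feed it the live resource, as shown in the comment):

```shell
# Sketch: verify that a CatalogSource spec is pinned. Here the spec is an
# inline sample; in practice feed it from:
#   oc get catalogsource community-operators -n openshift-marketplace -o yaml
SPEC=$(cat <<'EOF'
spec:
  image: registry.redhat.io/redhat/community-operator-index@sha256:7e2eca1a
EOF
)
echo "$SPEC" | grep -q '@sha256:' && echo "image is pinned by digest"
echo "$SPEC" | grep -q 'registryPoll' || echo "automatic polling is disabled"
```

If either check fails against the live resource, the CatalogSource is still dynamic and the environment is not fully pinned.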
##### Step 3: Document Your Configuration

Create a comprehensive configuration document that includes:

- Date of pinning: When the static configuration was implemented
- All four pinned versions:
    - OpenShift version (e.g., `4.18.3`)
    - Red Hat catalog digests (all catalogs, with the full SHA-256)
    - IBM MAS catalog version (e.g., `v9-260326-amd64`)
    - MAS CLI version (e.g., `19.5.0`)
- Reason for pinning: Business justification and requirements
- Planned review date: When the configuration will be reviewed for updates
- Responsible contact: Person or team responsible for maintenance
- Exact installation command: The complete `mas install` command used
Store this documentation in your configuration management system and update it whenever changes are made.
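This record lends itself to a small machine-readable file kept under source control. A sketch, where the field names and every value are illustrative (not a MAS convention) and the exact `mas install` command would be appended when known:

```shell
# Sketch: record the four pinned versions in one file kept under source
# control. Field names and all values below are illustrative placeholders.
cat > mas-version-pin.txt <<'EOF'
pinned-on: 2025-01-15
openshift-version: 4.18.3
redhat-catalog-digests: community=sha256:xxxxx redhat=sha256:xxxxx certified=sha256:xxxxx
ibm-mas-catalog: v9-260326-amd64
mas-cli-version: 19.5.0
review-date: 2025-07-15
owner: platform-team@example.com
EOF
```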
#### Important Security and Support Considerations

##### Security Exposure Timeline
| Time Since Pinning | Risk Level | Recommended Action |
|---|---|---|
| 0-3 months | Low | Monitor security advisories |
| 3-6 months | Medium | Plan update window |
| 6-12 months | High | Update strongly recommended |
| 12+ months | Critical | Update immediately |
##### Security Implications

- Installing older versions of Maximo Application Suite, or not applying updates for extended periods, means you are not receiving important security updates.
- You must maintain your own security scanning and vulnerability monitoring processes.
- Security patches and fixes released after your pinned versions will not be automatically applied.
##### IBM Support Implications
- IBM Support may require you to reproduce issues on current versions.
- IBM cannot accept liability for security incidents that occur in environments that do not have the latest updates.
- Support for older versions may be limited depending on the age of the pinned components.
Compliance Considerations¶
- Some compliance frameworks (e.g., PCI-DSS, HIPAA) require current security patches and may not permit extended use of outdated software.
- Document your risk acceptance decision with your compliance and security teams.
- Maintain evidence of security monitoring and compensating controls.
- If you choose to adopt low-frequency updates, you accept full responsibility for the security posture of your environment.
Operator Catalog Selection¶
If you have not already determined the catalog version for your installation, refer to the information in the Operator Catalog topic, or contact IBM Support for guidance.
Disconnected Install Preparation¶
Prepare the Private Registry¶
You must have a production-grade Docker v2-compatible registry such as Quay Enterprise, JFrog Artifactory, or Docker Registry. If you do not already have a private registry available to use as your mirror, you can use the setup-registry function to deploy a private registry, based on the Docker registry container image, inside a target OpenShift cluster.
```bash
docker run -ti --rm --pull always quay.io/ibmmas/cli mas setup-registry
```
The registry will be set up running on port 32500. For more details on this step, refer to the setup-registry command's documentation. Regardless of whether you set up a new registry or already had one, collect the following information about your private registry:
| Name | Detail |
|---|---|
| Private Hostname | The hostname by which the registry will be accessible from the target OCP cluster. |
| Private Port | The port number by which the registry will be accessible from the target OCP cluster. |
| Public Hostname | The hostname by which the registry will be accessible from the machine that will be performing image mirroring. |
| Public Port | The port number by which the registry will be accessible from the machine that will be performing image mirroring. |
| CA certificate file | The CA certificate that the registry will present on the private hostname. Save this to your home directory. |
| Username | Optional. Authentication username for the registry. |
| Password | Optional. Authentication password for the registry. |
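For the later mirroring and configuration steps it can help to capture these details once as environment variables. The variable names and values below are purely illustrative placeholders, not names the CLI itself reads:

```shell
# Hypothetical registry details -- substitute your own values
export REGISTRY_PRIVATE_HOST=registry.home.internal
export REGISTRY_PRIVATE_PORT=32500
export REGISTRY_PUBLIC_HOST=registry.example.com
export REGISTRY_PUBLIC_PORT=32500
export REGISTRY_CA_FILE=$HOME/registry-ca.crt
export REGISTRY_USERNAME=admin        # optional
export REGISTRY_PASSWORD=changeme     # optional

# The private endpoint as the target OCP cluster will see it
echo "${REGISTRY_PRIVATE_HOST}:${REGISTRY_PRIVATE_PORT}"
```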
Mirror Container Images¶
Mirroring the images is a simple but time-consuming process. This step must be performed from a system with internet connectivity and network access to your private registry, but it does not need access to your target OpenShift cluster. Three modes are available for the mirror process:
- `direct` mirrors images directly from the source registry to your private registry
- `to-filesystem` mirrors images from the source to a local directory
- `from-filesystem` mirrors images from a local directory to your private registry
For full details on this process, review the image mirroring guide.
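The two-stage filesystem flow can be sketched as follows. The exact `mas mirror-images` options are assumptions here (the mode names are taken from the list above), so verify them against the image mirroring guide before use:

```bash
# Stage 1 -- on a connected host: mirror from the source registries to a local directory
docker run -ti --rm -v /mnt/mirror:/mnt/mirror quay.io/ibmmas/cli \
  mas mirror-images --mode to-filesystem --dir /mnt/mirror

# Stage 2 -- on a host that can reach the private registry: push from that directory
docker run -ti --rm -v /mnt/mirror:/mnt/mirror quay.io/ibmmas/cli \
  mas mirror-images --mode from-filesystem --dir /mnt/mirror
```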
Configure OpenShift to use your Private Registry¶
Your cluster must be configured to use the private registry as a mirror for the MAS container images. An ImageContentSourcePolicy named mas-and-dependencies will be created in the cluster; this is also the resource that the MAS install uses to detect whether the installation is a disconnected install, and to tailor the options presented when you run the mas install command.
```bash
docker run -ti --pull always quay.io/ibmmas/cli mas configure-airgap
```
To set up the Red Hat Operator, Community, and Certified catalogs with IDMS (needed to install the DRO and Grafana operators), run the command below:
```bash
docker run -ti --pull always quay.io/ibmmas/cli mas configure-airgap --setup-redhat-catalogs
```
You will be prompted to provide information about the private registry, including the CA certificate necessary to configure your cluster to trust the private registry.
This command can also be run non-interactively; for full details refer to the configure-airgap command documentation.

```bash
mas configure-airgap \
  -H myprivateregistry.com -P 5000 -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD \
  --ca-file /mnt/local-mirror/registry-ca.crt \
  --no-confirm
```
Interactive Install¶
Regardless of whether you are running a connected or disconnected installation, simply run the mas install command and follow the prompts; the basic structure of the interactive flow is described below. The entitlement.lic file is needed to perform the installation, so we mount your home directory into the running container. When prompted, you can set the license file to /mnt/home/entitlement.lic.
NEW: AI Service Installation Options
AI Service can now be installed in two ways:
- Integrated Installation: AI Service is available as an option during the MAS installation process using the `mas install` command. You can select AI Service along with other MAS applications during the interactive application selection step, or include it in a non-interactive command.
- Standalone Installation: Use the dedicated `mas aiservice-install` command to install AI Service independently of the main MAS installation.
```bash
docker run -ti --rm -v ~:/mnt/home quay.io/ibmmas/cli:19.5.0 mas install
```
The interactive install will guide you through a series of questions designed to help you arrive at the best configuration for your scenario. It can be broken down as follows:
If you are not already connected to an OpenShift cluster you will be prompted to provide the server URL and token to make a new connection; if you are already connected to a cluster you will be given the option to switch to another cluster.
You will be presented with a table of available catalogs, with information about the different releases of MAS available in each.
Confirm that you accept the IBM Maximo Application Suite license terms.
MAS requires both a `ReadWriteMany` and a `ReadWriteOnce` capable storage class to be available in the cluster. The installer has the ability to recognize certain storage class providers and will default to the most appropriate storage class in these cases:
- IBMCloud Storage (ibmc-block-gold & ibmc-file-gold-gid)
- OpenShift Container Storage (ocs-storagecluster-ceph-rbd & ocs-storagecluster-cephfs)
- External OpenShift Container Storage (ocs-external-storagecluster-ceph-rbd & ocs-external-storagecluster-cephfs)
- NFS Client (nfs-client)
- Azure Managed Storage (managed-premium & azurefiles-premium)
- AWS Storage (gp3-cs & efs)
The names in brackets represent the `ReadWriteOnce` and `ReadWriteMany` classes that will be used; in the case of NFS, the same storage class is used for both `ReadWriteOnce` and `ReadWriteMany` volumes. Even when a recognized storage provider is detected, you will be given the option to select your own storage classes if you wish.
When selecting your own storage classes you will be presented with a list of those available and must select both a `ReadWriteMany` and a `ReadWriteOnce` storage class. Unfortunately, there is no way for the installer to verify that the selected storage class actually supports the appropriate access mode; refer to the documentation from your storage class provider to determine whether it supports `ReadWriteOnce` and/or `ReadWriteMany`.
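Before starting the install, you can check which storage classes exist in your cluster with a standard oc query; the PROVISIONER column in the default output helps identify the provider:

```bash
# List available storage classes and their provisioners
oc get storageclass
```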
Provide the location of your license file, contact information, and IBM entitlement key (if you have set the IBM_ENTITLEMENT_KEY environment variable then this field will be pre-filled with that value already).
Provide the basic information about your MAS instance:
- Instance ID
- Workspace ID
- Workspace Display Name
- Operational Mode (production or non-production)
By default MAS will be installed in a subdomain of your OpenShift cluster's domain matching the MAS instance ID that you chose. For example, if your OpenShift cluster is myocp.net and you are installing MAS with an instance ID of prod1, then MAS will be installed with a default domain of something like prod1.apps.myocp.net, depending on the exact network configuration of your cluster.
If you wish to use a custom domain for the MAS install, you can configure this by selecting "n" at the prompt. The install supports DNS integrations for Cloudflare, IBM Cloud Internet Services, and AWS Route 53 out of the box, and is able to configure a certificate issuer using Let's Encrypt (production or staging) or a self-signed certificate authority, per your choices.
You will also be able to configure the following advanced settings:
- Single Sign-On (SSO)
- Whether to allow special characters in User IDs and Usernames
- Whether Guided Tours are enabled
- Network Routing Mode (path or subdomain)
Routing Mode
Starting from MAS 9.2.0, you can configure how Maximo Application Suite is accessed through URLs:
- Path Mode (single domain): All applications are accessed through a single domain with different paths (e.g., `mas.example.com/manage`, `mas.example.com/admin`)
- Subdomain Mode (multi-domain): Each application is accessed through its own subdomain (e.g., `manage.mas.example.com`, `admin.mas.example.com`)
Path-Based Routing Requirements: When using path mode, the OpenShift IngressController must be configured with `namespaceOwnership: InterNamespaceAllowed`. The CLI will validate the configuration and offer to configure it automatically if needed. The `--ingress-controller-name` and `--configure-ingress` options are applicable only with `--routing path`. If `--configure-ingress` is not specified and the IngressController is not configured, the installation will fail with instructions.
Note: `--ingress-controller-name` specifies the name of the OpenShift IngressController resource to use for path-based routing. The IngressController is an OpenShift resource (in the `openshift-ingress-operator` namespace) that manages how external traffic is routed into the cluster. It defaults to `default` if not specified.
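If you prefer to configure the IngressController yourself rather than letting the CLI do it, a patch along these lines enables inter-namespace route sharing on the default IngressController. `routeAdmission.namespaceOwnership` is a standard field of the OpenShift IngressController API; review the change with your cluster administrator first, as it affects all routes handled by that controller:

```bash
oc patch ingresscontroller default -n openshift-ingress-operator \
  --type merge \
  -p '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}'
```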
Select the applications that you would like to install. Note that some applications cannot be installed unless an application they depend on is also installed:
- Version-based dependencies:
    - Monitor < 9.2.0: Monitor depends on IoT (IoT must be installed first)
    - Monitor >= 9.2.0: IoT depends on Monitor (Monitor must be installed first)
- Assist and Predict are only available for install if Monitor is selected
- From MAS 9.1 onwards, Assist will be rebranded as Collaborate in the MAS UI. It will still appear as Assist in the MAS CLI and within the OpenShift Cluster, but from the MAS UI it will appear as Collaborate.
- NEW UPDATE: AI Service is now available as an installation option during the application selection step.
Some Maximo applications support additional configuration; you will be taken through the configuration options for each application that you choose to install.
NEW UPDATE: Maximo Manage - AI Service Binding
When installing Maximo Manage, you can optionally bind it to an AI Service Tenant. This integration enables AI capabilities within Manage through the AI Config Application.
- Installing AI Service with Manage: If you select AI Service during the application selection step (using `--aiservice-channel`), the binding is configured automatically:
    - A default tenant ID "user" is automatically created and bound to Manage
    - The AI Service instance being installed is automatically used for the binding
    - No additional configuration is required; the binding parameters are set automatically
    - Important: When AI Service is being installed, any `--manage-aiservice-instance-id` or `--manage-aiservice-tenant-id` parameters provided will be ignored, as the binding is configured automatically
- Using Existing AI Service: If AI Service is already installed in your cluster (not using `--aiservice-channel`), you can bind Manage to an existing AI Service tenant:
    - Interactive Mode: You will be prompted to select from available AI Service instances and tenants
    - Non-Interactive Mode: Use the `--manage-aiservice-instance-id` and `--manage-aiservice-tenant-id` parameters to specify the binding
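For example, a non-interactive install that binds Manage to an already-installed AI Service might include flags like the following; `aiservice1` and the elided surrounding options are hypothetical placeholders for your own values:

```bash
mas install \
  ... \
  --manage-aiservice-instance-id aiservice1 \
  --manage-aiservice-tenant-id user \
  ...
```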
The install supports the automatic provisioning of in-cluster MongoDb and Db2 databases for use with Maximo Application Suite; you may also choose to bring your own (BYO) databases by providing the necessary configuration files, which the installer will also help you create.
The install also supports installing and configuring the Grafana Community Operator. Additional resource definitions can be applied to the OpenShift cluster during the MAS configuration step; here you will be asked whether you wish to provide any additional configurations and, if so, in which directory they reside.
If you provided one or more configurations for BYO databases then additional configurations will already be enabled and pointing at the directory you chose earlier.
You can choose between three pre-defined pod templates allowing you to configure MAS in each of the standard Kubernetes quality of service (QoS) levels: Burstable, BestEffort and Guaranteed. By default MAS applications are deployed with a Burstable QoS.
Additionally, you may provide your own custom pod templates definition by providing the directory containing your configuration files. More information on podTemplates can be found in the product documentation. Note that pod template support is only available from IBM Maximo Application Suite v8.11 onwards.
Before the install actually starts you will be presented with a summary of all your choices and a non-interactive command that will allow you to repeat the same installation without going through all the prompts again.
Non-Interactive Install¶
The following command will launch the MAS CLI container image, log in to your OpenShift cluster, and start the install of MAS without triggering any prompts. This is how we install MAS in development hundreds of times every single week.
```bash
export IBM_ENTITLEMENT_KEY=xxx
export SUPERUSER_PASSWORD=xxx

docker run -e IBM_ENTITLEMENT_KEY -e SUPERUSER_PASSWORD -ti --rm -v ~:/mnt/home quay.io/ibmmas/cli:19.5.0 bash -c "
  oc login --token=sha256~xxxx --server=https://xxx &&
  mas install \
    --mas-catalog-version v9-260326-amd64 \
    --mas-instance-id mas1 \
    --mas-workspace-id ws1 \
    --mas-workspace-name 'My Workspace' \
    \
    --superuser-username superuser \
    --superuser-password '${SUPERUSER_PASSWORD}' \
    \
    --mas-channel 9.1.x \
    \
    --ibm-entitlement-key '${IBM_ENTITLEMENT_KEY}' \
    --license-file /mnt/home/entitlement.lic \
    --contact-email myemail@email.com \
    --contact-firstname John \
    --contact-lastname Barnes \
    \
    --storage-rwo ibmc-block-gold \
    --storage-rwx ibmc-file-gold-gid \
    --storage-pipeline ibmc-file-gold-gid \
    --storage-accessmode ReadWriteMany \
    \
    --accept-license --no-confirm
"
```
How It Works¶
The engine that performs all tasks is written in Ansible, and you can use the same automation directly, outside of this CLI, if you wish. The code is open source and available in ibm-mas/ansible-devops, and the collection is also available to install directly from Ansible Galaxy. The install supports the following actions:
- IBM Maximo Operator Catalog installation
- Required dependency installation:
- MongoDb (Community Edition)
- IBM Suite License Service
- IBM Data Reporter Operator
- Red Hat Certificate Manager
- Optional dependency installation:
- Apache Kafka
- IBM Maximo AI Service
- IBM Db2
- IBM Cloud Pak for Data Platform and Services
- Watson Studio Local
- Watson Machine Learning
- Analytics Engine (Apache Spark)
- Cognos Analytics
- Grafana
- Suite core services installation
- Suite application installation
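If you want to drive the same automation directly, the collection can be installed from Ansible Galaxy. The collection name below is the one published by the ibm-mas project; confirm it on Galaxy before use:

```bash
ansible-galaxy collection install ibm.mas_devops
```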
The installation is performed inside your Red Hat OpenShift cluster utilizing OpenShift Pipelines.
OpenShift Pipelines is a Kubernetes-native CI/CD solution based on Tekton. It builds on Tekton to provide a CI/CD experience through tight integration with OpenShift and Red Hat developer tools. OpenShift Pipelines is designed to run each step of the CI/CD pipeline in its own container, allowing each step to scale independently to meet the demands of the pipeline.