MAS DevOps Ansible Collection

kafka

This role supports installing a Kafka cluster using Strimzi, Red Hat AMQ Streams, IBM Event Streams, or AWS MSK, and generates configuration that can be applied directly to Maximo Application Suite.

Both Strimzi and the Red Hat AMQ Streams component are massively scalable, distributed, high-performance data streaming platforms based on the Apache Kafka project. Both offer a distributed backbone that allows microservices and other applications to share data with high throughput and low latency.

As more applications move to Kubernetes and Red Hat OpenShift, it is increasingly important to be able to run the communication infrastructure on the same platform. Red Hat OpenShift, as a highly scalable platform, is a natural fit for messaging technologies such as Kafka. The AMQ Streams component makes running and managing Apache Kafka OpenShift-native through the use of powerful operators that simplify the deployment, configuration, management, and use of Apache Kafka on Red Hat OpenShift.

The AMQ Streams component is part of the Red Hat AMQ family, which also includes the AMQ broker, a longtime innovation leader in Java™ Message Service (JMS) and polyglot messaging, as well as the AMQ interconnect router, a wide-area, peer-to-peer messaging solution. Under the covers, AMQ Streams leverages Strimzi's architecture, resources, and configurations.

Note: The MAS license does not include entitlement for AMQ Streams. The MAS DevOps Collection supports this Kafka deployment as an example only; we therefore recommend Strimzi as an open source Kafka provider.

Tip

The role will generate a YAML file containing the definition of a Secret and a KafkaCfg resource that can be used to configure the deployed cluster as the system Kafka for MAS.

This file can be applied directly using oc apply -f $MAS_CONFIG_DIR/kafkacfg-amqstreams-system.yaml, or used in conjunction with the suite_config role.
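As a rough sketch of what the generated file contains (the role produces the real file; the resource names, hostnames, and secret name below are illustrative placeholders, not the authoritative schema):

```yaml
# Illustrative sketch only -- the role generates the real file.
# All names and values below are placeholders.
apiVersion: config.mas.ibm.com/v1
kind: KafkaCfg
metadata:
  name: masinst1-kafka-system
  namespace: mas-masinst1-core
spec:
  displayName: AMQ Streams Kafka cluster
  config:
    hosts:
      - host: maskafka-kafka-bootstrap.amq-streams.svc
        port: 9094
    credentials:
      secretName: masinst1-kafka-system-credentials
```

The credentials secret referenced by the KafkaCfg is the Secret generated into the same file by the role.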

Role Variables

kafka_action

Action to be performed by the Kafka role. Valid values are install, upgrade, or uninstall. The upgrade action applies only to the strimzi and redhat providers.

kafka_provider

Valid Kafka providers are strimzi (open source), redhat (installs AMQ Streams, which requires a license that is not included with MAS entitlement), ibm (provisions a paid Event Streams instance in the target IBM Cloud account), and aws (provisions a paid MSK instance in the target AWS account).
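If you drive roles through the collection's run_role playbook, the provider and action are typically supplied as environment variables. A minimal sketch (assuming the collection's usual variable-to-environment-variable naming convention):

```shell
# Select the open source Strimzi provider and the install action.
export KAFKA_PROVIDER=strimzi
export KAFKA_ACTION=install

# Then run the kafka role via the collection's run_role playbook:
# ROLE_NAME=kafka ansible-playbook ibm.mas_devops.run_role
```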

Red Hat AMQ Streams & Strimzi Role Variables

kafka_version

The version of Kafka to be deployed by the operator. Before changing kafka_version, make sure the version is supported by the installed amq-streams or strimzi operator version.

kafka_namespace

The namespace where the operator and Kafka cluster will be deployed.

kafka_cluster_name

The name of the Kafka cluster that will be created.

kafka_cluster_size

The configuration to apply; two configurations are available: small and large.

kafka_storage_class

The name of the storage class that the AMQ Streams operator will use for persistent storage in the Kafka cluster. The storage class must support the ReadWriteOnce (RWO) access mode.

kafka_storage_size

The size of the persistent storage that the AMQ Streams operator will provision for the Kafka cluster.

zookeeper_storage_class

The name of the storage class that the AMQ Streams operator will use for persistent storage in the ZooKeeper cluster. The storage class must support the ReadWriteOnce (RWO) access mode.

zookeeper_storage_size

The size of the persistent storage that the AMQ Streams operator will provision for the ZooKeeper cluster.

kafka_user_name

The name of the user to set up in the cluster for MAS.

kafka_user_password (supported in Strimzi operator version 0.25.0 and AMQ Streams operator version 2.x)

The password of the user to set up in the cluster for MAS.

mas_instance_id

The instance ID of the Maximo Application Suite installation that the KafkaCfg configuration will target. If either this or mas_config_dir is not set, the role will not generate a KafkaCfg template.

mas_config_dir

Local directory in which to save the generated KafkaCfg resource definition. This can be used to manually configure a MAS instance to connect to the Kafka cluster, or as an input to the suite_config role. If either this or mas_instance_id is not set, the role will not generate a KafkaCfg template.

custom_labels

Comma-separated list of key=value pairs used to set custom labels on instance-specific resources.
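For example (placeholder label keys and values; the pairs are passed as a single comma-separated string):

```yaml
# Hypothetical example: tag instance-specific resources with two labels.
custom_labels: "env=dev,owner=team-iot"
```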

IBM Cloud Event Streams Role Variables

ibmcloud_apikey

Defines the IBM Cloud API key. This API key needs to have access to manage (provision/deprovision) IBM Cloud Event Streams instances.

eventstreams_resourcegroup

Defines the IBM Cloud resource group in which the Event Streams instance will be provisioned.

eventstreams_name

Event Streams instance name.

eventstreams_plan

Event Streams instance plan.

eventstreams_location

Event Streams instance location (the IBM Cloud region in which to provision the instance).

eventstreams_retention

Event Streams topic retention period (in milliseconds).
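Because the retention period is expressed in milliseconds, a quick conversion helps avoid off-by-a-factor mistakes; for example, a 30-day retention period:

```python
# Convert a retention period in days to the milliseconds value
# expected by eventstreams_retention.
days = 30
eventstreams_retention = days * 24 * 60 * 60 * 1000
print(eventstreams_retention)  # 2592000000
```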

eventstreams_create_manage_jms_topic

Defines whether to create specific Manage application JMS topics by default.

mas_instance_id

The instance ID of the Maximo Application Suite installation that the KafkaCfg configuration will target. If either this or mas_config_dir is not set, the role will not generate a KafkaCfg template.

mas_config_dir

Local directory in which to save the generated KafkaCfg resource definition. This can be used to manually configure a MAS instance to connect to the Kafka cluster, or as an input to the suite_config role. If either this or mas_instance_id is not set, the role will not generate a KafkaCfg template.

custom_labels

Comma-separated list of key=value pairs used to set custom labels on instance-specific resources.

Example Playbook

- hosts: localhost
  any_errors_fatal: true
  vars:
    # Set storage class suitable for use on IBM Cloud ROKS
    kafka_storage_class: ibmc-block-gold

    # Generate a KafkaCfg template
    mas_instance_id: masinst1
    mas_config_dir: ~/masconfig
  roles:
    - ibm.mas_devops.kafka

AWS MSK Role Variables

Prerequisites

To run this role successfully you must have the AWS CLI installed. You also need AWS user credentials configured, either via the aws configure command or by exporting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables before running this role.
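The two approaches look like this (the credential values shown are AWS's documented placeholder examples, not real keys):

```shell
# Option 1: configure credentials interactively (writes ~/.aws/credentials):
# aws configure

# Option 2: export credentials for the current shell session
# (placeholder values -- substitute your own).
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```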

kafka_version

The version of Kafka to deploy for AWS MSK.

kafka_cluster_name

The name of the Kafka cluster that will be created.

aws_region

The AWS region in which the MSK instance will be provisioned.

vpc_id

The AWS Virtual Private Cloud identifier (VPC ID) where the MSK instance will be hosted.

aws_msk_cidr_az1

The CIDR address for the first Availability Zone subnet. This information is found in the subnet details under your VPC.

aws_msk_cidr_az2

The CIDR address for the second Availability Zone subnet. This information is found in the subnet details under your VPC.

aws_msk_cidr_az3

The CIDR address for the third Availability Zone subnet. This information is found in the subnet details under your VPC.

aws_msk_ingress_cidr

The IPv4 CIDR address for ingress connections. This information is found in the subnet details under your VPC.

aws_msk_egress_cidr

The IPv4 CIDR address for egress connections. This information is found in the subnet details under your VPC.

aws_kafka_user_name

The name of the user to set up in the cluster for MAS.

aws_kafka_user_password

The password of the user to set up in the cluster for MAS.

aws_msk_instance_type

The instance type (flavor) of your MSK brokers, for example kafka.t3.small.

aws_msk_volume_size

The storage volume size for each MSK broker.

aws_msk_instance_number

The number of broker instances in your MSK cluster.

mas_instance_id

The instance ID of the Maximo Application Suite installation that the KafkaCfg configuration will target. If either this or mas_config_dir is not set, the role will not generate a KafkaCfg template.

mas_config_dir

Local directory in which to save the generated KafkaCfg resource definition. This can be used to manually configure a MAS instance to connect to the Kafka cluster, or as an input to the suite_config role. If either this or mas_instance_id is not set, the role will not generate a KafkaCfg template.

custom_labels

Comma-separated list of key=value pairs used to set custom labels on instance-specific resources.

aws_msk_secret

The AWS MSK secret name. The secret name must begin with the prefix AmazonMSK_. If this is not set, the default secret name will be AmazonMSK_SECRET_{{kafka_cluster_name}}.

Example Playbook to install AWS MSK

- hosts: localhost
  any_errors_fatal: true
  vars:
    aws_region: ca-central-1
    aws_access_key_id: *****
    aws_secret_access_key: *****
    kafka_version: 3.3.1
    kafka_provider: aws
    kafka_action: install
    kafka_cluster_name: msk-abcd0zyxw
    kafka_namespace: msk-abcd0zyxw
    vpc_id: vpc-07088da510b3c35c5
    aws_kafka_user_name: mskuser-abcd0zyxw
    aws_msk_instance_type: kafka.t3.small
    aws_msk_volume_size: 100
    aws_msk_instance_number: 3
    aws_msk_cidr_az1: "10.0.128.0/20"
    aws_msk_cidr_az2: "10.0.144.0/20"
    aws_msk_cidr_az3: "10.0.160.0/20"
    aws_msk_ingress_cidr: "10.0.0.0/16"
    aws_msk_egress_cidr: "10.0.0.0/16"
    # Generate a KafkaCfg template
    mas_config_dir: /var/tmp/masconfigdir
    mas_instance_id: abcd0zyxw
  roles:
    - ibm.mas_devops.kafka

Example Playbook to uninstall AWS MSK

- hosts: localhost
  any_errors_fatal: true
  vars:
    aws_region: ca-central-1
    aws_access_key_id: *****
    aws_secret_access_key: *****
    vpc_id: vpc-07088da510b3c35c5
    kafka_provider: aws
    kafka_action: uninstall
    kafka_cluster_name: msk-abcd0zyxw
    aws_msk_cidr_az1: "10.0.128.0/20"
    aws_msk_cidr_az2: "10.0.144.0/20"
    aws_msk_cidr_az3: "10.0.160.0/20"
  roles:
    - ibm.mas_devops.kafka

License

EPL-2.0