Deploying Multi-Pod Cluster

The following procedure describes TIBCO Fulfillment Catalog deployment on Kubernetes:

Procedure

  1. To create a namespace for working with a multi-pod TIBCO Fulfillment Catalog deployment on Kubernetes, ensure that the following values are entered in fc-namespace.json:
    • Kind: Namespace
    • Name in metadata: tibco-fc-410
    • Name in labels: tibco-fc-410
    Command to create namespace:
    $ kubectl create -f fc-namespace.json
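    Based on the values listed above, fc-namespace.json might look like the following sketch. Only the kind, name, and label values come from this step; the exact file layout and the label key are assumptions:
    ```json
    {
      "apiVersion": "v1",
      "kind": "Namespace",
      "metadata": {
        "name": "tibco-fc-410",
        "labels": {
          "name": "tibco-fc-410"
        }
      }
    }
    ```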
  2. Create the following entities in the specified order:
    i. Storage class: configuration details for fc-storageclass.json
    • Kind: StorageClass
    • Namespace: tibco-fc-410

    Command to create a storage class

    $ kubectl create -f fc-storageclass.json
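    A minimal fc-storageclass.json consistent with the values above might be shaped like this sketch. The class name and the no-provisioner/WaitForFirstConsumer settings are assumptions suited to the hostPath-backed volume created in the next step; note that StorageClass is a cluster-scoped resource, so Kubernetes ignores any namespace in its metadata:
    ```json
    {
      "apiVersion": "storage.k8s.io/v1",
      "kind": "StorageClass",
      "metadata": {
        "name": "fc-storageclass"
      },
      "provisioner": "kubernetes.io/no-provisioner",
      "volumeBindingMode": "WaitForFirstConsumer"
    }
    ```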
    ii. Persistent Volume: configuration details for fc-pv.json
    • Kind: PersistentVolume
    • Path in hostPath: This path must already exist on the worker node, or it is created by Kubernetes when you run kubectl create -f fc-pv.json. The mounted directory specified in the fc-pv.json file must be created and granted full permissions.
    • Name of the worker node: Name of the created worker node
    • Storage: Size of the Persistent Volume (10Gi for 10 GB)
    Note: In fc-pv.json, you must provide values for worker-node1 hostname and Kubernetes master hostname.
    Command to create a Persistent Volume
    $ kubectl create -f fc-pv.json
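    Putting the values above together, fc-pv.json might look like the following sketch. The volume name, the hostPath directory /data/fc, the access mode, and the storage class name are illustrative assumptions; worker-node1 stands for your worker node hostname, as in the note above:
    ```json
    {
      "apiVersion": "v1",
      "kind": "PersistentVolume",
      "metadata": {
        "name": "fc-pv"
      },
      "spec": {
        "capacity": {
          "storage": "10Gi"
        },
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fc-storageclass",
        "hostPath": {
          "path": "/data/fc"
        },
        "nodeAffinity": {
          "required": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "kubernetes.io/hostname",
                    "operator": "In",
                    "values": ["worker-node1"]
                  }
                ]
              }
            ]
          }
        }
      }
    }
    ```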
    iii. Persistent Volume Claim: configuration details for fc-pvc.json
    • Kind: PersistentVolumeClaim
    • StorageClassName: Name of the storage class created
    • Storage: Size of the Persistent Volume Claim (10Gi for 10 GB)
    Command to create a Persistent Volume Claim
    $ kubectl create -f fc-pvc.json
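    A matching fc-pvc.json might look like this sketch; the claim name is an assumption, and the storage class name must match the class created in step i (fc-storageclass here is illustrative):
    ```json
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "name": "fc-pvc",
        "namespace": "tibco-fc-410"
      },
      "spec": {
        "storageClassName": "fc-storageclass",
        "accessModes": ["ReadWriteOnce"],
        "resources": {
          "requests": {
            "storage": "10Gi"
          }
        }
      }
    }
    ```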
  3. TIBCO Fulfillment Catalog supports Oracle and PostgreSQL databases running either as on-premise software or in the cloud. In both cases, after setting up the database, the database details are used in step 5.
    • Containerized PostgreSQL database: see Deploying Containerized PostgreSQL Database.
    • On-premise PostgreSQL database: see Deploying On-premise PostgreSQL Database.
    • Containerized Oracle database: see Deploying Containerized Oracle Database.
    • On-premise Oracle database: see Deploying On-premise Oracle Database.
  4. TIBCO Fulfillment Catalog integrates with Enterprise Message Service running either on-premise or in the cloud.
    • Containerized Enterprise Message Service: see Deploying Containerized Enterprise Message Service.
    • On-premise Enterprise Message Service: see Deploying On-premise Enterprise Message Service.
  5. To create a configuration map, ensure that the following values are entered in fc-properties.properties:
    • MQ_MDM_DB_TYPE=POSTGRES or ORACLE
    • MQ_MDM_DB_HOST=Database host, or the Kubernetes service name of the containerized database
    • MQ_MDM_DB_PORT = Database port
    • MQ_MDM_DB_NAME = Database service name as configured in tnsnames
    • MQ_MDM_DB_SCHEMA = Database schema name
    • MQ_MDM_DB_MIN_CONN_COUNT=20
    • MQ_MDM_DB_MAX_CONN_COUNT=250
    • MQ_MDM_DB_IGN_MIN_CONN_COUNT=1
    • MQ_MDM_DB_IGN_MAX_CONN_COUNT=10
    • EMS_SERVER_URL=tcp://host:port. If Enterprise Message Service runs on an on-premise setup, host is the hostname or IP address of the machine on which Enterprise Message Service is running. If Enterprise Message Service is containerized, host is the Enterprise Message Service Kubernetes service name or IP address.
    • FC_SERVICE_HOST=IP address of service where TIBCO Fulfillment Catalog is deployed
    • FC_SERVICE_PORT=Port on which TIBCO Fulfillment Catalog is running
    • OPD_APP_DATA_VOLUME=heavy/light
    Run the following command to create a config map:
    $ kubectl create cm fc-config -n tibco-fc-410 --from-env-file=<path to fc-properties.properties>
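    As a concrete example, fc-properties.properties for a containerized PostgreSQL database and containerized Enterprise Message Service might look like this; the service names, database name, schema name, ports, and addresses are illustrative assumptions, not shipped defaults:
    ```properties
    MQ_MDM_DB_TYPE=POSTGRES
    MQ_MDM_DB_HOST=fc-postgres-service
    MQ_MDM_DB_PORT=5432
    MQ_MDM_DB_NAME=fcdb
    MQ_MDM_DB_SCHEMA=fcschema
    MQ_MDM_DB_MIN_CONN_COUNT=20
    MQ_MDM_DB_MAX_CONN_COUNT=250
    MQ_MDM_DB_IGN_MIN_CONN_COUNT=1
    MQ_MDM_DB_IGN_MAX_CONN_COUNT=10
    EMS_SERVER_URL=tcp://fc-ems-service:7222
    FC_SERVICE_HOST=10.0.0.25
    FC_SERVICE_PORT=8080
    OPD_APP_DATA_VOLUME=light
    ```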
  6. To create secrets for the database and Enterprise Message Service, ensure that the following values are entered in fc-secret.json:
    • MQ_MDM_DB_USER=<base64 encoded database username>
    • MQ_MDM_DB_PASSWORD=<base64 encoded database password>
    • MQ_MDM_DB_ADMIN_USER=<base64 encoded database admin username>
    • MQ_MDM_DB_ADMIN_USER_PWD=<base64 encoded database admin password>
    • EMS_USER_NAME=<base64 encoded EMS username>
    • EMS_PASSWORD=<base64 encoded EMS password>
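    An fc-secret.json carrying the keys above might be shaped like this sketch. The secret name is an assumption, and the data values are left as the same base64 placeholders; you can produce a base64-encoded value with, for example, echo -n 'myuser' | base64:
    ```json
    {
      "apiVersion": "v1",
      "kind": "Secret",
      "metadata": {
        "name": "fc-secret",
        "namespace": "tibco-fc-410"
      },
      "type": "Opaque",
      "data": {
        "MQ_MDM_DB_USER": "<base64 encoded database username>",
        "MQ_MDM_DB_PASSWORD": "<base64 encoded database password>",
        "MQ_MDM_DB_ADMIN_USER": "<base64 encoded database admin username>",
        "MQ_MDM_DB_ADMIN_USER_PWD": "<base64 encoded database admin password>",
        "EMS_USER_NAME": "<base64 encoded EMS username>",
        "EMS_PASSWORD": "<base64 encoded EMS password>"
      }
    }
    ```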
  7. To deploy Apache Ignite, ensure that the following values are entered in ignite.json:
    • Kind: Deployment
    • App in matchLabels: fc
    • Tier in matchLabels: ignite
    • Image: Docker registry URL of image with latest tag
    • Name in imagePullSecrets: Name of the registry credentials created to pull the images
    Run the following command to create the Ignite deployment:
    $ kubectl create -f ignite.json
    Note: To change the JVM parameters of the Ignite server, other than the default ones, update the entry-point.sh file.
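    The Deployment skeleton for ignite.json, built from the values above, might look like this sketch; the deployment name, container name, replica count, and the registry/credential placeholders are assumptions:
    ```json
    {
      "apiVersion": "apps/v1",
      "kind": "Deployment",
      "metadata": {
        "name": "fc-ignite",
        "namespace": "tibco-fc-410"
      },
      "spec": {
        "replicas": 1,
        "selector": {
          "matchLabels": {
            "app": "fc",
            "tier": "ignite"
          }
        },
        "template": {
          "metadata": {
            "labels": {
              "app": "fc",
              "tier": "ignite"
            }
          },
          "spec": {
            "containers": [
              {
                "name": "ignite",
                "image": "<docker registry URL>/fc-ignite:latest"
              }
            ],
            "imagePullSecrets": [
              {
                "name": "<registry credentials secret name>"
              }
            ]
          }
        }
      }
    }
    ```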
  8. When deploying with TIBCO Fulfillment Catalog, perform the following steps:
    1. In the namespace tibco-fc-410, grant the permissions of the view ClusterRole to the service account named default in the namespace tibco-fc-410. You must create this role binding so that the TIBCO Fulfillment Catalog instances can query the cluster and form a WildFly cluster.
      $ kubectl create rolebinding default-viewer --clusterrole=view \
        --serviceaccount=tibco-fc-410:default --namespace=tibco-fc-410
      
    2. Ensure that the following values are entered in the fc.json:
      • Kind: Deployment
      • App in matchLabels: fc (this value must be the same as the value of the app in selector in fc-service.json)
      • Tier in matchLabels: instance (this value must be the same as the value of the tier in selector in fc-service.json)
      • Image: Docker registry URL of the image with the latest tag
      • Name in imagePullSecrets: name of the registry credentials created to pull images
      Run the following command to create TIBCO Fulfillment Catalog deployment:
      $ kubectl create -f fc.json
      Note: To change the JVM parameters of the TIBCO Fulfillment Catalog server, other than the default ones, update the entry-point.sh file.
    3. Run the following command to create TIBCO Fulfillment Catalog service:
      $ kubectl create -f fc-service.json
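      As a sketch, fc-service.json could look like the following. The selector labels app: fc and tier: instance are fixed by the matchLabels listed for fc.json above; the service name, type, and port values are assumptions for illustration:
      ```json
      {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
          "name": "fc-service",
          "namespace": "tibco-fc-410"
        },
        "spec": {
          "type": "LoadBalancer",
          "selector": {
            "app": "fc",
            "tier": "instance"
          },
          "ports": [
            {
              "name": "http-fc",
              "protocol": "TCP",
              "port": 8080,
              "targetPort": 8080
            }
          ]
        }
      }
      ```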
    4. Run the following command to create the configurator service:
      kubectl create -f configurator-service.json
  9. When deploying with Offer and Pricing Designer (OPD), perform the following steps:
    1. Ensure that the following values are entered in the opd.json:
      • Kind: Deployment
      • App in matchLabels: fc (this value must be the same as the value of the app in selector in opd-service.json)
      • Tier in matchLabels: opd (this value must be the same as the value of the tier in selector in opd-service.json)
      • Image: Docker registry URL of the image with the latest tag
      • Name in imagePullSecrets: name of the registry credentials created to pull images
      Run the following command to create TIBCO Offer and Pricing Designer:
      $ kubectl create -f opd.json
      The current TIBCO Fulfillment Catalog deployment topology supports only a single OPD pod. If the OPD pod is scaled up, OPD features do not work as expected.
    2. Ensure that the following values are entered in the opd-service.json:
      • Kind: Service
      • Type: LoadBalancer
      • Ports:
        [
          {
            "name": "http-opd",
            "protocol": "TCP",
            "port": 8070,
            "targetPort": 8070
          }
        ]
      Run the following command to create TIBCO Fulfillment Catalog-Offer and Pricing Designer service:
      $ kubectl create -f opd-service.json
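      Combining the values above, a complete opd-service.json might look like this; the ports block and the selector labels (from the matchLabels listed for opd.json) come from the steps above, while the service name is an assumption:
      ```json
      {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
          "name": "opd-service",
          "namespace": "tibco-fc-410"
        },
        "spec": {
          "type": "LoadBalancer",
          "selector": {
            "app": "fc",
            "tier": "opd"
          },
          "ports": [
            {
              "name": "http-opd",
              "protocol": "TCP",
              "port": 8070,
              "targetPort": 8070
            }
          ]
        }
      }
      ```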