Deploying a Multi-Pod Cluster

The following procedure describes TIBCO FSI deployment on Kubernetes:

Procedure

  1. To create a namespace for working with the multi-container TIBCO FSI deployment on Kubernetes, ensure that the following values are entered in fsi-namespace.json:
    • Kind: Namespace
    • Name in metadata: tibco-fsi200
    • Name in labels: tibco-fsi200
    Run the following command to create the namespace:
    $ kubectl create -f fsi-namespace.json
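    For reference, a minimal fsi-namespace.json that matches these values could look like the following sketch (the label key name shown here is an assumption; keep the layout of the file shipped with your installation):
    {
      "apiVersion": "v1",
      "kind": "Namespace",
      "metadata": {
        "name": "tibco-fsi200",
        "labels": {
          "name": "tibco-fsi200"
        }
      }
    }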
  2. To create a storage class, ensure that the following values are entered in fsi_storageclass.json:
    • Kind: StorageClass
    • Namespace: tibco-fsi200
    Run the following command to create the storage class:
    $ kubectl create -f fsi_storageclass.json
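    As an illustration, fsi_storageclass.json could resemble the following sketch. The storage class name and the provisioner are assumptions here (a no-provisioner class suits the statically created Persistent Volume in the next step); note that a StorageClass is cluster-scoped, so the namespace value is informational only:
    {
      "apiVersion": "storage.k8s.io/v1",
      "kind": "StorageClass",
      "metadata": {
        "name": "fsi-storageclass",
        "namespace": "tibco-fsi200"
      },
      "provisioner": "kubernetes.io/no-provisioner"
    }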
  3. To create a Persistent Volume, ensure that the following values are entered in fsi_pv.json:
    • Kind: PersistentVolume
    • Path in hostPath: This path must already be present on the worker node, or it is created by Kubernetes after you run kubectl create -f fsi_pv.json.
    • Name of the worker node: Name of the created worker node
    • Storage: Size of the Persistent Volume (for example, 1Gi for 1 GB)
    Run the following command to create the Persistent Volume:
    $ kubectl create -f fsi_pv.json
    Note: In fsi_pv.json, provide values for the worker-node1 hostname and the Kubernetes master hostname.
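    A sketch of fsi_pv.json with these values might look as follows. The volume name, access mode, storage class name, and the nodeAffinity block are assumptions; the hostPath path and worker node hostname are placeholders for your environment:
    {
      "apiVersion": "v1",
      "kind": "PersistentVolume",
      "metadata": { "name": "fsi-pv" },
      "spec": {
        "capacity": { "storage": "1Gi" },
        "accessModes": [ "ReadWriteOnce" ],
        "storageClassName": "fsi-storageclass",
        "hostPath": { "path": "<path on worker node>" },
        "nodeAffinity": {
          "required": {
            "nodeSelectorTerms": [
              { "matchExpressions": [
                  { "key": "kubernetes.io/hostname", "operator": "In", "values": [ "<worker-node1 hostname>" ] }
              ] }
            ]
          }
        }
      }
    }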
  4. To create a Persistent Volume Claim, ensure that the following values are entered in fsi_pvc.json:
    • Kind: PersistentVolumeClaim
    • StorageClassName: Name of the storage class created
    • Storage: Size of the Persistent Volume Claim (for example, 1Gi for 1 GB)
    Run the following command to create the Persistent Volume Claim:
    $ kubectl create -f fsi_pvc.json
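    A corresponding fsi_pvc.json sketch, assuming the storage class name used above, might be:
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "name": "fsi-pv-claim",
        "namespace": "tibco-fsi200"
      },
      "spec": {
        "storageClassName": "fsi-storageclass",
        "accessModes": [ "ReadWriteOnce" ],
        "resources": { "requests": { "storage": "1Gi" } }
      }
    }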
  5. Deploy the PostgreSQL or Oracle database. The database for TIBCO FSI can either be an on-premises installation on a separate physical or virtual server, or it can be deployed as a container application using a Docker image, like the other TIBCO FSI servers. For an on-premises installation, skip database deployment and proceed to the next step. To deploy the database as a container application, perform the following steps:
    PostgreSQL
    For deploying PostgreSQL, perform the following steps:
    1. Create the fsidb Docker image and push it to the Docker registry.
    2. Use the kubectl command to run the database pod and the database service.
      $ kubectl create -f fsidb.json
      $ kubectl create -f fsidbservice.json
    3. To create fsidb, ensure that the following values are entered in fsidb.json:
      • Kind: Deployment
      • App in matchLabels: fsi (this value must match the app in the selector in fsidbservice.json)
      • mountPath: /home/postgres/tibco/fsi/2.0/bulkload
      • claimName: fsi-pv-claim
      • Name in imagePullSecrets: Name of the registry credentials created to pull the images
    4. Run the following command to create fsidb:
      $ kubectl create -f fsidb.json
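      For orientation only, the relevant parts of fsidb.json typically resemble the following fragment. The deployment name, container name, volume name, and container port are assumptions; the image URL and secret name are placeholders:
      {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": { "name": "fsidb", "namespace": "tibco-fsi200" },
        "spec": {
          "selector": { "matchLabels": { "app": "fsi" } },
          "template": {
            "metadata": { "labels": { "app": "fsi" } },
            "spec": {
              "containers": [ {
                "name": "fsidb",
                "image": "<docker registry url of fsidb image with tag>",
                "ports": [ { "containerPort": 5432 } ],
                "volumeMounts": [ { "name": "fsi-volume", "mountPath": "/home/postgres/tibco/fsi/2.0/bulkload" } ]
              } ],
              "volumes": [ { "name": "fsi-volume", "persistentVolumeClaim": { "claimName": "fsi-pv-claim" } } ],
              "imagePullSecrets": [ { "name": "<registry credentials secret name>" } ]
            }
          }
        }
      }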
    5. To create fsidbservice, ensure that the following values are entered in fsidbservice.json:
      • Kind: Service
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]
      • Name: tcp
      • Protocol: TCP
      • Port: 5432
      • targetPort: 5432
    6. Run the following command to create fsidbservice:
      $ kubectl create -f fsidbservice.json
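      Based on the values above, fsidbservice.json could look roughly like this sketch (the service name is an assumption; it must match the <db service name> used later in fsi_application.properties):
      {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": { "name": "fsidbservice", "namespace": "tibco-fsi200" },
        "spec": {
          "type": "LoadBalancer",
          "selector": { "app": "fsi" },
          "externalIPs": [ "<IP Address of K8s worker node>" ],
          "ports": [ { "name": "tcp", "protocol": "TCP", "port": 5432, "targetPort": 5432 } ]
        }
      }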
    Oracle
    For deploying Oracle in a container, either use an official Oracle 12.2.0.1 image or build one from the official 12.2.0.1 Dockerfile for Oracle database. In either case, the database installation must have partitioning enabled.
    1. After the Oracle image is available, use the kubectl command to run the Oracle database pod and the database service.
       $ kubectl create -f fsi_db_oracle.json
      $ kubectl create -f fsioracleservice.json
    2. To create fsi_db_oracle, ensure that the following values are entered in fsi_db_oracle.json:
      • Kind: Deployment
      • app in matchLabels: fsioracle (this value must match the app in the selector in fsioracleservice.json)
      • Image: Docker registry URL of the official Oracle Docker image, version 12.2.0.1
      • mountPath: /home/postgres/tibco/fsi/2.0/bulkload
      • claimName: fsi-pv-claim
      • Name in imagePullSecrets: Name of registry credentials created to pull the images
    3. Run the following command to create fsi_db_oracle:
      $ kubectl create -f fsi_db_oracle.json

      The database pod is now up and running. See Configuring TIBCO Fulfillment Subscriber Inventory for creating the TIBCO FSI database schema.

      Note: The Oracle directory FSI_BULK_DIR, which must be created for setting up bulk load, must use the path /home/postgres/tibco/fsi/2.0/bulkload.
    4. To create fsioracleservice, ensure that the following values are entered in fsioracleservice.json:
      • Kind: Service
      • App in selector: fsioracle
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]
      • Name: tcp
      • Protocol: TCP
      • Port: 1521
      • targetPort: 1521
    5. Run the following command to create fsioracleservice:
      $ kubectl create -f fsioracleservice.json
  6. When deploying with Enterprise Message Service™, perform the following steps:
    1. Ensure that the following values are entered in the emsapp.json:
      • Kind: Deployment
      • Image: Docker registry URL for image with tag version
      • Name: Name of the registry credentials created to pull the images
    2. To create the TIBCO EMS app, run the following command:
      $ kubectl create -f emsapp.json
    3. Ensure that the following values are entered in emsservice.json:
      • Kind: Service
      • App in selector: ems
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]
      • Name: tcp
      • Protocol: TCP
      • Port: 7222
      • targetPort: 7222
    4. To create the TIBCO EMS service, run the following command:
      $ kubectl create -f emsservice.json
    Note: Using GEMS, you must replace localhost in GenericConnectionFactory with the IP address from the external IPs of emsservice.
  7. For deployment with TIBCO FTL as the messaging service, perform the following steps:
    1. Ensure that the following values are entered in ftlapp.json:
      • Kind: Deployment
      • Image: Docker registry URL for image with tag version
      • Name: Name of the registry credentials created to pull the images
      • args: ["--client.url","discover://"]

      Ports require two entries:

      • containerPort: 13131
      • Name: fsiport
      • containerPort: 13134
      • Name: fsiport1
    2. To create the TIBCO FTL app, run the following command:
      $ kubectl create -f ftlapp.json
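      For reference, the container section of ftlapp.json with the two port entries above might look roughly as follows (the container name is an assumption, and the surrounding deployment fields are omitted):
      "containers": [ {
        "name": "ftlapp",
        "image": "<docker registry url for image with tag version>",
        "args": [ "--client.url", "discover://" ],
        "ports": [
          { "containerPort": 13131, "name": "fsiport" },
          { "containerPort": 13134, "name": "fsiport1" }
        ]
      } ]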
    3. Ensure that the following values are entered in ftlservice.json:
      • Kind: Service
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]

      Ports require two entries:

      • Name: tcp
      • Protocol: TCP
      • Port: 13131
      • targetPort: 13131
      And,
      • Name: tcp1
      • Protocol: TCP
      • Port: 13134
      • targetPort: 13134
    4. To create the TIBCO FTL service, run the following command:
      $ kubectl create -f ftlservice.json
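    Putting the two port entries together, ftlservice.json could resemble the following sketch (the service name and the selector label are assumptions):
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": { "name": "ftlservice", "namespace": "tibco-fsi200" },
      "spec": {
        "type": "LoadBalancer",
        "externalIPs": [ "<IP Address of K8s worker node>" ],
        "selector": { "app": "ftl" },
        "ports": [
          { "name": "tcp", "protocol": "TCP", "port": 13131, "targetPort": 13131 },
          { "name": "tcp1", "protocol": "TCP", "port": 13134, "targetPort": 13134 }
        ]
      }
    }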
  8. Add the database service name and port, and the TIBCO FTL and Enterprise Message Service™ service names, to the fsi_application.properties file. Ensure that the following values are entered in fsi_application.properties:
    com.tibco.fos.fsi.pooledDataSource.driverClassName=org.postgresql.Driver
    com.tibco.fos.fsi.pooledDataSource.host=<db service name>
    com.tibco.fos.fsi.pooledDataSource.port=5432
    com.tibco.fos.fsi.pooledDataSource.database=fsidb
    com.tibco.fos.fsi.pooledDataSource.username=fsiuser
    com.tibco.fos.fsi.pooledDataSource.password=fsipassword
    com.tibco.fos.fsi.pooledDataSource.url=jdbc:postgresql://<db service name>:5432/fsidb?currentSchema=fsischema
    com.tibco.fos.fsi.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
    # This property enables or disables notification generation in locking use cases. Valid values are [true,false].
    com.tibco.fos.fsi.notification.enable=false
    
    # This property sets the messaging provider and must be set to one of [ftl,ems] if notification generation is enabled.
    com.tibco.fos.fsi.notification.provider=none
    
    # These EMS properties are not used if the messaging provider is none or ftl.
    com.tibco.fos.fsi.jms.jndi.url=tibjmsnaming://localhost:7222
    
    # These FTL properties are not used if the messaging provider is none or ems.
    com.tibco.fos.fsi.ftl.realmServer=http://localhost:8080
    com.tibco.fos.fsi.ftl.realm.principal=admin
    com.tibco.fos.fsi.ftl.realm.credentials=admin
    com.tibco.fos.fsi.ftl.realm.applicationName=fsiapp
    com.tibco.fos.fsi.ftl.realm.endpointName=fsiep
    
  9. Run the following command to create a ConfigMap:
    $ kubectl create configmap configmap --from-env-file=/path/fsi_application.properties --namespace=tibco-fsi200
    Change the ConfigMap name in fsimulticontainerapp.json and deploy the TIBCO FSI multi-container pods.
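    To confirm that the properties were loaded into the ConfigMap as expected, you can inspect it with:
    $ kubectl get configmap configmap -o yaml --namespace=tibco-fsi200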
  10. Ensure that the following values are entered in fsimulticontainerapp.json:
    • Kind: Deployment
    • Replicas: 2
    • Type of strategy: RollingUpdate
    • maxSurge of rollingUpdate of strategy: 1
    • maxUnavailable of rollingUpdate of strategy: 0
  11. Ensure that the following additional values are entered in fsimulticontainerapp.json. You can change the rolling update strategy according to the requirements of the deployment.
    • Image: <docker registry url for image with tag version>
    You can use any one of the following values for environment:
    • /home/postgres/tibco/fsi/2.0/EMS_LIB_HOME/lib
    • /home/postgres/tibco/fsi/2.0/FTL_LIB_HOME/lib
    • Name under configMapRef: name of the created ConfigMap
    • claimName under volumes: fsi-pv-claim
    • mountPath under persistentVolumeClaim in volumeMounts: /home/postgres/tibco/fsi/2.0/bulkload
    • Name in imagePullSecrets: name of registry credentials created to pull images

  12. Run the following command to create fsimulticontainerapp:
    $ kubectl create -f fsimulticontainerapp.json
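    As a rough illustration of how the values from steps 10 and 11 fit together, the container section of fsimulticontainerapp.json generally resembles the following fragment (the container and volume names are placeholders, and surrounding fields are omitted):
    "containers": [ {
      "name": "fsiapp",
      "image": "<docker registry url for image with tag version>",
      "envFrom": [ { "configMapRef": { "name": "<name of the created ConfigMap>" } } ],
      "volumeMounts": [ { "name": "fsi-volume", "mountPath": "/home/postgres/tibco/fsi/2.0/bulkload" } ]
    } ],
    "volumes": [ { "name": "fsi-volume", "persistentVolumeClaim": { "claimName": "fsi-pv-claim" } } ],
    "imagePullSecrets": [ { "name": "<name of registry credentials>" } ]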
  13. Ensure that the following values are entered in fsiservice.json:
    • Kind: Service
    • App in selector: fsi
    • Type: LoadBalancer
    • ExternalIPs: [ "IP Address of K8s worker node" ]
    • Name: tcp
    • Protocol: TCP
    • Port: 8080
    • targetPort: 8080
  14. Run the following command to create the TIBCO FSI service:
    $ kubectl create -f fsiservice.json
    Note: fsimulticontainerapp.json uses a readiness probe to check whether the containers are ready to receive traffic through the Kubernetes service. A liveness probe can also be configured in fsimulticontainerapp.json using the same process as for configuring the readiness probe. For more information, see the Kubernetes documentation.
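    For reference, a readiness probe entry in fsimulticontainerapp.json commonly takes a form such as the following; the probe path and timings here are assumptions and must be adjusted to your deployment:
    "readinessProbe": {
      "httpGet": { "path": "/", "port": 8080 },
      "initialDelaySeconds": 30,
      "periodSeconds": 10
    }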
  15. Add the following JSON code to the TIBCO FSI multi-container JSON file for the rolling update strategy:
     "strategy": {
             "type": "RollingUpdate",
             "rollingUpdate": {
                "maxSurge": 1,
                "maxUnavailable": 0
             }
     }
    

    You can change the maxSurge and maxUnavailable parameters according to your strategy. If you want to update to a new image in an existing deployment, or you have changed the ConfigMap and want to apply the change to the existing deployment, you can patch the changes into the existing deployment. The changes are reflected in new pods, which come up according to the defined rolling update strategy.

    For patching, copy the entire content of fsimulticontainerapp.json into fsimultipatch.json and change the required fields in fsimultipatch.json. You can then patch with the following kubectl command:

    $ kubectl patch deployment fsimulticontainerapp --patch "$(cat fsimultipatch.json)" -n tibco-fsi200
  16. Perform horizontal scaling by keeping the number of TIBCO FSI instances at 1 for the first deployment. For example, to scale the TIBCO FSI pods to 3, use the following command:
    $ kubectl scale --replicas=3 -f fsimulticontainerapp.json
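    After scaling, you can verify that three TIBCO FSI pods are running:
    $ kubectl get pods --namespace=tibco-fsi200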