Deploying Multi-Pod Cluster

The following procedure describes TIBCO PSI deployment on Kubernetes:

    Procedure
  1. To create a namespace for working with multi-container TIBCO PSI deployment on Kubernetes, ensure that the following values are entered in psi-namespace.json:
    Kind: Namespace
    Name in metadata: tibco-psi200
    Name in labels: tibco-psi200
    Run the following command to create a namespace:
    $ kubectl create -f psi-namespace.json
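The procedure does not show the contents of psi-namespace.json; a minimal sketch consistent with the values above might look like the following (the label key "name" is an assumption, since the procedure only states the label value):

```json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "tibco-psi200",
    "labels": {
      "name": "tibco-psi200"
    }
  }
}
```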
  2. To create a storage class, ensure that the following values are entered in psi_storageclass.json:
    Kind: StorageClass
    Namespace: tibco-psi200
    Run the following command to create a storage class:
    $ kubectl create -f psi_storageclass.json
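A minimal psi_storageclass.json consistent with the values above might look like the following. The name "psi-storageclass" and the no-provisioner/WaitForFirstConsumer settings (typical for manually provisioned hostPath volumes) are assumptions; also note that StorageClass objects are cluster-scoped, so a namespace value in the file is not used by Kubernetes:

```json
{
  "apiVersion": "storage.k8s.io/v1",
  "kind": "StorageClass",
  "metadata": {
    "name": "psi-storageclass"
  },
  "provisioner": "kubernetes.io/no-provisioner",
  "volumeBindingMode": "WaitForFirstConsumer"
}
```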
  3. To create a Persistent Volume, ensure that the following values are entered in psi_pv.json:
    Kind: PersistentVolume
    Path in hostPath: This path must already exist on the worker node, or it is created by Kubernetes after you run kubectl create -f psi_pv.json.
    Name of the worker node: Name of the created worker node
    Storage: Size of the Persistent Volume (for example, 1Gi for 1 GiB)
    Run the following command to create a Persistent Volume:
    $ kubectl create -f psi_pv.json
    Note: In psi_pv.json provide values for worker-node1 hostname and Kubernetes master hostname.
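A sketch of psi_pv.json consistent with the values above follows. The PV name, access mode, and hostPath path are illustrative assumptions (the bulkload path is reused here because later steps mount it); the nodeAffinity block shows one common way to pin a hostPath volume to the worker-node1 host named in the note:

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": { "name": "psi-pv" },
  "spec": {
    "storageClassName": "psi-storageclass",
    "capacity": { "storage": "1Gi" },
    "accessModes": [ "ReadWriteOnce" ],
    "hostPath": { "path": "/home/postgres/tibco/psi/2.1.0/bulkload" },
    "nodeAffinity": {
      "required": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
                "key": "kubernetes.io/hostname",
                "operator": "In",
                "values": [ "<worker-node1 hostname>" ]
              }
            ]
          }
        ]
      }
    }
  }
}
```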
  4. To create a Persistent Volume Claim, ensure that the following values are entered in psi_pvc.json:
    Kind: PersistentVolumeClaim
    StorageClassName: Name of the storage class created
    Storage: Size of the Persistent Volume Claim (for example, 1Gi for 1 GiB)
    Run the following command to create a Persistent Volume Claim:
    $ kubectl create -f psi_pvc.json
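A sketch of psi_pvc.json consistent with the values above might look like the following. The claim name psi-pv-claim matches the claimName referenced by the deployment files later in this procedure; the storage class name and access mode are assumptions:

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "psi-pv-claim",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "storageClassName": "psi-storageclass",
    "accessModes": [ "ReadWriteOnce" ],
    "resources": {
      "requests": { "storage": "1Gi" }
    }
  }
}
```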
  5. Deploy the PostgreSQL or Oracle database. The database for TIBCO PSI can either be an on-premises installation on a separate physical or virtual server, or be deployed as a container application using a Docker image, similar to the other TIBCO PSI servers. For an on-premises installation, skip database deployment and proceed to the next step. To deploy the database as a container application, perform the following steps:
    PostgreSQL

    For deploying PostgreSQL, perform the following steps:
    1. Create the psidb Docker image and push it to the Docker registry.
    2. Use the kubectl command to create the database pod and the database service, as described in the following steps.
    3. To create a psidb, ensure that the following values are entered in psidb.json:
      • Kind: Deployment
      • app in matchLabels: psi (this value must match the app in the selector in psidbservice.json)
      • mountPath: /home/postgres/tibco/psi/2.1.0/bulkload
      • claimName: psi-pv-claim
      • Name in imagePullSecrets: Name of the registry credentials created to pull the images
    4. Run the following command to create a psidb:
      $ kubectl create -f psidb.json
    5. To create a psidbservice, ensure that the following values are entered in psidbservice.json:
      • Kind: Service
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]
      • Name: tcp
      • Protocol: TCP
      • Port: 5432
      • targetPort: 5432
    6. Run the following command to create psidbservice:
      $ kubectl create -f psidbservice.json
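Putting the values from the preceding step together, psidbservice.json might look like the following sketch (the metadata name and namespace are assumptions; replace the externalIPs placeholder with the actual worker node IP address):

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "psidbservice",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "psi" },
    "externalIPs": [ "<IP address of K8s worker node>" ],
    "ports": [
      {
        "name": "tcp",
        "protocol": "TCP",
        "port": 5432,
        "targetPort": 5432
      }
    ]
  }
}
```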
    Oracle

    For deploying Oracle in a container, either use an official Oracle 12.2.0.1 image or build one from the official 12.2.0.1 Dockerfile for the Oracle database. In either case, the database installation must have partitioning enabled.

    1. After the Oracle image is available, use the kubectl command to create the Oracle database pod and the database service, as described in the following steps.
    2. To create psi_db_oracle, ensure that the following values are entered in psi_db_oracle.json:
      • Kind: Deployment
      • app in matchLabels: psioracle (this value must match the app in the selector in psioracleservice.json)
      • Image: Docker image registry URL of the official Oracle Docker image, version 12.2.0.1
      • mountPath: /home/postgres/tibco/psi/2.1.0/bulkload
      • claimName: psi-pv-claim
      • Name in imagePullSecrets: Name of registry credentials created to pull the images
    3. Run the following command to create psi_db_oracle:
      $ kubectl create -f psi_db_oracle.json

      The database pod is up and running. See Configuring TIBCO Product and Service Inventory for creating TIBCO PSI database schema.

      Note: The Oracle directory PSI_BULK_DIR, which is required for setting up bulk load, must be created with the path /home/postgres/tibco/psi/2.1.0/bulkload.
    4. To create psioracleservice, ensure that the following values are entered in psioracleservice.json:
      • Kind: Service
      • app in selector: psioracle
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]
      • Name: tcp
      • Protocol: TCP
      • Port: 1521
      • targetPort: 1521
    5. Run the following command to create psioracleservice:
      $ kubectl create -f psioracleservice.json
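Combining the values from steps 2 and 3, psi_db_oracle.json might look like the following sketch. The metadata name, container name, and volume name are assumptions; the image and imagePullSecrets placeholders must be replaced with your registry URL and credentials secret:

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "psi-db-oracle",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": { "app": "psioracle" }
    },
    "template": {
      "metadata": {
        "labels": { "app": "psioracle" }
      },
      "spec": {
        "containers": [
          {
            "name": "psioracle",
            "image": "<registry URL of the Oracle 12.2.0.1 image>",
            "ports": [ { "containerPort": 1521 } ],
            "volumeMounts": [
              {
                "name": "psi-volume",
                "mountPath": "/home/postgres/tibco/psi/2.1.0/bulkload"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "psi-volume",
            "persistentVolumeClaim": { "claimName": "psi-pv-claim" }
          }
        ],
        "imagePullSecrets": [
          { "name": "<registry credentials secret>" }
        ]
      }
    }
  }
}
```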

  6. When deploying with Enterprise Message Service™, perform the following steps:
    1. Ensure that the following values are entered in the emsapp.json:
      • Kind: Deployment
      • Image: Docker registry URL for image with tag version
      • Name: Name of the registry credentials created to pull the images
    2. To create the Enterprise Message Service™ app, run the following command:
      $ kubectl create -f emsapp.json
    3. Ensure that the following values are entered in emsservice.json:
      • Kind: Service
      • App in selector: ems
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]
      • Name: tcp
      • Protocol: TCP
      • Port: 7222
      • targetPort: 7222
    4. To create the Enterprise Message Service™ service, run the following command:
      $ kubectl create -f emsservice.json
    Note: By using GEMS, you must replace the localhost in GenericConnectionFactory with the IP address from the external IPs of emsservice.
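A sketch of emsapp.json consistent with the values in this step might look like the following. The metadata name, the app label "ems" (taken from the selector listed for emsservice.json), and the container port 7222 (matching the service targetPort) are drawn from this step; other names are assumptions:

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "emsapp",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": { "app": "ems" }
    },
    "template": {
      "metadata": {
        "labels": { "app": "ems" }
      },
      "spec": {
        "containers": [
          {
            "name": "ems",
            "image": "<docker registry URL for image with tag version>",
            "ports": [ { "containerPort": 7222 } ]
          }
        ],
        "imagePullSecrets": [
          { "name": "<registry credentials secret>" }
        ]
      }
    }
  }
}
```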
  7. For deployment with TIBCO FTL as the messaging service, perform the following steps:
    1. Ensure that the following values are entered in ftlapp.json:
      • Kind: Deployment
      • Image: Docker registry URL for image with tag version
      • Name: Name of the registry credentials created to pull the images
      • args: ["--client.url","discover://"]

      Ports require two entries:

      • containerPort: 13131
      • Name: psiport
      • containerPort: 13134
      • Name: psiport1
    2. To create the TIBCO FTL app, run the following command:
      $ kubectl create -f ftlapp.json
    3. Ensure that the following values are entered in ftlservice.json:
      • Kind: Service
      • Type: LoadBalancer
      • ExternalIPs: [ "IP Address of K8s worker node" ]

      Ports require two entries:

      • Name: tcp
      • Protocol: TCP
      • Port: 13131
      • targetPort: 13131
      And,
      • Name: tcp1
      • Protocol: TCP
      • Port: 13134
      • targetPort: 13134
    4. To create the TIBCO FTL service, run the following command:
      $ kubectl create -f ftlservice.json
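A sketch of ftlservice.json with the two port entries from this step follows. The metadata name and the selector label "ftl" are assumptions (the procedure does not state the selector for this service):

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "ftlservice",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "ftl" },
    "externalIPs": [ "<IP address of K8s worker node>" ],
    "ports": [
      {
        "name": "tcp",
        "protocol": "TCP",
        "port": 13131,
        "targetPort": 13131
      },
      {
        "name": "tcp1",
        "protocol": "TCP",
        "port": 13134,
        "targetPort": 13134
      }
    ]
  }
}
```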
  8. Add the database service name, port, TIBCO FTL, and Enterprise Message Service™ service name in the psi_application.properties file. Ensure that the following values are entered in psi_application.properties:
    com.tibco.fos.psi.pooledDataSource.driverClassName=org.postgresql.Driver
    com.tibco.fos.psi.pooledDataSource.host=<db service name>
    com.tibco.fos.psi.pooledDataSource.port=5432
    com.tibco.fos.psi.pooledDataSource.database=psidb
    com.tibco.fos.psi.pooledDataSource.username=psiuser
    com.tibco.fos.psi.pooledDataSource.password=psipassword
    com.tibco.fos.psi.pooledDataSource.url=jdbc:postgresql://<db service name>:5432/psidb?currentSchema=psischema
    com.tibco.fos.psi.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
    # This property enables or disables notification generation in locking use cases. Valid values: [true, false]
    com.tibco.fos.psi.notification.enable=false

    # This property sets the messaging provider and must be set to one of [ftl, ems] if notification generation is enabled
    com.tibco.fos.psi.notification.provider=none

    # This EMS property is not used if the messaging provider is [none, ftl]
    com.tibco.fos.psi.jms.jndi.url=tibjmsnaming://localhost:7222

    # These FTL properties are not used if the messaging provider is [none, ems]
    com.tibco.fos.psi.ftl.realmServer=http://localhost:8080
    com.tibco.fos.psi.ftl.realm.principal=admin
    com.tibco.fos.psi.ftl.realm.credentials=admin
    com.tibco.fos.psi.ftl.realm.applicationName=psiapp
    com.tibco.fos.psi.ftl.realm.endpointName=psiep
    
  9. Run the following command to create a ConfigMap:
    $ kubectl create configmap configmap --from-env-file=/path/psi_application.properties --namespace=tibco-psi200
    Change the ConfigMap name in psimulticontainerapp.json and deploy TIBCO PSI multi-container pods.
  10. Ensure that the following values are entered in psimulticontainerapp.json:
    Kind: Deployment
    Replicas: 2
    Type of strategy: RollingUpdate
    maxSurge of rollingUpdate of strategy: 1
    maxUnavailable of rollingUpdate of strategy: 0
  11. You can change the rolling update strategy according to the requirements of the deployment. Ensure that the following values are also entered in psimulticontainerapp.json:
    Image: <docker registry url for image with tag version>
    You can use any one of the following values for the environment:
    /home/postgres/tibco/psi/2.1.0/EMS_LIB_HOME/lib
    /home/postgres/tibco/psi/2.1.0/FTL_LIB_HOME/lib
    Name under configMapRef: name of the created ConfigMap
    claimName under volumes: psi-pv-claim
    mountPath under persistentVolumeClaim in volumeMounts: /home/postgres/tibco/psi/2.1.0/bulkload
    Name in imagePullSecrets: name of the registry credentials created to pull images

  12. Run the following command to create a psimulticontainerapp:
    $ kubectl create -f psimulticontainerapp.json
  13. Ensure that the following values are entered in psiservice.json:
    Kind: Service
    App in selector: psi
    Type: LoadBalancer
    ExternalIPs: [ "IP Address of K8s worker node" ]
    Name: tcp
    Protocol: TCP
    Port: 8080
    targetPort: 8080
  14. Run the following command to create a TIBCO PSI service:
    $ kubectl create -f psiservice.json
    Note: psimulticontainerapp.json uses a readiness probe to check whether the containers are ready to receive traffic through the Kubernetes service. The liveness probe can also be configured in psimulticontainerapp.json using the same process for configuring the readiness probe. For more information, see Kubernetes documentation.
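Combining the values from steps 10, 11, and 13, psimulticontainerapp.json might look like the following sketch. The container name, volume name, and the readiness probe path and timings are illustrative assumptions; replace the placeholders with your registry URL, ConfigMap name, and credentials secret:

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "psimulticontainerapp",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "replicas": 2,
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": { "maxSurge": 1, "maxUnavailable": 0 }
    },
    "selector": {
      "matchLabels": { "app": "psi" }
    },
    "template": {
      "metadata": {
        "labels": { "app": "psi" }
      },
      "spec": {
        "containers": [
          {
            "name": "psi",
            "image": "<docker registry url for image with tag version>",
            "ports": [ { "containerPort": 8080 } ],
            "envFrom": [
              { "configMapRef": { "name": "<name of the created ConfigMap>" } }
            ],
            "readinessProbe": {
              "httpGet": { "path": "/", "port": 8080 },
              "initialDelaySeconds": 30,
              "periodSeconds": 10
            },
            "volumeMounts": [
              {
                "name": "psi-volume",
                "mountPath": "/home/postgres/tibco/psi/2.1.0/bulkload"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "psi-volume",
            "persistentVolumeClaim": { "claimName": "psi-pv-claim" }
          }
        ],
        "imagePullSecrets": [
          { "name": "<registry credentials secret>" }
        ]
      }
    }
  }
}
```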
  15. Add the following json code to the TIBCO PSI multi-container json file for the rolling update strategy:
     "strategy": {
             "type": "RollingUpdate",
             "rollingUpdate": {
                "maxSurge": 1,
                "maxUnavailable": 0
    }
    

    You can change the maxSurge and maxUnavailable parameters according to your requirements. If you want to update a new image in an existing deployment, or you have changed the ConfigMap and want to apply the change to the existing deployment, you can patch the changes into the existing deployment. The changes are reflected in new pods according to the rolling update strategy defined.

    For patching, copy the whole content of psimulticontainerapp.json into psimultipatch.json. Change the required fields in psimultipatch.json, then patch with the following kubectl command:

    $ kubectl patch deployment psimulticontainerapp --patch "$(cat psimultipatch.json)" -n tibco-psi200
  16. Perform horizontal scaling. Keep the TIBCO PSI replica count at 1 for the first deployment. For example, to scale the TIBCO PSI pods to 3, use the following command:
    $ kubectl scale --replicas=3 -f psimulticontainerapp.json