Deploying a Multi-Pod Cluster
The following procedure describes TIBCO PSI deployment on Kubernetes:
Procedure
- To create a namespace for working with the multi-container TIBCO PSI deployment on Kubernetes, ensure that the following values are entered in psi-namespace.json:
- Kind: Namespace
- Name in metadata: tibco-psi200
- Name in labels: tibco-psi200
Run the following command to create the namespace:
$ kubectl create -f psi-namespace.json
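Based on the values above, psi-namespace.json might look like the following minimal sketch. This assumes the standard Kubernetes v1 Namespace schema; the label key `name` is an assumption, since the original only states that the label value is tibco-psi200.

```json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "tibco-psi200",
    "labels": {
      "name": "tibco-psi200"
    }
  }
}
```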
- To create a storage class, ensure that the following values are entered in psi_storageclass.json:
- Kind: StorageClass
- Namespace: tibco-psi200
Run the following command to create the storage class:
$ kubectl create -f psi_storageclass.json
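A minimal sketch of what psi_storageclass.json could contain, under these assumptions: the object name `psi-storageclass` and the `kubernetes.io/no-provisioner` provisioner (typical for manually provisioned hostPath volumes) are not specified in the original. Note that StorageClass is a cluster-scoped resource, so Kubernetes ignores any namespace in its metadata.

```json
{
  "apiVersion": "storage.k8s.io/v1",
  "kind": "StorageClass",
  "metadata": {
    "name": "psi-storageclass"
  },
  "provisioner": "kubernetes.io/no-provisioner",
  "volumeBindingMode": "WaitForFirstConsumer"
}
```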
- To create a Persistent Volume, ensure that the following values are entered in psi_pv.json:
- Kind: PersistentVolume
- Path in hostPath: This path must already exist on the worker node, or it is created by Kubernetes after you run kubectl create -f psi_pv.json.
- Name of the worker node: Name of the created worker node
- Storage: Size of the Persistent Volume (1Gi for 1 GB)
Run the following command to create a Persistent Volume:
$ kubectl create -f psi_pv.json
Note: In psi_pv.json, provide values for the worker-node1 hostname and the Kubernetes master hostname.
- To create a Persistent Volume Claim, ensure that the following values are entered in psi_pvc.json:
- Kind: PersistentVolumeClaim
- StorageClassName: Name of the storage class created earlier
- Storage: Size of the Persistent Volume Claim (1Gi for 1 GB)
Run the following command to create a Persistent Volume Claim:
$ kubectl create -f psi_pvc.json
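The two files above might look like the following sketches. These assume a hostPath volume pinned to the worker node via PV node affinity; the object names (`psi-pv`, `psi-storageclass`), the access mode, and the bulkload path reuse values that appear elsewhere in this procedure, while `<worker-node1-hostname>` is a placeholder you must fill in. The claim name `psi-pv-claim` matches the claimName referenced in the later deployment files.

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": { "name": "psi-pv" },
  "spec": {
    "storageClassName": "psi-storageclass",
    "capacity": { "storage": "1Gi" },
    "accessModes": [ "ReadWriteOnce" ],
    "hostPath": { "path": "/home/postgres/tibco/psi/2.1.0/bulkload" },
    "nodeAffinity": {
      "required": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
                "key": "kubernetes.io/hostname",
                "operator": "In",
                "values": [ "<worker-node1-hostname>" ]
              }
            ]
          }
        ]
      }
    }
  }
}
```

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "psi-pv-claim",
    "namespace": "tibco-psi200"
  },
  "spec": {
    "storageClassName": "psi-storageclass",
    "accessModes": [ "ReadWriteOnce" ],
    "resources": { "requests": { "storage": "1Gi" } }
  }
}
```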
- Deploy the PostgreSQL or Oracle database. The database for TIBCO PSI can either be an on-premises installation on a separate physical or virtual server, or be deployed as a container application using a Docker image, similar to the other TIBCO PSI servers. For an on-premises installation, skip database deployment and proceed to the next step. If you are deploying the database as a container application, perform the following steps:
PostgreSQL: For deploying PostgreSQL, perform the following steps:
- Create the psidb Docker image and push it to the Docker registry.
- Use the kubectl command to run the database pod and the database service:
$ kubectl create -f psidb.json
$ kubectl create -f psidbservice.json
- To create a psidb, ensure that the following values are entered in psidb.json:
- Kind: Deployment
- app in matchLabels: psi (this value must match the app in the selector in psidbservice.json)
- mountPath: /home/postgres/tibco/psi/2.1.0/bulkload
- claimName: psi-pv-claim
- Name in imagePullSecrets: Name of the registry credentials created to pull the images
- Run the following command to create a psidb:
$ kubectl create -f psidb.json
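The values above can be sketched as a psidb.json Deployment like the following. This assumes the apps/v1 Deployment schema; the container name, volume name, single replica, and PostgreSQL port 5432 are assumptions not stated in the list above, and the angle-bracket placeholders must be filled in with your registry values.

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "psidb", "namespace": "tibco-psi200" },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "psi" } },
    "template": {
      "metadata": { "labels": { "app": "psi" } },
      "spec": {
        "containers": [
          {
            "name": "psidb",
            "image": "<docker-registry-url>/psidb:<tag>",
            "ports": [ { "containerPort": 5432 } ],
            "volumeMounts": [
              {
                "name": "psi-volume",
                "mountPath": "/home/postgres/tibco/psi/2.1.0/bulkload"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "psi-volume",
            "persistentVolumeClaim": { "claimName": "psi-pv-claim" }
          }
        ],
        "imagePullSecrets": [ { "name": "<registry-credentials>" } ]
      }
    }
  }
}
```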
- To create a psidbservice, ensure that the following values are entered in psidbservice.json:
- Kind: Service
- Type: LoadBalancer
- ExternalIPs: [ "IP Address of K8s worker node" ]
- Name: tcp
- Protocol: TCP
- Port: 5432
- targetPort: 5432
- Run the following command to create psidbservice:
$ kubectl create -f psidbservice.json
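Putting the Service values together, psidbservice.json might look like this sketch. The object name and the `app: psi` selector (matching the psidb Deployment's labels, as required above) are taken from this procedure; replace the external IP placeholder with your worker node's address.

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": { "name": "psidbservice", "namespace": "tibco-psi200" },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "psi" },
    "externalIPs": [ "<IP address of K8s worker node>" ],
    "ports": [
      { "name": "tcp", "protocol": "TCP", "port": 5432, "targetPort": 5432 }
    ]
  }
}
```

The FTL, EMS, Oracle, and TIBCO PSI services later in this procedure follow the same shape, differing only in the selector, name, and port numbers.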
Oracle: For deploying Oracle in a container, either use an official Oracle 12.2.0.1 image or build one from the official 12.2.0.1 Dockerfile for the Oracle database. In either case, the database installation must have partitioning enabled.
- After the Oracle image is available, use the kubectl command to run the Oracle database pod and the database service:
$ kubectl create -f psi_db_oracle.json
$ kubectl create -f psioracleservice.json
- To create psi_db_oracle, ensure that the following values are entered in psi_db_oracle.json:
- Kind: Deployment
- app in matchLabels: psioracle (this value must match the app in the selector in psioracleservice.json)
- Image: Docker image registry URL of the official Oracle Docker image, version 12.2.0.1
- mountPath: /home/postgres/tibco/psi/2.1.0/bulkload
- claimName: psi-pv-claim
- Name in imagePullSecrets: Name of registry credentials created to pull the images
- Run the following command to create psi_db_oracle:
$ kubectl create -f psi_db_oracle.json
The database pod is now up and running. See Configuring TIBCO Product and Service Inventory for creating the TIBCO PSI database schema.
Note: The Oracle directory PSI_BULK_DIR, which must be created for setting up bulk load, must use the path /home/postgres/tibco/psi/2.1.0/bulkload.
- To create psioracleservice, ensure that the following values are entered in psioracleservice.json:
- Kind: Service
- app in selector: psioracle
- Type: LoadBalancer
- ExternalIPs: [ "IP Address of K8s worker node" ]
- Name: tcp
- Protocol: TCP
- Port: 1521
- targetPort: 1521
- Run the following command to create psioracleservice:
$ kubectl create -f psioracleservice.json
- When deploying with Enterprise Message Service™, perform the following steps:
- Ensure that the following values are entered in emsapp.json:
- Kind: Deployment
- Image: Docker registry URL for image with tag version
- Name: Name of the registry credentials created to pull the images
- To create the Enterprise Message Service™ app, run the following command:
$ kubectl create -f emsapp.json
- Ensure that the following values are entered in emsservice.json:
- Kind: Service
- App in selector: ems
- Type: LoadBalancer
- ExternalIPs: [ "IP Address of K8s worker node" ]
- Name: tcp
- Protocol: TCP
- Port: 7222
- targetPort: 7222
- To create the Enterprise Message Service™ service, run the following command:
$ kubectl create -f emsservice.json
Note: You must replace the localhost in GenericConnectionFactory with the IP address from the external IPs of emsservice by using GEMS.
- For deployment with TIBCO FTL as the messaging service, perform the following steps:
- Ensure that the following values are entered in ftlapp.json:
- Kind: Deployment
- Image: Docker registry URL for the image with tag version
- Name: Name of the registry credentials created to pull the images
- args: ["--client.url","discover://"]
Ports require two entries:
- containerPort: 13131
- Name: psiport
- containerPort: 13134
- Name: psiport1
- To create the TIBCO FTL app, run the following command:
$ kubectl create -f ftlapp.json
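The ftlapp values above, including the two container ports and the `--client.url` argument, might be assembled into a ftlapp.json sketch like the following. The object name, the `app: ftl` label, the container name, and the single replica are assumptions not given in the original.

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "ftlapp", "namespace": "tibco-psi200" },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "ftl" } },
    "template": {
      "metadata": { "labels": { "app": "ftl" } },
      "spec": {
        "containers": [
          {
            "name": "ftlapp",
            "image": "<docker-registry-url-with-tag>",
            "args": [ "--client.url", "discover://" ],
            "ports": [
              { "containerPort": 13131, "name": "psiport" },
              { "containerPort": 13134, "name": "psiport1" }
            ]
          }
        ],
        "imagePullSecrets": [ { "name": "<registry-credentials>" } ]
      }
    }
  }
}
```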
- Ensure that the following values are entered in ftlservice.json:
- Kind: Service
- Type: LoadBalancer
- ExternalIPs: [ "IP Address of K8s worker node" ]
Ports require two entries:
- Name: tcp
- Protocol: TCP
- Port: 13131
- targetPort: 13131
- Name: tcp1
- Protocol: TCP
- Port: 13134
- targetPort: 13134
- To create the TIBCO FTL service, run the following command:
$ kubectl create -f ftlservice.json
- Add the database service name, port, TIBCO FTL, and Enterprise Message Service™ service name in the psi_application.properties file. Ensure that the following values are entered in psi_application.properties:
com.tibco.fos.psi.pooledDataSource.driverClassName=org.postgresql.Driver
com.tibco.fos.psi.pooledDataSource.host=<db service name>
com.tibco.fos.psi.pooledDataSource.port=5432
com.tibco.fos.psi.pooledDataSource.database=psidb
com.tibco.fos.psi.pooledDataSource.username=psiuser
com.tibco.fos.psi.pooledDataSource.password=psipassword
com.tibco.fos.psi.pooledDataSource.url=jdbc:postgresql://<db service name>:5432/psidb?currentSchema=psischema
com.tibco.fos.psi.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
# This property is used to enable/disable notification generation in locking use cases. Valid values are [true,false].
com.tibco.fos.psi.notification.enable=false
# This property sets the messaging provider and must be set to one of [ftl,ems] if notification generation is enabled.
com.tibco.fos.psi.notification.provider=none
# These JMS properties are not used if the messaging provider is one of [none,ftl].
com.tibco.fos.psi.jms.jndi.url=tibjmsnaming://localhost:7222
# These FTL properties are not used if the messaging provider is one of [none,ems].
com.tibco.fos.psi.ftl.realmServer=http://localhost:8080
com.tibco.fos.psi.ftl.realm.principal=admin
com.tibco.fos.psi.ftl.realm.credentials=admin
com.tibco.fos.psi.ftl.realm.applicationName=psiapp
com.tibco.fos.psi.ftl.realm.endpointName=psiep
- Run the following command to create a ConfigMap:
$ kubectl create configmap configmap --from-env-file=/path/psi_application.properties --namespace=tibco-psi200
Change the ConfigMap name in psimulticontainerapp.json and deploy the TIBCO PSI multi-container pods.
- Ensure that the following values are entered in psimulticontainerapp.json:
- Kind: Deployment
- Replicas: 2
- Type of strategy: RollingUpdate
- maxSurge of rollingUpdate of strategy: 1
- maxUnavailable of rollingUpdate of strategy: 0
You can change the rolling update strategy according to the requirements of the deployment.
- Image: <docker registry URL for image with tag version>
- You can use any one of the following values for the environment: /home/postgres/tibco/psi/2.1.0/EMS_LIB_HOME/lib or /home/postgres/tibco/psi/2.1.0/FTL_LIB_HOME/lib
- Name under configMapRef: name of the created ConfigMap
- claimName under volumes: psi-pv-claim
- mountPath under persistentVolumeClaim in volumeMounts: /home/postgres/tibco/psi/2.1.0/bulkload
- Name in imagePullSecrets: name of the registry credentials created to pull the images
- Run the following command to create a psimulticontainerapp:
$ kubectl create -f psimulticontainerapp.json
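A condensed sketch of psimulticontainerapp.json combining the values above. The container name, the `app: psi` label, and the TCP readiness probe on port 8080 are assumptions (the original mentions a readiness probe but not its configuration); fill in the angle-bracket placeholders with your registry and ConfigMap values.

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "psimulticontainerapp", "namespace": "tibco-psi200" },
  "spec": {
    "replicas": 2,
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": { "maxSurge": 1, "maxUnavailable": 0 }
    },
    "selector": { "matchLabels": { "app": "psi" } },
    "template": {
      "metadata": { "labels": { "app": "psi" } },
      "spec": {
        "containers": [
          {
            "name": "psi",
            "image": "<docker-registry-url-with-tag>",
            "envFrom": [ { "configMapRef": { "name": "<created-configmap-name>" } } ],
            "readinessProbe": {
              "tcpSocket": { "port": 8080 },
              "initialDelaySeconds": 30
            },
            "volumeMounts": [
              {
                "name": "psi-volume",
                "mountPath": "/home/postgres/tibco/psi/2.1.0/bulkload"
              }
            ]
          }
        ],
        "volumes": [
          {
            "name": "psi-volume",
            "persistentVolumeClaim": { "claimName": "psi-pv-claim" }
          }
        ],
        "imagePullSecrets": [ { "name": "<registry-credentials>" } ]
      }
    }
  }
}
```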
- Ensure that the following values are entered in psiservice.json:
- Kind: Service
- app in selector: psi
- Type: LoadBalancer
- ExternalIPs: [ "IP Address of K8s worker node" ]
- Name: tcp
- Protocol: TCP
- Port: 8080
- targetPort: 8080
- Run the following command to create a TIBCO PSI service:
$ kubectl create -f psiservice.json
Note: psimulticontainerapp.json uses a readiness probe to check whether the containers are ready to receive traffic through the Kubernetes service. A liveness probe can also be configured in psimulticontainerapp.json by using the same process as for the readiness probe. For more information, see the Kubernetes documentation.
- Add the following JSON code to the TIBCO PSI multi-container JSON file for the rolling update strategy:
"strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxSurge": 1, "maxUnavailable": 0 } }
You can change the maxSurge and maxUnavailable parameters according to your requirements. If you want to update a new image in an existing deployment, or if you have changed the ConfigMap and want to update it in the existing deployment, you can patch the changes into the existing deployment. These changes are reflected in new pods that function according to the defined rolling update strategy.
For patching, copy the whole content of psimulticontainerapp.json into psimultipatch.json. You can change the required field in psimultipatch.json. Patch with the following kubectl command:
$ kubectl patch deployment psimulticontainerapp --patch "$(cat psimultipatch.json)" -n tibco-psi200
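Because kubectl patch applies a strategic merge patch by default, psimultipatch.json does not have to repeat the whole deployment; a minimal patch that only rolls out a new image might look like the following sketch. The container name `psi` is an assumption and must match the container name in your existing deployment.

```json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "psi",
            "image": "<new-docker-registry-url-with-tag>"
          }
        ]
      }
    }
  }
}
```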
- Perform horizontal scaling by keeping the TIBCO PSI instance count at 1 for the first deployment. For example, to scale the TIBCO PSI pods to 3, use the following command:
$ kubectl scale --replicas=3 -f psimulticontainerapp.json