Deploying Multi-Pod Clusters
Before you begin
- To deploy TIBCO Product and Service Catalog in high-availability mode, read the hardware requirements at Hardware Requirements.
- Choose a suitable database for this deployment. TIBCO Product and Service Catalog currently supports Oracle and PostgreSQL databases running on premises or in the cloud.
Multi-pod cluster deployment on Kubernetes consists of the following broad steps:
(i) Create a namespace,
(ii) Create Kubernetes entities (such as a storage class and a persistent volume),
(iii) Deploy the database, TIBCO EMS, and Apache Ignite, and
(iv) Deploy TIBCO Product and Service Catalog or TIBCO Offer and Pricing Designer.
For detailed steps, see the procedure below.
Procedure
- Create a namespace for working with the multi-pod TIBCO Product and Service Catalog deployment on Kubernetes.
  - Ensure that the following values are entered in fc-namespace.json (a sample file follows this step):
    - Kind: Namespace
    - Name in metadata: tibco-fc-500
    - Name in labels: tibco-fc-500
  - Run the following command to create the namespace:
    $ kubectl create -f fc-namespace.json
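For reference, a minimal fc-namespace.json consistent with the values above might look like the following sketch (the exact contents shipped with your installation may differ):
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "tibco-fc-500",
    "labels": {
      "name": "tibco-fc-500"
    }
  }
}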
- Create the following entities in the specified order:
  i. Storage class
     Configuration details for fc-storageclass.json (a sample file follows this item):
     - Kind: StorageClass
     - Namespace: tibco-fc-500
     Command to create a storage class:
     $ kubectl create -f fc-storageclass.json
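As a rough sketch, assuming static provisioning against the manually created Persistent Volume from the next item, fc-storageclass.json could resemble the following; the metadata name and the provisioner are illustrative assumptions, not values taken from the product distribution:
{
  "apiVersion": "storage.k8s.io/v1",
  "kind": "StorageClass",
  "metadata": {
    "name": "fc-storageclass",
    "namespace": "tibco-fc-500"
  },
  "provisioner": "kubernetes.io/no-provisioner"
}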
  ii. Persistent Volume
     Configuration details for fc-pv.json (a sample file follows this item):
     - Kind: PersistentVolume
     - Path in hostPath: This path must already exist on the worker node, or it is created by Kubernetes after you run kubectl create -f fc-pv.json. Create the mounted directory mentioned in the fc-pv.json file and grant all the required permissions on it.
     - Name of the worker node: Name of the created worker node
     - Storage: Size of the Persistent Volume (for example, 10Gi for 10 GB)
     Note: In fc-pv.json, you must provide values for the worker-node1 hostname and the Kubernetes master hostname.
     Command to create a Persistent Volume:
     $ kubectl create -f fc-pv.json
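A hypothetical fc-pv.json along these lines would satisfy the values listed above; the PV name, the example hostPath path, the access mode, the storage class name, and the node-affinity approach used to pin the hostPath directory to worker-node1 are illustrative assumptions:
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "fc-pv"
  },
  "spec": {
    "capacity": { "storage": "10Gi" },
    "accessModes": [ "ReadWriteOnce" ],
    "storageClassName": "fc-storageclass",
    "hostPath": { "path": "/opt/tibco/fc-data" },
    "nodeAffinity": {
      "required": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              { "key": "kubernetes.io/hostname", "operator": "In", "values": [ "<worker-node1-hostname>" ] }
            ]
          }
        ]
      }
    }
  }
}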
  iii. Persistent Volume Claim
     Configuration details for fc-pvc.json (a sample file follows this item):
     - Kind: PersistentVolumeClaim
     - StorageClassName: Name of the storage class that you created
     - Storage: Size of the Persistent Volume Claim (1 GB or more)
     Command to create a Persistent Volume Claim:
     $ kubectl create -f fc-pvc.json
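A matching fc-pvc.json might look like this sketch; the claim name, access mode, and the 1Gi request are examples, and the StorageClassName must match the storage class created in item i:
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "fc-pvc",
    "namespace": "tibco-fc-500"
  },
  "spec": {
    "accessModes": [ "ReadWriteOnce" ],
    "storageClassName": "fc-storageclass",
    "resources": {
      "requests": { "storage": "1Gi" }
    }
  }
}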
- Deploy a supported database. You need the database details later in this procedure.
  - To deploy a containerized PostgreSQL database, see Deploying Containerized PostgreSQL Database.
  - To deploy an on-premises PostgreSQL database, see Deploying On-premise PostgreSQL Database.
  - To deploy a containerized Oracle database, see Deploying Containerized Oracle Database.
  - To deploy an on-premises Oracle database, see Deploying On-premise Oracle Database.
- Deploy TIBCO Enterprise Message Service (TIBCO EMS). TIBCO Product and Service Catalog integrates with on-premises TIBCO EMS as well as cloud TIBCO EMS.
  - To deploy containerized Enterprise Message Service, see Deploying Containerized Enterprise Message Service.
  - To deploy on-premises Enterprise Message Service, see Deploying On-premises Enterprise Message Service.
- Create a configuration map.
  - Ensure that the following values are entered in fc-properties.properties (a sample file follows this step):
    - MQ_MDM_DB_TYPE: POSTGRES or ORACLE
    - MQ_MDM_DB_HOST: Database host or the fc db Kubernetes service
    - MQ_MDM_DB_PORT: Database port
    - MQ_MDM_DB_NAME: Database service name as configured in tnsnames
    - MQ_MDM_DB_SCHEMA: Database schema name
    - MQ_MDM_DB_MIN_CONN_COUNT: 20
    - MQ_MDM_DB_MAX_CONN_COUNT: 250
    - MQ_MDM_DB_IGN_MIN_CONN_COUNT: 1
    - MQ_MDM_DB_IGN_MAX_CONN_COUNT: 10
    - EMS_SERVER_URL: tcp://host:port. If Enterprise Message Service is running on an on-premises setup, the host value is the hostname or IP address on which Enterprise Message Service is running. If Enterprise Message Service is containerized, the host value is the Enterprise Message Service service name or IP address.
    - FC_SERVICE_HOST: IP address of the service where TIBCO Product and Service Catalog is deployed
    - FC_SERVICE_PORT: Port on which TIBCO Product and Service Catalog is running
    - OPD_APP_DATA_VOLUME: heavy/light
  - Run the following command to create a config map:
    $ kubectl create cm fc-config -n tibco-fc-500 --from-env-file=<path to fc-properties.properties>
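The fc-properties.properties file is a plain key=value env file. The sample below is illustrative only: the host names, ports, schema, and service names are placeholders for your environment (kubectl ignores the comment line when it builds the config map):
# Sample fc-properties.properties -- all values are environment-specific placeholders
MQ_MDM_DB_TYPE=POSTGRES
MQ_MDM_DB_HOST=<database host or fc db Kubernetes service>
MQ_MDM_DB_PORT=5432
MQ_MDM_DB_NAME=<database service name>
MQ_MDM_DB_SCHEMA=<database schema name>
MQ_MDM_DB_MIN_CONN_COUNT=20
MQ_MDM_DB_MAX_CONN_COUNT=250
MQ_MDM_DB_IGN_MIN_CONN_COUNT=1
MQ_MDM_DB_IGN_MAX_CONN_COUNT=10
EMS_SERVER_URL=tcp://<ems-host-or-service>:7222
FC_SERVICE_HOST=<IP address of the TIBCO Product and Service Catalog service>
FC_SERVICE_PORT=<port on which TIBCO Product and Service Catalog is running>
OPD_APP_DATA_VOLUME=light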
- To create secrets for the database and Enterprise Message Service, ensure that the following values are entered in fc-secret.json (a sample file follows this step):
  - MQ_MDM_DB_USER=<base64 encoded database username>
  - MQ_MDM_DB_PASSWORD=<base64 encoded database password>
  - MQ_MDM_DB_ADMIN_USER=<base64 encoded database admin username>
  - MQ_MDM_DB_ADMIN_USER_PWD=<base64 encoded database admin password>
  - EMS_USER_NAME=<base64 encoded EMS username>
  - EMS_PASSWORD=<base64 encoded EMS password>
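For reference, fc-secret.json is a standard Kubernetes Secret; a sketch consistent with the keys above is shown below. The secret name is an assumption, and each value must be the base64-encoded string (for example, produced with echo -n 'value' | base64):
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "fc-secret",
    "namespace": "tibco-fc-500"
  },
  "type": "Opaque",
  "data": {
    "MQ_MDM_DB_USER": "<base64 encoded database username>",
    "MQ_MDM_DB_PASSWORD": "<base64 encoded database password>",
    "MQ_MDM_DB_ADMIN_USER": "<base64 encoded database admin username>",
    "MQ_MDM_DB_ADMIN_USER_PWD": "<base64 encoded database admin password>",
    "EMS_USER_NAME": "<base64 encoded EMS username>",
    "EMS_PASSWORD": "<base64 encoded EMS password>"
  }
}
The secret is created in the same namespace, for example with $ kubectl create -f fc-secret.json.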
- Deploy Apache Ignite.
  - Ensure that the following values are entered in ignite.json (a sample file follows this step):
    - Kind: Deployment
    - App in matchLabels: fc
    - Tier in matchLabels: ignite
    - Image: Docker registry URL of the image with the latest tag
    - Name in imagePullSecrets: Name of the registry credentials created to pull the images
  - Run the following command to create fcdb:
    $ kubectl create -f ignite.json
    Note: To change the JVM parameters of the Ignite server, update the entry-point.sh file.
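An illustrative ignite.json matching the values above could look like the sketch below; the deployment name, replica count, and the envFrom references to the fc-config config map and fc-secret secret are assumptions about how the image is wired, not documented values:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "ignite",
    "namespace": "tibco-fc-500"
  },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "fc", "tier": "ignite" } },
    "template": {
      "metadata": { "labels": { "app": "fc", "tier": "ignite" } },
      "spec": {
        "containers": [
          {
            "name": "ignite",
            "image": "<docker-registry-url>/<ignite-image>:latest",
            "envFrom": [
              { "configMapRef": { "name": "fc-config" } },
              { "secretRef": { "name": "fc-secret" } }
            ]
          }
        ],
        "imagePullSecrets": [ { "name": "<registry-credentials-secret>" } ]
      }
    }
  }
}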
- When deploying with TIBCO Product and Service Catalog, perform the following steps:
  - In the tibco-fc-500 namespace, grant the permissions in the view ClusterRole to the service account named default in the tibco-fc-500 namespace. You must create a role binding to access the TIBCO Product and Service Catalog instance in the cluster so that it can create a WildFly cluster.
    $ kubectl create rolebinding default-viewer --clusterrole=view --serviceaccount=tibco-fc-500:default --namespace=tibco-fc-500
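Optionally, you can confirm that the binding is effective by impersonating the default service account; the command should return yes:
    $ kubectl auth can-i list pods --as=system:serviceaccount:tibco-fc-500:default -n tibco-fc-500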
  - Ensure that the following values are entered in fc.json (a sample file follows this step):
    - Kind: Deployment
    - App in matchLabels: fc (this value must be the same as the value of app in the selector in fc-service.json)
    - Tier in matchLabels: instance (this value must be the same as the value of tier in the selector in fc-service.json)
    - Image: Docker registry URL of the image with the latest tag
    - Name in imagePullSecrets: Name of the registry credentials created to pull images
  - Run the following command to create the TIBCO Product and Service Catalog deployment:
    $ kubectl create -f fc.json
    Note: To change the JVM parameters of the Ignite server to values other than the defaults, update the entry-point.sh file.
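A sketch of fc.json consistent with the values above follows; the deployment name, replica count, container name, and the envFrom wiring to fc-config and fc-secret are assumptions:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "fc",
    "namespace": "tibco-fc-500"
  },
  "spec": {
    "replicas": 2,
    "selector": { "matchLabels": { "app": "fc", "tier": "instance" } },
    "template": {
      "metadata": { "labels": { "app": "fc", "tier": "instance" } },
      "spec": {
        "containers": [
          {
            "name": "fc",
            "image": "<docker-registry-url>/<fc-image>:latest",
            "envFrom": [
              { "configMapRef": { "name": "fc-config" } },
              { "secretRef": { "name": "fc-secret" } }
            ]
          }
        ],
        "imagePullSecrets": [ { "name": "<registry-credentials-secret>" } ]
      }
    }
  }
}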
  - Run the following command to create the TIBCO Product and Service Catalog service:
    $ kubectl create -f fc-service.json
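For reference, a hypothetical fc-service.json with the selector values that fc.json must match might look like this; the service name, service type, and port are assumptions (the port should correspond to FC_SERVICE_PORT in the config map):
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "fc-service",
    "namespace": "tibco-fc-500"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "fc", "tier": "instance" },
    "ports": [ { "name": "http-fc", "protocol": "TCP", "port": 8080, "targetPort": 8080 } ]
  }
}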
  - Run the following command to create the configurator service:
    $ kubectl create -f configurator-service.json
- When deploying with Offer and Pricing Designer (OPD), perform the following steps:
  - Ensure that the following values are entered in opd.json (a sample file follows this step):
    - Kind: Deployment
    - App in matchLabels: fc (this value must be the same as the value of app in the selector in opd-service.json)
    - Tier in matchLabels: opd (this value must be the same as the value of tier in the selector in opd-service.json)
    - Image: Docker registry URL of the image with the latest tag
    - Name in imagePullSecrets: Name of the registry credentials created to pull images
  - Run the following command to create the TIBCO Offer and Pricing Designer deployment:
    $ kubectl create -f opd.json
    Note: The existing topology of the TIBCO Product and Service Catalog deployment supports only a single pod of OPD. If the OPD pod is scaled up, OPD features do not work as expected.
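An illustrative opd.json is sketched below; given the single-pod restriction noted above, replicas is set to 1, while the deployment name, container name, and envFrom wiring are assumptions:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "opd",
    "namespace": "tibco-fc-500"
  },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "fc", "tier": "opd" } },
    "template": {
      "metadata": { "labels": { "app": "fc", "tier": "opd" } },
      "spec": {
        "containers": [
          {
            "name": "opd",
            "image": "<docker-registry-url>/<opd-image>:latest",
            "envFrom": [
              { "configMapRef": { "name": "fc-config" } },
              { "secretRef": { "name": "fc-secret" } }
            ]
          }
        ],
        "imagePullSecrets": [ { "name": "<registry-credentials-secret>" } ]
      }
    }
  }
}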
  - Ensure that the following values are entered in opd-service.json (a sample file follows this step):
    - Kind: Service
    - Type: LoadBalancer
    - Ports: [ { "name": "http-opd", "protocol": "TCP", "port": 8070, "targetPort": 8070 } ]
  - Run the following command to create the Offer and Pricing Designer service:
    $ kubectl create -f opd-service.json
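Putting the values above together, opd-service.json might resemble the following sketch; the service name is an assumption, and the selector must match the labels used in opd.json:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "opd-service",
    "namespace": "tibco-fc-500"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "fc", "tier": "opd" },
    "ports": [ { "name": "http-opd", "protocol": "TCP", "port": 8070, "targetPort": 8070 } ]
  }
}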