This section provides prerequisites and describes how to deploy TIBCO DQ in a Kubernetes cluster.
Kubernetes® (K8s) is an open-source system for running, managing, and orchestrating containerized applications in a cluster of servers (known as a Kubernetes cluster). Kubernetes clusters can run in any cloud environment (e.g., private, public, hybrid) or on-premises.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The size of a Kubernetes cluster (determined by the number of nodes) depends on the application workloads that will be running. For example, each node can represent an 8-core / 64 GB RAM system. Pods are the basic objects in a Kubernetes cluster; each Pod consists of one or more software containers. The worker nodes host the Pods that make up the application workload. Usually, only one container runs in a Pod, but multiple containers can run in a single Pod if needed (depending on specific environment requirements). If a Pod fails, Kubernetes can automatically replace it with a new instance.
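As a minimal illustration of the Pod concept, a single-container Pod can be described with a manifest like the following. This is a generic sketch only; the name, image, and resource values are hypothetical and are not part of the TIBCO DQ deployment:

```yaml
# Hypothetical single-container Pod manifest; the name, image, and
# resource values are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: example-app
      image: nginx:1.25
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
```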
Key benefits of Kubernetes include automatic scalability, efficient load balancing, high availability, failover/fault tolerance, and deploying updates across your environment without any disruptions to users.
TIBCO® DQ is a collection of microservices running as separate containers in a private cloud environment such as Kubernetes. Your Kubernetes cluster must have at least the following:
In either case, at least 500 GB of persistent volume must be available.
The following table lists the recommended CPU, memory, and volume requirements for each of the containers.
| Container Name | Replicas (Min) | Replicas (Max) | CPU Requests (Per Replica) | CPU Limits (Per Replica) | Memory Requests (Per Replica) | Memory Limits (Per Replica) |
|---|---|---|---|---|---|---|
| tdq-address-server | 1 | 1 | 2000m | 2000m | 6Gi | 8Gi |
| tdq-butler | 1 | 2 | 1000m | 3000m | 2Gi | 3Gi |
| tdq-dqs | 1 | 1 | 500m | 1000m | 1Gi | 1Gi |
| tdq-dsml-classifier | 1 | 1 | 2000m | 5000m | 2Gi | 4Gi |
| tdq-grafana | 1 | 1 | 1000m | 2000m | 1Gi | 3Gi |
| tdq-ksource | 1 | 1 | 1000m | 2000m | 1Gi | 2Gi |
| tdq-postgres | 1 | 1 | 2000m | 4000m | 2Gi | 8Gi |
| tdq-profiler | 4 | 8 | 4000m | 4000m | 2Gi | 4Gi |
| tdq-python-services | 3 | 6 | 4000m | 4000m | 2Gi | 8Gi |
| tdq-valet-services | 1 | 3 | 500m | 1000m | 1Gi | 2Gi |
| tdq-valet-ui | 1 | 1 | 500m | 1000m | 1Gi | 2Gi |
| tdq-wso2 | 1 | 1 | 2000m | 4000m | 2Gi | 3Gi |
| tdq-log-viewer | 1 | 1 | 1000m | 1000m | 0.5Gi | 1Gi |
Download and unzip the TIB_tdq_5.x.x_container.zip file.
Download and extract a Linux installation of Loqate into the NFS server volume that will be used by the instance of TIBCO® DQ being deployed to the Kubernetes cluster:
<NFS_Volume_Mount_Path>/loqate_linux
Before you deploy TIBCO DQ in a Kubernetes cluster, you must first create Docker images for the TIBCO DQ components.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml
This command will deploy Kubernetes objects into the ingress-nginx namespace.
After deployment, run the following command to extract the mapped controller HTTPS port of the cluster, which will be required in the next step:
$ kubectl -n ingress-nginx get service ingress-nginx-controller | grep -oP '(?<=443:).*(?=/TCP)'
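The `grep -oP` pattern above uses a lookbehind (`(?<=443:)`) and a lookahead (`(?=/TCP)`) to isolate the NodePort mapped to container port 443. Run against a representative line of `get service` output (fabricated here for illustration), it behaves as follows:

```shell
# Sample line imitating `kubectl get service ingress-nginx-controller`
# output; the regex extracts the HTTPS NodePort (31443 in this made-up
# example).
echo "ingress-nginx-controller NodePort 10.96.0.10 <none> 80:30080/TCP,443:31443/TCP 5m" \
  | grep -oP '(?<=443:).*(?=/TCP)'
# → 31443
```

Note that `grep -P` requires GNU grep built with PCRE support, which is standard on most Linux distributions.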
$ ./init_kubernetes_configuration.sh <TDQ_CLUSTER_DOMAIN> <TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT> <TDQ_CONTAINER_IMAGE_REGISTRY>
where:
$ docker-compose -f docker-compose-kubernetes.yaml build
For more information, see Installing TIBCO WebFOCUS® DSML Services Container Edition.
TIBCO DQ deployed in a Kubernetes cluster requires a shared PersistentVolume (PV) that is set to the ReadWriteMany access mode. In the following procedures, the NFS-volume is used as an example.
Prerequisites
This section describes how to deploy TIBCO DQ infrastructure components.
$ kubectl apply -f infra-components/tdq-namespace.yaml
This will create the namespace object tdq-ns in the Kubernetes cluster.
$ kubectl apply -f infra-components/tdq-pv.yaml
This will create a PV named tdq-nfs-pv. (Note that PVs are cluster-scoped resources, not namespaced.)
$ kubectl apply -f infra-components/tdq-pvc.yaml
This will create a PVC named tdq-nfs-pvc in the tdq-ns namespace.
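For reference, an NFS-backed PV/PVC pair along the following lines can satisfy the ReadWriteMany requirement. This is a sketch, not the contents of the shipped tdq-pv.yaml/tdq-pvc.yaml files; the NFS server address and export path are placeholders you must replace, and the 500Gi capacity matches the persistent volume requirement stated in the prerequisites:

```yaml
# Sketch of an NFS-backed PV/PVC with ReadWriteMany access; the server
# address and path are placeholder assumptions, not shipped values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tdq-nfs-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS_SERVER_ADDRESS>
    path: <NFS_EXPORT_PATH>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tdq-nfs-pvc
  namespace: tdq-ns
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
  volumeName: tdq-nfs-pv
  storageClassName: ""
```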
Either create a self-signed certificate using the following command, or supply your own certificate key and certificate to create the Kubernetes secret.
The following command creates a self-signed certificate for IP 10.10.11.12; the output files are tdq.key and tdq.cert:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tdq.key -out tdq.cert -subj "/CN=10.10.11.12/O=10.10.11.12"
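If you want to verify the result, `openssl x509` prints the certificate's subject and validity window. The following self-contained snippet repeats the generation step from above and then inspects the output file:

```shell
# Create the self-signed certificate (same flags as the step above), then
# confirm its subject CN matches the intended IP and check its validity
# dates.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tdq.key -out tdq.cert \
  -subj "/CN=10.10.11.12/O=10.10.11.12" 2>/dev/null
openssl x509 -in tdq.cert -noout -subject -dates
```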
The following command stores the tdq.key and tdq.cert files created by the previous command in a Kubernetes secret named tdq-tls-secret in the tdq-ns namespace:
$ kubectl create secret tls tdq-tls-secret --key tdq.key --cert tdq.cert -n tdq-ns
This section describes how to create ConfigMaps for TIBCO DQ components.
$ kubectl apply -f config-maps/tdq-cfg-map-dsml-classifier.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-grafana.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-postgres.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-profiler.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-wso2-conf.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-valet-services.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-python-services.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-shared-datasource.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-shared-jasypt.yaml
$ kubectl apply -f config-maps/tdq-cfg-map-log-viewer.yaml
This section describes how to deploy TIBCO DQ services and key components.
$ kubectl apply -f tdq-postgres/tdq-postgres.yaml
$ kubectl apply -f tdq-wso2/tdq-wso2.yaml
$ kubectl apply -f tdq-ksource/tdq-ksource.yaml
$ kubectl apply -f tdq-dsml-classifier/tdq-dsml-classifier.yaml
$ kubectl apply -f tdq-python-services/tdq-python-services.yaml
$ kubectl apply -f tdq-dqs/tdq-dqs.yaml
$ kubectl apply -f tdq-grafana/tdq-grafana.yaml
$ kubectl apply -f tdq-address-server/tdq-address-server.yaml
$ kubectl apply -f tdq-profiler/tdq-profiler.yaml
$ kubectl apply -f tdq-butler/tdq-butler.yaml
$ kubectl apply -f tdq-valet-services/tdq-valet-services.yaml
$ kubectl apply -f tdq-valet-ui/tdq-valet-ui.yaml
$ kubectl apply -f tdq-log-viewer/tdq-log-viewer.yaml
This section describes how to deploy Kubernetes Horizontal Pod Autoscaler (HPA) components. HPA automatically scales the TIBCO DQ components to match demand based on observed metrics, such as average CPU usage and average memory usage.
All HPA component targets are set to 50% of the requested CPU and memory usage. Check the minimum and maximum replica counts for each component in the deployment resource specification table (Prerequisites).
Note: For users who do not require HPA in their deployment, this section can be skipped.
$ kubectl apply -f tdq-butler/tdq-butler-hpa.yaml
$ kubectl apply -f tdq-profiler/tdq-profiler-hpa.yaml
$ kubectl apply -f tdq-python-services/tdq-python-services-hpa.yaml
$ kubectl apply -f tdq-valet-services/tdq-valet-services-hpa.yaml
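To illustrate what such an HPA manifest typically contains, here is a sketch for tdq-profiler that uses the replica bounds from the resource specification table and the 50% utilization targets described above. The shipped tdq-profiler-hpa.yaml may differ in detail, and the Deployment name is assumed to match the component name:

```yaml
# Sketch only: an autoscaling/v2 HPA for tdq-profiler with the min/max
# replicas from the resource table and 50% utilization targets. The
# Deployment name is an assumption.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tdq-profiler-hpa
  namespace: tdq-ns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tdq-profiler
  minReplicas: 4
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 50
```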
This section describes how to deploy the TIBCO DQ ingress components.
$ kubectl apply -f ingress/tdq-ingress-valet-services.yaml
$ kubectl apply -f ingress/tdq-ingress-https-backend.yaml
$ kubectl apply -f ingress/tdq-ingress-valet-ui-grafana.yaml
$ kubectl apply -f ingress/tdq-ingress-log-viewer.yaml
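For orientation, an HTTPS Ingress for one of these components generally looks like the following sketch. The host pattern and TLS secret name come from earlier steps in this procedure, but the backend service name and port are assumptions and may differ from the shipped manifest:

```yaml
# Sketch only: an HTTPS Ingress for tdq-valet-ui. The backend service
# name and port are assumptions, not shipped values.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tdq-ingress-valet-ui
  namespace: tdq-ns
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - tdq-valet-ui.<TDQ_CLUSTER_DOMAIN>
      secretName: tdq-tls-secret
  rules:
    - host: tdq-valet-ui.<TDQ_CLUSTER_DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tdq-valet-ui
                port:
                  number: 443
```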
This section describes how to verify and test TIBCO DQ in a Kubernetes cluster by confirming all required services and components are running.
To confirm TIBCO DQ services and components are running:
The following is a sample command you can use to check the status of the tdq-wso2 pod:
$ kubectl -n tdq-ns get pod -l app.kubernetes.io/component=tdq-wso2
$ kubectl -n tdq-ns describe pod -l app.kubernetes.io/component=tdq-wso2
The following is a sample command you can use to check the status of the tdq-valet-ui pod:
$ kubectl -n tdq-ns get pod -l app.kubernetes.io/component=tdq-valet-ui
$ kubectl -n tdq-ns describe pod -l app.kubernetes.io/component=tdq-valet-ui
The following is a sample command you can use to check the status of each TIBCO DQ pod:
$ kubectl -n tdq-ns get pod
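When checking all pods at once, it can help to filter out everything that is already Running. The awk filter below is shown against fabricated sample output so its behavior is clear; in practice you would pipe the output of `kubectl -n tdq-ns get pod` into the same filter:

```shell
# Fabricated sample of `get pod` output; the awk filter prints the name
# and status of any pod that is not yet Running.
printf '%s\n' \
  'NAME                READY   STATUS    RESTARTS   AGE' \
  'tdq-wso2-7d9f       1/1     Running   0          5m' \
  'tdq-dqs-4k2x        0/1     Pending   0          5m' \
  | awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
# → tdq-dqs-4k2x Pending
```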
The following is a sample command you can use to check the status of each TIBCO DQ service:
$ kubectl -n tdq-ns get svc
The following is a sample command you can use to check the status of each TIBCO DQ ingress:
$ kubectl -n tdq-ns get ingress
There are three certificates that need to be accepted by your browser before you can log in to the TIBCO DQ Console.
Note: You can either create and use the self-signed certificate for HTTPS or use your own certificate.
https://tdq-valet-services.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>
https://tdq-valet-ui.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>
The default login credentials (user ID and password) are:
https://tdq-log-viewer.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>
The default login credentials (user ID and password) are: