Kubernetes Deployment

This section provides prerequisites and describes how to deploy TIBCO DQ in a Kubernetes cluster.

Kubernetes Overview

Kubernetes® (K8s) is an open-source system for running, managing, and orchestrating containerized applications in a cluster of servers (known as a Kubernetes cluster). Kubernetes clusters can run in any cloud environment (e.g., private, public, hybrid) or on-premises.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

The size of a Kubernetes cluster (that is, the number of nodes) depends on the application workloads it will run. For example, each node might be an 8-core / 64 GB RAM system. Pods are the basic objects in a Kubernetes cluster; each Pod consists of one or more software containers. The worker nodes host the Pods that make up the application workload. Usually, only one container runs in a Pod, although multiple containers can run in a Pod if specific environment requirements call for it. If a Pod fails, Kubernetes can automatically replace it with a new instance.
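
For illustration, a minimal Pod manifest running a single container looks like the following sketch (a generic example, not a TIBCO DQ component):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
    spec:
      containers:
        - name: app              # typically one container per Pod
          image: nginx:1.25      # any container image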

Key benefits of Kubernetes include automatic scalability, efficient load balancing, high availability, failover/fault tolerance, and deploying updates across your environment without any disruptions to users.

Prerequisites

TIBCO® DQ is a collection of microservices running as separate containers in a container orchestration environment such as Kubernetes. Your Kubernetes cluster must have at least the following:

Both cases require 500 GB of persistent volume to be available.

The following table lists the recommended replica counts and the CPU and memory requirements for each of the containers.

Container Name        Replicas      CPU (Per Replica)    Memory (Per Replica)
                      Min   Max     Requests   Limits    Requests   Limits
tdq-address-server    1     1       2000m      2000m     6Gi        8Gi
tdq-butler            1     2       1000m      3000m     2Gi        3Gi
tdq-dqs               1     1       500m       1000m     1Gi        1Gi
tdq-dsml-classifier   1     1       2000m      5000m     2Gi        4Gi
tdq-grafana           1     1       1000m      2000m     1Gi        3Gi
tdq-ksource           1     1       1000m      2000m     1Gi        2Gi
tdq-postgres          1     1       2000m      4000m     2Gi        8Gi
tdq-profiler          4     8       4000m      4000m     2Gi        4Gi
tdq-python-services   3     6       4000m      4000m     2Gi        8Gi
tdq-valet-services    1     3       500m       1000m     1Gi        2Gi
tdq-valet-ui          1     1       500m       1000m     1Gi        2Gi
tdq-wso2              1     1       2000m      4000m     2Gi        3Gi
tdq-log-viewer        1     1       1000m      1000m     0.5Gi      1Gi
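
These values map directly to standard Kubernetes resource requests and limits. As a sketch, the tdq-butler row would translate into a container spec fragment like the following (illustrative only; the manifests shipped in the distribution are authoritative):

    resources:
      requests:
        cpu: "1000m"       # CPU request per replica
        memory: "2Gi"      # memory request per replica
      limits:
        cpu: "3000m"       # CPU limit per replica
        memory: "3Gi"      # memory limit per replica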

Required Software

Download and unzip the TIB_tdq_5.x.x_container.zip file.

Download and extract a Linux installation of Loqate into the NFS server volume that will be used by the instance of TIBCO® DQ being deployed to the Kubernetes cluster:

<NFS_Volume_Mount_Path>/loqate_linux
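
Assuming the Loqate distribution arrives as a tar.gz archive (the archive name below is hypothetical), the extraction might look like this sketch:

    $ mkdir -p <NFS_Volume_Mount_Path>/loqate_linux
    $ tar -xzf loqate_linux.tar.gz -C <NFS_Volume_Mount_Path>/loqate_linux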

Building Docker Images

Before you deploy TIBCO DQ in a Kubernetes cluster, you must first create Docker images for the TIBCO DQ components.

  1. Open Windows PowerShell or any Linux shell.
  2. Change your directory to the folder where you extracted the TIB_tdq_5.x.x_container.zip file.
  3. Change your directory to the install/tdq-k8s/k8s folder.
  4. Deploy the NGINX Ingress controller using the following command:
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml

    This command will deploy Kubernetes objects into the ingress-nginx namespace.

    After deployment, run the following command to extract the controller's mapped HTTPS NodePort, which is required in the next step:

    $ kubectl -n ingress-nginx get service ingress-nginx-controller | grep -oP '(?<=443:).*(?=/TCP)'
  5. Run the init_kubernetes_configuration.sh script to update the domain name, the ingress node port, and the container image registry URL:
    $ ./init_kubernetes_configuration.sh <TDQ_CLUSTER_DOMAIN> <TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT> <TDQ_CONTAINER_IMAGE_REGISTRY>

    where:

    <TDQ_CLUSTER_DOMAIN>
    Is the cluster domain intended to be used for the HTTPS URL.
    <TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>
    Must be set to the NGINX Ingress controller's HTTPS port (from step 4).
    <TDQ_CONTAINER_IMAGE_REGISTRY>
    Must be the URL of the container registry from which the Kubernetes cluster can retrieve the TIBCO DQ container images.
  6. Run the following command:
    $ docker-compose -f docker-compose-kubernetes.yaml build
  7. Build the DSML Services Container Edition images.

    For more information, see Installing TIBCO WebFOCUS® DSML Services Container Edition.

  8. (Optional) Push the Docker images to an image registry that the Kubernetes cluster can access (see the sketch after this list).
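
For step 8, pushing typically means tagging each image with the registry prefix and pushing it. A sketch for a single image (the image name and tag are illustrative; <TDQ_CONTAINER_IMAGE_REGISTRY> is the registry URL from step 5):

    $ docker tag tdq-butler:latest <TDQ_CONTAINER_IMAGE_REGISTRY>/tdq-butler:latest
    $ docker push <TDQ_CONTAINER_IMAGE_REGISTRY>/tdq-butler:latest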

Deploying TIBCO DQ to a Kubernetes Cluster

Prerequisites

TIBCO DQ deployed in a Kubernetes cluster requires a shared PersistentVolume (PV) set to the ReadWriteMany access mode. The following procedures use an NFS volume as an example.

Deploying Infrastructure Components

This section describes how to deploy TIBCO DQ infrastructure components.

  1. Change your directory to the install/tdq-k8s/k8s/ folder inside the folder where you extracted the TIB_tdq_5.x.x_container.zip file.
  2. Create the namespace using the following command:
    $ kubectl apply -f infra-components/tdq-namespace.yaml

    This will create the namespace object tdq-ns in the Kubernetes cluster.

  3. Create the PersistentVolume (PV).
    1. Update the tdq-pv.yaml file with your NFS path (</NFS/SHARED/PATH>) and NFS server IP address (<NFS_SERVER_IP>); a sketch of the resulting manifest follows this list.
    2. Run the following command:
      $ kubectl apply -f infra-components/tdq-pv.yaml

      This will create a cluster-scoped PV named tdq-nfs-pv.

  4. Create a Persistent Volume Claim (PVC) using the following command:
    $ kubectl apply -f infra-components/tdq-pvc.yaml

    This will create a PVC named tdq-nfs-pvc in the tdq-ns namespace.

  5. Create a secret for the ingresses.

    Either create a self-signed certificate using the following command, or supply your own key and certificate when creating the Kubernetes secret.

    The following command creates a self-signed certificate for IP 10.10.11.12; the output files are tdq.key and tdq.cert:

    $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tdq.key -out tdq.cert -subj "/CN=10.10.11.12/O=10.10.11.12"

    The following command stores the tdq.key and tdq.cert files created by the previous command in a Kubernetes secret named tdq-tls-secret in the tdq-ns namespace:

    $ kubectl create secret tls tdq-tls-secret --key tdq.key --cert tdq.cert -n tdq-ns
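
As referenced in step 3, the following is a sketch of what tdq-pv.yaml plausibly contains once updated (the field layout is assumed; the file in the distribution is authoritative). The name, access mode, and capacity come from this section:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: tdq-nfs-pv
    spec:
      capacity:
        storage: 500Gi       # matches the 500 GB persistent volume prerequisite
      accessModes:
        - ReadWriteMany      # shared access mode required by TIBCO DQ
      nfs:
        server: <NFS_SERVER_IP>
        path: </NFS/SHARED/PATH>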

Creating ConfigMaps

This section describes how to create ConfigMaps for TIBCO DQ components.

  1. Deploy tdq-dsml-classifier ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-dsml-classifier.yaml
  2. Deploy tdq-grafana ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-grafana.yaml
  3. Deploy tdq-postgres ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-postgres.yaml
  4. Deploy tdq-profiler ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-profiler.yaml
  5. Deploy tdq-wso2 ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-wso2-conf.yaml
  6. Deploy tdq-valet-services ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-valet-services.yaml
  7. Deploy tdq-python-services ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-python-services.yaml
  8. Deploy shared-datasource ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-shared-datasource.yaml
  9. Deploy shared-jasypt ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-shared-jasypt.yaml
  10. Deploy tdq-log-viewer ConfigMap using the following command:
    $  kubectl apply -f config-maps/tdq-cfg-map-log-viewer.yaml
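
To confirm that all ten ConfigMaps were created, you can list them (a quick sanity check, not part of the shipped procedure):

    $ kubectl -n tdq-ns get configmaps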

Deploying TIBCO DQ Services and Components

This section describes how to deploy TIBCO DQ services and key components.

  1. Deploy tdq-postgres using the following command:
    $  kubectl apply -f tdq-postgres/tdq-postgres.yaml
  2. Deploy tdq-wso2 using the following command:
    $  kubectl apply -f tdq-wso2/tdq-wso2.yaml
  3. Deploy tdq-ksource using the following command:
    $  kubectl apply -f tdq-ksource/tdq-ksource.yaml
  4. Deploy tdq-dsml-classifier using the following command:
    $  kubectl apply -f tdq-dsml-classifier/tdq-dsml-classifier.yaml
  5. Deploy tdq-python-services using the following command:
    $  kubectl apply -f tdq-python-services/tdq-python-services.yaml
  6. Deploy tdq-dqs using the following command:
    $  kubectl apply -f  tdq-dqs/tdq-dqs.yaml
  7. Deploy tdq-grafana using the following command:
    $  kubectl apply -f  tdq-grafana/tdq-grafana.yaml
  8. Deploy tdq-address-server using the following command:
    $  kubectl apply -f  tdq-address-server/tdq-address-server.yaml
  9. Deploy tdq-profiler using the following command:
    $  kubectl apply -f  tdq-profiler/tdq-profiler.yaml
  10. Deploy tdq-butler using the following command:
    $  kubectl apply -f  tdq-butler/tdq-butler.yaml
  11. Deploy tdq-valet-services using the following command:
    $  kubectl apply -f  tdq-valet-services/tdq-valet-services.yaml
  12. Deploy tdq-valet-ui using the following command:
    $  kubectl apply -f tdq-valet-ui/tdq-valet-ui.yaml
  13. Deploy tdq-log-viewer using the following command:
    $  kubectl apply -f tdq-log-viewer/tdq-log-viewer.yaml
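
To watch the Pods start up as the deployments are applied (optional; the -w flag streams status changes until you interrupt it):

    $ kubectl -n tdq-ns get pods -w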

Deploying Horizontal Pod Autoscaler (HPA) Components

This section describes how to deploy Kubernetes Horizontal Pod Autoscaler (HPA) components. HPA automatically scales the TIBCO DQ components to match demand based on observed metrics, such as average CPU usage and average memory usage.

All HPA targets are set to 50% of the requested CPU and memory usage. For the minimum and maximum replica counts of each component, see the resource specification table in Prerequisites.

Note: If you do not require HPA in your deployment, you can skip this section.

  1. Deploy tdq-butler HPA component using the following command:
    $  kubectl apply -f tdq-butler/tdq-butler-hpa.yaml
  2. Deploy tdq-profiler HPA component using the following command:
    $  kubectl apply -f tdq-profiler/tdq-profiler-hpa.yaml
  3. Deploy tdq-python-services HPA component using the following command:
    $  kubectl apply -f tdq-python-services/tdq-python-services-hpa.yaml
  4. Deploy tdq-valet-services HPA component using the following command:
    $  kubectl apply -f tdq-valet-services/tdq-valet-services-hpa.yaml
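
For reference, the following is a sketch of what tdq-profiler-hpa.yaml plausibly contains (the shipped file is authoritative): the replica bounds come from the resource specification table, and the 50% utilization targets from the description above.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: tdq-profiler-hpa      # object name assumed from the file name
      namespace: tdq-ns
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: tdq-profiler
      minReplicas: 4              # from the resource specification table
      maxReplicas: 8
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 50    # 50% of requested CPU
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 50    # 50% of requested memory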

Deploying TIBCO DQ Ingress Components

This section describes how to deploy the TIBCO DQ ingress components.

  1. Deploy the tdq-valet-services ingress using the following command:
    $  kubectl apply -f ingress/tdq-ingress-valet-services.yaml
  2. Deploy the tdq-wso2 ingress using the following command:
    $  kubectl apply -f ingress/tdq-ingress-https-backend.yaml
  3. Deploy the tdq-valet-ui and tdq-grafana ingress using the following command:
    $  kubectl apply -f ingress/tdq-ingress-valet-ui-grafana.yaml
  4. Deploy the tdq-log-viewer ingress using the following command:
    $  kubectl apply -f ingress/tdq-ingress-log-viewer.yaml
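
To confirm the four ingress objects were created and have hosts assigned (the same check is repeated during verification below):

    $ kubectl -n tdq-ns get ingress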

Verifying and Testing

This section describes how to verify and test TIBCO DQ in a Kubernetes cluster by confirming all required services and components are running.

Confirming Services and Components are Running

To confirm TIBCO DQ services and components are running:

  1. Confirm that the WSO2 Identity Server is started and running.

    The following is a sample command you can use to check the status of the tdq-wso2 pod:

    $ kubectl -n tdq-ns get pod -l app.kubernetes.io/component=tdq-wso2
    $ kubectl -n tdq-ns describe pod -l app.kubernetes.io/component=tdq-wso2
  2. Wait approximately 15 minutes, then confirm that the tdq-valet-ui service is started and running.

    The following is a sample command you can use to check the status of the tdq-valet-ui pod:

    $ kubectl -n tdq-ns get pod -l app.kubernetes.io/component=tdq-valet-ui
    $ kubectl -n tdq-ns describe pod -l app.kubernetes.io/component=tdq-valet-ui
  3. Verify that all other TIBCO DQ deployments are started and running.

    The following is a sample command you can use to check the status of each TIBCO DQ pod:

    $ kubectl -n tdq-ns get pod

    The following is a sample command you can use to check the status of each TIBCO DQ service:

    $ kubectl -n tdq-ns get svc

    The following is a sample command you can use to check the status of each TIBCO DQ ingress:

    $ kubectl -n tdq-ns get ingress

Logging in to the TIBCO DQ Console

There are three certificates that need to be accepted by your browser before you can log in to the TIBCO DQ Console.

  1. Open the WSO2 console at https://tdq-wso2.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT> and accept the self-signed certificate.

    Note: You can either create and use the self-signed certificate for HTTPS or use your own certificate.

  2. Enter the following URL in your browser (substituting values for <TDQ_CLUSTER_DOMAIN> and <TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>) and accept the certificate:
    https://tdq-valet-services.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>
  3. Enter the following URL in your browser to access the TIBCO DQ Console (substituting values for <TDQ_CLUSTER_DOMAIN> and <TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>):
    https://tdq-valet-ui.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>

    The default login credentials (user ID and password) are:

    • User ID: tibcodq
    • Password: tibcodq
  4. Enter the following URL in your browser to access the log files for TIBCO DQ:
    https://tdq-log-viewer.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>

    The default login credentials (user ID and password) are:

    • User ID: tibcodq
    • Password: tibcodq
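
If you prefer to smoke-test an endpoint from a shell before using a browser, a quick check might look like the following (the -k flag tells curl to skip verification of the self-signed certificate):

    $ curl -k https://tdq-valet-ui.<TDQ_CLUSTER_DOMAIN>:<TDQ_INGRESS_CONTROLLER_HTTPS_NODEPORT>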