Monitoring

This section describes how to monitor an installed TIBCO Streaming Model Management Server (MMS), including how to access the services associated with the MMS components.

Accessing Services

The Kubernetes services associated with the MMS components are not exposed outside of the Kubernetes cluster. Accessing the MMS services requires these general steps:

  1. Log in to the Kubernetes cluster (For more information, see AKS or OpenShift).
  2. Set the Kubernetes context (For more information, see Setting Current Context).
  3. Locate the target service and get its port number (For more information, see Locating Services).
  4. Set up a port-forward from the local workstation to the target service (the Logging steps below include port-forward examples).
  5. Open the port-forwarded port on the local workstation in a web browser.

These steps require kubectl to be installed on the local workstation. See the general installation instructions for details on installing prerequisites.
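
For example, accessing the prometheus service (described in detail under Metrics) follows this flow; replace <context> with your cluster's context name:

//
//  Set the context, locate the service, and forward its port
//
kubectl config use-context <context>
kubectl get services --namespace mms prometheus
kubectl port-forward service/prometheus --namespace mms 9090:9090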

Logging

All MMS components, including pipelines, generate log records that are stored in a logging store (Elasticsearch). These services are available for accessing the logging components:

Component               Service                 Default Credentials (username/password)
Logging Store           elasticsearch-es-http   elastic/elastic
Logging Visualization   kibana-kb-http          elastic/elastic

Accessing Logging Store - Elasticsearch

//
//  Get elasticsearch-es-http service port
//
kubectl get services --namespace mms elasticsearch-es-http 
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
elasticsearch-es-http   ClusterIP   10.0.142.148   <none>        9200/TCP   12d

//
//  Set up port-forward to local port 9200
//
kubectl port-forward service/elasticsearch-es-http --namespace mms 9200:9200

//
//  Open browser window (macOS only)
//
open https://localhost:9200

//
//  Check if Elasticsearch is working (Linux)
//
curl -u elastic:<password> -k https://localhost:9200
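
With the port-forward in place, the log indices held in the store can be listed using Elasticsearch's standard _cat API (the index names returned depend on your installation):

//
//  List log indices (Linux)
//
curl -u elastic:<password> -k "https://localhost:9200/_cat/indices?v"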

Accessing Logging Visualization - Kibana

//
//  Get kibana-kb-http service port
//
kubectl get services --namespace mms kibana-kb-http 
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kibana-kb-http   ClusterIP   10.0.125.107   <none>        443/TCP   12d

//
//  Access Kibana using an ingress
//
We are using an ingress for Kibana at `<ingressDomain>/kibana`. Note that the open-source version we use does not support OAuth2, so the ingress uses the basic authentication provided by Kibana itself. You must provide the username and password to access Kibana.

Example URL: https://streaming-web.devsw1120snapshot.streamingaz.tibcocloud.com/kibana
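
To verify that Kibana is reachable, its standard /api/status endpoint can be queried through the ingress; this sketch assumes the ingress path shown in the example URL above:

//
//  Check if Kibana is responding (Linux)
//
curl -u elastic:<password> -k https://<ingressDomain>/kibana/api/status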

Pod Logging

In addition to accessing log records in the logging store, log output can also be viewed directly from a Pod using this command:

//
//  Follow log output - replace <pod-name> with actual Pod name
//
kubectl logs <pod-name> --namespace mms --follow

For more information, see Service Pods for instructions on getting Pod names.
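
When a service runs multiple Pod replicas, logs can also be followed across all of them using a label selector; this sketch assumes the app=artifact-management selector shown under Service Pods:

//
//  Follow log output for all Pods matching a selector
//
kubectl logs --namespace mms --selector app=artifact-management \
    --all-containers --follow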

Metrics

Metrics provide insights into the performance and health of the system. These services are available for accessing the metrics components:

Component       Service          Default Credentials (username/password)
Metrics Store   prometheus       None
OTel Metrics    otel-collector   None
Jaeger Query    jaeger-query     None

Accessing Metrics Store - Prometheus

//
//  Get prometheus service port
//
kubectl get services --namespace mms prometheus
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
prometheus   ClusterIP   10.0.92.15   <none>        9090/TCP   12d

//
//  Set up port-forward to local port 9090
//
kubectl port-forward service/prometheus --namespace mms 9090:9090

//
//  Open browser window (macOS only)
//
open http://localhost:9090
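
Prometheus can also be queried over the same port-forward using its HTTP API; for example, the built-in up metric reports which scrape targets are healthy:

//
//  Query Prometheus using its HTTP API (Linux)
//
curl "http://localhost:9090/api/v1/query?query=up"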

Accessing OTel Metrics - OTel Collector

//
//  Get otel-collector service port
//
kubectl get services --namespace mms otel-collector-collector-monitoring
NAME                                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
otel-collector-collector-monitoring   ClusterIP   10.0.189.39   <none>        8888/TCP   12d

//
//  Set up port-forward to local port 8888
//
kubectl port-forward service/otel-collector-collector-monitoring --namespace mms 8888:8888

//
//  Open browser window (macOS only)
//
open http://localhost:8888/metrics

//
//  Check if OTel Collector is working (Linux)
//
curl -k http://localhost:8888/metrics
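
The collector's own telemetry on this port uses the otelcol_ metric prefix, so its health metrics can be filtered out of the full scrape:

//
//  Filter the collector's internal metrics (Linux)
//
curl -s http://localhost:8888/metrics | grep otelcol_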

Accessing Jaeger Query

//
//  Get jaeger-query service port
//
kubectl get services --namespace mms jaeger-query
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
jaeger-query   ClusterIP   10.0.252.112   <none>        16686/TCP,16685/TCP,16687/TCP   12d

//
//  Access Jaeger using an ingress
//
We are using an ingress for Jaeger at `<ingressDomain>/jaeger`. No authentication is required to access Jaeger.

Example URL: https://streaming-web.devsw1120snapshot.streamingaz.tibcocloud.com/jaeger
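
Alternatively, the jaeger-query service can be reached with a port-forward to the UI port shown above; /api/services is Jaeger's standard endpoint for listing traced services:

//
//  Set up port-forward to local port 16686
//
kubectl port-forward service/jaeger-query --namespace mms 16686:16686

//
//  List traced services (Linux)
//
curl http://localhost:16686/api/services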

Pipelines and Tasks

Pipelines are started using Tekton pipelines and tasks, which are created using Helm charts.

When a pipeline is deployed, a PipelineRun instance is created in the mms namespace, along with its associated TaskRun instances. The PipelineRun and TaskRun instances can be used to monitor the status of running pipelines.

Note: tkn must be installed on the local workstation. See the general installation instructions for details on installing Tekton.

Running Pipelines and TaskRuns

Running PipelineRuns and TaskRuns are displayed with these commands:

//
//  Display all PipelineRun instances
//
tkn pipelinerun list --namespace mms

//
//  Display all TaskRun instances
//
tkn taskrun list --namespace mms
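
If tkn is not available, PipelineRun and TaskRun are Kubernetes custom resources and can also be listed directly with kubectl:

//
//  Display PipelineRun and TaskRun instances using kubectl
//
kubectl get pipelineruns --namespace mms
kubectl get taskruns --namespace mms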

The PipelineRun naming conventions are:

  • artifact-management-* - MMS installation
  • bootstrap-* - pipelines containing third-party applications

Logging

Logging output can be displayed for both PipelineRun and TaskRun instances using these commands:

//
//  Display PipelineRun logs - replace <pipelinerun-name> with actual PipelineRun name
//
tkn pipelinerun logs --namespace mms <pipelinerun-name>

//
//  Display TaskRun logs - replace <taskrun-name> with actual TaskRun name
//
tkn taskrun logs --namespace mms <taskrun-name>
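
Logs from an in-progress run can also be streamed as they are produced using the --follow option:

//
//  Stream PipelineRun logs while the run is executing
//
tkn pipelinerun logs --namespace mms --follow <pipelinerun-name>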

Identifying Pods

Pipelines run in a Pod. The Pod that was started by a PipelineRun to deploy the pipeline can be determined using the jobIdentifier and namespace associated with the TaskRun.

The TaskRun instances associated with a PipelineRun are displayed by the `describe` command.

//
//  Display TaskRuns associated with a PipelineRun
//  Replace <pipelinerun-name> with actual PipelineRun name
//
tkn pipelinerun describe --namespace mms <pipelinerun-name>

For example:

tkn pipelinerun describe --namespace mms bootstrap
Name:              bootstrap
Namespace:         mms
Pipeline Ref:      bootstrap
Service Account:   operator-admin
Labels:
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=bootstrap
app.kubernetes.io/part-of=ep-kubernetes-installer
tekton.dev/pipeline=bootstrap
Annotations:
helm.sh/hook=post-install,post-upgrade
meta.helm.sh/release-name=mms
meta.helm.sh/release-namespace=mms

🌡️  Status

STARTED      DURATION   STATUS
1 week ago   5m37s      Succeeded

⏱  Timeouts
Pipeline:   1h0m0s

📂 Workspaces

NAME                        SUB PATH   WORKSPACE BINDING
∙ build-storage             ---        VolumeClaimTemplate
∙ maven-settings            ---        ConfigMap (config=custom-maven-settings)
∙ helm-values               ---        ConfigMap (config=helm-values)
∙ install-pipeline-source   ---        ConfigMap (config=install-pipeline-source)
∙ deploy-artifact           ---        ConfigMap (config=deploy-artifact-script)

🗂  Taskruns

NAME                                   TASK NAME                  STARTED      DURATION   STATUS
∙ bootstrap-install-pipeline-run       install-pipeline-run       1 week ago   30s        Succeeded
∙ bootstrap-install-pipeline-image     install-pipeline-image     1 week ago   38s        Succeeded
∙ bootstrap-install-pipeline-maven     install-pipeline-maven     1 week ago   25s        Succeeded
∙ bootstrap-install-pipeline-prepare   install-pipeline-prepare   1 week ago   11s        Succeeded
∙ bootstrap-nexus-helm-index           nexus-helm-index           1 week ago   5s         Succeeded
∙ bootstrap-jaeger                     jaeger                     1 week ago   8s         Succeeded
∙ bootstrap-opentelemetry              opentelemetry              1 week ago   9s         Succeeded
∙ bootstrap-ingress                    ingress                    1 week ago   22s        Succeeded
∙ bootstrap-prometheus                 prometheus                 1 week ago   40s        Succeeded
∙ bootstrap-cert-manager               cert-manager               1 week ago   15s        Succeeded
∙ bootstrap-elasticsearch              elasticsearch              1 week ago   13s        Succeeded
∙ bootstrap-deploy-artifacts           deploy-artifacts           1 week ago   2m5s       Succeeded
∙ bootstrap-tools-image                tools-image                1 week ago   1m16s      Succeeded
∙ bootstrap-tools-prepare              tools-prepare              1 week ago   5s         Succeeded
∙ bootstrap-nexus-repositories         nexus-repositories         1 week ago   19s        Succeeded
∙ bootstrap-nexus                      nexus                      1 week ago   5s         Succeeded
∙ bootstrap-tidy-up                    tidy-up                    1 week ago   6s         Succeeded

This command is then used to display the jobIdentifier and namespace:

//
//  Describe TaskRun details - replace <taskrun-name> with actual TaskRun name
//
tkn taskrun describe --namespace mms <taskrun-name>

For example:

tkn taskrun describe --namespace mms artifact-management-artifact-management-image
Name:              artifact-management-artifact-management-image
Namespace:         mms
Task Ref:          kaniko
Service Account:   operator-admin
Timeout:           1h0m0s
Labels:
app.kubernetes.io/managed-by=tekton-pipelines
app.kubernetes.io/part-of=mms
app.kubernetes.io/version=0.1
tekton.dev/memberOf=tasks
tekton.dev/pipeline=artifact-management
tekton.dev/pipelineRun=artifact-management
tekton.dev/pipelineTask=artifact-management-image
tekton.dev/task=kaniko
Annotations:
meta.helm.sh/release-name=mms
meta.helm.sh/release-namespace=mms
pipeline.tekton.dev/affinity-assistant=affinity-assistant-72ebea6e38
pipeline.tekton.dev/release=a22f812
tekton.dev/pipelines.minimumVersion=0.12.1
tekton.dev/tags=image-build

🌡️  Status

STARTED      DURATION    STATUS
2 days ago   1m37s       Succeeded

⚓ Params

NAME        VALUE
∙ IMAGE     artifact-management:11.2.0-SNAPSHOT
∙ CONTEXT   artifact-management/target/container-source

📝 Results

NAME             VALUE
∙ IMAGE-DIGEST   sha256:0a73a690ee79b34e9d0e6d79accfd620e9892d6e0900c860dfc603ca30fed0a7

📂 Workspaces

NAME       SUB PATH   WORKSPACE BINDING
∙ source   ---        PersistentVolumeClaim (claimName=pvc-c0111ad049)

🦶 Steps

NAME               STATUS
∙ build-and-push   Completed
∙ write-digest     Completed

This output indicates that the artifact management image build ran in the mms namespace and has a jobIdentifier of artifact-management-artifact-management-image.

Finally, the associated Pod can be found using this command:

//
//  Get all pods in namespace and filter by job-identifier prefix
//  Replace <namespace> and <job-identifier> with values found above
//
kubectl get pods --namespace <namespace> | grep <job-identifier>

The names of Pods started by TaskRuns are prefixed with the jobIdentifier.

For example:

kubectl get pods --namespace mms | grep kafka-source-image
artifact-management-kafka-source-image-pod                       0/2     Completed   0               7d6h

Components

The components that make up MMS are all installed into the mms (configurable) and tekton-pipelines namespaces.

In addition, a container registry is required to store built container images. MMS uses a container registry provided by the cloud infrastructure. It is configured during installation.

A brief description of each component and its associated service name is given in the table below.

Components Table

Component               Service                 Description
Artifact Management     artifact-management     Artifact (flows, pipelines, and models) management, editing, governance, and deployment
Artifact Repository     artifact-repository     Maven, Python, and Helm artifact repository
Git                     git                     Git repository server
Logging Store           elasticsearch-es-http   Elasticsearch logging store
Logging Visualization   kibana-kb-http          Kibana log visualization
Distributed Tracing     jaeger-query            Distributed tracing for monitoring and troubleshooting microservices
Metrics Store           prometheus              Prometheus metrics store
Tekton                  N/A                     Tekton CI/CD pipeline definition and execution. Tekton resources such as Pipelines, Tasks, PipelineRuns, and TaskRuns are managed within the Kubernetes cluster.

Setting Current Context

The current context is set using these commands:

//
// Get current context
//
kubectl config current-context

//
// Get all available contexts
//
kubectl config get-contexts

//
// Set current context - replace <context> with actual context name
//
kubectl config use-context <context>

Locating Services

Use the components table to determine the target service name, and then display the service using these commands:

//
// Display service - replace <service-name> with actual service name
//
kubectl get services --namespace mms <service-name>

//
// Display all MMS services
//
kubectl get services --namespace mms

Service Pods

Use the components table to determine the target service name, and then display the Pods associated with the service using these commands:

//
// Display selector for service - replace <service-name> with actual service name
//
kubectl get services \
    --namespace mms <service-name> \
    --output custom-columns=SELECTOR:.spec.selector

//
// Display service Pods - replace <selector-value> with the
// selector value above replacing ":" with "="
//
kubectl get pods --namespace mms --selector <selector-value>

For example, finding the Pod for the artifact-management service would require these commands:

//
//  Get selector for artifact-management
//
kubectl get services \
    --namespace mms artifact-management \
    --output custom-columns=SELECTOR:.spec.selector
SELECTOR
map[app:artifact-management]


//
//  Display pods using the selector
//
//  Notice that the selector name from map[app:artifact-management]
//  is app=artifact-management
//
kubectl get pods --namespace mms --selector app=artifact-management
NAME                                  READY   STATUS    RESTARTS   AGE
artifact-management-98cd74b9b-hc8n6   2/2     Running   0          5h14m
artifact-management-98cd74b9b-xr6gt   2/2     Running   0          5h14m
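
The selector can also be extracted with a jsonpath expression, which avoids the map[...] formatting shown above:

//
//  Display the service selector using jsonpath
//
kubectl get services \
    --namespace mms artifact-management \
    --output jsonpath='{.spec.selector}'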