Provisioning Considerations
- A Pulsar server group is composed of multiple Kubernetes StatefulSets, each with multiple pods. Therefore, plan to have sufficient Kubernetes node resources available before provisioning a new TIBCO Messaging Quasar capability.
- The message data storage class plays an important role in the overall server performance.
- For better performance, use separate journal data storage for production instances, and whenever the message data storage is subject to latency spikes. In general, the journal data persistent volumes must use a high-performance, low-latency StorageClass; a sketch of such a class follows the list below. The journal volumes hold the write-ahead logs, so they are much smaller than the message data volumes, especially when large message backlogs are possible. Keeping message and journal data on separate volumes can improve write performance significantly.
- If the "Capability is for production" checkbox is selected, provisioning a large Pulsar server group requires relatively large underlying Kubernetes nodes. Trying to provision Apache Pulsar on a Kubernetes cluster with only small nodes results in pods stuck in the Pending state.
- Using shared log storage simplifies support and administration, as the debug logs for all three servers can be viewed from the toolset pod. However, shared log storage does not work with all storage classes; see the sketch of a shared ReadWriteMany volume claim after this list.
  Some examples of storage supporting multi-pod usage are as follows:
  - Network File System (NFS version 4.1)
  - Server Message Block (SMB2/SMB3)
  - Elastic File Store (EFS)
  Examples of storage that do not work include:
  - Elastic Block Store (EBS)
  - Node local disk
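As an illustration of multi-pod log storage, the following is a minimal sketch of a shared log volume claim. It assumes an EFS-backed StorageClass named efs-sc created separately with the AWS EFS CSI driver; the claim name, class name, and size are hypothetical.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pulsar-shared-logs   # hypothetical name
spec:
  accessModes:
    - ReadWriteMany          # multi-pod access; EBS and node local disks cannot provide this
  storageClassName: efs-sc   # assumption: an EFS CSI StorageClass provisioned separately
  resources:
    requests:
      storage: 20Gi          # illustrative size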
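For the journal volumes, a high-performance, low-latency StorageClass is needed. The following is a minimal sketch assuming the AWS EBS CSI driver is installed; the class name, volume type, and IOPS figure are illustrative, and any comparable low-latency provisioner can be substituted.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pulsar-journal-ssd     # hypothetical name
provisioner: ebs.csi.aws.com   # assumption: AWS EBS CSI driver
parameters:
  type: io2                    # provisioned-IOPS SSD for low, predictable latency
  iops: "4000"                 # illustrative value
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain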
Setting Resources Explicitly
You can override the default resource requests and limits for StatefulSet pods using the optional Custom Configuration tab and a YAML file that contains custom values. For more information, see Provisioning the TIBCO Messaging Quasar Capability.
If you do not define any resources for a StatefulSet, the default values are applied. If you override only some resources for a StatefulSet, the unspecified resources are not set; for example, if only CPU and memory requests are defined, no limits are applied. The following YAML sets resources explicitly for every StatefulSet; a sketch of a partial override follows it.
toolset:
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 4
      memory: 4Gi
bookkeeper:
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 2
      memory: 2Gi
proxy:
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 1
      memory: 1Gi
broker:
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 4
      memory: 4Gi
zookeeper:
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 4
      memory: 4Gi
autorecovery:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 1024Mi
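As a sketch of the partial-override behavior described above, the following hypothetical custom-configuration file defines only CPU and memory requests for the broker StatefulSet. With this file, the broker pods receive the stated requests and no limits, and every other StatefulSet keeps its default resources. The values shown are illustrative, not recommendations.

broker:
  resources:
    requests:
      cpu: 1
      memory: 2Gi
    # No limits block: per the override behavior above, no CPU or memory
    # limits are applied to the broker pods.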