Distributed BPM System
In a production environment, it is good practice to use a distributed configuration for ActiveMatrix BPM (along with a suitable underlying architecture).
This configuration can provide the following advantages:
- Scalability: ActiveMatrix BPM software provides specialization and horizontal scalability capabilities. You can:
  - add BPM logical nodes (of type Client, Server, or BPM) to boost the capacity of the BPM system.
  - distribute BPM logical nodes to different TIBCO Host instances and physical machines as required.
  See "TIBCO ActiveMatrix BPM Logical Nodes and Services" in BPM Concepts for more information about the different types of BPM logical node and their uses.
- High availability and fault tolerance: ActiveMatrix BPM software provides active/active clustering capabilities. Adding a second BPM logical node (of type BPM) provides high availability and fault tolerance. In the event of a system-affecting failure on one node, load is automatically switched to the remaining node (see the sketch after this list).
Note: Using the active/active clustering capabilities of ActiveMatrix BPM to provide high availability and fault tolerance is best practice. However, you can also use third-party solutions. See Using Third-Party Solutions to Configure a High Availability, Fault Tolerant ActiveMatrix BPM System for more information.
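The following sketch makes the scalability and active/active failover behavior concrete. It is illustrative only and does not use any TIBCO API: requests are spread round-robin across BPM logical nodes (horizontal scalability), and a node that fails a health check is skipped so that load moves to the remaining node (failover). The node URLs and the /ping health endpoint are assumptions made for the example, not part of the product.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative sketch only: round-robin routing with failover across
    // hypothetical BPM node endpoints. Not a TIBCO API.
    public class BpmNodeRouter {
        private final List<String> nodes;                 // base URLs of the BPM logical nodes
        private final AtomicInteger next = new AtomicInteger();
        private final HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        public BpmNodeRouter(List<String> nodes) {
            this.nodes = nodes;
        }

        // Round-robin over the nodes; if a node fails its health check,
        // skip it and try the next one (active/active failover).
        public String pickHealthyNode() {
            for (int i = 0; i < nodes.size(); i++) {
                String node = nodes.get(Math.floorMod(next.getAndIncrement(), nodes.size()));
                if (isHealthy(node)) {
                    return node;
                }
            }
            throw new IllegalStateException("No healthy BPM node available");
        }

        private boolean isHealthy(String baseUrl) {
            try {
                HttpRequest probe = HttpRequest.newBuilder()
                        .uri(URI.create(baseUrl + "/ping"))    // assumed health endpoint
                        .timeout(Duration.ofSeconds(2))
                        .GET()
                        .build();
                HttpResponse<Void> rsp =
                        client.send(probe, HttpResponse.BodyHandlers.discarding());
                return rsp.statusCode() == 200;
            } catch (Exception e) {
                return false;                                // treat any error as unhealthy
            }
        }

        public static void main(String[] args) {
            BpmNodeRouter router = new BpmNodeRouter(List.of(
                    "http://bpm-node-1:8080",                // hypothetical hosts
                    "http://bpm-node-2:8080"));
            System.out.println("Routing request to " + router.pickHealthyNode());
        }
    }

In a real deployment this routing is handled by the BPM system and your load balancer; the sketch only shows why adding a second BPM logical node both increases capacity and keeps the system available when one node fails.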
It is good practice to host the ActiveMatrix Administrator server independently from ActiveMatrix BPM, using its own set of TIBCO Host instances. You should create the ActiveMatrix Administrator server (if it does not already exist) before you create the distributed BPM system. See Creating an ActiveMatrix Administrator Server (Single or Replicated) for more information about how to do this.
See Creating a Distributed ActiveMatrix BPM System for more information.
If you are running on a Microsoft Windows system with a large number of logical cores (more than 64), a single BPM node does not use all of the available CPUs. Windows splits the cores into processor groups, each of which is treated as a single scheduling entity. Each group contains a subset of the available cores, up to a maximum of 64. Groups are numbered starting with 0; systems with fewer than 64 logical processors always have a single group, Group 0.
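One quick way to observe this limit is to check how many logical processors the JVM hosting the BPM node can see. A minimal, TIBCO-independent check:

    // Prints the number of logical processors visible to this JVM. On Windows
    // machines with more than 64 logical cores, a process is assigned to a
    // single processor group by default, so the printed value may be capped
    // at the size of one group (at most 64) rather than the machine's full
    // core count, depending on the JVM version and OS configuration.
    public class VisibleProcessors {
        public static void main(String[] args) {
            int visible = Runtime.getRuntime().availableProcessors();
            System.out.println("Logical processors visible to this JVM: " + visible);
        }
    }

Running additional BPM nodes on such a machine, each assigned to a different processor group, is one way to make use of the remaining cores.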