Multi-Node Multi-Instance Topology

The multi-node multi-instance, or horizontal scaling, deployment topology is achieved by running multiple instances of the micro-services on multiple nodes (machines). This is done mainly to increase the overall processing capacity of the Fulfillment Order Management server-side components, namely omsServer and jeoms. The default micro-service instance is duplicated on the main node and on the other nodes to run multiple instances.

Once one node is configured, the other nodes download the configuration from the database. However, this feature is available only if database configuration is selected; it is selected by default. If it is disabled, the configuration must be copied manually.
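
As an illustration of how a service locates the shared configuration, the admin database details live in each service's configDBrepo.properties file. The property keys and values below are assumptions for this sketch, not the product's actual key names; use the keys already present in the file (shown as an append for brevity, in practice edit the existing entries):

    # Hypothetical sketch: key names and connection details are assumptions.
    # Use the key names already present in configDBrepo.properties.
    cat >> $AF_HOME/roles/omsServer/standalone/config/configDBrepo.properties <<'EOF'
    configdb.url=jdbc:oracle:thin:@//dbhost:1521/omsdb
    configdb.username=oms_admin
    configdb.password=********
    EOF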

Figure: Multi-node Multi-instance topology


Note: The explanations for NODE_ID and DOMAIN_ID and the note about consumer counts given in the Single-node Single-instance topology section hold true here as well.

The following steps describe how to set up this deployment topology.

Prerequisites

  • The Single-node Multi-instance deployment topology is already set up and working on the main node on which Fulfillment Order Management is installed.
  • All the required underlying software, such as Java, and the libraries referenced by the micro-services, such as the Oracle JDBC driver and the EMS libraries, are already available on the additional nodes. All the corresponding environment variables are set and exported, as sketched below.
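
For illustration, the environment on an additional node might be prepared along the following lines. The variable values, versions, and paths are assumptions for this sketch and are installation-specific:

    # Illustrative only: versions and paths below are assumptions.
    export JAVA_HOME=/opt/java/jdk11
    export PATH=$JAVA_HOME/bin:$PATH
    # Libraries referenced by the micro-services (example locations)
    export CLASSPATH=/opt/lib/ojdbc8.jar:/opt/tibco/ems/lib/tibjms.jar:$CLASSPATH
    # Fulfillment Order Management installation directory on this node
    export AF_HOME=/opt/tibco/af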

Creating New Cluster Members

  1. Access the Fulfillment Order Management Configurator GUI application in a supported browser through the HTTP interface of the default Configurator micro-service instance on the main node, using the URL http://<HOST>:<PORT>.
  2. Create the additional members to be run on the additional nodes by cloning the existing members. The easiest way is to create one clone of each existing member and change the member name. Because each new member runs on a different node, the ports can be kept as-is.

    For example, assume that there are 10 existing members, namely Member1, Member2, ..., Member10, in the Single-node Multi-instance topology. Clone Member1 and rename the clone to create Member11.

  3. The port numbers need not be changed for Member11 in Configurator. Because it runs on an altogether different node, its ports, although the same, do not conflict with those of Member1. Follow the same procedure for the remaining members.
  4. After the required number of additional members to be run on the other nodes has been created, the other nodes download the configuration from the database if database configuration is selected.

    1. Make as many copies of the service directories ($AF_HOME/roles/omsServer, ope, aopd, and so on) as the number of instances to be configured, at any location. Specify the admin database details in the configDBrepo.properties file and change the port numbers accordingly (see the sketch after this list).

    2. Set the $AF_CONFIG_HOME environment variable to point to the copied config directory, $AF_HOME/roles/configurator/standalone/config. This step is required only for the Configurator service; the other services do not need this variable set.
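
A minimal sketch of these two sub-steps on an additional node follows. The copied directory name and member number are illustrative assumptions:

    # Copy a service directory once per required instance (names are examples).
    cp -r $AF_HOME/roles/omsServer $AF_HOME/roles/omsServer-member11

    # Point the copy at the admin database so that it can download its
    # configuration, and adjust the port numbers accordingly.
    vi $AF_HOME/roles/omsServer-member11/standalone/config/configDBrepo.properties

    # Required only when the copied service is the Configurator:
    export AF_CONFIG_HOME=$AF_HOME/roles/configurator/standalone/config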

Adding Cluster Members to the Database

The entries for the additional members to be run on the other nodes must be added to the DOMAINMEMBERS table in the same way as explained earlier in the Single-node Multi-instance topology.
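
As a purely hypothetical illustration, such an insert might look like the following; the connect string, column names, and values are assumptions, and the actual DOMAINMEMBERS entries are those described in the Single-node Multi-instance topology section:

    # Hypothetical sketch: the connect string and column names are assumptions.
    sqlplus oms_admin@//dbhost:1521/omsdb <<'EOF'
    INSERT INTO DOMAINMEMBERS (DOMAINID, MEMBERID, NODEID)
      VALUES ('ORCH-DOMAIN', 'Member11', 'NODE2');
    COMMIT;
    EOF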

Creating Additional Micro-Service Instances

  1. Create additional micro-service instances on other nodes by copying the existing micro-service directories from the main node.

    The easiest way is to copy each directory and just change the member’s suffix number in the directory name.

  2. The port numbers in $AF_HOME/roles/omsServer/standalone/config/application.properties do not need to be changed for Member11. Because it runs on a different node, its ports, although the same, do not conflict with those of Member1. Follow the same procedure for the remaining instances.
  3. In the case of static allocation of member IDs to nodes, set the NODE_ID and DOMAIN_ID system properties, as sketched below. If dynamic allocation is used, these properties do not need to be set.
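
A sketch of the static case follows; the values are examples, and whether the service start script forwards JVM options in this way is an assumption:

    # Static allocation only: pin this instance to a node and a domain.
    # With dynamic allocation, do not set these at all.
    export NODE_ID=NODE2
    export DOMAIN_ID=ORCH-DOMAIN
    # If the start script forwards JVM options, the equivalent system
    # properties would be: -DNODE_ID=NODE2 -DDOMAIN_ID=ORCH-DOMAIN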

Sanity Test

The sanity test for the Multi-node Multi-instance topology is exactly the same as the one explained in the Single-node Multi-instance topology. Here, the additional Orchestrator instances running on the other nodes join the ORCH-DOMAIN cluster. Any one instance among them acts as the Cluster Manager and all the others act as Workers.
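
One quick, informal way to confirm that the instances on the other nodes have joined the cluster is to look for cluster-membership messages in each instance's log. The log location and message wording below are assumptions and vary by installation:

    # Assumed log location and message text; adjust for your installation.
    grep -i "cluster" $AF_HOME/roles/omsServer/standalone/logs/*.log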