=======================================
LiveView High Availability Table Sample
=======================================

Overview:

This sample demonstrates LiveView's high availability table feature, which allows
multiple LiveView instances to participate in a high-availability setup. To provide
HA and recovery, this sample uses the open-source Apache Kafka as the publishing bus.
The LiveView Orders table is configured to use persistence, and it has a publisher
interface with a Kafka subscriber. For a more detailed look at using Kafka in
LiveView, see the Kafka sample or the LiveView recovery protocol documentation in:

  LiveView Development Guide -> EventFlow Publish Applications

or at:

  https://docs.tibco.com/pub/sb-cep/x.y.z/doc/html/lv-devel/lv-publisher.html

The cluster is bootstrapped with the log-based recovery instance; once bootstrapped,
it needs only peer-based recovery instances. The front end is a services-only
instance that performs the HA table setup. It exposes all Orders tables in the
cluster as HA_Orders, which can be queried like a normal LiveView table.

For this sample, you can use an existing Kafka server or, if you need a Kafka
installation, download the Kafka bundle from:

  http://kafka.apache.org/downloads.html

Note that your system requires at least 16GB of memory for the complete demo. This
sample was developed using the Scala 2.12 build of Kafka 0.10.2.1, but other versions
may work. Follow the Kafka instructions to install your server.

To run the server, use the following instructions. The files required to run on
either Windows or Linux are supplied; the two operating systems require slightly
different commands.

Start three terminal windows and cd to the directory where you installed Kafka. The
installation includes both the Kafka server and a ZooKeeper server, and ZooKeeper
must be started first. In the first terminal, type:

(Linux)$ bin/zookeeper-server-start.sh config/zookeeper.properties
(Windows)> bin/windows/zookeeper-server-start config/zookeeper.properties

In the second terminal, start the Kafka server:

(Linux)$ bin/kafka-server-start.sh config/server.properties
(Windows)> bin/windows/kafka-server-start config/server.properties

If you have not already done so, in the third terminal create the topic used by this
sample:

(Linux)$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Orders
(Windows)> bin/windows/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Orders

The topic must be created once each time the Kafka server and ZooKeeper are reset, as
described below. To confirm the topic was created:

(Linux)$ bin/kafka-topics.sh --list --zookeeper localhost:2181
(Windows)> bin/windows/kafka-topics --list --zookeeper localhost:2181

In the sample base directory (the lv_sample_hatables project in Studio), build
everything and start the publisher in an empty cluster:

$ mvn install -DskipTests=true
$ epadmin install node --nodename A.publisher --nodedirectory /tmp/nodes --application `ls deploy_apps/publish_deploy/target/*.zip`
$ epadmin --servicename A.publisher start node
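Before starting the remaining nodes, you can optionally confirm that the publisher
node installed and started cleanly. A minimal check, assuming epadmin is on your
PATH (the exact output fields can vary by StreamBase release):

$ epadmin --servicename A.publisher display node
$ epadmin display services

The first command reports the state of the publisher node; the second lists the
services currently discovered.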
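You can also sanity-check the Kafka broker itself by publishing a throwaway message
to the Orders topic with the console producer that ships with Kafka. This is only a
connectivity test run from the Kafka installation directory; it assumes the broker is
listening on its default port (localhost:9092), and the text you type is not the
record format the sample's Orders subscriber expects:

(Linux)$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Orders
(Windows)> bin/windows/kafka-console-producer --broker-list localhost:9092 --topic Orders

Type a line, press Enter, then press Ctrl-C to exit the producer.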
All other nodes in this sample can then be started in Studio or from the command
line, and the front end can be started at any time. These services must all be in
the same cluster (ldm_cluster in the commands below), and must not be in the
publishing cluster, or else they will hang.

$ epadmin install node --nodename Log.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/log_bootstrap_deploy/target/*.zip`
$ epadmin --servicename Log.ldm_cluster start node

$ epadmin install node --nodename Front-ag_fe.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/frontend_deploy/target/*.zip`
$ epadmin --servicename Front-ag_fe.ldm_cluster start node

$ epadmin install node --nodename Peer1.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/peer_deploy/target/*.zip`
$ epadmin --servicename Peer1.ldm_cluster start node

More peers can be added and removed at will, and the HA_Orders table automatically
balances queries across all of them. Removing a peer causes queries issued against
that node to be moved to a node that is still up. (A scripted variant of these peer
commands is sketched near the end of this README.)

$ epadmin install node --nodename PeerN.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/peer_deploy/target/*.zip`
$ epadmin --servicename PeerN.ldm_cluster start node
$ epadmin --servicename PeerN.ldm_cluster stop node

Connect LiveView Desktop, or the included LiveView Web (at http://localhost:10080),
to the LiveView server running on port 10080 and confirm you can query the HA_Orders
table.

To restart the broker and ZooKeeper from scratch and delete any saved messages,
terminate both servers using the following commands:

(Linux)$ bin/kafka-server-stop.sh
(Linux)$ bin/zookeeper-server-stop.sh
(Windows)> bin/windows/kafka-server-stop
(Windows)> bin/windows/zookeeper-server-stop

and delete the configured log directories (/tmp/zookeeper and /tmp/kafka-logs by
default). Then restart the servers.

The recovery data is stored in /tmp/lvout/data and must be removed between runs.
This directory belongs to the log-based node and can be changed with the
LIVEVIEW_LOG_DIR substitution.

Under Windows, stop the lv-server (not the MessagePublisher) using your preferred
method (Task Manager, or an epadmin remove with the installpath, to simulate a crash)
and then restart it as described above.
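The add-a-peer commands shown above can be scripted when you want to bring up several
peers at once. A minimal bash (Linux) sketch, assuming the node names Peer2 through
Peer4 are not already in use in the cluster:

$ for i in 2 3 4; do
    epadmin install node --nodename Peer$i.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/peer_deploy/target/*.zip`
    epadmin --servicename Peer$i.ldm_cluster start node
  done

Each peer is installed and started with the same commands used for Peer1, and each
can later be stopped individually with the stop node command shown above.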
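For convenience, the full reset between runs can be collected into one sequence. A
minimal Linux sketch, run from the Kafka installation directory and assuming the
default data directories named above:

# stop the Kafka broker and ZooKeeper
$ bin/kafka-server-stop.sh
$ bin/zookeeper-server-stop.sh
# remove the Kafka and ZooKeeper log directories
$ rm -rf /tmp/kafka-logs /tmp/zookeeper
# remove the LiveView recovery data kept by the log-based node
$ rm -rf /tmp/lvout/data

Remember to re-create the Orders topic after restarting the servers, as described
earlier in this README.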