=======================================
LiveView High Availability Table Sample
=======================================

Overview:

This sample demonstrates LiveView's high availability table feature, which allows multiple LiveView instances to participate in a high-availability setup. To provide HA and recovery, this sample uses the open-source Apache Kafka as the publishing bus. The LiveView Orders table is configured to use persistence, and it has a publisher interface with a Kafka subscriber. For a more detailed look at using Kafka in LiveView, see the Kafka sample or the LiveView recovery protocol documentation in:

LiveView Development Guide -> EventFlow Publish Applications

or at:

https://docs.tibco.com/pub/sb-cep/x.y.z/doc/html/lv-devel/lv-publisher.html

The cluster uses peer-based recovery instances. The front end is a services-only instance that performs the HA table setup. It exposes all Orders tables in the cluster as HA_Orders, which can be queried as a normal LiveView table.

For this sample, you can use an existing Kafka server or, if you need a Kafka implementation, download the Kafka bundle from:

http://kafka.apache.org/downloads.html

Note that your system requires at least 16 GB of memory for the complete demo. This sample was developed using the Scala 2.12 based 0.10.2.1 version of Kafka, but other versions may work. Follow the Kafka instructions to install your server.

To run the server, use the following instructions. The files required to run on either Windows or Linux are supplied; the two operating systems require slightly different commands.

Start three terminal windows and cd to the directory where you installed Kafka. The installation includes both the Kafka server and a ZooKeeper. The ZooKeeper must be started first. In the first terminal, type:

(Linux)$ bin/zookeeper-server-start.sh config/zookeeper.properties
(Windows)> bin/windows/zookeeper-server-start config/zookeeper.properties

Now in the second terminal, start the Kafka server:

(Linux)$ bin/kafka-server-start.sh config/server.properties
(Windows)> bin/windows/kafka-server-start config/server.properties

If you have not already done so, in the third terminal create a new topic for this sample:

(Linux)$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Orders
(Windows)> bin/windows/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Orders

The topic must be created again each time the Kafka server and ZooKeeper are reset, as described below. To confirm the topic was created:

(Linux)$ bin/kafka-topics.sh --list --zookeeper localhost:2181
(Windows)> bin/windows/kafka-topics --list --zookeeper localhost:2181

In the sample base directory (the lv_sample_hatables project in Studio), build everything and start the publisher in an empty cluster:

$ mvn install -DskipTests=true
$ epadmin install node --nodename A.publisher --nodedirectory /tmp/nodes --application `ls deploy_apps/publish_deploy/target/*.zip`
$ epadmin --servicename A.publisher start node
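With the publisher node running, you can optionally confirm that records are actually reaching the Orders topic before bringing up any LiveView nodes. The following is a minimal stand-alone sketch, not part of the sample: it assumes the broker is listening on the default localhost:9092 address, that the kafka-clients library is on the classpath, and the class name OrdersTopicCheck is made up for this illustration. It simply counts the records it sees on the topic for about ten seconds.

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Hypothetical helper, not part of this sample: counts the records visible
    // on the Orders topic to confirm the publisher is feeding the Kafka bus.
    public class OrdersTopicCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed default broker address
            props.put("group.id", "orders-topic-check");
            props.put("auto.offset.reset", "earliest");       // read the topic from the beginning
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("Orders"));
                int total = 0;
                long deadline = System.currentTimeMillis() + 10000; // poll for about ten seconds
                while (System.currentTimeMillis() < deadline) {
                    total += consumer.poll(1000).count();
                }
                System.out.println("Records seen on the Orders topic: " + total);
            }
        }
    }

If the count stays at zero while the publisher is running, recheck the broker address and the topic name before starting the peer and front-end nodes. The kafka-console-consumer tool shipped with Kafka provides the same check without writing any code.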
All other samples can then be started in Studio or via the command line. Make sure at least one peer back-end node has started and loaded successfully before starting the front-end node. These services must be in the same cluster as each other, but not in the publishing cluster, or else they will hang.

$ epadmin install node --nodename Peer1.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/peer_deploy/target/*.zip`
$ epadmin --servicename Peer1.ldm_cluster start node

$ epadmin install node --nodename Front-ag_fe.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/frontend_deploy/target/*.zip`
$ epadmin --servicename Front-ag_fe.ldm_cluster start node

More peers can be added and removed at will, and the HA_Orders table automatically balances queries across all of them. Removing a peer causes queries issued on the failing node to be moved to a node that is still up.

$ epadmin install node --nodename PeerN.ldm_cluster --nodedirectory /tmp/nodes --application `ls deploy_apps/peer_deploy/target/*.zip`
$ epadmin --servicename PeerN.ldm_cluster start node
$ epadmin --servicename PeerN.ldm_cluster stop node

Connect LiveView Desktop or the included LiveView Web client (at http://localhost:10080) to the LiveView server running on port 10080 and confirm that you can query the HA_Orders table.
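Before connecting a client, it can be useful to confirm that the LiveView engines are actually listening. The sketch below is only a plain TCP connection test, not a LiveView health check; it assumes the default ports used in this walkthrough, 10080 for the front end and 10081 for a back-end peer started from Studio as described below, and the class name LiveViewPortCheck is made up for this illustration.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Hypothetical helper, not part of this sample: verifies that the LiveView
    // ports used in this walkthrough are accepting TCP connections.
    public class LiveViewPortCheck {
        public static void main(String[] args) {
            checkPort("Front end (HA_Orders)", 10080);
            checkPort("Back-end peer (Orders)", 10081);
        }

        private static void checkPort(String label, int port) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("localhost", port), 2000); // two-second timeout
                System.out.println(label + " is listening on port " + port);
            } catch (IOException e) {
                System.out.println(label + " is NOT reachable on port " + port + ": " + e.getMessage());
            }
        }
    }

A successful connection only shows that the port is open; whether the tables have finished loading is still best confirmed by issuing a query from LiveView Desktop or LiveView Web.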
To restart the broker and ZooKeeper from scratch and delete any saved messages, terminate both servers using the following commands:

(Linux)$ bin/kafka-server-stop.sh
(Linux)$ bin/zookeeper-server-stop.sh
(Windows)> bin/windows/kafka-server-stop
(Windows)> bin/windows/zookeeper-server-stop

then delete the configured log directories (/tmp/zookeeper and /tmp/kafka-logs by default) and restart the servers. The recovery data is stored in /tmp/lvout/data and must also be removed between runs.

Under Windows, to simulate a crash, stop the lv-server (not the MessagePublisher) using your preferred method (Task Manager, or an epadmin remove with the installpath) and then restart it as described above.

To run the sample within Studio, follow the previous instructions to set up the ZooKeeper and Kafka services, then:

1. Start the publisher application:
   1) Go to Run Configurations..., and add a new launch configuration for EventFlow Fragment.
   2) Click Browse..., and select MessagePublisher.sbapp under the project lv_sample_ha_kafka_message_publisher as the main EventFlow module. Make sure the configuration files are SELECTED.
   3) Go to the Node tab, select "Use name", enter "publisher" for the cluster name, and enter "A$(SeqNum:_)" for the node name. Make sure the publisher is running in a different cluster from the back-end and front-end nodes.
   4) Apply and run it.

2. Start a back-end node:
   1) Go to Run Configurations..., and add a new launch configuration for LiveView Fragment.
   2) Click Browse..., and select the project lv_sample_ha_peers as the LiveView project.
   3) Select "Any available" for the StreamBase port, and specify 10081 for the LiveView port. Make sure the node is not using any ports already taken by other processes.
   4) Make sure ONLY engine.conf is selected for configuration files.
   5) Go to the Node tab, select "Use name", enter "ldm_cluster" for the cluster name, and enter "Peer1$(SeqNum:_)" for the node name.
   6) Apply and run it.
   7) Connect to localhost:10081; data will be available from the Orders table.

3. Start the front-end node (launch at least one back-end node before starting the front end):
   1) Go to Run Configurations..., and add a new launch configuration for LiveView Fragment.
   2) Click Browse..., and select the project lv_sample_ha_frontend as the LiveView project.
   3) Use the default ports for both the StreamBase and LiveView engines.
   4) Make sure ONLY engine.conf is selected for configuration files.
   5) Go to the Node tab, select "Use name", enter "ldm_cluster" for the cluster name, and enter "FE$(SeqNum:_)" for the node name. Make sure the cluster name is the same as that of any running back-end nodes.
   6) Apply and run it.
   7) Connect to localhost:10080; data will be available from the HA_Orders table.

More back-end nodes can be added as described above. Make sure each back-end node has a different node name and that all back-end nodes are running in the same cluster.