Contents
This sample uses the LiveView adapters to query, publish, delete, and create tables on a running LiveView server from an EventFlow application. This sample assumes you are running the Hello World LiveView sample on localhost. There is a module parameter named LIVEVIEW_URI that you can change so all the adapters point to a different LiveView server.
In StreamBase Studio, import this sample and the companion Hello LiveView sample with the following steps:
- From the top-level menu, select File>Import Samples and Community Content.
- Open the TIBCO LiveView category.
- Hold Ctrl (Windows) or Command (Mac) to make multiple selections, then select the Hello LiveView and LiveView Adapters samples.
- Click Import Now.

StreamBase Studio creates a separate project for each sample.
- First, run the Hello LiveView sample:
  - When this sample loads, it opens the LiveView Project Viewer, showing four icons. To start the sample, click the Run button in the upper right corner of the Project Viewer (or select the project's name in the Project Explorer view, right-click, and select Run As>LiveView Fragment).
  - The Console view shows several messages as the LiveView Server compiles the project and starts. Wait for the console message "All tables have been loaded" before proceeding to the next step.
- In the Project Explorer view, navigate back to the sample_adapter_embedded_lv-sbd folder.
- Open the src/main/eventflow/packageName folder.
- Double-click to open the lv2sbd.sbapp module.
- Make sure the module is the currently active tab in the EventFlow Editor, then click the Run button in Studio's top-level menu. This opens the SB Test/Debug perspective and starts the module.
- In the Output Streams view, observe tuples emitted by various streams:
  - The ReadyStatus, QueryStatus, PublishStatus, DeleteStatus, and AlertStatus streams all show the LiveView Server status as CONNECTED.
  - The QueryStatus stream shows the status of the two preconfigured queries:
        select * from ItemsSales where category=='book'
        select avg(LastSoldPrice) as AvgSoldPrice, category from ItemsSales where true group by category
  - The QueryOut stream shows the output from the first query.
  - The JSONQueryOut stream shows the output from the second query.
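The second preconfigured query computes a per-category average. As a rough illustration of what that aggregation produces, here is a small Python model (not LiveView code) run against invented sample rows:

```python
# Illustrative model of:
#   select avg(LastSoldPrice) as AvgSoldPrice, category
#   from ItemsSales where true group by category
# The rows below are invented sample data, not actual ItemsSales contents.
rows = [
    {"category": "book", "LastSoldPrice": 10.0},
    {"category": "book", "LastSoldPrice": 14.0},
    {"category": "toys", "LastSoldPrice": 8.0},
]

def avg_sold_price_by_category(rows):
    # Accumulate (sum, count) per category, then divide.
    sums = {}
    for row in rows:
        total, count = sums.get(row["category"], (0.0, 0))
        sums[row["category"]] = (total + row["LastSoldPrice"], count + 1)
    return {cat: total / count for cat, (total, count) in sums.items()}

print(avg_sold_price_by_category(rows))  # {'book': 12.0, 'toys': 8.0}
```

Because the query has no filtering condition (`where true`), every row of the table contributes to its category's average.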
- Using the QueryIn, PublishIn, DeleteIn, and CreateDropIn streams, you can register additional queries, publish data, delete data, or create or drop tables on the configured LiveView server. See the individual adapter documentation for the meaning of the fields on these input streams.
- Adapters with the suffix RuntimeURI have the Use Runtime URIs feature enabled. They do not connect to LiveView on startup; instead, they wait for a URI to arrive on their command input ports. If Hello LiveView is running on lv://localhost:11080, you can send a tuple with Connect in the ControlAction field and lv://localhost:11080 in the URI field to tell the adapter to connect to that LiveView server. Adapters that share a common Connection Key value also share a connection. This sample features a Query adapter and a Publish adapter that share a connection, and a Delete adapter that has a different Connection Key value and therefore does not share a connection. Sending a tuple with Disconnect as the ControlAction disconnects all adapters in a shared connection pool.
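The Connection Key behavior described above can be sketched as a toy model. This is illustrative Python, not the actual adapter API, and the key names are invented:

```python
# Toy model of "Use Runtime URIs" connection sharing: adapters that share a
# Connection Key share one connection, and a Disconnect control tuple tears
# down the whole shared pool at once.
class ConnectionPools:
    def __init__(self):
        self.pools = {}  # Connection Key -> connected URI

    def control(self, key, control_action, uri=None):
        # Models a tuple arriving on an adapter's command input port.
        if control_action == "Connect":
            self.pools[key] = uri          # every adapter with this key now shares the URI
        elif control_action == "Disconnect":
            self.pools.pop(key, None)      # disconnects all adapters sharing this key

    def is_connected(self, key):
        return key in self.pools

pools = ConnectionPools()
# The Query and Publish adapters share one (invented) key; Delete uses its own.
pools.control("query-and-publish", "Connect", "lv://localhost:11080")
pools.control("delete", "Connect", "lv://localhost:11080")
pools.control("query-and-publish", "Disconnect")
print(pools.is_connected("query-and-publish"), pools.is_connected("delete"))  # False True
```

Note that the final Disconnect affects both the Query and Publish adapters at once, while the Delete adapter, with its distinct key, stays connected.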
- When done:
  - Press F9 or click the Terminate EventFlow Fragment button to stop the EventFlow fragment.
  - Press Ctrl+F9 (Windows) or Command+F9 (Mac), or click the Terminate LiveView Fragment button to stop the Hello LiveView server.
To run this sample, you must also import the Recovery, Kafka sample, then edit and run it separately. Prepare to run the Reliable Publish sample with the following steps:
- From the top-level menu, select File>Import Samples and Community Content.
- Open the TIBCO LiveView category.
- Hold Ctrl (Windows) or Command (Mac) to make multiple selections, then select the LiveView Adapters and Recovery, Kafka samples.
- Click Import Now.

StreamBase Studio creates a separate project for each sample. The Recovery, Kafka sample comes in as seven separate Studio projects.
The Reliable Publish sample demonstrates one aspect of the LiveView client's reliable publish interface: a long-lived publisher can reliably deliver data to a LiveView server across a server interruption (either a communications failure or a server crash) and ensure that all the data it sent is present in the recovered LiveView table. To do so, the publisher must store the tuples it sends until the server acknowledges them. This task is often handled by some form of persistent message bus, but this sample uses only a feed simulation data source and an in-memory Query Table to hold the tuples awaiting acknowledgment.
For more information on reliable publishing and LiveView server recovery from failure, see LiveView Data Recovery.
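The buffering-and-replay pattern described above can be sketched in a few lines of Python. This is an illustrative model with invented names, not the sample's actual implementation; in the sample, an in-memory Query Table plays the role of the unacknowledged-tuple buffer:

```python
# Sketch of reliable publishing: the publisher keeps every tuple until the
# server acknowledges it, so after a crash it can replay whatever was lost.
class ToyServer:
    def __init__(self):
        self.table = {}                # sequence number -> tuple

    def receive(self, seq, data):
        self.table[seq] = data         # re-receiving a seq is harmless (dedup)

    def highest_seq(self):
        return max(self.table, default=0)

class ReliablePublisher:
    def __init__(self):
        self.unacked = {}              # sent but not yet acknowledged
        self.next_seq = 1

    def publish(self, server, data):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.unacked[seq] = data       # buffer until acknowledged
        server.receive(seq, data)

    def on_ack(self, through_seq):
        # Server persisted everything up to through_seq: safe to forget it.
        self.unacked = {s: d for s, d in self.unacked.items() if s > through_seq}

    def reconnect(self, server):
        # After a restart, resend every buffered tuple the server is missing.
        for seq in sorted(self.unacked):
            if seq > server.highest_seq():
                server.receive(seq, self.unacked[seq])

pub, server = ReliablePublisher(), ToyServer()
for order in ["a", "b", "c"]:
    pub.publish(server, order)
pub.on_ack(1)                          # only seq 1 was acknowledged pre-crash
server.table = {1: "a"}                # crash: server recovers only acked rows
pub.reconnect(server)                  # replays seqs 2 and 3
print(sorted(server.table))            # [1, 2, 3]: no rows dropped
```

The key invariant is that a tuple leaves the publisher's buffer only after an acknowledgment, so the union of the recovered table and the buffer always covers every tuple ever sent.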
- First, prepare the Recovery, Kafka sample:
  - Of the seven project folders imported by this sample, you only deal with the one named lv_sample_kafka_recovery.
  - To avoid the dependencies required by Kafka, and to simplify this Reliable Publish sample, delete the LVPublisher.lvconf file from the src/main/liveview folder in the lv_sample_kafka_recovery project.
  - If red error marks remain on any of the seven folders, right-click any of them and select Maven>Update Project. Select the six project folders that begin with lv_sample_kafka and click OK.
  - Select the lv_sample_kafka_recovery project's name in the Project Explorer view, right-click, and select Run As>LiveView Fragment.
  - The Console view shows several messages as the LiveView Server compiles the project and starts. Wait for the console message "All tables have been loaded" before proceeding to the next step.
- Using the lv-client command-line tool, query the Orders table:
      lv-client -p 10087 live "select * from Orders"
- In the Project Explorer view, navigate to the sample_adapter_embedded_lv-sbd folder.
- Open the src/main/eventflow/packageName folder.
- Select the ReliablePublish.sbapp module, right-click, and select Run As>EventFlow Fragment.
- The publisher begins publishing to the LiveView server. In the Console view, notice that the OrderID field is an incrementing long value. The OrderID number matches the Row# field in the output of the lv-client command.
- In the Debug view, select the lv_sample_kafka_recovery project, right-click, and select Terminate. This disconnects and stops output from the lv-client command.
  Note: Do not use the usual Terminate LiveView Fragment button to stop the server for this test; doing so would remove the entire node directory for the LiveView project.
- Re-run the lv_sample_kafka_recovery sample as a LiveView Fragment.
- When the LiveView recovery sample is ready, the Publish adapter named ReliablePublisher connects to the restarted LiveView server, identifies the highest sequence number it has stored, and replays all missing tuples from that point forward.
- You can start and stop the LiveView recovery sample many times, or you can use the lv-client killsession command to kill the publisher. In all cases, rows are never dropped. One easy way to validate this is to confirm that the highest OrderID number in the Console view matches the total number of rows in the table shown by the lv-client command.
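That validation works because OrderID values are assigned 1, 2, 3, and so on; a hypothetical Python check makes the invariant explicit:

```python
# Model of the no-rows-dropped check: with OrderID assigned 1, 2, 3, ...,
# the table is complete exactly when the IDs present are 1..max with no gaps
# (equivalently, the highest OrderID equals the row count).
def no_rows_dropped(order_ids):
    return sorted(order_ids) == list(range(1, max(order_ids) + 1))

print(no_rows_dropped([1, 2, 3, 4]))  # True: complete
print(no_rows_dropped([1, 2, 4]))     # False: OrderID 3 was dropped
```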
- When done:
  - Press F9 or click the Terminate EventFlow Fragment button to stop the EventFlow fragment.
  - Press Ctrl+F9 (Windows) or Command+F9 (Mac), or click the Terminate LiveView Fragment button to stop the LiveView server.
When you load the sample into StreamBase Studio, Studio copies the sample project's files to your Studio workspace, which is normally part of your home directory, with full access rights.

Important: Load this sample in StreamBase Studio, and thereafter use the Studio workspace copy of the sample to run and test it, even when running from the command prompt. Using the workspace copy of the sample avoids permission problems.

The default workspace location for this sample is:

    studio-workspace/sample_adapter_embedded_lv-sbd

See Default Installation Directories for the default location of studio-workspace on your system.