Multiple LiveView servers can be configured to share metadata resources when they use the same metadata store and are in the same cluster. On server startup, the first deployed LiveView node that contains a JSON-formatted resource file in its src/main/resources folder reads, initializes, and loads that file into the metadata store. Other nodes do not load the metadata; it is loaded into the metadata store only once in the cluster's lifetime.
A LiveView node may contain at most one JSON-formatted resource file in its resources directory. The default file name is resources.json.
LiveView supports the following metadata store types:
Store type | Resource sharing supported | Default store | Supports resource porting via API after the LiveView server is running | Supports resource loading at startup via a resource (resources.json) file
---|---|---|---|---
H2 | No | Yes | Yes | No
Transactional memory | Yes | No | Yes | Yes
JDBC | Yes | No | Yes | No
Resource sharing requires:

- Configuring a common metadata store for LiveView servers participating in the same cluster. The metadata store must be a JDBC database or transactional memory; the default H2 store does not support this capability. For more about JDBC-compliant metadata stores, see the Supported Configurations page.
- Enabling user privileges to allow importing and exporting of resources between LiveView servers.
- Using LiveView lv-client commands to import and export resources between LiveView servers (for the non at-startup option).
- Specifying an optional system property if renaming the default resources.json file that LiveView looks for.
Shared resources between LiveView servers include:

- LiveView alerts.
- LiveView workspaces.
- LiveView Web resources. For supported resources, refer to the LiveView Web documentation.
The following describes workflow options for sharing resources when the LiveView cluster is configured for the same metadata store.

- Exporting data out of LiveView: Given a server, LiveView1, use the export command to export resources into a file (for example, a file called random.json). Configuration data from LiveView1 is then serialized and stored.
- Importing the data using a command (option 1): Given another server, LiveView2, use the import command to import random.json into LiveView2. Note that this option works only once LiveView2 is up and running.
- Importing the data using at-startup memory resources (option 2): This option runs during startup, before the LiveView server is fully up: the server first initializes the data and then comes up. If desired, set a system property to specify the resource file name; the LiveView server looks for the specified file. The default value of this property is resources.json, so unless you are explicitly changing the default name, setting the system property is not required. Place the specified file in the src/main/resources folder of the LiveView fragment that will become LiveView2.
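The export and import commands described above might look like the following sketch, assuming lv-client is on the PATH. The host names and ports are illustrative, and the exact URI syntax and flags should be checked against the lv-client command reference:

lv-client -u lv://liveview1-host:10080 exportmetadata random.json
lv-client -u lv://liveview2-host:10080 importmetadata random.json

Here random.json is the intermediate file produced by the export step; it can be imported into another running server, or kept as a backup.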
Setting up a common metadata store requires the following configuration files of HOCON type. Place these configuration files
in the src/main/configurations
folder of your node's LiveView project.
- LiveView Engine: Use to set the storeType property for the LiveView nodes in the cluster and, optionally, to specify a system property that renames the default resource file.
- EventFlow JDBC Data Source Group: Use to define the JDBC data store type when the LiveView Engine configuration file's storeType property is set to JDBC.
- Role to Privileges Mappings: Use to map a user role to the privileges required to import and export resources between LiveView servers.
You must configure each server with the same storeType in a LiveView Engine configuration file.
The example below sets the storeType property to JDBC and uses the jdbcDataSource property to define a source called myJDBCSource. If you define a JDBC data source in the engine configuration file, you must use the same JDBC data source value in a JDBC configuration file of HOCON type (see below).
Setting the storeType to TRANSACTIONAL_MEMORY or H2 does not require additional metadata store configuration.
name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        storeType = "JDBC"
        jdbcDataSource = "myJDBCSource"
        jdbcMetadataTableNamePrefix = "LV_CLUSTER1_"
      }
    }
  }
}
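For comparison, a minimal engine configuration using transactional memory might look like the following sketch; no jdbcDataSource or table-name prefix is needed in this case (the engine name is illustrative):

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      metadataStore = {
        storeType = "TRANSACTIONAL_MEMORY"
      }
    }
  }
}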
To rename the default resources.json file, set the liveview.metadata.resource.file.name system property as shown in the following example:

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"
configuration = {
  LDMEngine = {
    ldm = {
      ...
      systemProperties = {
        "liveview.metadata.resource.file.name" = "yourfile.json"
      }
      ...
    }
  }
}
Configure an EventFlow JDBC Data Source Group configuration file when the LiveView engine configuration file specifies a JDBC storeType (see above).
A JDBC configuration file can define multiple JDBC data sources. To enable resource sharing between LiveView servers, your JDBC configuration file must contain at least one data source whose name matches the value defined in the engine configuration file. This example defines one JDBC source called myJDBCSource.
name = "mydatasource"
version = "1.0.0"
type = "com.tibco.ep.streambase.configuration.jdbcdatasource"
configuration = {
  JDBCDataSourceGroup = {
    associatedWithEngines = [ "javaengine", "otherengine[0-9]" ]
    jdbcDataSources = {
      myJDBCSource = {
        ...
      }
    }
  }
}
The Role to Privileges Mappings configuration file is required to allow importing and exporting resources between LiveView nodes. Your configuration file must define a user that includes the following privileges:
- LiveViewMetadataImport
-
Enables importing data using lv-client into the LiveView server.
- LiveViewMetadataExport
-
Enables exporting data using lv-client from the LiveView server.
In practice you often define users with multiple LiveView privileges, depending on user role. The following example defines a user with all LiveView privileges, which includes both privileges described above.
name = "my-role-mappings"
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.security"
configuration = {
  RoleToPrivilegeMappings = {
    privileges = {
      admin = [
        { privilege = "LiveViewAll" }
      ]
    }
  }
}
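If you prefer not to grant LiveViewAll, a narrower mapping could grant only the two metadata privileges described above. This sketch assumes a role named "metadata-admin", which is illustrative:

name = "my-role-mappings"
version = "1.0.0"
type = "com.tibco.ep.dtm.configuration.security"
configuration = {
  RoleToPrivilegeMappings = {
    privileges = {
      "metadata-admin" = [
        { privilege = "LiveViewMetadataImport" },
        { privilege = "LiveViewMetadataExport" }
      ]
    }
  }
}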
Two lv-client commands are required to take periodic backups or to port data from a node in one cluster to a node in another cluster (regardless of store type):
- importmetadata: Imports resources to a LiveView server.
- exportmetadata: Exports resources from a LiveView server.
The transactional memory metadata store type is only supported on homogeneous
Live Datamart clusters (meaning the cluster cannot contain a mix of LiveView and StreamBase fragments when using a transactional
memory metadata store). TIBCO recommends using either the JDBC or H2 metadata store type when your cluster contains both LiveView
and StreamBase fragments.