TIBCO EBX®
Documentation > Administration Guide > Distributed Data Delivery (D3)

D3 administration

Quick start

This section introduces the configuration of a basic D3 architecture with two TIBCO EBX® instances. Before starting, please check that each instance can work properly with its own repository.

Note

Deploy EBX® on two different web application containers. If both instances are running on the same host, ensure that all communication TCP ports are distinct.

Declare an existing dataspace on the primary node

The objective is to configure and broadcast an existing dataspace from a primary node.

This configuration is performed on the entire D3 infrastructure (primary and replica nodes included).

Update the primary node's ebx.properties configuration file as follows (a minimal sample is shown after the note below):

  1. Define the D3 mode as master (primary node) in the key ebx.d3.mode.

Note

The primary node can be started after the configuration.
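
A minimal sketch of this step in ebx.properties could look as follows; it only sets the D3 mode and leaves every other setting at its default value.

##################################################################
## D3 configuration (quick start, primary node)
##################################################################
# Possible values are single, master, hub, slave.
# 'master' turns this instance into the D3 primary node.
ebx.d3.mode=master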

After authenticating as a built-in administrator, navigate within the administration tab:

  1. Prerequisite: Check that the node is configured as a primary node (in the 'Actions' menu use 'System information' and check 'D3 mode').

  2. Open the '[D3] Primary configuration' administration feature.

  3. Add the dataspace to be broadcast to the 'Delivery dataspaces' table, and declare the allowed profile.

  4. Add the delivery profile to the 'Delivery profiles' table (it must correspond to a logical name) and declare the delivery mode. Possible values are: cluster mode or federation mode.

  5. Map the delivery dataspace with the delivery profile into the 'Delivery mapping' table.

Note

The primary node is now ready for the replica node(s) registration on the delivery profile.

Check that the D3 broadcast menu appears in the 'Actions' menu of the dataspace or one of its snapshots.

Configure replica node for registration

The objective is to configure and register the replica node based on a delivery profile and communications settings.

Update the replica node's ebx.properties configuration file as follows (a consolidated sample is shown after the note below):

  1. Define the D3 mode as slave (replica node) in the key ebx.d3.mode.

  2. Define the delivery profile(s) set on the primary node in the key ebx.d3.delivery.profiles (multiple delivery profiles must be separated by a comma and a space).

  3. Define the primary node user authentication (must have the built-in administrator profile) for node communications in ebx.d3.master.username and ebx.d3.master.password.

  4. Define HTTP/TCP protocols for primary node communication by setting a value for the property key ebx.d3.master.url

    (for example http://localhost:8080/ebx-dataservices/connector).

  5. Define the replica node user authentication (must have the built-in administrator profile) for node communications in ebx.d3.slave.username and ebx.d3.slave.password.

  6. Define HTTP/TCP protocols for replica node communication by setting a value for the property key ebx.d3.slave.url

    (for example http://localhost:8090/ebx-dataservices/connector).

Note

The replica node can be started after the configuration.
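
For convenience, the six steps above are consolidated into the following ebx.properties sketch. The delivery profile name and the credentials are placeholders, and the URLs reuse the example ports from the steps above; adapt all of them to your environment.

##################################################################
## D3 configuration (quick start, replica node)
##################################################################
# Possible values are single, master, hub, slave.
ebx.d3.mode=slave

# Delivery profile(s) declared on the primary node (placeholder name).
ebx.d3.delivery.profiles=profileEurope

# Built-in administrator used to communicate with the primary node (placeholder credentials).
ebx.d3.master.username=d3PrimaryAdmin
ebx.d3.master.password=changeit

# Data services connector of the primary node.
ebx.d3.master.url=http://localhost:8080/ebx-dataservices/connector

# Built-in administrator used by the primary node to communicate with this replica node (placeholder credentials).
ebx.d3.slave.username=d3ReplicaAdmin
ebx.d3.slave.password=changeit

# Data services connector of this replica node.
ebx.d3.slave.url=http://localhost:8090/ebx-dataservices/connector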

After authenticating as a built-in administrator, navigate inside the administration tab:

  1. Prerequisite: Check that the node is configured as the replica node (in the 'Actions' menu use 'System information' and check 'D3 mode').

  2. Open the '[D3] Replica configuration' administration feature.

  3. Check the information on the 'Primary information' screen: No field should have the 'N/A' value.

Note

Please check that the data model is available before broadcasting (it must be published from the data model assistant).

The replica node is then ready for broadcast.

Configuring D3 nodes

Runtime configuration of primary and hub nodes through the user interface

The declaration of delivery dataspaces and delivery profiles is done by selecting the '[D3] Primary configuration' feature from the 'Administration' area, where you will find the following tables:

Delivery dataspaces

Declarations of the dataspaces that can be broadcast.

Delivery profiles

Profiles to which replica nodes can subscribe. The delivery mode must be defined for each delivery profile.

Delivery mapping

The association between delivery dataspaces and delivery profiles.

Note

The tables above are read-only while some broadcasts are pending or in progress.

Configuring primary, hub and replica nodes

This section details how to configure a node in its EBX® main configuration file.

Primary node

In order to act as a primary node, an instance of EBX® must declare the following property in its main configuration file.

Sample configuration for ebx.d3.mode=master node:

##################################################################
## D3 configuration
##################################################################
##################################################################
# Configuration for master, hub and slave
##################################################################
# Optional property.
# Possible values are single, master, hub, slave.
# Default is single, meaning the server will be a standalone instance.
ebx.d3.mode=master

Hub node

In order to act as a hub node (combination of primary and replica node configurations), an instance of EBX® must declare the following property in its main configuration file.

Sample configuration for ebx.d3.mode=hub node:

##################################################################
## D3 configuration
##################################################################
##################################################################
# Configuration for master, hub and slave
##################################################################
# Optional property.
# Possible values are single, master, hub, slave.
# Default is single, meaning the server will be a standalone instance.
ebx.d3.mode=hub

##################################################################
# Configuration dedicated to hub or slave
##################################################################
# Profiles to subscribe to
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.delivery.profiles=

# User and password to be used to communicate with the master. 
# Mandatory properties if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.master.username=
ebx.d3.master.password=

# User and password to be used by the master to communicate with the hub or slave. 
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.slave.username=
ebx.d3.slave.password=

Replica node

In order to act as a replica node, an instance of EBX® must declare the following property in its main configuration file.

Sample configuration for ebx.d3.mode=slave node:

##################################################################
## D3 configuration
##################################################################
##################################################################
# Configuration for master, hub and slave
##################################################################
# Optional property.
# Possible values are single, master, hub, slave.
# Default is single, meaning the server will be a standalone instance.
ebx.d3.mode=slave

##################################################################
# Configuration dedicated to hub or slave
##################################################################
# Profiles to subscribe to
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.delivery.profiles=

# User and password to be used to communicate with the master. 
# Mandatory properties if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.master.username=
ebx.d3.master.password=

# User and password to be used by the master to communicate with the hub or slave. 
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.slave.username=
ebx.d3.slave.password=

Configuring the network protocol of a node

This section details how to configure the network protocol of a node in its EBX® main configuration file.

HTTP(S) and socket TCP protocols

Sample configuration for ebx.d3.mode=hub or ebx.d3.mode=slave node with HTTP(S) network protocol:

##################################################################
# HTTP(S) and TCP socket configuration for D3 hub and slave
##################################################################
# URL to access the data services connector of the master 
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave and JMS for D3 is not activated.
# This property will be ignored if JMS for D3 is activated.
# The URL must follow this pattern: [protocol]://[master_host]:[master_port]/ebx-dataservices/connector
# Where the possible values of 'protocol' are 'http' or 'https'.
ebx.d3.master.url=

# URL to access the data services connector of the slave 
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave and JMS for D3 is not activated.
# This property will be ignored if JMS for D3 is activated.
# The URL must follow this pattern: [protocol]://[slave_host]:[slave_port]/ebx-dataservices/connector
# Where the possible values of 'protocol' are 'http' or 'https'.
ebx.d3.slave.url=

# Minimum port to use to transfer archives in TCP mode.
# Must be a positive integer above zero and below 65535.
# If not set, a random port will be used.
#ebx.d3.slave.socket.range.min=

# Maximum port to use in TCP mode to transfer archives.
# Must be a positive integer above ebx.d3.slave.socket.range.min and below 65535.
# Mandatory if ebx.d3.slave.socket.range.min is set.
#ebx.d3.slave.socket.range.max=
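
For illustration only, the same template filled in for a replica node; the host names, ports and port range below are assumptions meant to show the expected format, not recommended values.

# Illustration with assumed host names and ports.
ebx.d3.master.url=https://primary.example.com:8443/ebx-dataservices/connector
ebx.d3.slave.url=https://replica.example.com:8443/ebx-dataservices/connector
# Restrict archive transfers in TCP mode to a fixed port range (assumed values),
# for example to match firewall rules between the nodes.
ebx.d3.slave.socket.range.min=42000
ebx.d3.slave.socket.range.max=42010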

JMS protocol

If JMS is activated, the following properties can be defined in order to enable JMS functionalities for a D3 node.

Sample configuration for all D3 nodes with JMS network protocol:

##################################################################
## JMS configuration for D3
##################################################################
# Taken into account only if Data Services JMS is configured properly
##################################################################
# Configuration for master, hub and slave
##################################################################
# Activates JMS for D3. Default is false.
## If activated, the deployer must ensure that the entries 
## 'jms/EBX_D3ReplyQueue', 'jms/EBX_D3ArchiveQueue' and 'jms/EBX_D3CommunicationQueue' 
## are bound in the operational environment of the application server.
## In slave or hub mode, the entry 'jms/EBX_D3MasterQueue' must also be bound.
ebx.jms.d3.activate=false

# Change the default timeout when using reply queue.
# Must be a positive integer that does not exceed 3600000.
# Default is 10000 milliseconds.
#ebx.jms.d3.reply.timeout=10000

# Time-to-live message value expressed in milliseconds.
# This value is set in the 'JMSExpiration' header of each message; it defines the
# countdown before the message is deleted by the JMS broker.
# Must be a positive integer equal to 0 or above the value of 'ebx.jms.d3.reply.timeout'.
# The value 0 means that the message does not expire.
# Default is 3600000 (one hour). 
#ebx.jms.d3.expiration=3600000

# Maximum archive size in KB for the JMS message body. If exceeded, the archive
# is transferred as several sequenced messages in the same group, each of which does
# not exceed the defined maximum size.
# Must be a positive integer equal to 0 or above 100.
# Default is 0, which corresponds to unbounded.
#ebx.jms.d3.archiveMaxSizeInKB=

##################################################################
# Configuration dedicated to hub or slave
##################################################################
# Master repository ID, used to set a message filter for the targeted master when sending JMS messages.
# Mandatory property if ebx.jms.d3.activate=true and if ebx.d3.mode=hub or ebx.d3.mode=slave
#ebx.jms.d3.master.repositoryId=
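
For illustration only, a replica node JMS configuration with the optional tuning properties filled in; the timeout, expiration, archive size and repository identifier below are example values (the repository ID is a placeholder), not recommendations.

# Illustration with example values; adapt them to your JMS broker and environment.
ebx.jms.d3.activate=true
# Wait up to 30 seconds for replies on the reply queue.
ebx.jms.d3.reply.timeout=30000
# Let undelivered messages expire after one hour.
ebx.jms.d3.expiration=3600000
# Split archives larger than 10 MB into several sequenced messages.
ebx.jms.d3.archiveMaxSizeInKB=10240
# Repository ID of the primary node (placeholder value).
ebx.jms.d3.master.repositoryId=00905A5A1C7D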

Services on primary nodes

Services to manage a primary node are available in the 'Administration' area of the primary node, under '[D3] Primary node configuration', and also from the 'Delivery dataspaces' and 'Registered replica nodes' tables. The services are:

Relaunch replays

Immediately relaunch all replays for waiting federation deliveries.

Delete replica node delivery dataspace

Delete the delivery dataspace on chosen replica nodes and/or unregister it from the configuration of the D3 primary node.

To access the service, select a delivery dataspace from the 'Delivery dataspaces' table on the primary node, then launch the wizard.

Fully resynchronize

Broadcast the full content of the last broadcast snapshot to the registered replica nodes.

Subscribe a replica node

Subscribe a set of selected replica nodes.

Deactivate replica nodes

Remove the selected replica nodes from the broadcast scope and switch their states to 'Unavailable'.

Note

The "in progress" broadcast contexts are rolled back.

Unregister replica nodes

Disconnects the selected replica nodes from the primary node.

Note

The "in progress" broadcast contexts are rolled back.

Note

The primary node services above are hidden while some broadcasts are pending or in progress.

Services on replica nodes

Services are available in the 'Administration' area of the replica node, under '[D3] Replica node configuration', to manage its subscription to the primary node and perform other actions:

Register replica node

Re-subscribes the replica node to the primary node if it has been unregistered.

Unregister replica node

Disconnects the replica node from the primary node.

Note

The "in progress" broadcast contexts are rolled back.

Close and delete snapshots

Clean up a replica node delivery dataspace.

To access the service, select a delivery dataspace from the 'Delivery dataspaces' table on the replica node, then follow the wizard to close and delete snapshots based on their creation dates.

Note: The last broadcast snapshot is automatically excluded from the selection.

Supervision

The last broadcast snapshot is highlighted in the snapshot table of the dataspace; it is represented by an icon displayed in the first column.

Primary node management console

Several tables make up the management console of the primary node, located in the 'Administration' area of the primary node, under '[D3] Primary node configuration'. They are as follows:

Registered replica nodes

Replica nodes registered with the primary node. From this table, several services are available on each record.

Broadcast history

History of broadcast operations that have taken place.

Replica node registration log

History of initialization operations that have taken place.

Detailed history

History of archive deliveries that have taken place. The list of associated delivery archives can be accessed from the 'Broadcast history' and 'Replica node registration log' tables using selection nodes.

Primary node supervision services

These services are available in the 'Administration' area of the primary node, under '[D3] Primary node configuration'. They are as follows:

Check replica node information

Lists the replica nodes and related information, such as the replica node's state, associated delivery profiles, and delivered snapshots.

Clear history content

Deletes all records in all history tables, such as 'Broadcast history', 'Replica node registration log' and 'Detailed history'.

Replica node monitoring through the Java API

A replica node monitoring class can be created to implement actions that are triggered when the replica node's status switches to either 'Available' or 'Unavailable'. To do so, it must implement the NodeMonitoring interface. This class must be located outside of any EBX® module, must be accessible from the class-loader of 'ebx.jar', and its fully qualified class name must be specified under '[D3] Replica node configuration'.

Primary node notification

A D3 administrator can set up email notifications for broadcast events. The email contains a table of events, with optional links to further details.

To enable notifications, open the '[D3] Primary node configuration' dataspace from the 'Administration' area and configure the 'Notifications' group under 'Global configuration'.

The 'From email' and 'URL definition' options should also be configured by using the 'Email configuration' link.

Log supervision

The technical supervision can be done through the log category 'ebx.d3', declared in the EBX® main configuration file. For example:

ebx.log4j.category.log.d3= INFO, Console, ebxFile:d3
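
When investigating broadcast issues, a more verbose level can be used; the variant below simply switches the same category to DEBUG while keeping the same appenders (assuming DEBUG is an accepted level in your logging configuration).

# Assumed troubleshooting variant: verbose D3 logging to the console and the dedicated d3 log file.
ebx.log4j.category.log.d3= DEBUG, Console, ebxFile:d3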

Temporary files

Some temporary files, such as exchanged archives, SOAP messages and the broadcast queue, are created and written to the EBX® temporary directory. This location is defined in the EBX® main configuration file:

#################################################
## Directories for temporary resources.
#################################################
# When set, allows specifying a directory for temporary files different from java.io.tmpdir.
# Default value is java.io.tmpdir
ebx.temp.directory = ${java.io.tmpdir}

# Allows specifying the directory containing temporary files for cache.
# If unset, the used directory is ${ebx.temp.directory}/ebx.platform.
#ebx.temp.cache.directory = ${ebx.temp.directory}/ebx.platform

# When set, allows specifying the directory containing temporary files for import.
# If unset, the used directory is ${ebx.temp.directory}/ebx.platform.
#ebx.temp.import.directory = ${ebx.temp.directory}/ebx.platform