Migrating Persistence Data to Backing Store

Migrating Data from Persistence Database to Oracle Backing Store
The Oracle-only backing store was deprecated in the 4.0 release. If you are setting up a backing store for the first time, use the JDBC backing store. There is no direct migration path from the Persistence OM database to the JDBC backing store: you must first migrate from the Persistence database to the Oracle backing store database, and then from the Oracle backing store to the JDBC backing store database.
You can change your object management (OM) method from Persistence to Cache with backing store. To do so, you configure the Cache OM options as explained in this guide, and you can optionally migrate the data in your persistence database to a backing store.
The migration utility migrates data from the persistence database to the Oracle-only backing store.
When you start up your newly configured system, the data from the backing store is loaded into the cache.
This section explains how to migrate your data from the persistence database, or databases if you have a multi-BAR project, to the backing store. Each rule session (BAR) uses a different partition number, which is stored in the CacheID column of the backing store.
For each BAR (inference agent) in the project, prepare the property files, export the data from the persistence database, and then import the exported data into the backing store, as explained in the sections that follow.
When all the data is migrated, and the Cache OM features are fully configured, start the system.
The migration utility supports export from persistence databases in BusinessEvents 1.4 and higher. It can then import the data into a backing store in release 2.x and higher (but not into a persistence database).
You can also use the migration utility to export ontology object data from a persistence database, and then import the files into spreadsheets for validation, analysis, or reporting. See Persistence Migration Export Reference Tables.
Before You Begin
Prepare Property Files
You must add information to the be-migration.tra file before executing the utility commands.
1. Open the following file in a text editor: BE_HOME/bin/be-migration.tra
2. In the tibco.env.CUSTOM_EXT_PREPEND_CP property, add the path to your JDBC driver (if it is not already there). For example:

# JDBC Driver libraries
tibco.env.CUSTOM_EXT_PREPEND_CP C:/myHome/jdbc/lib/ojdbc14.jar

3. In the JDBC drivers property, java.property.jdbc.drivers, add the correct driver string. For example:

# JDBC drivers
java.property.jdbc.drivers oracle.jdbc.OracleDriver

4. As needed, configure the multithreading properties (see the consolidated example after this procedure):
be.migration.import.multithreads: Controls whether the migration utility imports data using multiple threads. Default value is true.
be.migration.import.threads: Allocates the JVM threads used by the migration utility. Default value is 20. If be.migration.import.multithreads is false, this property is not used.
5. As needed, configure the be.migration.oracle.poolSize property. This property allocates the connection pool size used for importing ontology objects into the backing store. Default value is 10.
6. As needed, configure the be.migration.oracle.retryInterval property. This property specifies the interval, in seconds, at which the migration utility tries to reconnect to the backing store database if the connection is lost. Default value is 5.
7. Note that options set on the command line take precedence over values set in the property file.
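For reference, the following is a minimal sketch of how these optional properties might appear in be-migration.tra, assuming the same name-value syntax shown in steps 2 and 3. The values shown are the documented defaults, so set them only if you need different values.

# Migration tuning properties (values shown are the documented defaults)
be.migration.import.multithreads true
be.migration.import.threads 20
be.migration.oracle.poolSize 10
be.migration.oracle.retryInterval 5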
Export Data from the Persistence Database
When you execute the command below, the be-migration utility reads persistence files from persistence_db_path and writes their data to comma-delimited text files in the location specified, using information in the specified EAR file.
1. Run the be-migration utility with the export command (an illustrative invocation appears after this procedure):

BE_HOME/bin/be-migration -export -bdb -input persistence_db_path -output text_files_path -ear EAR_path or repo_path

2. Review the export log file to ensure that the data export was successful. The summary at the end of the log file provides useful information.
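For illustration only, an export invocation might look like the following. The input, output, and EAR paths are hypothetical; substitute values for your environment. (You can supply a project repository path instead of an EAR file, as indicated by the EAR_path or repo_path placeholder.)

BE_HOME/bin/be-migration -export -bdb -input C:/myHome/persistence_db -output C:/myHome/migration/export -ear C:/myHome/MyProject.ear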
Import Data to the Oracle-Only Backing Store
Before you can import files to a backing store you must create the schema.
If the project has multiple BARs, that is, multiple rule sessions (inference agents), each BAR requires a separate backing store. Repeat the tasks below once for each BAR.
 
Task A Create Backing Store Schema
Complete all the procedures required to set up your Oracle-only backing store database schema. See TIBCO BusinessEvents Administration for details.
Task B Import Ontology Object Data from Files to Database
Run the be-migration utility with the import command:

-import -db -input text_files_path -conn "connection_string" -ear EAR_path or repo_path -partition BAR_Name:partition_id

See Persistence Migration Utility Usage and Parameters for details on each of the parameters.
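For illustration only, an import invocation might look like the following. The connection string, paths, BAR name, and partition number are hypothetical, and the exact connection-string format shown is an assumption; see Persistence Migration Utility Usage and Parameters for the supported syntax.

BE_HOME/bin/be-migration -import -db -input C:/myHome/migration/export -conn "jdbc:oracle:thin:@dbhost:1521:orcl" -ear C:/myHome/MyProject.ear -partition OrdersBAR:1

For a multi-BAR project, repeat the import once per BAR against that BAR's own backing store, giving each BAR a distinct partition number (for example, a hypothetical FulfillmentBAR might use -partition FulfillmentBAR:2).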
Review the import log file to ensure that the data import was successful.
Run your project in a test environment to verify that data recovery is successful before deploying to the production environment.