You can change your object management (OM) method from Persistence to Cache with backing store. To do so, you configure the Cache OM options as explained in this guide, and you can optionally migrate the data in your persistence database to a backing store. When you start up your newly configured system, the data from the backing store is loaded into the cache.

This section explains how to migrate your data from the persistence database, or databases if you have a multi-BAR project, to the backing store. Each rule session (BAR) uses a different partition number, which is stored in the CacheID column of the backing store.
● First you set up the backing store database schema, following standard procedures given in this guide.

The migration utility supports export from persistence databases in BusinessEvents 1.4 and higher. The utility can then import the data to a 2.x and higher backing store (but not to a persistence database).

You can also use the migration utility to export ontology object data from a persistence database, and then import the files into spreadsheets for validating, analyzing, or reporting. See Migration Export Reference Tables.
● As with any procedure that modifies your data, ensure that you have made backups before you begin.

You must add information to the be-migration.tra file before executing the utility commands.

1. Open the following file in a text editor:
BE_HOME/bin/be-migration.tra
2. In the tibco.env.CUSTOM_EXT_PREPEND_CP property, add the path to your JDBC driver JAR file, if it is not already there. For an example, see the sample be-migration.tra fragment after these steps.
3. In the JDBC drivers property, java.property.jdbc.drivers, add the correct driver string for your database. For an example, see the sample be-migration.tra fragment after these steps.
4. As needed, configure the following import properties:
− be.migration.import.multithreads: Specifies whether the migration utility uses multiple threads when importing. Default value is true.
− be.migration.import.threads: Allocates JVM threads to be used by the migration utility. Default value is 20. If be.migration.import.multithreads is false, this property is not used.
5. As needed, configure the be.migration.oracle.poolSize property. This property allocates the connection pool size to be used for importing ontology objects into the backing store. Default value is 10.
6. As needed, configure the be.migration.oracle.retryInterval property. This property specifies the interval, in seconds, at which the migration utility tries to reconnect to the backing store database if the connection is lost. Default value is 5.
7. As desired, configure in the properties file any command-line options you want to set. See Table 7, Persistence Database Migration Utility Parameters, for property names. Note that options set on the command line take precedence over values set in the properties file.
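For illustration only, a be-migration.tra fragment configured for an Oracle backing store might look like the following. The JDBC JAR path and driver class shown here are assumptions; substitute the values for your own environment and database. The remaining values are the documented defaults.

tibco.env.CUSTOM_EXT_PREPEND_CP=C:/oracle/jdbc/lib/ojdbc6.jar
java.property.jdbc.drivers=oracle.jdbc.driver.OracleDriver
be.migration.import.multithreads=true
be.migration.import.threads=20
be.migration.oracle.poolSize=10
be.migration.oracle.retryInterval=5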
When you execute the commands below, the be-migration utility reads persistence files from the persistence database directory you specify and writes their data to comma-delimited text files in the specified output location, using information in the specified EAR file.
Each rule session (inference agent) requires a separate database. Repeat the procedure for each database.

Run the be-migration utility with the export command (a worked example follows this procedure):

BE_HOME/bin/be-migration -export -bdb -input persistence_db_path -output text_files_path -ear EAR_path or repo_path
Review the export log file to ensure that the data export was successful. The summary at the end of the log file provides useful information.
If the project has multiple BARs, that is, multiple rule sessions (inference agents), repeat the procedure once for each BAR.
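For example, an export run for a hypothetical project might look like the following. All paths are placeholders shown only to illustrate the parameter order; use the locations and EAR file for your own project.

BE_HOME/bin/be-migration -export -bdb -input C:/temp/persistence_db -output C:/temp/export_files -ear C:/projects/MyProject/MyProject.ear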
If the project has multiple BARs, that is, multiple rule sessions (inference agents), each BAR requires a separate backing store. Repeat the tasks below once for each BAR.

Task A Create Backing Store Schema

Complete all the procedures required to set up your backing store database schema. See Backing Store Database Configuration Tasks in TIBCO BusinessEvents User’s Guide.

Task B Import Data into the Backing Store

Run the be-migration utility with the import command (a worked example follows the command):
BE_HOME/bin/be-migration -import -db -input text_files_path -conn "connection_string" -ear EAR_path or repo_path -partition BAR_Name:partition_id
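For example, an import run for a hypothetical BAR named MyProjectBAR, using partition 1 and an Oracle thin-driver connection string, might look like the following. All values are placeholders; use the input location, connection string, EAR path, BAR name, and partition ID for your own deployment.

BE_HOME/bin/be-migration -import -db -input C:/temp/export_files -conn "jdbc:oracle:thin:@dbhost:1521:orcl" -ear C:/projects/MyProject/MyProject.ear -partition MyProjectBAR:1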
See Migration Utility Usage and Parameters for details on each of the parameters.

Run your project in a test environment to verify that data recovery is successful before deploying to the production environment.