Cloud Software Group, Inc. EBX®
TIBCO EBX® Documentation

EBX® 5.9 to EBX® 6 Migration Guide

Introduction

This chapter provides the main information needed to migrate from EBX® 5.9 to EBX® 6. Additional details are provided in the Version upgrade section of the EBX® 6 release notes, for instance the changes in supported environments and the backward compatibility issues.

Hardware requirements

Memory requirements

The overall amount of memory can remain the same as for the previous version, but it must be distributed differently between the application server and the operating system. See the Performance and tuning section for more information.

Disk space requirements

This new version offloads indexes from memory and stores them persistently on disk, so additional disk space is required. See the Disk requirements section for more information.

Other hardware requirements

Carefully read the other sections under Performance and tuning.

Custom Java code performance

The new scalable architecture affects the performance of existing custom Java code. Some access patterns that performed well in EBX® 5 might not perform as well in this version, and might require customer action. This is particularly true for medium-sized repositories, where all data could previously be cached in memory and the garbage collector could work smoothly.

Repeated lookup by primary key

Consider a small table whose data could be fully held in the EBX® 5 in-memory indexes: it was then cheap to access its records repeatedly through random table lookups (although this is generally a discouraged access pattern for databases and information retrieval). In this new version, such accesses must be rewritten to take advantage of the new SQL features. For example, Java code performing a 'nested loops' join (iterating on a RequestResult from table A, then looking up its osd:tableRef in table B) is now better handled by a Query using a SQL join.
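The contrast between the two patterns can be sketched generically. The example below uses Python's sqlite3 module as a stand-in for the persistence layer; table names and data are hypothetical, and real EBX® code would use the Query API rather than sqlite3:

```python
import sqlite3

# In-memory stand-in for two tables linked by a foreign key (osd:tableRef).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY, b_id INTEGER);
    CREATE TABLE b (id INTEGER PRIMARY KEY, label TEXT);
    INSERT INTO b VALUES (1, 'one'), (2, 'two');
    INSERT INTO a VALUES (10, 1), (11, 2), (12, 1);
""")

def nested_loop_join(conn):
    # EBX 5-era pattern: iterate table A, then look up each
    # foreign key in table B, one query per record.
    out = []
    for a_id, b_id in conn.execute("SELECT id, b_id FROM a ORDER BY id"):
        (label,) = conn.execute(
            "SELECT label FROM b WHERE id = ?", (b_id,)).fetchone()
        out.append((a_id, label))
    return out

def sql_join(conn):
    # Preferred pattern: a single SQL join, resolved by the engine.
    return list(conn.execute(
        "SELECT a.id, b.label FROM a JOIN b ON a.b_id = b.id ORDER BY a.id"))

# Both return the same rows; the join avoids the per-record lookups.
assert nested_loop_join(conn) == sql_join(conn)
```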

Repeatedly creating similar requests

EBX® now embeds the Apache Calcite framework, in particular to optimize submitted instances of Query and Request. However, optimizing a query has a cost of its own. Although this overhead is generally low (and pays for itself) compared to the cost of executing the query, it can add up in some cases, particularly when a loop generates numerous queries that differ only by one parameter. In that case, use parameterized requests (see Request.setXPathParameter) or parameterized queries (see Query.setParameter).
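The same principle applies in any SQL stack. The generic sketch below uses Python's sqlite3 module (hypothetical table and data) to contrast per-iteration SQL text with one parameterized statement reused with different bound values, which is the pattern Query.setParameter and Request.setXPathParameter enable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, category TEXT);
    INSERT INTO product VALUES (1, 'book'), (2, 'book'), (3, 'toy');
""")

def count_by_literal(conn, category):
    # Anti-pattern: a new SQL text per iteration; each distinct text
    # must be parsed and planned again.
    sql = "SELECT COUNT(*) FROM product WHERE category = '%s'" % category
    return conn.execute(sql).fetchone()[0]

PARAMETERIZED = "SELECT COUNT(*) FROM product WHERE category = ?"

def count_by_parameter(conn, category):
    # Preferred: one statement, planned once, executed with
    # different bound values.
    return conn.execute(PARAMETERIZED, (category,)).fetchone()[0]

assert count_by_literal(conn, "book") == count_by_parameter(conn, "book")
```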

Programmatic access rules

Programmatic access rules concern implementations of the Java interface AccessRule. On query execution, setting such a rule on a table means that the method getPermission is executed for each record, which defeats index-based optimization.

Where possible, replace these programmatic access rules with the new scripted record permission rules.

Programmatic labels

Programmatic labels concern implementations of the Java interfaces TableRefDisplay, UILabelRenderer and ConstraintEnumeration. On query execution, setting such a component on a foreign key field means that the method displayOccurrence is executed for each record, which defeats index-based optimization. Where possible, use a pattern string instead.

Programmatic filters

Programmatic filters concern implementations of the Java interface AdaptationFilter. On query execution, using such a filter means that the method accept is executed for each record, which defeats index-based optimization. Where possible, replace programmatic filters with the new SQL queries.
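The difference between an opaque per-record callback and a declarative predicate can be sketched generically; the example below uses Python's sqlite3 module with hypothetical data (real EBX® code would replace the AdaptationFilter with a SQL query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE record (id INTEGER PRIMARY KEY, status TEXT);
    INSERT INTO record VALUES (1, 'active'), (2, 'closed'), (3, 'active');
""")

def programmatic_filter(conn, accept):
    # AdaptationFilter-style pattern: fetch every record and call accept()
    # on each; no index can help because the predicate is opaque code.
    return [row for row in conn.execute("SELECT id, status FROM record")
            if accept(row)]

def sql_filter(conn, status):
    # Preferred pattern: express the predicate in SQL so the engine
    # (and an index, if present) can serve it.
    return list(conn.execute(
        "SELECT id, status FROM record WHERE status = ?", (status,)))

assert programmatic_filter(conn, lambda r: r[1] == "active") \
    == sql_filter(conn, "active")
```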

Constraints with unknown dependencies

Avoid constraints with unknown dependencies on large datasets, because they are checked on each validation request. See Performance and tuning for more information.

Unnecessary index refresh

For a better performance of transactions (Java implementations of Procedure and also of TableTrigger), any unnecessary index refresh should be avoided. For more information, see Performance and tuning.

Inherited fields

As explained in the Search limitations section, inherited fields cannot benefit from the new index optimizations. Wherever possible, it is recommended to convert these fields into linked fields.

Upgrading your EBX® 5 repository

In this release, the relational database internal persistence formats and structures have been redesigned to support larger volumes of data, and large numbers of dataspaces and snapshots. Although the repository migration is automatic on server startup, it first requires cleaning up the data models in use.

To upgrade your EBX® repository, perform the steps detailed in the following table.

Attention

Migrating a repository to EBX® 6.x is final: once a repository has been started on version 6, it can no longer be started on a 5.9 version. Consequently, a backup of the 5.9 repository must be made prior to migrating.

Attention

Migration from a 5.9.x repository is the only supported case. If your repository is on an older version, you must first deploy a 5.9.x EBX® and run it on your repository, so that the following steps can be performed.

Step 1 – Server shutdown

Shut down your EBX® 5.9 application server instance.

Step 2 – Repository backup

Back up your EBX® 5.9 repository, namely the relational database objects that have names starting with the current repository prefix (ebx.persistence.table.prefix).

To facilitate the next steps, back up and delete the EBX® logs, or move them to a distinct folder.

Step 3 – Server restart

Restart your EBX® 5.9 application server instance.

Step 4 – Dataspace and workflows cleanup

Closed dataspaces are not migrated (that is, they are deleted). If you intend to keep such dataspaces, you must re-open them prior to migrating. If you do not intend to keep them, it is a good practice to delete and purge them.

To decrease the duration of the migration, consider deleting all completed workflows and cleaning up the workflow history.

Step 5 – Data models cleanup

The automatic migration requires that all data models used by the repository datasets compile without errors. This strict policy has been adopted to prevent unintended loss of data, and to reach a consistent repository state at the end of the server startup.

The goal of this step is to ensure that all data models used by at least one dataset in the repository have no error.

To ensure that this is the case, first you must read the data model compilation reports:

  1. The data model compilation reports are displayed in the 'kernel' log, from the server startup up to the line containing:

    ******** EBX(R) started and initialized. ********

  2. In this part of the log, any data model compilation report begins with:

    ****** Schema

  3. Any error message begins with:

    error [

Note: Data model compilation reports are also available through Web access, on the page Administration > Technical configuration > Modules and data models. Each report must be expanded individually.
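The log scan described above can be automated. The following is a minimal sketch in Python, using an illustrative log excerpt (a real 'kernel' log contains full schema reports between the markers quoted above):

```python
# Illustrative excerpt of a 'kernel' log; the markers match those
# documented above, but the schema names and message are made up.
SAMPLE_LOG = """\
****** Schema urn:example:model-a
error [ /root/tableA ] unresolved foreign key
****** Schema urn:example:model-b
******** EBX(R) started and initialized. ********
"""

def compilation_errors(log_text):
    # Collect error lines emitted before the startup banner.
    errors = []
    for line in log_text.splitlines():
        if "EBX(R) started and initialized." in line:
            break
        if line.startswith("error ["):
            errors.append(line)
    return errors

assert compilation_errors(SAMPLE_LOG) == [
    "error [ /root/tableA ] unresolved foreign key"]
```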

If a data model is in error, you must correct it. Alternatively, if the datasets based on the faulty data model are no longer used, you can delete them; note that you must then delete these datasets in all dataspaces that define a dataset based on that data model. Another option is to use a custom property when deploying EBX® 6, which globally excludes all datasets based on a schema or a list of schemas (see step 11).

After you determine that all cleanup is done, delete the log, then return to step 3 (restart the application server) to verify that no data models remain in error.

Note: If the DAMA add-on is used: the APIs marked as deprecated in DAMA 1.8.x are no longer available in DAMA 2.0.0. If a data model declares these deprecated APIs, then to prevent compilation errors, update the data model to use the most recent APIs.

Refer to the Digital Asset Manager documentation for the list of deprecated APIs and their replacements.

Step 6 – Cleanup if add-ons are used

If your EBX® 5.9 environment contains deployed add-ons, an additional manual intervention is necessary. It is detailed in the next section.

Step 7 – Server shutdown

Shut down your EBX® 5.9 application server instance.

Step 8 – Repository backup ("clean" repository)

Repeat backing up your EBX® 5.9 repository, as described in step 2.

Step 9 – Java compilation against EBX® 6

Due to the conversion of some classes to interfaces in EBX® 6, all custom Java code must be recompiled against the new version; otherwise, a java.lang.IncompatibleClassChangeError occurs at runtime.

This step provides the opportunity to assess some of the backward compatibility issues documented in the release note and those coming from the add-ons.

Step 10 – EBX® 6 deployment with your code

Replace all ebx*.war and ebx.jar files deployed on your application server with the new artefacts provided with EBX® 6. See Jakarta EE deployment for more information.

If you use EBX® add-ons, then replace all ebx-addon-*.war files and ebx-addons.jar files with those provided with the new bundle complying with EBX® 6.

Step 11 – EBX® 6 startup and automatic migration

Restart the application server instance, now deployed with EBX® 6.

If at least one data model is still in error, the migration stops automatically, and a log details which datasets are affected by these errors. To ease correction, the log also provides a suggested value for the property ebx.migration.5to6.excludeAllDatasetsOnDataModels, listing the locations of the failing schemas in the expected format. You then have two options:

  • If you accept that these datasets are excluded from the migration, copy this property into ebx.properties.

  • If you do not accept this suggestion, you must return to step 3.
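When you accept the exclusion, the entry added to ebx.properties looks like the following sketch; the placeholder stands for the exact text suggested by the migration log, which must be copied verbatim:

```properties
# Suggested by the migration log on a failed 6.x startup; the value lists
# the locations of the failing schemas in the expected format.
ebx.migration.5to6.excludeAllDatasetsOnDataModels=<value copied verbatim from the migration log>
```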

Attention

In both cases, make sure to delete the directory ${ebx.repository.directory}/indexes-(...)/, so that it does not interfere with the next migration attempt.

If no data model is in error, then the database schema and data are automatically migrated. This process might take time, depending on the volume of data.

Upgrading your EBX® 5.9 repository with add-ons

This section details step 6 above. This step ensures that no custom data models depend on retired add-ons.

Attention

The following add-ons are no longer supported in EBX® 6:

  • TIBCO EBX Match and Cleanse (codename daqa): It is renamed Match and Merge and is replaced by a new add-on (codename mame).

  • TIBCO EBX Rules Portfolio Add-on (codename rpfl): It is progressively replaced with new core scripting.

  • TIBCO EBX Information Governance Add-on (codename igov).

  • TIBCO EBX Add-on for Oracle Hyperion EPM (codename hmfh).

  • TIBCO EBX Graph View Add-on (codename gram).

  • TIBCO EBX Activity Monitoring Add-on (codename mtrn).

Attention

The final step of the automatic migration deletes the datasets created by the retired add-ons listed above. It also deletes their dataspace when this dataspace was also created by the add-on. (Usually, these dataspaces are accessed in the Administration area of the user interface.)

Data model includes

This sub-step ensures that all custom data models no longer include a retired add-on data model.

To our knowledge, this issue occurs only for the daqa add-on, so it is used as an example in this section.

To look for data model includes, you must read the data model compilation reports:

  1. The data model compilation reports are all displayed in the 'kernel' log, from the server startup up to the line containing:

    ******** EBX(R) started and initialized. ********

  2. In this part of the log, any data model compilation report begins with:

    ****** Schema

  3. Inclusion of a data model belonging to daqa displays the following text in its report:

    Include: Module: ebx-addon-daqa

Note: Data model compilation reports are also available through Web access, on the page Administration > Technical configuration > Modules and data models. Each report must be expanded individually.

If a data model uses the module ebx-addon-daqa, take the following steps:

  1. Remove the include directive that references the data model of module ebx-addon-daqa (/WEB-INF/ebx/schema/ebx-addon-daqa-types.xsd).

  2. Remove elements based on the data types it defines (mainly DaqaMetaData).

  3. Modify other data model features that used the removed elements (for example, indexes).

Data models modified by an add-on

A data model can be enriched by a specific add-on if it defines the property osd:addon under schema/annotation/appinfo. Consequently, this property must be removed if a custom data model refers to an add-on that is no longer supported; otherwise, a warning is added to the compilation report of the data model.
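For illustration, the enrichment declaration sits under schema/annotation/appinfo, as in the following hypothetical fragment (the namespace URI, codename value and surrounding model are assumptions; check your actual data model):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:osd="urn:ebx-schemas:common_1.0">
  <xs:annotation>
    <xs:appinfo>
      <!-- Remove this element when the referenced add-on is retired -->
      <osd:addon>daqa</osd:addon>
    </xs:appinfo>
  </xs:annotation>
  <!-- ... rest of the data model ... -->
</xs:schema>
```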

Data models using add-ons Java classes

If a custom data model uses a Java extension provided by a retired add-on, this extension must be removed. Here is a non-exhaustive list: