This chapter provides the main information needed to migrate from EBX® 5.9 to EBX® 6. Additional details are provided in the Version upgrade section of the EBX® 6 release notes, for instance the changes in supported environments and the backward compatibility issues.
Memory requirements | The overall amount of memory can remain the same as for the previous version, but it must be distributed differently between the application server and the operating system. See the Performance and tuning section for more information. |
Disk space requirements | This new version offloads the indexes from memory and stores them persistently on disk, so additional disk space is needed. See the Disk requirements section for more information. |
Other hardware requirements | Carefully read the other sections under Performance and tuning. |
The new scalable architecture affects the performance of existing custom Java code. Some scenarios that performed well in EBX® 5 might not perform as well in this version and might require customer action. This is particularly true for middle-sized repositories, where all the data could previously be cached in memory and the garbage collector could work smoothly.
Repeated lookup by primary key | Consider a small table whose data could be fully contained in the cache of the custom EBX® in-memory indexes: it was then cheap to access its records repeatedly by means of random table lookups (although this is generally a discouraged access pattern for databases and information retrieval). In this new version, these accesses must be rewritten to take advantage of the new SQL features. For example, Java code performing a 'nested loops' join (iterating on a first table and, for each record, looking up a second table by primary key) should be replaced by a single query performing the join; see the sketch after this table. |
Repeatedly creating similar requests | EBX® now embeds the Apache Calcite framework, in particular for optimizing the submitted requests and queries. Because this query planning has a cost, avoid repeatedly building and submitting many similar requests; build them once and reuse them where possible. |
Programmatic access rules | Programmatic access rules concern the implementations of the Java interface AccessRule. Where possible, replace these programmatic access rules with the new scripted record permission rules. |
Programmatic labels | Programmatic labels concern the implementations of the Java label-rendering interfaces. Because these labels are computed in Java, they may limit the benefit of the new index optimizations; use them sparingly on large tables. |
Programmatic filters | Programmatic filters concern the implementations of the Java interface AdaptationFilter. Because such filters are evaluated record by record in Java, they cannot be translated into indexed queries; where possible, replace them with XPath predicates or the new SQL features. |
Constraints with unknown dependencies | Avoid constraints with unknown dependencies on large datasets, because they are checked on each validation request. See Performance and tuning for more information. |
Unnecessary index refresh | For better performance of transactions (Java implementations of Procedure), avoid triggering index refreshes when they are not needed, for example by querying a table immediately after updating it within the same procedure; group updates and defer queries where possible. |
Inherited fields | As explained in the Search limitations section, inherited fields cannot benefit from the new index optimizations. Wherever possible, it is recommended to convert these fields into linked fields. |
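To make the 'repeated lookup' point concrete, the sketch below contrasts the per-record lookup pattern with a single set-based query. It deliberately uses plain JDBC rather than the EBX® API, and the table names, columns, and connection are hypothetical; the equivalent EBX® code would rely on the product's own SQL and query features.

import java.sql.*;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinInsteadOfLookups {

    // Anti-pattern: a 'nested loops' join driven from Java. Each iteration
    // submits one query, so planning and round-trip costs are paid N times.
    static Map<String, String> perRecordLookups(Connection cnx, List<String> orderIds)
            throws SQLException {
        Map<String, String> customerNameByOrder = new HashMap<>();
        String sql = "SELECT c.name FROM orders o"
                + " JOIN customers c ON c.id = o.customer_id WHERE o.id = ?";
        try (PreparedStatement ps = cnx.prepareStatement(sql)) {
            for (String orderId : orderIds) {
                ps.setString(1, orderId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        customerNameByOrder.put(orderId, rs.getString(1));
                    }
                }
            }
        }
        return customerNameByOrder;
    }

    // Preferred pattern: one set-based query performing the join; the engine
    // plans it once and can use its indexes for the whole batch.
    static Map<String, String> singleJoinQuery(Connection cnx) throws SQLException {
        Map<String, String> customerNameByOrder = new HashMap<>();
        String sql = "SELECT o.id, c.name FROM orders o"
                + " JOIN customers c ON c.id = o.customer_id";
        try (Statement st = cnx.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                customerNameByOrder.put(rs.getString(1), rs.getString(2));
            }
        }
        return customerNameByOrder;
    }
}

The same reasoning applies to the planning cost mentioned for Apache Calcite: a request that is built once and reused, like the reused PreparedStatement in the first method, is cheaper than many ad-hoc, nearly identical requests.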
In this release, the relational database internal persistence formats and structures have been redesigned to support larger volumes of data and large numbers of dataspaces and snapshots. Although the repository migration is automatic on server startup, it requires the data models in use to be cleaned up beforehand.
To upgrade your EBX® repository, perform the steps detailed in the following table.
Migrating a repository to EBX® 6.x is final: once a repository has started on version 6, it can no longer be started on a 5.9 version. This implies that a backup of the 5.9 repository must be made prior to migrating.
Migration from a 5.9.x repository is the only supported case. If your repository is on an older version, you must first deploy EBX® 5.9.x and run it on your repository, so that the following steps can be performed.
Step 1 – Server shutdown | Shut down your EBX® 5.9 application server instance. |
Step 2 – Repository backup | Back up your EBX® 5.9 repository, namely the relational database objects whose names start with the current repository prefix. To facilitate the next steps, back up and delete the EBX® logs, or move them to a distinct folder. |
Step 3 – Server restart | Restart your EBX® 5.9 application server instance. |
Step 4 – Dataspace and workflows cleanup | Closed dataspaces are not migrated (that is, they are deleted). If you intend to keep such dataspaces, you must re-open them prior to migrating. If you do not intend to keep them, it is a good practice to delete and purge them. To decrease the duration of the migration, consider deleting all of the completed workflows and clean up the workflow history. |
Step 5 – Data models cleanup | The automatic migration requires that all data models used by the repository datasets compile without errors. This strict policy has been adopted to prevent unintended loss of data, and to reach a consistent repository state at the end of the server startup. The goal of this step is to ensure that all data models used by at least one dataset in the repository have no error. To verify this, first read the data model compilation reports: they are displayed in the 'kernel' log, from the server startup up to the line containing ******** EBX(R) started and initialized. ********, and each report begins with ****** Schema.
Note: Data model compilation reports are also available through Web access, on the page Administration > Technical configuration > Modules and data models. Each report must be expanded individually. If a data model is in error, you must correct it. Alternatively, you can delete the datasets based on the data model in error if these datasets are no longer used (note that you must then delete the datasets in all dataspaces defining a dataset based on that data model; an alternative is to use a custom property when deploying on EBX® 6, which globally excludes all datasets based on a schema or a list of schemas; see step 11). Once all cleanup is done, delete the log, and then return to step 3 (restart the application server) to make sure that no data models are still in error. Note: If the DAMA add-on is used: the APIs marked as deprecated in DAMA 1.8.x are no longer available in DAMA 2.0.0. If a data model declares these deprecated APIs, then to prevent compilation errors, update the data model to use the most recent APIs. Refer to the Digital Asset Manager documentation for the list of deprecated APIs and their replacements. |
Step 6 – Cleanup if add-ons are used | If your EBX® 5.9 environment contains deployed add-ons, an additional manual intervention is necessary. It is detailed in the next section. |
Step 7 – Server shutdown | Shut down your EBX® 5.9 application server instance. |
Step 8 – Repository backup ("clean" repository) | Repeat backing up your EBX® 5.9 repository, as described in step 2. |
Step 9 – Java compilation against EBX® 6 | Due to the conversion of some classes to interfaces in EBX® 6, all custom Java code must be recompiled against the new version. (Otherwise, errors such as java.lang.IncompatibleClassChangeError are raised at runtime; a small illustration follows this table.) This step also provides the opportunity to assess some of the backward compatibility issues documented in the release notes and those coming from the add-ons. |
Step 10 – EBX® 6 deployment with your code | Replace all EBX® 5.9 libraries and web applications deployed on the application server with their EBX® 6 counterparts, together with your recompiled custom modules. If you use EBX® add-ons, then replace all deployed add-on modules with versions that are compatible with EBX® 6. |
Step 11 – EBX® 6 startup and automatic migration | Restart the application server instance, now deployed with EBX® 6. If at least one data model is still in error, the automatic migration stops and a log details which datasets are affected by these errors. To ease the corrections, the log also provides a suggested value for the property that globally excludes datasets based on the data models in error (see step 5).
Attention: in both cases, make sure to delete the directory before restarting. If no data model is in error, then the database schema and data are automatically migrated. This process might take time, depending on the volume of data. |
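As an illustration of the recompilation requirement in step 9, the following sketch uses hypothetical names (it is not the EBX® API) to show how the JVM reacts when a class that compiled code references is later turned into an interface.

// --- Library, version 1: the type the custom code was compiled against ---
class Report {
    @Override public String toString() { return "report"; }
}

class ReportFactory {                       // a concrete class in version 1
    Report create() { return new Report(); }
}

// --- Custom code, compiled against version 1 ---
public class Client {

    // The compiler emits an 'invokevirtual' instruction for factory.create(),
    // because ReportFactory is a class at compile time.
    static Report build(ReportFactory factory) {
        return factory.create();
    }

    public static void main(String[] args) {
        System.out.println(build(new ReportFactory()));
    }
}

// --- Library, version 2: the type actually deployed at runtime ---
// If the library later redefines ReportFactory as an interface:
//
//     interface ReportFactory { Report create(); }
//
// and Client.class is NOT recompiled, the JVM rejects the stale
// 'invokevirtual' call site while linking build() and throws
// java.lang.IncompatibleClassChangeError. Recompiling Client against
// version 2 regenerates the call as 'invokeinterface', which is why
// step 9 requires recompiling all custom Java code.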
This section details step 6 above. It ensures that no custom data models depend on retired add-ons.
The following add-ons are no longer supported in EBX® 6:
TIBCO EBX Match and Cleanse (codename daqa): It is replaced by a new add-on, Match and Merge (codename mame).
TIBCO EBX Rules Portfolio Add-on (codename rpfl): It is progressively replaced with new core scripting.
TIBCO EBX Information Governance Add-on (codename igov).
TIBCO EBX Add-on for Oracle Hyperion EPM (codename hmfh).
TIBCO EBX Graph View Add-on (codename gram).
TIBCO EBX Activity Monitoring Add-on (codename mtrn).
The final step of the automatic migration deletes the datasets created by the retired add-ons listed above. It also deletes their dataspace when this dataspace was also created by the add-on. (Usually, these dataspaces are accessed in the Administration area of the user interface.)
This sub-step ensures that all custom data models no longer include a retired add-on data model.
To our knowledge, this issue occurs only for the daqa add-on, so it is used as an example in this section.
To look for data model includes, you must read the data model compilation reports:
The data model compilation reports are all displayed in the 'kernel' log, from the server startup up to the line containing:
******** EBX(R) started and initialized. ********
In this part of the log, any data model compilation report begins with:
****** Schema
Inclusion of a data model belonging to daqa displays the following text in its report:
Include: Module: ebx-addon-daqa
Note: Data model compilation reports are also available through Web access, on the page Administration > Technical configuration > Modules and data models. Each report must be expanded individually.
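If the repository uses many data models, scanning the kernel log by hand can be tedious. The sketch below (the log file path is hypothetical and must be adapted to your installation) extracts, from the portion of the log that precedes the '******** EBX(R) started and initialized. ********' line, the schema report headers whose report mentions the ebx-addon-daqa module.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class FindDaqaIncludes {
    public static void main(String[] args) throws IOException {
        // Hypothetical location of the 'kernel' log; adapt it to your installation.
        Path kernelLog = Path.of("logs/kernel.log");

        List<String> lines = Files.readAllLines(kernelLog);
        String currentSchemaHeader = null;
        List<String> schemasUsingDaqa = new ArrayList<>();

        for (String line : lines) {
            // Compilation reports are only relevant up to the startup completion marker.
            if (line.contains("EBX(R) started and initialized")) {
                break;
            }
            // Each data model compilation report begins with '****** Schema'.
            if (line.contains("****** Schema")) {
                currentSchemaHeader = line;
            }
            // An inclusion of the retired daqa data model shows up as this line.
            if (currentSchemaHeader != null && line.contains("Include: Module: ebx-addon-daqa")) {
                schemasUsingDaqa.add(currentSchemaHeader);
                currentSchemaHeader = null; // avoid reporting the same schema twice
            }
        }

        schemasUsingDaqa.forEach(System.out::println);
    }
}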
In case a data model is using the module ebx-addon-daqa, you must take the following steps:
Remove the include directive specifying the data model of module ebx-addon-daqa (/WEB-INF/ebx/schema/ebx-addon-daqa-types.xsd).
Remove elements based on the data types it defines (mainly DaqaMetaData).
Modify other data model features that used the removed elements (for example, indexes).
A data model can be enriched by a specific add-on if it defines the property osd:addon under schema/annotation/appinfo. As a consequence, this property should be removed if a custom data model refers to an add-on that is no longer supported; if this is not done, a warning is added to the compilation report of the data model.
If a custom data model uses a Java extension provided by a retired add-on, this extension must be removed. Here is a non-exhaustive list:
rpfl: Using the data model extension com.orchestranetworks.addon.rpfl.DefaultSchemaExtension.
tese: Using the table filter com.orchestranetworks.addon.tese.SearchTableFilter.
dqid: Using the trigger com.orchestranetworks.addon.dqid.controller.DQIdTrigger.
igov: Using com.orchestranetworks.addon.igov.IGovLabelingSchemaDocumentation.
global: Some toolbars defined by the data models can use add-on user services.