
Performance and tuning

Environment

Memory management

Memory monitoring

Indications of EBX® load activity are provided by monitoring the underlying database, and also by the 'monitoring' logging category.

If the numbers for cleared and built objects remain high for a long time, this is an indication that EBX® is swapping on the application server. In that case, the memory allocated to the application server should be increased.

Garbage collector

Tuning the garbage collector can also benefit overall performance. This tuning should be adapted to the use case and specific Java Runtime Environment used.
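
As a starting point only, and purely as a hedged example whose flag names and sizes must be validated against the JVM vendor and version actually deployed, a G1-oriented configuration on a modern JDK could look like:

    # Illustrative JVM options; heap sizes are assumptions to adapt to the server.
    -Xms8g -Xmx8g              # fixed heap bounds avoid resize pauses
    -XX:+UseG1GC               # G1, the default collector since JDK 9
    -XX:MaxGCPauseMillis=200   # pause-time goal (a hint, not a guarantee)
    -Xlog:gc*:file=gc.log      # unified GC logging (JDK 9+), to measure tuning effects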

CPU

The number of CPUs available to the application server must be sized according to the number of concurrent HTTP requests to be served, the complexity (CPU cost) of the tasks involved, and background activities, including Java garbage collection.

Large imports, and more generally large transactions involving many creations, updates and deletions, complete faster if the difference between the server load and the number of available processors allows the indexing to run efficiently in parallel within the transaction. The 'persistence' logging category records corresponding entries.

The difference mentioned above is assessed every ten seconds, and is computed using the methods getSystemLoadAverage() and getAvailableProcessors() of the Java interface java.lang.management.OperatingSystemMXBean. Both numbers are written at the end of the corresponding log entry.
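
For reference, both values can be read directly with standard JMX calls; the class below is only a self-contained illustration:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class LoadHeadroom {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            // Load average over the last minute; -1.0 when unavailable (e.g. on Windows).
            double load = os.getSystemLoadAverage();
            int cpus = os.getAvailableProcessors();
            // The difference assessed by EBX®: processors not consumed by the current load.
            System.out.printf("loadAverage=%.2f availableProcessors=%d headroom=%.2f%n",
                    load, cpus, cpus - load);
        }
    }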

Using the native LZ4 library

The LZ4 library is used to store data to and retrieve data from the database. To speed up data access, a native installation of ebx-lz4.jar is required.

See Data compression library for more information.

Scanning on server startup

To speed up the web application server's startup, the JAR file scanner should be configured.

Database

Reorganizing database tables

As with any database, inserting and deleting large volumes of data may lead to fragmented data, which can degrade performance over time. To resolve the issue, it is necessary to reorganize the affected database tables. See Monitoring and cleanup of the relational database.

A specificity of EBX® is that creating dataspaces and snapshots adds new entries to the GRS_DTR and GRS_SHR tables. For large repositories in which many dataspaces are created and deleted, it may be necessary to schedule a reorganization of these tables when poor performance is experienced.

Data modeling

Aggregated lists

In a data model, when an element's cardinality constraint maxOccurs is greater than 1 and no osd:table is declared on this element, it is implemented as a Java List. This type of element is called an aggregated list, as opposed to a table.

It is important to consider that there is no specific optimization when accessing aggregated lists, in terms of iterations, user interface display, etc. Besides performance concerns, aggregated lists are limited with regard to many functionalities that are supported by tables. See tables introduction for a list of these features.
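
For illustration, a minimal sketch of reading an aggregated list through the Java API; the ./phones path is hypothetical, and the exact signature of Adaptation.getList should be checked against the API documentation of the EBX® version in use:

    import java.util.List;

    import com.orchestranetworks.instance.Adaptation;
    import com.orchestranetworks.schema.Path;

    public final class AggregatedListAccess {
        // Reads an aggregated list field: every access iterates the in-memory Java List
        // with no index support, which is why large volumes should use osd:table instead.
        public static void printPhones(Adaptation record) {
            List<?> phones = record.getList(Path.parse("./phones"));
            for (Object phone : phones) {
                System.out.println(phone);
            }
        }
    }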

Attention

For the reasons stated above, aggregated lists should be used only for small volumes of simple data (one or two dozen records), with no advanced requirements for their identification, lookups, permissions, etc. For larger volumes of data (or more advanced functionalities), it is recommended to use osd:table declarations.

Inherited fields

In a data model, it is possible to use inherited fields for advanced inheritance based on relationships, as opposed to dataset inheritance.

Since inherited fields behave as computed fields to resolve their values, they cannot benefit from index optimizations and should be used with caution, or avoided. Consequently, all operations performed on inherited fields (querying, validation, data comparison in resolved mode, etc.) are not optimized, and their performance can be strongly impacted.

Data validation

The internal validation framework optimizes the work required during successive requests to update the validation report of a dataset or a table. The incremental validation process behaves as follows:

Certain constraints are systematically re-validated, even if no updates have occurred since the last validation. These are the constraints with unknown dependencies. An element has unknown dependencies if:

Consequently, on large tables, it is recommended to:

The following properties can be used to minimize the impact on performance when logging a validation report:

Accessing tables

Functionalities

Tables are commonly accessed through the EBX® UI, data services, and the Request and Query APIs. This access involves a unique set of functions, including a dynamic resolution process. This process behaves as follows:

Query on tables

Architecture and design

In order to improve the speed of operations on tables, persistent Lucene indexes are managed by the EBX® engine.

Attention

Faster access to tables is ensured if indexes are ready and maintained in the OS memory cache. As mentioned above, it is important for the OS to have enough memory available for this cache.

Performance considerations

The query optimizer favors the use of indexes when computing a request result. If a query cannot take advantage of the indexes, it will be resolved in Java memory, and experience poor performance on large volumes. The following guidelines apply:

Attention

  • Only XPath predicates and SQL queries can benefit from index optimization.

  • Some fields and some datasets cannot be indexed, as described in section Limitations.

  • XPath predicates using the osd:label function cannot benefit from index optimization.
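
As an illustration of the first point, here is a sketch using the Request API on a hypothetical /root/customers table with an indexed status field; package and method names should be verified against the EBX® API documentation:

    import com.orchestranetworks.instance.Adaptation;
    import com.orchestranetworks.instance.AdaptationTable;
    import com.orchestranetworks.instance.RequestResult;
    import com.orchestranetworks.schema.Path;

    public final class IndexedPredicateExample {
        public static void printActiveCustomers(Adaptation dataset) {
            AdaptationTable table = dataset.getTable(Path.parse("/root/customers"));
            // A plain XPath predicate on an indexed field can be resolved against the index;
            // a predicate relying on osd:label would be resolved in Java memory instead.
            RequestResult result = table.createRequestResult("./status = 'ACTIVE'");
            try {
                Adaptation record;
                while ((record = result.nextAdaptation()) != null) {
                    System.out.println(record.getOccurrencePrimaryKey().format());
                }
            } finally {
                result.close(); // always release the underlying cursor
            }
        }
    }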

If indexes have not yet been built, additional time is required on the first access to the table to build and persist them.

Accessing the table data blocks is required when the query cannot be computed against any index (whether for resolving a rule, filter or sort), as well as for building the index. If the table blocks are not present in memory, additional time is needed to fetch them from the database.

Apart from that, other considerations can impact request and query performance:

Information can be obtained through the memory 'monitoring' and 'request' logging categories.

Accessing and modifying a table

The following access patterns lead to poor performance and must be avoided:

Other operations on tables

New record creations and insertions depend on the primary key index. Thus, a creation is almost immediate if this index is already loaded.

REST built-in and business objects

When using select operations with business objects, EBX® uses a temporary folder to handle large response contents. Administrators can set a size limit through the ebx.dataservices.rest.bo.maxResponseSizeInKB configuration parameter. Adjust this setting based on the server's filesystem space dedicated to temporary content and on the maximum number of request-processing threads.

See ebx.dataservices.rest.bo.maxResponseSizeInKB and Setting temporary files directories for more information.

REST access to history table

The merge information in the history table (the merge_info field) has a potentially high access cost. To improve performance, if the client code does not need this field, the includeMergeInfo parameter must be set to false.

See History for more information.

Setting a fetch size

In order to improve performance, a fetch size should be set according to the expected size of the result of the request on a table. If no fetch size is set, the default value will be used.
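
A hedged sketch, assuming the Request API exposes a setFetchSize method as in recent EBX® versions (to be verified against the API documentation); the table path is hypothetical:

    import com.orchestranetworks.instance.Adaptation;
    import com.orchestranetworks.instance.AdaptationTable;
    import com.orchestranetworks.instance.Request;
    import com.orchestranetworks.instance.RequestResult;
    import com.orchestranetworks.schema.Path;

    public final class FetchSizeExample {
        public static RequestResult openLargeResult(Adaptation dataset) {
            AdaptationTable table = dataset.getTable(Path.parse("/root/customers"));
            Request request = table.createRequest();
            // A large result is expected: raise the fetch size so that rows are
            // retrieved in fewer, larger batches than with the default value.
            request.setFetchSize(1000);
            return request.execute();
        }
    }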

Performance checklist for other Java customizations

While TIBCO EBX® is designed to support large volumes of data, several common factors can lead to poor performance. Addressing the key points discussed in this section will solve the usual performance bottlenecks.

Expensive programmatic extensions

For reference, programmatic extensions can be involved in the following use cases:

  • Validation

  • Table access

  • EBX® content display

  • Data update

For large volumes of data, using algorithms of high computational complexity has a serious impact on performance. For example, suppose the complexity of a constraint's algorithm is O(n²). If the data size is 100, the resulting cost is proportional to 10 000 (this generally produces an immediate result). However, if the data size is 10 000, the resulting cost will be proportional to 100 000 000.
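
To make the difference concrete, here is an illustrative uniqueness check written both ways; only the data structure changes:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public final class UniquenessCheck {
        // O(n²): compares every pair, about n²/2 comparisons;
        // 10 000 items imply roughly 50 000 000 comparisons.
        static boolean hasDuplicateQuadratic(List<String> values) {
            for (int i = 0; i < values.size(); i++) {
                for (int j = i + 1; j < values.size(); j++) {
                    if (values.get(i).equals(values.get(j))) {
                        return true;
                    }
                }
            }
            return false;
        }

        // O(n): a single pass over a HashSet; 10 000 items imply about 10 000 operations.
        static boolean hasDuplicateLinear(List<String> values) {
            Set<String> seen = new HashSet<>();
            for (String v : values) {
                if (!seen.add(v)) {
                    return true; // add() returns false for an already-seen value
                }
            }
            return false;
        }
    }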

Another reason for slow performance is calling external resources. Local caching usually solves this type of problem.

If one of the use cases above displays poor performance, it is recommended to track down the problem, either by code analysis or by using a Java profiling tool.

Unnecessary index refresh

Refreshing a Lucene index takes time. It should be avoided whenever possible.

When does a refresh happen?

In the context of a transaction, an index refresh occurs when the table has been modified and one of the conditions below occurs:

  1. For a lookup by primary key, the refresh is always triggered if the searched key has been "touched" (created, modified or deleted) in the current Procedure (or TableTrigger).

  2. For a standard Query (or Request), an index refresh is always performed if the table has been modified in the current Procedure (or TableTrigger).

Coding recommendations

  1. To avoid triggering a refresh through a lookup by primary key, the developer must keep the Adaptation object returned by the last call to doCreateOccurrence or doModifyContent, and reuse this object instead of performing the lookup, as shown in the sketch after this list.

  2. Avoid any lookup by primary key on a record that has been deleted in the current procedure.

  3. In the case of a query triggering the refresh, the developer must ask the following question: can this query be avoided in my procedure?
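
A sketch of the first recommendation inside a Procedure; the table, path and value are hypothetical, and signatures should be checked against the EBX® API documentation:

    import com.orchestranetworks.instance.Adaptation;
    import com.orchestranetworks.instance.AdaptationTable;
    import com.orchestranetworks.schema.Path;
    import com.orchestranetworks.service.Procedure;
    import com.orchestranetworks.service.ProcedureContext;
    import com.orchestranetworks.service.ValueContextForUpdate;

    public final class CreateWithoutLookup implements Procedure {
        private final AdaptationTable table;

        public CreateWithoutLookup(AdaptationTable table) {
            this.table = table;
        }

        @Override
        public void execute(ProcedureContext context) throws Exception {
            ValueContextForUpdate vcu = context.getContextForNewOccurrence(table);
            vcu.setValue("John", Path.parse("./name"));
            // Keep the returned Adaptation instead of looking the record up again by
            // primary key: the lookup would trigger an index refresh on the "touched" key.
            Adaptation created = context.doCreateOccurrence(vcu, table);
            // ... reuse 'created' for any subsequent read or modification.
        }
    }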

Transaction threshold for mass updates

It is generally not advised to use a single transaction when the number of atomic updates in the transaction is beyond the order of 10⁵. Large transactions require a lot of resources, in particular memory, from EBX® and from the underlying database.

To reduce the transaction size, it is possible to:

On the other hand, specifying a very small transaction size can also hinder performance, due to the persistent tasks that need to be done for each commit.
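
For instance, a batching sketch using ProgrammaticService, committing one transaction per chunk; the chunk size and the doImportChunk step are hypothetical and must be tuned to the use case:

    import java.util.List;

    import com.orchestranetworks.service.Procedure;
    import com.orchestranetworks.service.ProcedureContext;
    import com.orchestranetworks.service.ProcedureResult;
    import com.orchestranetworks.service.ProgrammaticService;

    public final class BatchedImport {
        // Runs one Procedure (hence one transaction) per chunk of rows, keeping each
        // transaction well below the ~10⁵ atomic-update threshold mentioned above.
        public static void run(ProgrammaticService service, List<List<String>> chunks) {
            for (final List<String> chunk : chunks) {
                ProcedureResult result = service.execute(new Procedure() {
                    @Override
                    public void execute(ProcedureContext context) throws Exception {
                        doImportChunk(context, chunk); // hypothetical import step
                    }
                });
                if (result.hasFailed()) {
                    throw new RuntimeException(result.getException());
                }
            }
        }

        private static void doImportChunk(ProcedureContext context, List<String> rows) {
            // ... create or update the records of this chunk using the context.
        }
    }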

Note

If intermediate commits are a problem because transactional atomicity is no longer guaranteed, it is recommended to execute the mass update inside a dedicated dataspace. This dataspace will be created just before the mass update. If the update does not complete successfully, the dataspace must be closed, and the update reattempted after correcting the reason for the initial failure. If it succeeds, the dataspace can be safely merged into the original dataspace.

Triggers

If required, triggers can be deactivated using the method ProcedureContext.setTriggerActivation.
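
For example, in a procedure performing a controlled mass update (a minimal sketch):

    import com.orchestranetworks.service.Procedure;
    import com.orchestranetworks.service.ProcedureContext;

    public final class MassUpdateWithoutTriggers implements Procedure {
        @Override
        public void execute(ProcedureContext context) throws Exception {
            // Skip trigger execution for the updates performed by this procedure.
            context.setTriggerActivation(false);
            // ... perform the mass update here.
        }
    }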

Directory integration

Authentication and permissions management involve the user and roles directory.

If a specific directory implementation is deployed and accesses an external directory, it can be useful to ensure that local caching is performed. In particular, one of the most frequently called methods is Directory.isUserInRole.
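
A minimal caching sketch, assuming a custom directory implementation that delegates to an external server; the cache policy (unbounded, no expiry) is deliberately simplistic and should be adapted, and the accessor names on UserReference and Role should be verified against the API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import com.orchestranetworks.service.Role;
    import com.orchestranetworks.service.UserReference;

    public final class CachingMembership {
        private final Map<String, Boolean> cache = new ConcurrentHashMap<>();

        // Wraps the expensive external call: repeated isUserInRole checks for the same
        // (user, role) pair hit the local cache instead of the remote directory.
        public boolean isUserInRole(UserReference user, Role role) {
            String key = user.getUserId() + "|" + role.getRoleName();
            return cache.computeIfAbsent(key, k -> queryExternalDirectory(user, role));
        }

        private boolean queryExternalDirectory(UserReference user, Role role) {
            // ... hypothetical call to the external LDAP/SSO server.
            return false;
        }
    }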
