Caching Limitations

TDV can cache data from many different sources to many different cache targets. With so many systems involved, subtle limitations can arise. The limitations that affect data caching are discussed here.

- The view being cached must NOT have a column called cachekey.
- If the cache target does not support a data type available in the source, any attempt to store the unsupported data type in the cache results in an error. However, it may be possible to cast the unsupported data type to a supported one (for example, cast a BOOLEAN to an INTEGER) before sending it to the cache target.
- When caching to a table, ARRAY data types are not supported.
- The following resources cannot be cached without being wrapped in a view or procedure:
  - Procedures with no outputs; that is, no data to cache.
  - Procedures with input parameters that require the use of cache policies.
  - XML files that have been introspected.
  - System tables.
  - Non-data sources, such as folders and definition sets.
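As a sketch of the casting workaround, a view wrapping the source table can convert a BOOLEAN column before it reaches the cache target (the table and column names below are hypothetical):

```sql
-- Hypothetical source table orders_src with a BOOLEAN column is_active.
-- The view's projection casts it to INTEGER so that a cache target
-- without a BOOLEAN type can store the rows.
SELECT
    order_id,
    CAST(is_active AS INTEGER) AS is_active_int  -- TRUE -> 1, FALSE -> 0
FROM orders_src
```

Caching a view defined with this SELECT, instead of the base table, avoids the unsupported-type error.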

Incremental Caching Limitation

Cache policies cannot be used with incremental caching.

Multi-Table Caching Limitation

Multi-table caching is available for tables and views, not for procedures.

Apache Hive Caching Limitations

When Hive is used as the cache target data source, it is recommended that you use a different data source for the cache_status and cache_tracking tables.

Hive does not support the BLOB and CLOB data types. Therefore, when Hive is used as a cache target, tables containing these data types cannot be cached.

Microsoft SQL Server Caching Limitations

When using Microsoft SQL Server with the bcp utility, you might get data type mismatch errors for BLOB or CLOB data types.

For SQL Server data sources, the DBO schema must be selected and introspected as a resource prior to attempting to cache data.

Temporary data can build up in buffer files that are located in the <TDV_install_dir>/cacheloading/sqlserver folder. If there are no cache refresh processes running, it is safe to delete the contents of this folder.

MySQL Caching Limitations

MySQL sets names to lowercase if it is running on Windows, even if you enclose the mixed case string in double quotes. When using MySQL tables as cache targets, either use the TDV browse option to choose the table, or make sure that you assign lowercase names to tables.

The TDV native load option cannot load binary data.

When using MySQL as a cache target, note the data type limitations, by data source, listed in the following table. Also, when MySQL 5.5 is the cache target, TIME and TIMESTAMP data types lose millisecond and other fractional-second precision.

| Cache Data Source | Data Types Not Supported | Cache Target |
| --- | --- | --- |
| DB2 9.5 | BLOB | MySQL 5.5 |
| Oracle 11g | BLOB, LONG RAW | MySQL 5.5 |
| Sybase 15 | BINARY, IMAGE, VARBINARY, TIMESTAMP | MySQL 5.5 |
| SQL Server 2008 | BINARY, IMAGE, VARBINARY, TIMESTAMP | MySQL 5.5 |
| SQL Server 2012 | IMAGE | MySQL 5.1 |

Oracle Caching Limitations

If the same Oracle database instance acts as both the source of and the target for your cached data, the following limitations apply:

- LONG and LONG RAW data types: SQL*Plus cannot SELECT a LONG RAW column. To work around this issue, convert the LONG RAW columns to BLOB.
- INTERVAL DAY TO SECOND and INTERVAL YEAR TO MONTH values lose precision when they are cached using INSERT and SELECT statements.
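One possible way to perform the LONG RAW to BLOB conversion, sketched here with hypothetical tables, is Oracle's TO_LOB function, which is valid only in the SELECT list of an INSERT ... SELECT statement (an in-place ALTER TABLE ... MODIFY to BLOB is another option described in the Oracle documentation):

```sql
-- Hypothetical tables: documents_longraw(doc_id, doc_data LONG RAW)
-- and documents_blob(doc_id, doc_data BLOB).
-- TO_LOB converts the LONG RAW values to BLOB as they are copied.
INSERT INTO documents_blob (doc_id, doc_data)
SELECT doc_id, TO_LOB(doc_data)
FROM documents_longraw;
```

After the copy, the BLOB column can be SELECTed and cached normally.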

SAP HANA Caching Characteristics

Cache target tables in SAP HANA are created as column-store tables, rather than the row-store tables more commonly used in relational databases. If partitions are used, SAP HANA uses round-robin partitioning to equally distribute rows to partitions. The table used for caching does not have to have primary keys.

Teradata Caching Limitations

Teradata has a known issue that affects data caching for TDV. Because of this issue, you might be unable to cache data, and you might get incorrect query results against Teradata when Ignore Trailing Spaces is set to FALSE in TDV. The issue is caused by the Teradata driver’s management of character data when using UTF-8 character sets.

To solve both the caching and the query problems, you can do one of the following:

- Change the global server setting for Ignore Trailing Spaces to true. (In Studio, choose Administration > Configuration. Locate and select Ignore Trailing Spaces, and click True for Value.)
- Change the Teradata connection string to use UTF-16 by substituting CHARSET=UTF16 for CHARSET=UTF8, and save the data source. Then re-create the cache_status table and refresh the cache.
- Change the Teradata connection string to use ASCII by substituting CHARSET=ASCII for CHARSET=UTF8, and save the data source. Then re-create the cache_status table and refresh the cache. This solution does not work for data that contains multi-byte international characters, because those characters are not saved or retrieved correctly.
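As an illustration of the connection-string change, a Teradata JDBC URL switched to UTF-16 might look like the following (the host and database names are hypothetical):

```
jdbc:teradata://td-host/DATABASE=mydb,CHARSET=UTF16
```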

To solve just the caching problem, cache data in a different data source (instead of Teradata).

To solve just the query problem, provide query hints (see Specifying Query Hints ) on queries against Teradata where filters are on CHAR columns:

{ OPTION IGNORE_TRAILING_SPACES="True" } 

Teradata Multi-Table Caching Limitations

The Teradata Fast Export and Fast Load features are supported for caching with the TDV multi-table caching option. The cache must have no duplicate rows of data and be configured as specified in Configuring Teradata for Use as a Multi-Table Cache Target.

- Teradata FastLoad requires that the target table be empty.
- Teradata limits the number of concurrent FastLoad and FastExport tasks that can run in parallel. Parallelism is controlled by the MaxLoadTasks and MaxLoadAWT parameters. Any FastLoad task that exceeds the limit is rejected.
- The maximum number of sessions for each FastLoad or FastExport job is limited to the number of AMPs in the Teradata database. Typically, eight sessions work well for most scenarios.
- A row fetch size larger than 64 KB causes a Teradata error. Teradata large objects can be configured to return data in deferred transfer mode. Refer to your Teradata documentation to determine the best solution if you have data rows that return 64 KB or more of data.
The following data type and function support restrictions exist.

| Data Source | Cache Target | Data Types Not Supported | Functions Not Supported |
| --- | --- | --- | --- |
| Oracle | Teradata | BLOB, CLOB, LONG, LONG RAW, NCLOB | No results are returned after refreshing the cache against INTERVAL DAY TO SECOND and INTERVAL YEAR TO MONTH. |
| SQL Server 2008 | Teradata | BINARY, IMAGE, NTEXT, TEXT, VARBINARY | |
| Sybase | Teradata | BINARY, IMAGE, TEXT, VARBINARY | |
| Teradata | Teradata | BYTE, BLOB, CLOB, LONG VARCHAR | |
| Vertica 5.0 and 6.1 | Teradata | BINARY, VARBINARY | |
Vertica Caching Limitations

Because of Vertica length limits, mapping of any data type (BINARY, CHAR, VARCHAR, BLOB, and so on) to Vertica cache with length greater than 65000 results in an error, regardless of the data source.

Similarly, Vertica supports numeric precision only up to 15 significant digits; any value with more digits is rounded by Vertica. This precision limitation is particularly noticeable when working with the REAL, FLOAT, and DOUBLE data types.