Teradata Caching Limitations
Teradata has a known issue that affects data caching for TDV. Because of this issue, you might be unable to cache data, and queries against Teradata might return incorrect results when Ignore Trailing Spaces is set to FALSE in TDV. The issue is caused by the way the Teradata driver handles character data when using UTF-8 character sets.
To solve both the caching and the query problems, you can do one of the following:
| • | Change the global server setting for Ignore Trailing Spaces to true. (In Studio, choose Administration > Configuration. Locate and select Ignore Trailing Spaces, and click True for Value.) |
| • | Change the Teradata connection string to use UTF-16 by substituting CHARSET=UTF16 for CHARSET=UTF8, and save the data source. Then recreate the cache_status table and refresh the cache. |
| • | Change the Teradata connection string to use ASCII by substituting CHARSET=ASCII for CHARSET=UTF8, and save the data source. Then recreate the cache_status table and refresh the cache. This solution does not work for data that contains multi-byte international characters because the characters are not saved or retrieved correctly. |
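For example, assuming a typical Teradata JDBC connection URL (the host name and database name here are placeholders, not values from your environment), the CHARSET substitution looks like this:

```
Before: jdbc:teradata://tdhost/DATABASE=sales,CHARSET=UTF8
After:  jdbc:teradata://tdhost/DATABASE=sales,CHARSET=UTF16
```

After saving the data source with the new connection string, remember to recreate the cache_status table and refresh the cache so that cached character data is rewritten with the new character set.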
To solve just the caching problem, cache data in a different data source (instead of Teradata).
To solve just the query problem, provide query hints (see the section Specifying Query Hints in the User Guide) on queries against Teradata where filters are applied to CHAR columns:
{ OPTION IGNORE_TRAILING_SPACES="True" }
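For example, a query that filters on a CHAR column might carry the hint as shown below. This sketch assumes the hint is placed immediately after the SELECT keyword, and the view and column names (orders, status) are hypothetical:

```sql
SELECT { OPTION IGNORE_TRAILING_SPACES="True" } order_id, status
FROM orders
WHERE status = 'OPEN'
```

With the hint in place, trailing spaces in the CHAR values are ignored for this query only, without changing the global server setting.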
Teradata Multi-Table Caching Limitations
The Teradata Fast Export and Fast Load features are supported for caching with the TDV multi-table caching option. The cache must have no duplicate rows of data and must be configured as described in the section Configuring Teradata for Use as a Multi-table Cache Target in the User Guide.
| • | Teradata FastLoad requires that the target table be empty. |
| • | For Teradata, the maximum number of sessions for each FastLoad or FastExport job is limited to the number of AMPs of the Teradata database. Typically, eight sessions work well for most scenarios. |
| • | For Teradata, a row fetch size larger than 64 KB causes a Teradata error. Teradata large objects can be configured to return data in deferred transfer mode. Refer to your Teradata documentation to determine the best solution for you if you have data rows that return 64 KB or more of data. |
| • | The following data type and function support restrictions exist when caching from the listed data sources to a Teradata cache target. |
| Data Source | Cache Target | Data Types Not Supported | Functions Not Supported |
| --- | --- | --- | --- |
| Oracle | Teradata | BLOB, CLOB, LONG, LONGRAW, NCLOB | No results are returned after refreshing the cache against INTERVALDAYTOSECOND and INTERVALYEARTOMONTH. |
| SQL Server 2008 | Teradata | BINARY, IMAGE, NTEXT, TEXT, VARBINARY | |
| Sybase | Teradata | BINARY, IMAGE, TEXT, VARBINARY | |
| Teradata | Teradata | BYTE, BLOB, CLOB, LONGVARCHAR | |
| Vertica 5.0 and 6.1 | Teradata | BINARY, VARBINARY | |
TDV Native Loading Option Teradata Limitation
If the TDV native load option is active and the data identified to be moved to the cache contains one or more duplicate rows, Teradata 13 allows the duplicate rows into the cache target.