Authentication (Various)
Select BASIC or Kerberos authentication method, where offered. See the TDV Administration Guide for more information about Kerberos authentication.

Authentication Domain (Composite)
The domain where TDV is installed.

Authentication Type (Drill, Compute DB)
The authentication type used for connecting to the data source (BASIC, KERBEROS, NTLM, or NEGOTIATE).

Character Set (Microsoft Access)
See Supported Character Encoding Types, page 185.

Character Set (Various DB2)
ASCII or EBCDIC.

Create tables with this number of partitions... (SAP HANA)
The number of partitions you want used when tables are created. This number affects whether and how table partitioning is done when TDV creates new tables in SAP HANA, for example as cache targets.
• If you specify a positive number x (3 to 5 recommended per SAP HANA node), the CREATE TABLE DDL will contain PARTITION BY ROUNDROBIN PARTITIONS x.
• If you specify zero, the CREATE TABLE DDL will not include a PARTITION BY clause.
Ultimately, the number of partitions affects performance when querying the resulting table, so you should optimize it for your SAP HANA instance and usage.

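The two cases above can be sketched as follows. This is an illustrative helper only, not part of the TDV API; the table and column names are hypothetical.

```python
def build_create_table_ddl(table_name: str, columns_sql: str, partitions: int) -> str:
    """Sketch of the CREATE TABLE DDL described above for SAP HANA targets.

    A positive partition count appends a round-robin PARTITION BY clause;
    zero omits the clause entirely. (Hypothetical helper, for illustration.)
    """
    ddl = f"CREATE TABLE {table_name} ({columns_sql})"
    if partitions > 0:
        ddl += f" PARTITION BY ROUNDROBIN PARTITIONS {partitions}"
    return ddl

# With 4 partitions (3 to 5 per SAP HANA node is recommended):
print(build_create_table_ddl("cache_t1", "id INTEGER, val NVARCHAR(100)", 4))
# With 0, no PARTITION BY clause is added:
print(build_create_table_ddl("cache_t2", "id INTEGER", 0))
```
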
Complete connection URL string (Apache Drill, TIBCO ComputeDB)
A URL used to connect to the physical data source. TDV does not validate modifications to this string, and the data source adapter might not validate changes either. Example formats:
jdbc:drill:drillbit=<hostIP>;schema=<schemaname>
jdbc:snappydata://<hostIP>

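Filling in the URL templates above is simple string substitution; the sketch below shows the idea with hypothetical host and schema values (nothing here is a TDV API).

```python
def drill_url(host_ip: str, schema: str) -> str:
    # Fills the Apache Drill JDBC URL template shown above.
    return f"jdbc:drill:drillbit={host_ip};schema={schema}"

def computedb_url(host_ip: str) -> str:
    # Fills the TIBCO ComputeDB (SnappyData) JDBC URL template shown above.
    return f"jdbc:snappydata://{host_ip}"

print(drill_url("10.0.0.5", "dfs"))  # jdbc:drill:drillbit=10.0.0.5;schema=dfs
print(computedb_url("10.0.0.6"))     # jdbc:snappydata://10.0.0.6
```
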
Database (DataDirect Mainframe)
For DB2 DBMS types, the name of the underlying data source. For other DBMS types, enter NONE.

Database Name (All except Composite and Netezza)
Name or alias of the underlying data source. TDV Server uses this name to find and connect to the data source.

Database Name (Composite)
The name of the published TDV database.

Database Name (Netezza)
The name of the Netezza catalog. The Netezza catalog is equivalent to a database name and provides a view of the databases it contains.

Leader Node Port (TIBCO ComputeDB)
The leader node port used for submitting ComputeDB jobs. The default port is 8090.

Schema (Apache Drill)
Name or alias of the underlying data source.

DBMS Type (DataDirect Mainframe)
DataDirect SQL access keyword that specifies the means by which the adapter accesses the data. Most of the storage schemes listed below either do not allow SQL push or are not optimized for it. To use more than one DBMS type, create additional data source connections in Studio.
Supported DBMS type values are:
• ADABAS—A very fast transaction-processing mainframe database.
• DATA—May be used to access both VSAM and sequential files.
• DB2—Provides the broadest coverage of SQL-99 functionality, and the most efficient SQL push to the DataDirect Mainframe. Requires a database name.
• IMSDB—Provides access to multidimensional databases. Query results differ in appearance from conventional relational database results.
• VSAM—A read-only storage system.
• VSAM/CICS—Transaction-processing data system for online and batch systems that allows both read and write access.

DSN (Microsoft Access)
The Data Source Name. You might need to create a new User or System DSN using the ODBC Data Source Administrator utility (available with Windows Administrative Tools).

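Once a DSN exists, ODBC clients reference it by name in a semicolon-delimited connection string. The sketch below only builds such a string; the DSN and user names are hypothetical, and this is not TDV code.

```python
def odbc_conn_str(dsn: str, user: str = "", password: str = "") -> str:
    """Build a standard ODBC connection string referencing a DSN by name."""
    parts = [f"DSN={dsn}"]
    if user:
        parts.append(f"UID={user}")
    if password:
        parts.append(f"PWD={password}")
    return ";".join(parts)

print(odbc_conn_str("AccessNorthwind", "admin"))  # DSN=AccessNorthwind;UID=admin
```
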
Enable SSL (Redshift)

Host
Hostname
|
All
|
Name or IP address of the machine hosting the data source.
Oracle data sources: you cannot enter an Oracle database link name in this field.
|
Instance Number, if applicable (SAP HANA)
Instance number of the SAP HANA system.

Login, User, Password (All)
User name and password required to access the data source. When the data source is used as a target for cache tables or data ship, the user must also have permission to create tables, execute DDL, and perform other required tasks. Refer to the individual data source descriptions for details.

Net Service Name (Oracle with OCI Driver)
The TNS name that is set up through Oracle Net Configuration Assistant.

Pass-through Login (All)
Disabled (default)—Allows automated provisioning of a connection pool. Open connection threads can be used by authorized users after the validation query verifies connection status. If pass-through login is disabled, the Save Password check box is not available.
Enabled—A new connection to the data source uses the credentials supplied by the client when data is requested from that data source for the first time. Subsequent requests by the same user reuse the existing connection. When another user attempts to connect to a data source, a new connection is created.
See “Managing Security for TDV Resources” in the TDV Administration Guide for details.

Pass-through Login (Oracle)
To use Kerberos tokens, Oracle data sources must enable pass-through login. In addition, an enabled Kerberos security module must be running on TDV so that Kerberos-authenticated users with session tokens can use it when submitting queries.

Pass-through Login (Sybase)
If you choose Kerberos authentication for Sybase, do not enable pass-through login.

Plan (DataDirect Mainframe)
Data isolation plan. The default value, SDBC1010, specifies cursor stability. Other values specify repeatable reads, read stability, or uncommitted reads. See the Shadow RTE Client/adapters Installation and Administration Guide for more information.

Port (All)
Port number for the data source to connect with the host. The default port number can depend on the data source type. For example:
Composite—9401
DataDirect Mainframe—1200
DB2—no default
DB2 z/OS—446
Greenplum—5432
HBase—2181
Hive—10000
Informix—1526
Microsoft SQL Server—1433
MySQL—3306*
NeoView—18650
Netezza—5480
Oracle—1521
PostgreSQL—5432
SAP HANA—30015
Sybase—4100**
Sybase ASE—5000**
Sybase IQ—2638**
Teradata—1025
Vertica—5433
* Default is 3306 or the base port setting plus eight, depending on the MySQL instance.
** The port number used for communicating with TDV is the TCP/IP port set in the <Sybase_name>.cfg configuration file.

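For quick reference, the defaults listed above can be captured in a lookup table. This is an illustrative sketch only, not part of TDV; None marks a source with no default port.

```python
# Default ports from the list above (None = no default).
DEFAULT_PORTS = {
    "Composite": 9401, "DataDirect Mainframe": 1200, "DB2": None,
    "DB2 z/OS": 446, "Greenplum": 5432, "HBase": 2181, "Hive": 10000,
    "Informix": 1526, "Microsoft SQL Server": 1433, "MySQL": 3306,
    "NeoView": 18650, "Netezza": 5480, "Oracle": 1521, "PostgreSQL": 5432,
    "SAP HANA": 30015, "Sybase": 4100, "Sybase ASE": 5000, "Sybase IQ": 2638,
    "Teradata": 1025, "Vertica": 5433,
}

print(DEFAULT_PORTS["Oracle"])    # 1521
print(DEFAULT_PORTS["SAP HANA"])  # 30015
```

Note that the MySQL and Sybase entries carry the footnoted caveats above: the effective port can differ per instance or configuration file.
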
Save Password (check box) (All)
By default, the login and password are saved to create a reusable TDV Server system connection pool, usable only by the resource owner, explicitly authorized groups and users, and TDV administrators. They can:
• Introspect or reintrospect the current data source
• Add or remove data source resources
• Perform queries, updates, and inserts on tables
• Invoke a stored procedure
• Refresh a cached view based on data source resources
• Use the query optimizer to gather statistics
To disable saving of the password, enable Pass-through Login.

Server (SAP HANA)
Name or IP address of the machine hosting the data source.

Service Name (Oracle)
Name of the database service.

Service Principal Name (DB2, DB2 z/OS, Sybase, MS SQL Server, Greenplum)
This field is available only if you choose Kerberos authentication.

Kerberos Server Name (Greenplum)
This field is available only if you choose Kerberos authentication.

Include Realm (Greenplum)
This field is available only if you choose Kerberos authentication.

Keytab File (MS SQL Server, Greenplum)
This field is available only if you choose Kerberos authentication. Use it to enable Kerberos security through keytab files. Type the full path to the keytab file.

Ticket Cache (Oracle)
Applies to the Oracle Thin Driver with an 11g data source. Specify when using Kerberos authentication.

Transaction Isolation (All)
The degree to which transactions are isolated from data modifications made by other transactions. Netezza and Oracle support only Read Committed (the default) and Serializable.
Read Uncommitted—Dirty reads, nonrepeatable reads, and phantom reads can occur.
Read Committed—Nonrepeatable reads and phantom reads can occur.
Repeatable Read—Only phantom reads can occur.
Serializable—Dirty reads, nonrepeatable reads, and phantom reads are prevented.
None
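The anomaly behavior of the four standard levels above can be summarized as a small lookup. This is an illustrative sketch of the standard isolation semantics, not TDV code.

```python
# Which read anomalies each isolation level permits, per the list above.
ANOMALIES = {
    "Read Uncommitted": {"dirty", "nonrepeatable", "phantom"},
    "Read Committed":   {"nonrepeatable", "phantom"},
    "Repeatable Read":  {"phantom"},
    "Serializable":     set(),
}

def permits(level: str, anomaly: str) -> bool:
    """True if the given isolation level permits the given read anomaly."""
    return anomaly in ANOMALIES[level]

print(permits("Read Committed", "dirty"))     # False
print(permits("Repeatable Read", "phantom"))  # True
```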