TIBCO Cloud™ Spotfire® User Guide

Connector for Amazon Redshift — Features and settings

You can connect to and access data from Amazon Redshift with the connector for Amazon Redshift. On this page, you can find information about the capabilities, available settings, and things to keep in mind when you work with data connections to Amazon Redshift.

Page content

Connector features

The following functionality is available when you access data with the connector for Amazon Redshift.

Feature Supported?
Load methods
  • Import (in-memory)
  • External (in-database)
  • On-demand
Custom queries Yes
Stored procedures Yes
Custom connection properties No
Single sign-on with identity provider No
Authoring in web client No
Included in Spotfire distribution on Linux Yes

Data source properties

The following are the supported data source properties that you can configure when you create a data connection with the connector for Amazon Redshift.

Option Description
Server The name of the server where your data is located. To include a port number, add it directly after the name, preceded by a colon.

Example with port number: MyDatabaseServer:5439
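The host:port convention above can be sketched in a few lines. The following Python helper is purely illustrative (it is not a Spotfire API); the default port 5439 is the standard Amazon Redshift port and is assumed when no port is given:

```python
def parse_server(server: str, default_port: int = 5439) -> tuple:
    """Split a server value such as "MyDatabaseServer:5439" into host and port.

    If no port is appended after a colon, fall back to the default
    Amazon Redshift port (5439). Illustrative helper only.
    """
    host, sep, port = server.rpartition(":")
    if sep and port.isdigit():
        return host, int(port)
    return server, default_port

print(parse_server("MyDatabaseServer:5439"))  # ('MyDatabaseServer', 5439)
print(parse_server("MyDatabaseServer"))       # ('MyDatabaseServer', 5439)
```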

Authentication method The authentication method you want to use to log in to Amazon Redshift. Select from the following options:
  • Username and password

    Log in to Amazon Redshift with a username and password. This is sometimes referred to as standard authentication in Amazon Redshift.

  • AWS Profile
    Log in to Amazon Redshift with IAM credentials that you have stored as a profile in your AWS credentials file.
    Note: When you select AWS Profile, the Username field controls the connection setting Dbuser.
    Note: An AWS credentials file must be available and configured on all computers running Spotfire clients where you want to use the connection. If you use Kerberos authentication in your Spotfire environment, you might have to configure an AWS credentials file per user on computers that run web player services.
  • AWS IAM Credentials
    Log in to Amazon Redshift with AWS IAM credentials. Enter your AWS IAM credentials with the settings AccessKeyID and SecretAccessKey in the Settings for authentication method table.
    Note: If you select AWS IAM Credentials and you do not enter AccessKeyID and SecretAccessKey, the Amazon Redshift ODBC driver attempts to connect using the default AWS Profile from your AWS credentials file, if you have one on your computer.
  • AD FS
    Log in to Amazon Redshift using Active Directory Federation Services. Enter the following information in the Settings for authentication method table:
    • IdP_Host: The address to your AD FS service.
  • Azure AD
    Log in to Amazon Redshift using Azure AD. Enter the following information in the Settings for authentication method table:
    • IdP_Tenant: The tenant ID of your Azure AD application.
    • Client_ID: The client ID in Azure for your Redshift cluster.
    • Client_Secret: The client secret in Azure for your Redshift cluster.
  • Okta
    Log in to Amazon Redshift with AWS IAM credentials from Okta. Enter the following information in the Settings for authentication method table:
    • IdP_Host: The address to your Okta service.
    • App_ID: The Application ID in Okta of your Redshift cluster.
    • DbUser: The database user in Amazon Redshift to use.
  • PingFederate
    Log in to Amazon Redshift with AWS IAM credentials from PingFederate. Enter the following information in the Settings for authentication method table:
    • IdP_Host: The address to your PingFederate service.
    • IdP_Port: The port number for your PingFederate service.
    • DbUser: The database user in Amazon Redshift to use.
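Settings such as IdP_Host, App_ID, and DbUser are passed to the Amazon Redshift ODBC driver as key=value pairs. The sketch below shows only that key=value assembly; the setting names are taken from the table above, but the example values are hypothetical, and whether a given key is accepted is determined by the driver, not by this code:

```python
# Illustrative sketch: assembling Okta-style authentication settings into
# an ODBC-style connection string. Not a Spotfire or driver API.
def build_connection_string(settings: dict) -> str:
    return ";".join(f"{key}={value}" for key, value in settings.items())

okta_settings = {
    "Server": "MyDatabaseServer:5439",
    "IdP_Host": "mycompany.okta.com",  # hypothetical Okta address
    "App_ID": "0oa1example",           # hypothetical application ID
    "DbUser": "analyst",               # hypothetical database user
}
print(build_connection_string(okta_settings))
```

The same pattern applies to the other identity-provider methods; only the set of keys differs.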
Settings for authentication method Settings for the selected authentication method. Based on what authentication method you select, different settings are added to the table. You can also add other relevant connection settings. To configure a setting, fill in the Value column. Then use the Save check box to select if you want the value you entered to be stored in the connection, or if you want the user to fill in their own value when they open the connection.

Save the setting value or ask the user to fill it in?

In the table, you can choose if you want to save the value that you enter for a setting. By default, Save is not selected.
  • If you do not select Save, the setting value that you enter is not stored in the connection. The setting is treated like a username or password, so when a user opens the connection, Spotfire asks them to enter a value for the setting.
    Important: In the Data Source Settings dialog, if you select Yes, save credentials with connection data source, all settings are saved with the data source.
  • If you select Save, the setting value that you enter is stored in the connection. When a user opens the connection, the same value is used to connect to the Amazon Redshift cluster.
    CAUTION: When you select Save, the value you enter is stored unencrypted in the analysis file. Do not select Save for secret information, such as usernames and passwords.
Adding a setting

To add a new setting, enter the name of the setting in the Setting column on the final row in the table. Then enter the value in the Value column. When you enter a name and value for a setting, a new row is added where you can add more settings.

Deleting a setting

To remove a connection property that you have added, click the leftmost cell in the row to select the entire row. Then press the Delete key.

Note: If your keyboard does not have a Delete key, use the corresponding keyboard shortcut instead. On a Mac computer, for example, the keyboard shortcut is Fn + Backspace.

What settings can I use?

There is no guarantee that any specific connection setting that you add works in Spotfire. For security reasons, some settings are not allowed. Also, you cannot add settings for which there are already dedicated controls or input fields.

SSL mode Specifies whether SSL should be used when connecting to the Amazon Redshift database. Select from the following options:
  • disable

    SSL is not used, and the connection is not encrypted.

  • allow

    SSL encryption is used only if the server requires it.

  • prefer

    SSL should be used if the server supports it.

  • require

    SSL must always be used.

  • verify-ca

    SSL must be used, and the certificate must be verified.

  • verify-full

    SSL must be used, and the server certificate and host name must be verified.

See the official Amazon Redshift documentation for more information about how to use SSL mode.

Database The name of the database where your data is located.
Use unicode If this check box is selected, the Amazon Redshift ODBC driver will return CHAR as SQL_WCHAR, VARCHAR as SQL_WVARCHAR, and TEXT as SQL_WLONGVARCHAR.

If this check box is not selected, the Amazon Redshift ODBC driver will return CHAR as SQL_CHAR, VARCHAR as SQL_VARCHAR, and TEXT as SQL_LONGVARCHAR.

Connection timeout (s) Specifies the maximum number of seconds allowed for a connection to the database to be established.
Note: If you set the connection timeout to zero, it will be interpreted as no timeout. That is, there will be no upper limit for trying to connect. This is generally not recommended.
Command timeout (s) Specifies the maximum number of seconds allowed for a command to be executed.
Note: If you set the command timeout to zero, it will be interpreted as no timeout. That is, there will be no upper limit for trying to execute the command. This is generally not recommended.
Settings profile The name of a settings profile to use when you open the connection in a web client or in Automation Services.

A settings profile is a way to store settings from the Settings for authentication method table, without saving the values in the analysis file. Settings profiles are stored in the configuration on Automation Services and web player services.

Additional settings
CAUTION: All the settings and values that you add in the Additional settings table are stored unencrypted in analysis files. Do not add secret information like usernames and passwords. In the Settings for authentication method table on the General tab, you can add settings without storing values in the analysis file.

Add and use additional connection settings.

In the Amazon Redshift ODBC driver, there are many connection settings for which there are no controls in Spotfire. The table enables you to use more of those settings. To learn what connection settings are available, see the documentation for the Amazon Redshift ODBC driver.

Adding a setting

In the Additional settings table, each row represents a connection setting. To add a new setting, enter the name of the setting in the Setting column, and enter the value in the Value column. When you enter a name and value for a setting, a new row is added where you can add more settings.

Deleting a setting

To remove a connection property that you have added, click the leftmost cell in the row to select the entire row. Then press the Delete key.
Tip: If your keyboard does not have a Delete key, use the corresponding keyboard shortcut instead. On a Mac computer, for example, the keyboard shortcut is Fn + Backspace.

What settings can I use?

There is no guarantee that any specific connection setting that you add works in Spotfire. For security reasons, some settings are not allowed. Also, you cannot add settings for which there are already dedicated controls or input fields.

Supported data types

When you are setting up a connection to an external data source, Spotfire needs to map the data types in the data source to data types in Spotfire. The following are the data types that the Amazon Redshift connector supports.

Database data type Spotfire data type
GEOMETRY Binary
BOOLEAN Boolean/Bool
DATE Date
TIMESTAMP DateTime
TIMESTAMPTZ DateTime
INTEGER Integer/Int
SMALLINT Integer/Int
BIGINT LongInteger/Long
FLOAT Real
DOUBLE PRECISION Real/Double
REAL SingleReal/Float
CHAR String
VARCHAR String
DECIMAL(p,s) When p <= 9 and s = 0: Integer/Int

When 9 < p <= 18 and s = 0: LongInteger/Long

When p <= 15: Real/Double

Else: Currency/Decimal
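The DECIMAL(p,s) rules above are applied in order: the first matching condition determines the Spotfire type. A small sketch of that mapping (an illustrative helper, not Spotfire code):

```python
def map_decimal(p: int, s: int) -> str:
    """Map a Redshift DECIMAL(p, s) to a Spotfire data type,
    applying the rules from the table above in order."""
    if s == 0 and p <= 9:
        return "Integer"
    if s == 0 and p <= 18:
        return "LongInteger"
    if p <= 15:
        return "Real"
    return "Currency"

print(map_decimal(9, 0))   # Integer
print(map_decimal(18, 0))  # LongInteger
print(map_decimal(12, 2))  # Real
print(map_decimal(20, 4))  # Currency
```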

Supported functions

Supported functions are the functions that you can use when you work with in-database data tables, for example for calculated columns and custom expressions.

Note: Some supported functions might not be possible to use with your database. This depends on what functions are available in the database, which often differs between database versions and types.

The following are the functions that the Amazon Redshift connector supports.

Function type Functions supported
Aggregation ApproximateCount, Bit_And, Bit_Or, Bool_And, Bool_Or, CountIf, ListAgg
Binning

Binning is supported.

For binning, you can use the Auto-bin column functionality.

You can also use the following Spotfire binning functions: BinByDateTime, BinByEvenIntervals, BinBySubstring

The custom function Width_Bucket is also available:

Width_Bucket([Column],Min,Max,Buckets)

Date and Time Add_Months, AtTimeZone, Convert_Timezone, Date, DateAdd, DateDiff, Date_Cmp, Date_Cmp_Timestamp, Date_From_Unix_Date, Date_Part, Date_Part_Year, Date_Trunc, Day, DayOfMonth, DayOfWeek, DayOfYear, Extract, Hour, Interval_Cmp, Last_Day, Millennium, Millisecond, Minute, Month, Months_Between, Next_Day, Quarter, Second, Timestamp_Cmp, Timestamp_Cmp_Date, Timestamp_Micros, Timestamp_Millis, Timestamp_Seconds, Timestamp_Trunc, Timezone, To_Timestamp, Trunc, Unix_Date, Unix_Micros, Unix_Millis, Unix_Seconds, Week, Year
Conversion SN, To_Char, To_Date, To_Number
Math Abs, ACos, ACosH, ASin, ASinH, ATan, ATan2, ATanH, Bit_Count, Cbrt, Ceil, Ceiling, Checksum, Cos, CosH, Cot, Degrees, DExp, Div, DLog1, DLog10, Exp, Floor, Fnv_Hash, From_Hex, Greatest, IEEE_Divide, Is_Inf, Is_NaN, Least, Ln, Log, Mod, Pi, Power, Radians, Random, Round, Safe_Divide, SHA, SHA1, SHA2, Sign, Sin, SinH, Sqrt, Tan, TanH, To_Hex
Operators +, -, *, /, %, &
Spatial GeometryType, ST_AddPoint, ST_Area, ST_AsBinary, ST_AsEWKB, ST_AsEWKT, ST_AsGeoJSON, ST_AsText, ST_Azimuth, ST_Contains, ST_CoveredBy, ST_Covers, ST_Dimension, ST_Disjoint, ST_Distance, ST_DistanceSphere, ST_DWithin, ST_EndPoint, ST_Envelope, ST_Equals, ST_GeometryN, ST_GeometryType, ST_GeomFromEWKB, ST_GeomFromText, ST_GeomFromWKB, ST_Intersects, ST_IsClosed, ST_IsCollection, ST_IsEmpty, ST_Length, ST_Length2D, ST_LineFromMultiPoint, ST_MakeLine, ST_MakePoint, ST_MakePolygon, ST_MemSize, ST_NPoints, ST_NRings, ST_NumGeometries, ST_NumInteriorRings, ST_NumPoints, ST_Perimeter, ST_Perimeter2D, ST_Point, ST_PointN, ST_Polygon, ST_RemovePoint, ST_SetSRID, ST_SRID, ST_StartPoint, ST_Touches, ST_Within, ST_X, ST_XMax, ST_XMin, ST_Y, ST_YMax, ST_YMin
Statistical Approximate_Percentile_Disc, Avg, Corr, Count, Covar_pop, Covar_samp, First, Last, Max, Median, Min, Percentile_Cont, Stddev_Pop, Stddev_Samp, Sum, UniqueCount, Var_Pop, Var_Samp
Text BpCharCmp, BTrim, Bttext_Pattern_Cmp, CharIndex, Char_Length, Chr, Concat, CRC32, Ends_With, Initcap, Is_Valid_JSON, Is_Valid_JSON_Array, JSON_Array_Length, JSON_Extract_Array_Element_Text, JSON_Extract_Path_Text, LeftStr, Length, Lower, Lpad, LTrim, Md5, Octet_Length, Position, Quote_Ident, Quote_Literal, Regexp_Contains, Regexp_Count, Regexp_Instr, Regexp_Replace, Regexp_Substr, Repeat, Replace, Replicate, Reverse, RightStr, Rpad, RTrim, Split_Part, Starts_With, Strpos, Strtol, Substring, Translate, Trim, Upper
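The custom function Width_Bucket([Column],Min,Max,Buckets) listed under Binning divides the range from Min to Max into Buckets equal-width bins. Assuming it follows the common SQL WIDTH_BUCKET semantics (values below Min fall in bucket 0, values at or above Max in bucket Buckets + 1), an equivalent computation looks like this (verify edge-case behavior against the Amazon Redshift documentation before relying on it):

```python
import math

def width_bucket(x: float, low: float, high: float, buckets: int) -> int:
    """Equal-width binning in the style of SQL WIDTH_BUCKET.

    Assumes the common semantics: bucket 0 for x < low, bucket
    buckets + 1 for x >= high, and 1-based buckets in between.
    Illustrative sketch only.
    """
    if x < low:
        return 0
    if x >= high:
        return buckets + 1
    return int(math.floor((x - low) / (high - low) * buckets)) + 1

print(width_bucket(5.35, 0.024, 10.06, 5))  # 3
```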