Gateway Configuration Parameters
The [gateways] section defines the connection to the target data warehouse, for example Azure Synapse Analytics or Google BigQuery. The following example shows a gateway configured for Google BigQuery:
    [gateways]
    "gateway".name = "gateway_bigquery"
    "gateway".connection_string = "Driver=/opt/simba/googlebigqueryodbc/lib/64/libgooglebigqueryodbc_sb64.so;Server=__HOST__;Port=__PORT__;Database=__DATABASE__;Uid=__USER__;Pwd=__PASSWORD__"
    "gateway".host = "servername"
    "gateway".port = 1433
    "gateway".Ipv = 4
    "gateway".type = bigquery
    "gateway".version = "100000"
    "gateway".default_table_type = "multiset"
    "gateway".db_interface = rest_api
    "gateway".parallel_conversion_available_memory_size = 3200MB
    "gateway".max_tdf_size = 100MB
    "gateway".max_tdf_rows = 10000000
| Parameter | Type | Description |
|---|---|---|
| Boolean | If set to
To learn more, see Enable or Disable Index Not Supported Error Messages |
| String | Specifies the authentication method to use. When set to |
| String | To emulate the Teradata catalog in BigQuery, Hyper-Q constructs a UNION of the values that make up the Teradata DBC views (for example, DBC.COLUMNSV and DBC.TABLESV). As the number of objects in the database grows, so does the number of UNION operations in the SQL output. BigQuery has a limit on the number of objects that can be referenced in a query. To accommodate this, Hyper-Q breaks apart the large UNION operation and materializes the pieces into temporary tables. You can specify the chunk size of these tables using the If this parameter is set too high, the row chunks in each UNION operation will be quite large. This results in fewer UNION operations; however, the larger the UNION operation, the slower its execution. If the For these reasons, the optimum value for your BigQuery environment lies somewhere between these high and low settings and differs for each workload and the data in your system. Datometry recommends experimenting with different chunk size settings to determine the optimum value for your environment. To begin tuning your BigQuery environment, experiment with the following values and observe the performance of your workload: 10, 25, 50, 100, 200, 300, 450. The value to use for a given workload will be somewhere between these values. To use
|
| Boolean | Specifies that Hyper-Q use parameterized data types on columns for the data types STRING, BYTES, NUMERIC, and BIGNUMERIC. A data type that is declared with parameters is called a parameterized data type, and a column with a parameterized data type is a parameterized column. You can only use parameterized data types with columns. Parameterized type constraints are enforced when writing a value to a parameterized column. To learn more, see Parameterized data types in the Google BigQuery documentation. To enable If enabled, the Teradata statement

    CREATE TABLE table_name (column1 CHAR(10), column2 NUMERIC, column3 NUMERIC(32,1), column4 BYTE(100));

is translated to BigQuery as:

    CREATE TABLE `project_id`.`dataset_id`.`table_name`
    (
      `column1` STRING(10),
      `column2` NUMERIC(5, 0),
      `column3` BIGNUMERIC(32, 1),
      `column4` BYTES(100)
    )

If the parameter is not enabled, the type parameters are omitted:

    CREATE TABLE `project_id`.`dataset_id`.`table_name`
    (
      `column1` STRING,
      `column2` NUMERIC,
      `column3` BIGNUMERIC,
      `column4` BYTES
    )

Note: Hyper-Q will only serialize a column data type. Currently, BigQuery does not support parameterized data types in CAST expressions. For example, the following SQL query returns an error:

    SELECT CAST ('''12.3''' AS NUMERIC(3,1));
    Parameterized types are not allowed in CAST expressions.

Important: Datometry recommends that new Hyper-Q deployments from version 3.87 onwards enable |
| String | Specifies the name case Hyper-Q uses when creating or looking up metadata objects. The
Important: You must enclose the values you assign to
|
| String | Name of the Google Cloud Storage bucket from which to bulk load data. For example:
To learn more about Google Cloud Storage buckets and loading data into BigQuery, see the following articles in the Google Cloud documentation: |
| Integer | Specifies the maximum file size of each intermediate Teradata MultiLoad or FastLoad data file. When the specified file size is reached, the file is uploaded to the database and a new file is created. This value determines the number of times Hyper-Q issues the upload command to the BigQuery upload tool and may affect job performance. For example, if the value is too small, Hyper-Q issues many commands and uploads a small file each time. If the value is too large, Hyper-Q waits a long time before any data is uploaded to the destination database. Accepted values are composed of a number and a unit. Acceptable units are TB, GB, MB, KB, and B. For example, 10MB indicates a maximum intermediate file size of 10 MB. The default value is 1GB.
|
| String | To enable the generation 2 version of bulk load error handling, specify the configuration parameter
To disable generation 2 bulk load error handling, specify
When set to
To learn more, see Enable Generation 2 Bulk Load Error Handling. |
| String | Specifies the field separator to use during bulk loading. A comma (,) is the terminator for a field.
Also see the configuration parameter |
| Integer | Specifies the maximum number of times Hyper-Q will attempt to identify a row causing an error. The default value is 0. Hyper-Q uses a slow error handling approach that may require executing a query log several times to discover the exact row causing an error. For databases with a large number of rows, this may cause performance issues due to excessive query logging. If detailed error reporting is not required, specify a value of 0.
Hyper-Q 3.93 introduced |
| String | Specifies the row separator to use during bulk loading. A
Also see the configuration parameter |
| String | Specifies the method through which to upload data to BigQuery. The options for bulk loading data are The
The
|
| String | Specifies the database connection.

    "gateway".connection_string = "Driver=__Driver__;Server=__HOST__;Port=__PORT__;Database=__DATABASE__;Uid=__USER__;Pwd=__PASSWORD__"

|
| String | Important: Beginning with Hyper-Q 3.104, the Specifies that Hyper-Q directly access Google BigQuery using the Google REST API or an external Python application and the Google Python API. The
Note: Currently, Hyper-Q does not support multi-statement requests or parameterized queries through the Google REST API. In these cases, Hyper-Q uses the external Python program and API. |
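A hedged example, assuming this row describes the db_interface key shown in the sample configuration above, selecting the Google REST API:

    "gateway".db_interface = rest_api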
| String | Specifies that Hyper-Q emulate either Teradata MULTISET or SET tables for CREATE TABLE statements that do not specify a table type. The default is to create MULTISET tables. The following example specifies that Hyper-Q emulate Teradata SET tables.
|
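A minimal sketch of the example referenced above, assuming this row describes the default_table_type key from the sample configuration; the value "set" (mirroring the "multiset" default) selects Teradata SET table emulation:

    "gateway".default_table_type = "set"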
| String | Host name of the cloud-based data warehouse server. |
| Boolean | Specifies that Hyper-Q only create error tables if and when bulk load errors are encountered. By default, Hyper-Q creates error tables and drops them if they remain empty when the bulk load job completes (in which case no errors were encountered during bulk loading). By setting
|
| Boolean | Specifies that Hyper-Q ignore statistics for missing named indexes. The ignored indexes are primary indexes. Secondary indexes are correctly recorded. To enable
To learn more, see Ignore Statistics for Missing Named Indexes |
| Integer | Specifies the type of IP address to use with the cloud-based data warehouse, which is either IPv4 or IPv6. |
| Integer | Hyper-Q receives the result from the backend and splits it into tabular data format (tdf) chunks, which are sent for conversion. Specify the chunk size as a number of rows. For example: Caution: Before modifying this parameter, contact Datometry Support to see if it is compatible with your environment. |
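A hedged example, assuming this row describes the max_tdf_rows key from the sample configuration above (10,000,000 rows per chunk):

    "gateway".max_tdf_rows = 10000000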
| Integer | Hyper-Q receives the result from the backend and splits it into tabular data format (tdf) chunks, which are sent for conversion. Specify the chunk size in bytes. For example:
Caution: Before modifying this parameter, contact Datometry Support to see if it is compatible with your environment. |
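A hedged example, assuming this row describes the max_tdf_size key from the sample configuration above (100 MB per chunk):

    "gateway".max_tdf_size = 100MB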
| String | Specifies the name by which the gateway is called by other objects in the Hyper-Q configuration file. |
| Boolean | Enables the mapping of the Teradata NUMERIC data type to the BigQuery INT64 data type. To enable the mapping of Teradata NUMERIC to BigQuery INT64, set
Caution: The Data Definition Language (DDL) commands in SQL are used to create the database schema and to define the type and structure of the data that will be stored in the database. For this reason, once you execute DDL commands on your system, you cannot enable and then disable To learn more about |
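A hedged illustration of the mapping this parameter enables; the table and column names are hypothetical, and the exact precision and scale conditions under which Hyper-Q applies the mapping are not described in this row:

    -- Teradata DDL sent to Hyper-Q (hypothetical table):
    --   CREATE TABLE accounts (account_id NUMERIC(10,0));
    -- With the mapping enabled, the column is emitted for BigQuery as INT64:
    --   `account_id` INT64
    -- With the mapping disabled, it is emitted as a BigQuery NUMERIC column:
    --   `account_id` NUMERIC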
| Integer | Specifies the maximum memory that can be used by the Hyper-Q Parallel Conversion Process per session, which controls the degree of parallelism. The default value is 1 MB. To calculate the recommended value for your deployment, multiply the value of the
Supported memory units are B (byte), KB (kilobyte), MB (megabyte), GB (gigabyte), and TB (terabyte). For example: 3200MB
Caution: Before modifying this parameter, contact Datometry Support to see if it is compatible with your environment. |
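A hedged example, assuming this row describes the parallel_conversion_available_memory_size key from the sample configuration above:

    "gateway".parallel_conversion_available_memory_size = 3200MB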
| Integer | Specifies the network port of the cloud-based data warehouse server. |
| String | Specifies the schema in which to create regular tables that emulate temporary tables. BigQuery supports datasets (schemas) with an auto-expiration option. You must create the schema with the appropriate configuration before specifying it with temp_table_schema, because the specified schema stores regular tables that expire after the expiration time has passed. To learn more, see Updating default table expiration times in the Google documentation. |
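A hedged example of this setting; the dataset name dtm_temp_tables is hypothetical and must already exist in BigQuery with a default table expiration configured:

    "gateway".temp_table_schema = "dtm_temp_tables"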
| Boolean | Specifies that Hyper-Q map Teradata TIMESTAMP to BigQuery DATETIME instead of BigQuery TIMESTAMP. By default, this configuration parameter is set to TIMESTAMP WITH TIMEZONE is not affected by the You must use this setting across the entirety of your BigQuery project, and it must remain unchanged once you determine you want to convert Teradata TIMESTAMP to BigQuery DATETIME. You cannot specify the conversion for some SQL statements and not for others. For example, if you were to turn on Teradata TIMESTAMP to BigQuery DATETIME emulation for table creation, and then turn it off to insert data, there would be an error returned due to a data type mismatch. |
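A hedged illustration of the behavior this parameter controls; the table and column names are hypothetical:

    -- Teradata DDL sent to Hyper-Q (hypothetical table):
    --   CREATE TABLE events (event_ts TIMESTAMP);
    -- With this parameter enabled, the column is emitted for BigQuery as DATETIME:
    --   `event_ts` DATETIME
    -- With the default setting, it is emitted as a BigQuery TIMESTAMP column:
    --   `event_ts` TIMESTAMP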
| String | Specifies the cloud-based data warehouse vendor. Acceptable values are: |
| Integer | Specifies that the endpoint treat the list of schemas as the default schema. In the following example, Hyper-Q treats another_schema as the default schema.
|