MySQL connector
The MySQL connector allows querying and creating tables in an external MySQL instance. This can be used to join data between different systems like MySQL and Hive, or between two different MySQL instances.
Requirements
To connect to MySQL, you need:
- MySQL 5.7, 8.0 or higher.
- Network access from the Trino coordinator and workers to MySQL. Port 3306 is the default port.
Configuration
To configure the MySQL connector, create a catalog properties file in etc/catalog named, for example, mysql.properties, to mount the MySQL connector as the mysql catalog. Create the file with the following contents, replacing the connection properties as appropriate for your setup:
connector.name=mysql
connection-url=jdbc:mysql://example.net:3306
connection-user=root
connection-password=secret
The connection-url defines the connection information and parameters to pass to the MySQL JDBC driver. The supported parameters for the URL are available in the MySQL Developer Guide.
For example, the following connection-url allows you to configure the JDBC driver to interpret time values based on UTC as a timezone on the server, and serves as a workaround for a known issue.
connection-url=jdbc:mysql://example.net:3306?serverTimezone=UTC
The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid actual values in the catalog properties files.
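For example, assuming the password has been exported in an environment variable named MYSQL_PASSWORD (a name chosen here only for illustration), you can reference it as a secret instead of writing the literal value into the file:
connection-user=root
connection-password=${ENV:MYSQL_PASSWORD}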
Connection security
If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the connection-url catalog configuration property.
For example, with version 8.0 of MySQL Connector/J, use the sslMode parameter to secure the connection with TLS. By default the parameter is set to PREFERRED, which secures the connection if enabled by the server. You can also set this parameter to REQUIRED, which causes the connection to fail if TLS is not established.
You can set the sslMode parameter in the catalog configuration file by appending it to the connection-url configuration property:
connection-url=jdbc:mysql://example.net:3306/?sslMode=REQUIRED
For more information on TLS configuration options, see the MySQL JDBC security documentation.
Multiple MySQL servers
You can have as many catalogs as you need, so if you have additional MySQL servers, simply add another properties file to etc/catalog with a different name, making sure it ends in .properties. For example, if you name the property file sales.properties, Trino creates a catalog named sales using the configured connector.
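For example, a sales.properties file pointing at a second MySQL server (the host name below is a placeholder) could contain:
connector.name=mysql
connection-url=jdbc:mysql://sales-db.example.net:3306
connection-user=root
connection-password=secret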
General configuration properties
The following table describes general catalog configuration properties for the connector:
Property name | Description | Default value |
---|---|---|
case-insensitive-name-matching | Support case insensitive schema and table names. | false |
case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table names are cached. | 1m |
case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
case-insensitive-name-matching.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. | 0 (refresh disabled) |
metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. | 0 (caching disabled) |
metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
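As a sketch, assuming you want to reduce repeated metadata lookups against MySQL, the following catalog properties enable metadata caching for ten minutes and also cache lookups of objects that do not exist:
metadata.cache-ttl=10m
metadata.cache-missing=true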
Procedures
system.flush_metadata_cache()
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:
USE example.myschema;
CALL system.flush_metadata_cache();
Case insensitive matching
When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers") then Trino fails to query them due to ambiguity.
In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:
{
"schemas": [
{
"remoteSchema": "CaseSensitiveName",
"mapping": "case_insensitive_1"
},
{
"remoteSchema": "cASEsENSITIVEnAME",
"mapping": "case_insensitive_2"
}],
"tables": [
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "tablex",
"mapping": "table_1"
},
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "TABLEX",
"mapping": "table_2"
}]
}
Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.
At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.refresh-period to have Trino refresh the properties without requiring a restart:
case-insensitive-name-matching.refresh-period=30s
Non-transactional INSERT
The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
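For example, assuming the catalog is named mysql as in the earlier examples, you can enable the behavior for a single session with the catalog session property:
SET SESSION mysql.non_transactional_insert = true;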
Type mapping
Because Trino and MySQL each support types that the other does not, this connector modifies some types when reading or writing data.
MySQL to Trino read type mapping
This connector supports reading the following MySQL types and performs conversion to Trino types with the detailed mappings as shown in the following table.
MySQL database type | Trino type | Notes |
---|---|---|
BIT | BOOLEAN | |
BOOLEAN | TINYINT | |
TINYINT | TINYINT | |
SMALLINT | SMALLINT | |
INTEGER | INTEGER | |
BIGINT | BIGINT | |
DOUBLE PRECISION | DOUBLE | |
FLOAT | REAL | |
REAL | REAL | |
DECIMAL(p, s) | DECIMAL(p, s) | See MySQL DECIMAL type handling |
CHAR(n) | CHAR(n) | |
VARCHAR(n) | VARCHAR(n) | |
TINYTEXT | VARCHAR(255) | |
TEXT | VARCHAR(65535) | |
MEDIUMTEXT | VARCHAR(16777215) | |
LONGTEXT | VARCHAR | |
BINARY, VARBINARY, TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB | VARBINARY | |
DATE | DATE | |
TIME(n) | TIME(n) | |
DATETIME(n) | TIMESTAMP(n) | |
TIMESTAMP(n) | TIMESTAMP(n) | |
No other types are supported.
Trino to MySQL write type mapping
This connector supports writing the following Trino types and performs conversion to MySQL types with the detailed mappings as shown in the following table.
Trino type | MySQL type | Notes |
---|---|---|
BOOLEAN | TINYINT | |
TINYINT | TINYINT | |
SMALLINT | SMALLINT | |
INTEGER | INTEGER | |
BIGINT | BIGINT | |
REAL | REAL | |
DOUBLE | DOUBLE PRECISION | |
DECIMAL(p, s) | DECIMAL(p, s) | See MySQL DECIMAL type handling |
CHAR(n) | CHAR(n) | |
VARCHAR(n) | VARCHAR(n) | |
DATE | DATE | |
TIME(n) | TIME(n) | |
TIMESTAMP(n) | TIMESTAMP(n) |
No other types are supported.
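As an illustration of the write mappings in the table above, the following hypothetical table can be created through the connector; each column is stored in MySQL using the corresponding mapped type:
CREATE TABLE mysql.web.page_views (
  url VARCHAR(255),
  view_time TIMESTAMP(3),
  view_count BIGINT
);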
Decimal type handling
DECIMAL types with precision larger than 38 can be mapped to a Trino DECIMAL by setting the decimal-mapping configuration property or the decimal_mapping session property to allow_overflow. The scale of the resulting type is controlled via the decimal-default-scale configuration property or the decimal_default_scale session property. The precision is always 38.
By default, values that require rounding or truncation to fit will cause a failure at runtime. This behavior is controlled via the decimal-rounding-mode configuration property or the decimal_rounding_mode session property, which can be set to UNNECESSARY (the default), UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, or HALF_EVEN (see RoundingMode).
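As a sketch, the following catalog properties map oversized decimals to DECIMAL(38, 10) and round values that do not fit the scale instead of failing:
decimal-mapping=allow_overflow
decimal-default-scale=10
decimal-rounding-mode=HALF_UP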
Type mapping configuration properties
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
Property name | Description | Default value |
---|---|---|
unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
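As an illustration, the following properties convert otherwise unsupported column types to unbounded VARCHAR, and additionally force the MySQL geometry type (named here only as an example) to be read as VARCHAR:
unsupported-type-handling=CONVERT_TO_VARCHAR
jdbc-types-mapped-to-varchar=geometry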
Querying MySQL
The MySQL connector provides a schema for every MySQL database. You can see the available MySQL databases by running SHOW SCHEMAS:
SHOW SCHEMAS FROM mysql;
If you have a MySQL database named web, you can view the tables in this database by running SHOW TABLES:
SHOW TABLES FROM mysql.web;
You can see a list of the columns in the clicks table in the web database using either of the following:
DESCRIBE mysql.web.clicks;
SHOW COLUMNS FROM mysql.web.clicks;
Finally, you can access the clicks table in the web database:
SELECT * FROM mysql.web.clicks;
If you used a different name for your catalog properties file, use that catalog name instead of mysql in the above examples.
SQL support
The connector provides read access and write access to data and metadata in the MySQL database. In addition to the globally available and read operation statements, the connector supports the following statements:
- INSERT
- DELETE
- TRUNCATE
- CREATE TABLE
- CREATE TABLE AS
- DROP TABLE
- CREATE SCHEMA
- DROP SCHEMA
SQL DELETE
If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
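For example, assuming a hypothetical click_date column of type DATE in the clicks table used earlier, the following DELETE succeeds because the predicate can be fully pushed down to MySQL:
DELETE FROM mysql.web.clicks WHERE click_date < DATE '2023-01-01';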
Performance
The connector includes a number of performance improvements, detailed in the following sections.
Table statistics
The MySQL connector can use table and column statistics for cost based optimizations, to improve query processing performance based on the actual data in the data source.
The statistics are collected by MySQL and retrieved by the connector. The table-level statistics are based on MySQL's INFORMATION_SCHEMA.TABLES table. The column-level statistics are based on MySQL's index statistics INFORMATION_SCHEMA.STATISTICS table. The connector can return column-level statistics only when the column is the first column in some index.
MySQL can automatically update its table and index statistics. In some cases, you may want to force a statistics update, for example after creating a new index or after changing the data in a table. You can do that by executing the following statement in MySQL:
ANALYZE TABLE table_name;
MySQL and Trino may use statistics information in different ways. For this reason, the accuracy of table and column statistics returned by the MySQL connector might be lower than that of other connectors.
Improving statistics accuracy
You can improve statistics accuracy with histogram statistics (available since MySQL 8.0). To create histogram statistics, execute the following statement in MySQL:
ANALYZE TABLE table_name UPDATE HISTOGRAM ON column_name1, column_name2, ...;
Refer to MySQL documentation for information about options, limitations and additional considerations.
Pushdown
The connector supports pushdown for a number of operations:
- Join pushdown
- Limit pushdown
- Top-N pushdown
- Aggregate pushdown for the following functions:
  - avg
  - count
  - max
  - min
  - sum
  - stddev
  - stddev_pop
  - stddev_samp
  - variance
  - var_pop
  - var_samp
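As an illustration of aggregate pushdown, a query such as the following over the web.clicks table from the earlier examples can be executed entirely in MySQL, with only the aggregated result returned to Trino:
SELECT count(*) FROM mysql.web.clicks;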
Cost-based join pushdown
The connector supports cost-based join pushdown to make intelligent decisions about whether to push down a join operation to the data source.
When cost-based join pushdown is enabled, the connector only pushes down join operations if the available table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur to avoid a potential decrease in query performance.
The following table describes catalog configuration properties for join pushdown:
Property name | Description | Default value |
---|---|---|
join-pushdown.enabled | Enable join pushdown. The equivalent catalog session property is join_pushdown_enabled. | true |
join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, orEAGER to push down joins whenever possible. Note thatEAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance. Because of this, EAGER is only recommended for testing and troubleshooting purposes. | AUTOMATIC |
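For example, assuming the mysql catalog name, you can disable join pushdown for a single session while troubleshooting a query by using the catalog session property listed above:
SET SESSION mysql.join_pushdown_enabled = false;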
Predicate pushdown support
The connector does not support pushdown of any predicates on columns with textual types like CHAR or VARCHAR. This ensures correctness of results since the data source may compare strings case-insensitively.
In the following example, the predicate is not pushed down for either query since name is a column of type VARCHAR:
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';