ClickHouse connector
The ClickHouse connector allows querying tables in an external ClickHouse server. This can be used to query data in the databases on that server, or to combine it with data from other catalogs accessing ClickHouse or any other supported data source.
Requirements
To connect to a ClickHouse server, you need:
- ClickHouse (version 21.3 or higher) or Altinity (version 20.8 or higher).
- Network access from the Trino coordinator and workers to the ClickHouse server. Port 8123 is the default port.
Configuration
The connector can query a ClickHouse server. Create a catalog properties file that specifies the ClickHouse connector by setting the connector.name property to clickhouse.
For example, to access a server as clickhouse, create the file etc/catalog/clickhouse.properties. Replace the connection properties as appropriate for your setup:
connector.name=clickhouse
connection-url=jdbc:clickhouse://host1:8123/
connection-user=exampleuser
connection-password=examplepassword
Trino uses the new ClickHouse JDBC driver (com.clickhouse.jdbc.ClickHouseDriver) by default. The new driver only supports ClickHouse server versions 20.7 and later. For compatibility with ClickHouse server versions older than 20.7, you can temporarily continue to use the old driver (ru.yandex.clickhouse.ClickHouseDriver) by adding the following catalog property:
clickhouse.legacy-driver=true
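As a sketch, reusing the hypothetical host and credentials from the example above, a complete catalog file for an older server could look like this:
connector.name=clickhouse
connection-url=jdbc:clickhouse://host1:8123/
connection-user=exampleuser
connection-password=examplepassword
# Required only for ClickHouse server versions older than 20.7
clickhouse.legacy-driver=true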
Connection security
If you have TLS configured with a globally-trusted certificate installed
on your data source, you can enable TLS between your cluster and the
data source by appending a parameter to the JDBC connection string set
in the connection-url
catalog configuration property.
For example, with version 2.6.4 of the ClickHouse JDBC driver, enable
TLS by appending the ssl=true
parameter to the connection-url
configuration property:
connection-url=jdbc:clickhouse://host1:8123/?ssl=true
For more information on TLS configuration options, see the ClickHouse JDBC driver documentation.
Multiple ClickHouse servers
If you have multiple ClickHouse servers, you need to configure one catalog for each server. To add another catalog:
- Add another properties file to
etc/catalog
- Save it with a different name that ends in
.properties
For example, if you name the property file sales.properties
, Trino
uses the configured connector to create a catalog named sales
.
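For example, a sales.properties file pointing at a second, hypothetical server host2 could contain:
connector.name=clickhouse
connection-url=jdbc:clickhouse://host2:8123/
connection-user=exampleuser
connection-password=examplepassword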
General configuration properties
The following table describes general catalog configuration properties for the connector:
Property name | Description | Default value |
---|---|---|
case-insensitive-name-matching | Support case insensitive schema and table names. | false |
case-insensitive-name-matching.cache-ttl | Duration for which remote schema and table names resolved with case insensitive matching are cached. | 1m |
case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
case-insensitive-name-matching.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. | 0 (refresh disabled) |
metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. | 0 (caching disabled) |
metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
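As an illustration, the following catalog file fragment enables case insensitive matching and metadata caching. The durations and sizes are arbitrary values chosen for this sketch, not recommendations:
case-insensitive-name-matching=true
case-insensitive-name-matching.cache-ttl=5m
metadata.cache-ttl=10m
metadata.cache-maximum-size=5000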
Procedures
system.flush_metadata_cache()
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:
USE example.myschema;
CALL system.flush_metadata_cache();
Case insensitive matching
When case-insensitive-name-matching
is set to true
, Trino is able to
query non-lowercase schemas and tables by maintaining a mapping of the
lowercase name to the actual name in the remote system. However, if two
schemas and/or tables have names that differ only in case (such as
"customers" and "Customers") then Trino fails to query them due to
ambiguity.
In these cases, use the case-insensitive-name-matching.config-file
catalog configuration property to specify a configuration file that maps
these remote schemas/tables to their respective Trino schemas/tables:
{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}
Queries against one of the tables or schemas defined in the mapping
attributes are run against the corresponding remote entity. For example,
a query against tables in the case_insensitive_1
schema is forwarded
to the CaseSensitiveName schema and a query against case_insensitive_2
is forwarded to the cASEsENSITIVEnAME
schema.
At the table mapping level, a query on case_insensitive_1.table_1
as
configured above is forwarded to CaseSensitiveName.tablex
, and a query
on case_insensitive_1.table_2
is forwarded to
CaseSensitiveName.TABLEX
.
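For example, assuming the mapping file above is used by a catalog named example (a hypothetical name), the following queries reach CaseSensitiveName.tablex and CaseSensitiveName.TABLEX, respectively:
SELECT * FROM example.case_insensitive_1.table_1;
SELECT * FROM example.case_insensitive_1.table_2;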
By default, when a change is made to the mapping configuration file,
Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.refresh-period property to have Trino refresh the mapping without requiring a restart:
case-insensitive-name-matching.refresh-period=30s
Non-transactional INSERT
The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.
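For example, assuming a catalog named example (a hypothetical name), enable the behavior for the current session before running the insert:
SET SESSION example.non_transactional_insert = true;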
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
Querying ClickHouse
The ClickHouse connector provides a schema for every ClickHouse database. Run SHOW SCHEMAS to see the available ClickHouse databases:
SHOW SCHEMAS FROM myclickhouse;
If you have a ClickHouse database named web
, run SHOW TABLES
to view
the tables in this database:
SHOW TABLES FROM myclickhouse.web;
Run DESCRIBE or SHOW COLUMNS to list the columns in the clicks table in the web database:
DESCRIBE myclickhouse.web.clicks;
SHOW COLUMNS FROM myclickhouse.web.clicks;
Run SELECT
to access the clicks
table in the web
database:
SELECT * FROM myclickhouse.web.clicks;
If you used a different name for your catalog properties file, use that
catalog name instead of myclickhouse
in the above examples.
Table properties
Table property usage example:
CREATE TABLE default.trino_ck (
id int NOT NULL,
birthday DATE NOT NULL,
name VARCHAR,
age BIGINT,
logdate DATE NOT NULL
)
WITH (
engine = 'MergeTree',
order_by = ARRAY['id', 'birthday'],
partition_by = ARRAY['toYYYYMM(logdate)'],
primary_key = ARRAY['id'],
sample_by = 'id'
);
The following ClickHouse table properties are supported, as described at
https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/
Property Name | Default Value | Description |
---|---|---|
engine | Log | Name and parameters of the engine. |
order_by | (none) | Array of columns or expressions to concatenate to create the sorting key. Required if engine is MergeTree. |
partition_by | (none) | Array of columns or expressions to use as nested partition keys. Optional. |
primary_key | (none) | Array of columns or expressions to concatenate to create the primary key. Optional. |
sample_by | (none) | An expression to use for sampling. Optional. |
Currently, the connector only supports the Log and MergeTree table engines in the CREATE TABLE statement. The ReplicatedMergeTree engine is not yet supported.
Type mapping
The data type mappings are as follows:
ClickHouse | Trino | Notes |
---|---|---|
Int8 | TINYINT | TINYINT, BOOL, BOOLEAN, and INT1 are aliases of Int8 |
Int16 | SMALLINT | SMALLINT and INT2 are aliases of Int16 |
Int32 | INTEGER | INT, INT4, and INTEGER are aliases of Int32 |
Int64 | BIGINT | BIGINT is an alias of Int64 |
UInt8 | SMALLINT | |
UInt16 | INTEGER | |
UInt32 | BIGINT | |
UInt64 | DECIMAL(20,0) | |
Float32 | REAL | FLOAT is an alias of Float32 |
Float64 | DOUBLE | DOUBLE is an alias of Float64 |
Decimal | DECIMAL | |
FixedString | VARBINARY | Enabling clickhouse.map-string-as-varchar config property changes the mapping to VARCHAR |
String | VARBINARY | Enabling clickhouse.map-string-as-varchar config property changes the mapping to VARCHAR |
Date | DATE | |
DateTime | TIMESTAMP | |
IPv4 | IPADDRESS | |
IPv6 | IPADDRESS | |
Enum8 | VARCHAR | |
Enum16 | VARCHAR | |
UUID | UUID | |
Type mapping configuration properties
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
Property name | Description | Default value |
---|---|---|
unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
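For example, a catalog file could convert otherwise unsupported columns to unbounded VARCHAR. The type list below is purely illustrative:
unsupported-type-handling=CONVERT_TO_VARCHAR
jdbc-types-mapped-to-varchar=Int128,UInt128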
SQL support
The connector provides read and write access to data and metadata in a ClickHouse catalog. In addition to the globally available and read operation statements, the connector supports the following features:
- INSERT
- TRUNCATE
- Schema and table management statements
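For example, assuming the web.clicks table from the earlier examples in a catalog named example (names and columns hypothetical), rows can be added and removed as follows:
INSERT INTO example.web.clicks (id, logdate) VALUES (1, DATE '2023-01-01');
TRUNCATE TABLE example.web.clicks;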
ALTER SCHEMA
The connector supports renaming a schema with the ALTER SCHEMA RENAME
statement. ALTER SCHEMA SET AUTHORIZATION
is not supported.
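For example, to rename a hypothetical web schema in a catalog named example:
ALTER SCHEMA example.web RENAME TO web_archive;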
Performance
The connector includes a number of performance improvements, detailed in the following sections.
Pushdown
The connector supports pushdown for a number of operations:
- Limit pushdown
- Aggregate pushdown for the following functions:
  - avg
  - count
  - max
  - min
  - sum
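For example, an aggregation limited to these functions can be executed entirely in ClickHouse. You can verify this with EXPLAIN, which shows the aggregation inside the remote table scan when pushdown applies (catalog and table names are the hypothetical ones used earlier):
EXPLAIN SELECT count(*) FROM example.web.clicks;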
Predicate pushdown support
The connector does not support pushdown of any predicates on columns with textual types like CHAR or VARCHAR. This ensures correctness of results since the data source may compare strings case-insensitively.
In the following example, the predicate is not pushed down for either
query since name
is a column of type VARCHAR
:
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';