SingleStore (MemSQL) connector
The SingleStore (formerly known as MemSQL) connector allows querying and creating tables in an external SingleStore database.
Requirements
To connect to SingleStore, you need:
- SingleStore version 7.1.4 or higher.
- Network access from the Trino coordinator and workers to SingleStore. Port 3306 is the default port.
Configuration
To configure the SingleStore connector, create a catalog properties file in etc/catalog named, for example, singlestore.properties, to mount the SingleStore connector as the singlestore catalog. Create the file with the following contents, replacing the connection properties as appropriate for your setup:
connector.name=singlestore
connection-url=jdbc:singlestore://example.net:3306
connection-user=root
connection-password=secret
The connection-url defines the connection information and parameters to pass to the SingleStore JDBC driver. The supported parameters for the URL are available in the SingleStore JDBC driver documentation.

The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid storing actual values in the catalog properties files.
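For example, assuming the password is exported in a SINGLESTORE_PASSWORD environment variable on every node (the variable name here is illustrative), you can reference it with Trino's environment variable substitution instead of writing the value into the file:

connection-password=${ENV:SINGLESTORE_PASSWORD}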
Connection security
If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the connection-url catalog configuration property.

Enable TLS between your cluster and SingleStore by appending the useSsl=true parameter to the connection-url configuration property:
connection-url=jdbc:singlestore://example.net:3306/?useSsl=true
For more information on TLS configuration options, see the JDBC driver documentation.
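If the server certificate is not globally trusted, the driver typically also accepts truststore parameters appended to the same URL. The parameter names below (trustStore, trustStorePassword) and the path are assumptions for illustration; verify them against the documentation for your driver version:

connection-url=jdbc:singlestore://example.net:3306/?useSsl=true&trustStore=/path/to/truststore.jks&trustStorePassword=secret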
Multiple SingleStore servers
You can have as many catalogs as you need. If you have additional SingleStore servers, add another properties file to etc/catalog with a different name, making sure it ends in .properties. For example, if you name the property file sales.properties, Trino creates a catalog named sales using the configured connector.
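For example, a second catalog file etc/catalog/sales.properties could contain the following; the host and credentials are placeholders for illustration:

connector.name=singlestore
connection-url=jdbc:singlestore://sales-db.example.net:3306
connection-user=trino_service
connection-password=${ENV:SALES_DB_PASSWORD}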
General configuration properties
The following table describes general catalog configuration properties for the connector:
Property name | Description | Default value |
---|---|---|
case-insensitive-name-matching | Support case insensitive schema and table names. | false |
case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table names are cached. | 1m |
case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
case-insensitive-name-matching.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. | 0 (refresh disabled) |
metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. | 0 (caching disabled) |
metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
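For example, to enable case insensitive name matching and cache metadata for ten minutes, add the following to the catalog properties file; the values shown are illustrative:

case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true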
Procedures
system.flush_metadata_cache()
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

USE example.myschema;
CALL system.flush_metadata_cache();
Case insensitive matching
When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers") then Trino fails to query them due to ambiguity.

In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:
{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}
Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema, and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.

At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.refresh-period to have Trino refresh the properties without requiring a restart:

case-insensitive-name-matching.refresh-period=30s
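Putting it together, the relevant catalog properties might look like the following; the mapping file path is an illustrative placeholder:

case-insensitive-name-matching=true
case-insensitive-name-matching.config-file=etc/singlestore-name-mapping.json
case-insensitive-name-matching.refresh-period=30s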
Non-transactional INSERT
The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
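For example, to enable this behavior for a single session only, set the session property on the catalog; the catalog name singlestore matches the example configuration above:

SET SESSION singlestore.non_transactional_insert = true;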
Querying SingleStore
The SingleStore connector provides a schema for every SingleStore database. You can see the available SingleStore databases by running SHOW SCHEMAS:

SHOW SCHEMAS FROM singlestore;

If you have a SingleStore database named web, you can view the tables in this database by running SHOW TABLES:

SHOW TABLES FROM singlestore.web;

You can see a list of the columns in the clicks table in the web database using either of the following:

DESCRIBE singlestore.web.clicks;
SHOW COLUMNS FROM singlestore.web.clicks;

Finally, you can access the clicks table in the web database:

SELECT * FROM singlestore.web.clicks;

If you used a different name for your catalog properties file, use that catalog name instead of singlestore in the above examples.
Type mapping
Type mapping configuration properties
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
Property name | Description | Default value |
---|---|---|
unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
SQL support
The connector provides read access and write access to data and metadata in a SingleStore database. In addition to the globally available and read operation statements, the connector supports the following features:
- INSERT
- DELETE
- TRUNCATE
- CREATE TABLE
- CREATE TABLE AS
- DROP TABLE
- ALTER TABLE
- CREATE SCHEMA
- DROP SCHEMA
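For example, you can create a table through the catalog and add rows to it; the table and column names below are illustrative:

CREATE TABLE singlestore.web.page_views (
    user_id BIGINT,
    page_url VARCHAR,
    view_time TIMESTAMP
);

INSERT INTO singlestore.web.page_views
VALUES (42, 'https://example.net/index.html', TIMESTAMP '2023-01-01 10:00:00');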
SQL DELETE
If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
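For example, assuming visitor_id is a numeric column in the clicks table (a hypothetical column for illustration), the predicate in the following DELETE can be pushed down in full, so the statement succeeds:

DELETE FROM singlestore.web.clicks WHERE visitor_id = 42;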
ALTER TABLE
The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:
ALTER TABLE catalog.schema_one.table_one RENAME TO catalog.schema_one.table_two
The following statement attempts to rename a table across schemas, and therefore is not supported:
ALTER TABLE catalog.schema_one.table_one RENAME TO catalog.schema_two.table_two
Performance
The connector includes a number of performance improvements, detailed in the following sections.
Pushdown
The connector supports pushdown for a number of operations:
- Join pushdown
- Limit pushdown
- Top-N pushdown
Join pushdown
The join-pushdown.enabled catalog configuration property or join_pushdown_enabled catalog session property control whether the connector pushes down join operations. The property defaults to false, and enabling join pushdowns may negatively impact performance for some queries.
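To experiment with join pushdown in a single session without changing the catalog configuration, set the session property; the catalog name singlestore matches the example configuration above:

SET SESSION singlestore.join_pushdown_enabled = true;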
Predicate pushdown support
The connector does not support pushdown of any predicates on columns with textual types like CHAR or VARCHAR. This ensures correctness of results since the data source may compare strings case-insensitively.

In the following example, the predicate is not pushed down for either query since name is a column of type VARCHAR:
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';
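By contrast, predicates on non-textual columns can be pushed down. Assuming nationkey is a numeric column, the predicate in the following query is evaluated in SingleStore rather than in Trino:

SELECT * FROM nation WHERE nationkey = 19;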