BigQuery connector

The BigQuery connector allows querying data stored in BigQuery. This can be used to join data between different systems like BigQuery and Hive. The connector uses the BigQuery Storage API to read data from the tables.

BigQuery Storage API

The Storage API streams data in parallel directly from BigQuery via gRPC without using Google Cloud Storage as an intermediary. It has a number of advantages over using the previous export-based read flow that should generally lead to better read performance:

Direct Streaming
It does not leave any temporary files in Google Cloud Storage. Rows are read directly from BigQuery servers using an Avro wire format.

Column Filtering
The new API allows column filtering to only read the data you are interested in. Backed by a columnar datastore, it can efficiently stream data without reading all columns.

Dynamic Sharding
The API rebalances records between readers until they all complete. This means that all Map phases will finish nearly concurrently. See this blog article on how dynamic sharding is similarly used in Google Cloud Dataflow.

Requirements

To connect to BigQuery, you need:

  • To enable the BigQuery Storage Read API.

  • Network access from your Trino coordinator and workers to the Google Cloud API service endpoint. This endpoint uses HTTPS on port 443.

  • To configure BigQuery so that the Trino coordinator and workers have permissions in BigQuery.

  • To set up authentication. Your authentication options differ depending on whether you are using Dataproc/Google Compute Engine (GCE) or not.

    On Dataproc/GCE the authentication is done from the machine's role.

    Outside Dataproc/GCE you have three options:

    • Use a service account JSON key and GOOGLE_APPLICATION_CREDENTIALS as described in the Google Cloud authentication getting started guide.
    • Set bigquery.credentials-key in the catalog properties file. It should contain the contents of the JSON file, encoded using base64, as in the example after this list.
    • Set bigquery.credentials-file in the catalog properties file. It should point to the location of the JSON file.
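
For example, a catalog properties file using a base64-encoded key could contain the following; the key value is a placeholder that you must replace with your own encoded credentials:

connector.name=bigquery
bigquery.project-id=<your Google Cloud Platform project id>
bigquery.credentials-key=<base64-encoded contents of the JSON key file>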

Configuration

To configure the BigQuery connector, create a catalog properties file in etc/catalog named, for example, bigquery.properties, to mount the BigQuery connector as the bigquery catalog. Create the file with the following contents, replacing the connection properties as appropriate for your setup:

connector.name=bigquery
bigquery.project-id=<your Google Cloud Platform project id>

Multiple GCP projects

The BigQuery connector can only access a single GCP project. Thus, if you have data in multiple GCP projects, you need to create several catalogs, each pointing to a different GCP project. For example, if you have two GCP projects, one for sales and one for analytics, you can create two properties files in etc/catalog named sales.properties and analytics.properties, both having connector.name=bigquery but with different project-id values, as sketched below. This creates the two catalogs sales and analytics, respectively.
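
A minimal sketch of the two catalog files, assuming hypothetical project IDs. etc/catalog/sales.properties:

connector.name=bigquery
bigquery.project-id=sales-project-id

etc/catalog/analytics.properties:

connector.name=bigquery
bigquery.project-id=analytics-project-id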

Configuring partitioning

By default the connector creates one partition per 400 MB in the table being read (before filtering). This should roughly correspond to the maximum number of readers supported by the BigQuery Storage API. The partition count can be configured explicitly with the bigquery.parallelism property. BigQuery may limit the number of partitions based on server constraints.
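
For example, to request a fixed number of partitions instead of the default, add a line such as the following to the catalog properties file (the value 8 is only illustrative):

bigquery.parallelism=8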

Reading from views

The connector has preliminary support for reading from BigQuery views. Note the following caveats:

  • Reading from views is disabled by default. In order to enable it, set the bigquery.views-enabled configuration property to true.
  • BigQuery views are not materialized by default, which means that the connector needs to materialize them before it can read them. This process affects the read performance.
  • The materialization process can also incur additional costs to your BigQuery bill.
  • By default, the materialized views are created in the same project and dataset as the view. These can be configured with the optional bigquery.view-materialization-project and bigquery.view-materialization-dataset properties, respectively. The service account must have write permission to the project and the dataset in order to materialize the view. See the sample configuration after this list.
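
For example, the following catalog properties enable reading from views and direct the materialization to a hypothetical dataset named trino_materialized (the dataset name is a placeholder):

bigquery.views-enabled=true
bigquery.view-materialization-dataset=trino_materialized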

Configuration properties

bigquery.project-id
    The Google Cloud Project ID where the data reside.
    Default: taken from the service account.

bigquery.parent-project-id
    The Google Cloud Project ID to bill for the export.
    Default: taken from the service account.

bigquery.parallelism
    The number of partitions to split the data into.
    Default: the number of executors.

bigquery.views-enabled
    Enables the connector to read from views and not only tables. Read the "Reading from views" section before enabling this feature.
    Default: false.

bigquery.view-expire-duration
    Expire duration for the materialized view.
    Default: 24h.

bigquery.view-materialization-project
    The project where the materialized view is created.
    Default: the view's project.

bigquery.view-materialization-dataset
    The dataset where the materialized view is created.
    Default: the view's dataset.

bigquery.skip-view-materialization
    Use the REST API to access views instead of the Storage API. The BigQuery BIGNUMERIC and TIMESTAMP types are unsupported.
    Default: false.

bigquery.views-cache-ttl
    Duration for which the materialization of a view is cached and reused. Set to 0ms to disable the cache.
    Default: 15m.

bigquery.max-read-rows-retries
    The number of retries in case of retryable server issues.
    Default: 3.

bigquery.credentials-key
    The base64-encoded credentials key.
    Default: none. See the "Requirements" section.

bigquery.credentials-file
    The path to the JSON credentials file.
    Default: none. See the "Requirements" section.

bigquery.case-insensitive-name-matching
    Match dataset and table names case-insensitively.
    Default: false.

bigquery.query-results-cache.enabled
    Enable the query results cache.
    Default: false.

Data types

With a few exceptions, all BigQuery types are mapped directly to their Trino counterparts. Here are all the mappings:

BigQuery      Trino                          Notes
ARRAY         ARRAY
BOOLEAN       BOOLEAN
BYTES         VARBINARY
DATE          DATE
DATETIME      TIMESTAMP(6)
FLOAT         DOUBLE
GEOGRAPHY     VARCHAR                        In Well-known text (WKT) format
INTEGER       BIGINT
NUMERIC       DECIMAL(P,S)                   Defaults to precision 38 and scale 9
BIGNUMERIC    DECIMAL(P,S)                   Precision > 38 is not supported. The default precision and scale of BIGNUMERIC is (77, 38).
RECORD        ROW
STRING        VARCHAR
TIME          TIME(6)
TIMESTAMP     TIMESTAMP(6) WITH TIME ZONE    Time zone is UTC
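
To see how the columns of a given table are mapped, you can inspect the table from Trino. For example, using the hypothetical bigquery.web.page_views table referenced elsewhere in this document:

DESCRIBE bigquery.web.page_views;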

System tables

For each Trino table that maps to a BigQuery view, there is a corresponding system table that exposes the BigQuery view definition. Given a BigQuery view customer_view, you can query the customer_view$view_definition system table to see the SQL that defines the view in BigQuery, as in the example below.
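
For example, assuming the hypothetical bigquery catalog and web schema used elsewhere in this document (the quotes are required because of the $ character):

SELECT * FROM bigquery.web."customer_view$view_definition";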

Special columns

In addition to the defined columns, the BigQuery connector exposes partition information in a number of hidden columns:

  • $partition_date: Equivalent to _PARTITIONDATE pseudo-column in BigQuery
  • $partition_time: Equivalent to _PARTITIONTIME pseudo-column in BigQuery

You can use these columns in your SQL statements like any other column. They can be selected directly, or used in conditional statements. For example, you can inspect the partition date and time for each record:

SELECT *, "$partition_date", "$partition_time"
FROM bigquery.web.page_views;

Retrieve all records stored in the partition _PARTITIONDATE = '2022-04-07':

SELECT *
FROM bigquery.web.page_views
WHERE "$partition_date" = date '2022-04-07';

Note: Two special partitions __NULL__ and __UNPARTITIONED__ are not supported.

SQL support

The connector provides read and write access to data and metadata in the BigQuery database. In addition to the globally available and read operation statements, the connector supports the following statements, illustrated in the sketch after this list:

  • CREATE TABLE
  • DROP TABLE
  • CREATE SCHEMA
  • DROP SCHEMA
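
A minimal sketch of these statements, assuming the bigquery catalog from the configuration example above and hypothetical schema and table names:

CREATE SCHEMA bigquery.examples_schema;

CREATE TABLE bigquery.examples_schema.events (
    event_id BIGINT,
    event_time TIMESTAMP(6) WITH TIME ZONE
);

DROP TABLE bigquery.examples_schema.events;

DROP SCHEMA bigquery.examples_schema;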

FAQ

What is the pricing for the Storage API?

See the BigQuery pricing documentation.