class Google::Cloud::Bigquery::QueryJob::Updater

Yielded to a block to accumulate changes for a patch request.

Public Class Methods

from_options(service, query, options)

@private Create an Updater from an options hash.

@return [Google::Cloud::Bigquery::QueryJob::Updater] A job
  configuration object for setting query options.

# File lib/google/cloud/bigquery/query_job.rb, line 760
def self.from_options service, query, options
  job_ref = service.job_ref_from options[:job_id], options[:prefix]
  dataset_config = service.dataset_ref_from options[:dataset],
                                            options[:project]
  req = Google::Apis::BigqueryV2::Job.new(
    job_reference: job_ref,
    configuration: Google::Apis::BigqueryV2::JobConfiguration.new(
      query: Google::Apis::BigqueryV2::JobConfigurationQuery.new(
        query: query,
        default_dataset: dataset_config,
        maximum_billing_tier: options[:maximum_billing_tier]
      )
    )
  )

  updater = QueryJob::Updater.new service, req
  updater.set_params_and_types options[:params], options[:types] if options[:params]
  updater.create = options[:create]
  updater.write = options[:write]
  updater.table = options[:table]
  updater.dryrun = options[:dryrun]
  updater.maximum_bytes_billed = options[:maximum_bytes_billed]
  updater.labels = options[:labels] if options[:labels]
  updater.legacy_sql = Convert.resolve_legacy_sql options[:standard_sql], options[:legacy_sql]
  updater.external = options[:external] if options[:external]
  updater.priority = options[:priority]
  updater.cache = options[:cache]
  updater.large_results = options[:large_results]
  updater.flatten = options[:flatten]
  updater.udfs = options[:udfs]
  updater
end
new(service, gapi)

@private Create an Updater object.

Calls superclass method Google::Cloud::Bigquery::Job::new
# File lib/google/cloud/bigquery/query_job.rb, line 749
def initialize service, gapi
  super()
  @service = service
  @gapi = gapi
end

Public Instance Methods

cache=(value)

Specifies whether to look in the query cache for results.

@param [Boolean] value Whether to look for the result in the query
  cache. The query cache is a best-effort cache that will be flushed
  whenever tables in the query are modified. The default value is
  true. For more information, see [query
  caching](https://developers.google.com/bigquery/querying-data).

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 842
def cache= value
  @gapi.configuration.query.use_query_cache = value
end
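
For example, a minimal sketch that bypasses the cache (`my_dataset.my_table` is an illustrative name):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.cache = false # always recompute instead of serving cached results
end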
cancel()
# File lib/google/cloud/bigquery/query_job.rb, line 1539
def cancel
  raise "not implemented in #{self.class}"
end
clustering_fields=(fields)

Sets the list of fields on which data should be clustered.

Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.

BigQuery supports clustering for both partitioned and non-partitioned tables.

See {QueryJob#clustering_fields}, {Table#clustering_fields} and {Table#clustering_fields=}.

@see https://cloud.google.com/bigquery/docs/clustered-tables
  Introduction to clustered tables
@see https://cloud.google.com/bigquery/docs/creating-clustered-tables
  Creating and using clustered tables

@param [Array<String>] fields The clustering fields. Only top-level,
  non-repeated, simple-type fields are supported.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM my_table" do |job|
  job.table = destination_table
  job.time_partitioning_type = "DAY"
  job.time_partitioning_field = "dob"
  job.clustering_fields = ["last_name", "first_name"]
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1534
def clustering_fields= fields
  @gapi.configuration.query.clustering ||= Google::Apis::BigqueryV2::Clustering.new
  @gapi.configuration.query.clustering.fields = fields
end
create=(value)

Sets the create disposition for creating the query results table.

@param [String] value Specifies whether the job is allowed to create new tables. The default value is `needed`.

The following values are supported:

* `needed` - Create the table if it does not exist.
* `never` - The table must already exist. A 'notFound' error is
  raised if the table does not exist.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1017
def create= value
  @gapi.configuration.query.create_disposition = Convert.create_disposition value
end
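
A minimal sketch of the `never` disposition (the dataset and table names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT * FROM my_table" do |job|
  job.table = dataset.table "my_results", skip_lookup: true
  job.create = "never" # raise notFound unless my_results already exists
end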
dataset=(value)

Sets the default dataset of tables referenced in the query.

@param [Dataset] value The default dataset to use for unqualified
  table names in the query.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 879
def dataset= value
  @gapi.configuration.query.default_dataset = @service.dataset_ref_from value
end
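
A minimal sketch (assumes a dataset named `my_dataset` exists; names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT * FROM my_table" do |job|
  job.dataset = dataset # `my_table` now resolves to my_dataset.my_table
end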
dry_run=(value)
Alias for: dryrun=
dryrun=(value)

Sets the dry run flag for the query job.

@param [Boolean] value If set, don't actually run this job. A valid
  query will return a mostly empty response with some processing
  statistics, while an invalid query will return the same error it
  would if it weren't a dry run.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1048
def dryrun= value
  @gapi.configuration.dry_run = value
end
Also aliased as: dry_run=
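
A minimal sketch of validating a query without running it (the table name is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.dryrun = true # validate and estimate the query without executing it
end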
encryption=(val)

Sets the encryption configuration of the destination table.

@param [Google::Cloud::Bigquery::EncryptionConfiguration] val
  Custom encryption configuration (e.g., Cloud KMS keys).

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d"
encrypt_config = bigquery.encryption kms_key: key_name
job = bigquery.query_job "SELECT 1;" do |job|
  job.table = dataset.table "my_table", skip_lookup: true
  job.encryption = encrypt_config
end

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1188
def encryption= val
  @gapi.configuration.query.update! destination_encryption_configuration: val.to_gapi
end
external=(value)

Sets definitions for external tables used in the query.

@param [Hash<String|Symbol, External::DataSource>] value A Hash
  that represents the mapping of the external tables to the table
  names used in the SQL query. The hash keys are the table names,
  and the hash values are the external table objects.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1146
def external= value
  external_table_pairs = value.map { |name, obj| [String(name), obj.to_gapi] }
  external_table_hash = Hash[external_table_pairs]
  @gapi.configuration.query.table_definitions = external_table_hash
end
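
A minimal sketch that queries a CSV file in Cloud Storage (the bucket path and `my_ext_table` name are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_table = bigquery.external "gs://my-bucket/path/to/data.csv" do |csv|
  csv.autodetect = true
end

job = bigquery.query_job "SELECT * FROM my_ext_table" do |job|
  job.external = { my_ext_table: csv_table }
end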
flatten=(value)

Flattens nested and repeated fields in legacy SQL queries.

@param [Boolean] value This option is specific to legacy SQL.
  Flattens all nested and repeated fields in the query results. The
  default value is `true`. The `large_results` parameter must be `true`
  if this is set to `false`.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 868
def flatten= value
  @gapi.configuration.query.flatten_results = value
end
labels=(value)

Sets the labels to use for the job.

@param [Hash] value A hash of user-provided labels associated with
  the job. You can use these to organize and group your jobs.

The labels applied to a resource must meet the following requirements:

* Each resource can have multiple labels, up to a maximum of 64.
* Each label must be a key-value pair.
* Keys have a minimum length of 1 character and a maximum length of
  63 characters, and cannot be empty. Values can be empty, and have
  a maximum length of 63 characters.
* Keys and values can contain only lowercase letters, numeric characters,
  underscores, and dashes. All characters must use UTF-8 encoding, and
  international characters are allowed.
* The key portion of a label must be unique. However, you can use the
  same key with multiple resources.
* Keys must start with a lowercase letter or international character.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1100
def labels= value
  @gapi.configuration.update! labels: value
end
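
A minimal sketch (the label keys and values are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT 1;" do |job|
  job.labels = { "env" => "dev", "team" => "analytics" }
end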
large_results=(value)

Allows large results for a legacy SQL query.

@param [Boolean] value This option is specific to legacy SQL.
  If `true`, allows the query to produce arbitrarily large result
  tables at a slight cost in performance. Requires the `table`
  parameter to be set.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 855
def large_results= value
  @gapi.configuration.query.allow_large_results = value
end
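
The legacy SQL result options (`large_results` and `flatten`) are typically set together; a minimal sketch (names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT * FROM [my_dataset.my_table]" do |job|
  job.legacy_sql = true
  job.table = dataset.table "my_results", skip_lookup: true
  job.large_results = true
  job.flatten = false # requires large_results to be true
end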
legacy_sql=(value)

Sets the query syntax to legacy SQL.

@param [Boolean] value Specifies whether to use BigQuery's [legacy
  SQL](https://cloud.google.com/bigquery/docs/reference/legacy-sql)
  dialect for this query. If set to false, the query will use
  BigQuery's [standard
  SQL](https://cloud.google.com/bigquery/docs/reference/standard-sql/)
  dialect. Optional. The default value is false.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1116
def legacy_sql= value
  @gapi.configuration.query.use_legacy_sql = value
end
location=(value)

Sets the geographic location where the job should run. Required except for US and EU.

@param [String] value A geographic location, such as "US", "EU" or
  "asia-northeast1". Required except for US and EU.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT 1;" do |query|
  query.table = dataset.table "my_table", skip_lookup: true
  query.location = "EU"
end

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 812
def location= value
  @gapi.job_reference.location = value
  return unless value.nil?

  # Treat assigning value of nil the same as unsetting the value.
  unset = @gapi.job_reference.instance_variables.include? :@location
  @gapi.job_reference.remove_instance_variable :@location if unset
end
maximum_bytes_billed=(value)

Sets the maximum bytes billed for the query.

@param [Integer] value Limits the bytes billed for this job.
  Queries that will have bytes billed beyond this limit will fail
  (without incurring a charge). Optional. If unspecified, this will
  be set to your project default.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1074
def maximum_bytes_billed= value
  @gapi.configuration.query.maximum_bytes_billed = value
end
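
A minimal sketch capping a query at roughly 1 GB (the limit and table name are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.maximum_bytes_billed = 1_000_000_000 # fail, at no charge, beyond ~1 GB
end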
params=(params)

Sets the query parameters. Standard SQL only.

Use {set_params_and_types} to set both params and types.

@param [Array, Hash] params Standard SQL only. Used to pass query arguments when the `query` string contains
  either positional (`?`) or named (`@myparam`) query parameters. If the value passed is an array `["foo"]`, the
  query must use positional query parameters. If the value passed is a hash `{ myparam: "foo" }`, the query must
  use named query parameters. When set, `legacy_sql` will automatically be set to false and `standard_sql`
  to true.

BigQuery types are converted from Ruby types as follows:

| BigQuery     | Ruby                                 | Notes                                            |
|--------------|--------------------------------------|--------------------------------------------------|
| `BOOL`       | `true`/`false`                       |                                                  |
| `INT64`      | `Integer`                            |                                                  |
| `FLOAT64`    | `Float`                              |                                                  |
| `NUMERIC`    | `BigDecimal`                         | `BigDecimal` values will be rounded to scale 9.  |
| `BIGNUMERIC` | `BigDecimal`                         | NOT AUTOMATIC: Must be mapped using `types`.     |
| `STRING`     | `String`                             |                                                  |
| `DATETIME`   | `DateTime`                           | `DATETIME` does not support time zone.           |
| `DATE`       | `Date`                               |                                                  |
| `GEOGRAPHY`  | `String` (WKT or GeoJSON)            | NOT AUTOMATIC: Must be mapped using `types`.     |
| `TIMESTAMP`  | `Time`                               |                                                  |
| `TIME`       | `Google::Cloud::BigQuery::Time`      |                                                  |
| `BYTES`      | `File`, `IO`, `StringIO`, or similar |                                                  |
| `ARRAY`      | `Array`                              | Nested arrays, `nil` values are not supported.   |
| `STRUCT`     | `Hash`                               | Hash keys may be strings or symbols.             |

See [Data Types](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types) for an overview
of each BigQuery data type, including allowed values. For the `GEOGRAPHY` type, see [Working with BigQuery
GIS data](https://cloud.google.com/bigquery/docs/gis-data).

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 918
def params= params
  set_params_and_types params
end
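
A minimal sketch using a named parameter (the query and values are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT name FROM my_dataset.my_table WHERE id = @id" do |job|
  job.params = { id: 42 } # named parameter; pass an Array for positional (?)
end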
priority=(value)

Sets the priority of the query.

@param [String] value Specifies a priority for the query. Possible
  values include `INTERACTIVE` and `BATCH`.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 828
def priority= value
  @gapi.configuration.query.priority = priority_value value
end
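
A minimal sketch (the table name is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |job|
  job.priority = "BATCH" # queue the query instead of running it immediately
end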
range_partitioning_end=(range_end)

Sets the end of range partitioning, exclusive, for the destination table. See [Creating and using integer range partitioned tables](https://cloud.google.com/bigquery/docs/creating-integer-range-partitions).

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

See {#range_partitioning_start=}, {#range_partitioning_interval=} and {#range_partitioning_field=}.

@param [Integer] range_end The end of range partitioning, exclusive.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1343
def range_partitioning_end= range_end
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.end = range_end
end
range_partitioning_field=(field)

Sets the field on which to range partition the table. See [Creating and using integer range partitioned tables](https://cloud.google.com/bigquery/docs/creating-integer-range-partitions).

See {#range_partitioning_start=}, {#range_partitioning_interval=} and {#range_partitioning_end=}.

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

@param [String] field The range partition field. The destination table is partitioned by this
  field. The field must be a top-level `NULLABLE/REQUIRED` field. The only supported
  type is `INTEGER/INT64`.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1226
def range_partitioning_field= field
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.field = field
end
range_partitioning_interval=(range_interval)

Sets the width of each interval for data in range partitions. See [Creating and using integer range partitioned tables](https://cloud.google.com/bigquery/docs/creating-integer-range-partitions).

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

See {#range_partitioning_field=}, {#range_partitioning_start=} and {#range_partitioning_end=}.

@param [Integer] range_interval The width of each interval for data in partitions.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1304
def range_partitioning_interval= range_interval
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.interval = range_interval
end
range_partitioning_start=(range_start)

Sets the start of range partitioning, inclusive, for the destination table. See [Creating and using integer range partitioned tables](https://cloud.google.com/bigquery/docs/creating-integer-range-partitions).

You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.

See {#range_partitioning_field=}, {#range_partitioning_interval=} and {#range_partitioning_end=}.

@param [Integer] range_start The start of range partitioning, inclusive.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = bigquery.query_job "SELECT num FROM UNNEST(GENERATE_ARRAY(0, 99)) AS num" do |job|
  job.table = destination_table
  job.range_partitioning_field = "num"
  job.range_partitioning_start = 0
  job.range_partitioning_interval = 10
  job.range_partitioning_end = 100
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1265
def range_partitioning_start= range_start
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.start = range_start
end
refresh!()
Alias for: reload!
reload!()
# File lib/google/cloud/bigquery/query_job.rb, line 1547
def reload!
  raise "not implemented in #{self.class}"
end
Also aliased as: refresh!
rerun!()
# File lib/google/cloud/bigquery/query_job.rb, line 1543
def rerun!
  raise "not implemented in #{self.class}"
end
set_params_and_types(params, types = nil)

Sets the query parameters. Standard SQL only.

@param [Array, Hash] params Standard SQL only. Used to pass query arguments when the `query` string contains
  either positional (`?`) or named (`@myparam`) query parameters. If the value passed is an array `["foo"]`, the
  query must use positional query parameters. If the value passed is a hash `{ myparam: "foo" }`, the query must
  use named query parameters. When set, `legacy_sql` will automatically be set to false and `standard_sql`
  to true.

BigQuery types are converted from Ruby types as follows:

| BigQuery     | Ruby                                 | Notes                                            |
|--------------|--------------------------------------|--------------------------------------------------|
| `BOOL`       | `true`/`false`                       |                                                  |
| `INT64`      | `Integer`                            |                                                  |
| `FLOAT64`    | `Float`                              |                                                  |
| `NUMERIC`    | `BigDecimal`                         | `BigDecimal` values will be rounded to scale 9.  |
| `BIGNUMERIC` | `BigDecimal`                         | NOT AUTOMATIC: Must be mapped using `types`.     |
| `STRING`     | `String`                             |                                                  |
| `DATETIME`   | `DateTime`                           | `DATETIME` does not support time zone.           |
| `DATE`       | `Date`                               |                                                  |
| `GEOGRAPHY`  | `String` (WKT or GeoJSON)            | NOT AUTOMATIC: Must be mapped using `types`.     |
| `TIMESTAMP`  | `Time`                               |                                                  |
| `TIME`       | `Google::Cloud::BigQuery::Time`      |                                                  |
| `BYTES`      | `File`, `IO`, `StringIO`, or similar |                                                  |
| `ARRAY`      | `Array`                              | Nested arrays, `nil` values are not supported.   |
| `STRUCT`     | `Hash`                               | Hash keys may be strings or symbols.             |

See [Data Types](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types) for an overview
of each BigQuery data type, including allowed values. For the `GEOGRAPHY` type, see [Working with BigQuery
GIS data](https://cloud.google.com/bigquery/docs/gis-data).

@param [Array, Hash] types Standard SQL only. Types of the SQL parameters in `params`. It is not always
  possible to infer the right SQL type from a value in `params`. In these cases, `types` must be used to
  specify the SQL type for these values.

  Arguments must match the value type passed to `params`. This must be an `Array` when the query uses
  positional query parameters. This must be a `Hash` when the query uses named query parameters. The values
  should be BigQuery type codes from the following list:

* `:BOOL`
* `:INT64`
* `:FLOAT64`
* `:NUMERIC`
* `:BIGNUMERIC`
* `:STRING`
* `:DATETIME`
* `:DATE`
* `:GEOGRAPHY`
* `:TIMESTAMP`
* `:TIME`
* `:BYTES`
* `Array` - Lists are specified by providing the type code in an array. For example, an array of integers
  is specified as `[:INT64]`.
* `Hash` - Types for STRUCT values (`Hash` objects) are specified using a `Hash` object, where the keys
  match the `params` hash, and the values are the type codes that match the data.

Types are optional.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 981
def set_params_and_types params, types = nil
  types ||= params.class.new
  raise ArgumentError, "types must use the same format as params" if types.class != params.class

  case params
  when Array
    @gapi.configuration.query.use_legacy_sql = false
    @gapi.configuration.query.parameter_mode = "POSITIONAL"
    @gapi.configuration.query.query_parameters = params.zip(types).map do |param, type|
      Convert.to_query_param param, type
    end
  when Hash
    @gapi.configuration.query.use_legacy_sql = false
    @gapi.configuration.query.parameter_mode = "NAMED"
    @gapi.configuration.query.query_parameters = params.map do |name, param|
      type = types[name]
      Convert.to_query_param(param, type).tap { |named_param| named_param.name = String name }
    end
  else
    raise ArgumentError, "params must be an Array or a Hash"
  end
end
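
A minimal sketch using `types` for a `GEOGRAPHY` value, which cannot be inferred from a `String` (the parameter name and WKT value are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT @geo" do |job|
  # GEOGRAPHY must be mapped explicitly via types.
  job.set_params_and_types({ geo: "POINT(-122 47)" }, { geo: :GEOGRAPHY })
end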
standard_sql=(value)

Sets the query syntax to standard SQL.

@param [Boolean] value Specifies whether to use BigQuery's [standard
  SQL](https://cloud.google.com/bigquery/docs/reference/standard-sql/)
  dialect for this query. If set to true, the query will use
  standard SQL rather than the [legacy
  SQL](https://cloud.google.com/bigquery/docs/reference/legacy-sql)
  dialect. Optional. The default value is true.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1132
def standard_sql= value
  @gapi.configuration.query.use_legacy_sql = !value
end
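
A minimal sketch (the table name is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT value FROM `my_dataset.my_table`" do |job|
  job.standard_sql = true # equivalent to job.legacy_sql = false
end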
table=(value)

Sets the destination for the query results table.

@param [Table] value The destination table where the query results
  should be stored. If not present, a new table will be created
  according to the create disposition to store the results.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1061
def table= value
  @gapi.configuration.query.destination_table = table_ref_from value
end
time_partitioning_expiration=(expiration)

Sets the partition expiration for the destination table. See [Partitioned Tables](https://cloud.google.com/bigquery/docs/partitioned-tables).

The destination table must also be partitioned. See {#time_partitioning_type=}.

@param [Integer] expiration An expiration time, in seconds,
  for data in partitions.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', " \
                        "'2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  job.time_partitioning_type = "DAY"
  job.time_partitioning_expiration = 86_400
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1471
def time_partitioning_expiration= expiration
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! expiration_ms: expiration * 1000
end
time_partitioning_field=(field)

Sets the field on which to partition the destination table. If not set, the destination table is partitioned by the pseudo column `_PARTITIONTIME`; if set, the table is partitioned by this field. See [Partitioned Tables](https://cloud.google.com/bigquery/docs/partitioned-tables).

The destination table must also be partitioned. See {#time_partitioning_type=}.

You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.

@param [String] field The partition field. The field must be a
  top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or
  REQUIRED.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', " \
                        "'2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  job.time_partitioning_type  = "DAY"
  job.time_partitioning_field = "dob"
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1432
def time_partitioning_field= field
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! field: field
end
time_partitioning_require_filter=(val)

If set to true, queries over the destination table will be required to specify a partition filter that can be used for partition elimination. See [Partitioned Tables](https://cloud.google.com/bigquery/docs/partitioned-tables).

@param [Boolean] val Indicates if queries over the destination table
  will require a partition filter. The default value is `false`.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1487
def time_partitioning_require_filter= val
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! require_partition_filter: val
end
time_partitioning_type=(type)

Sets the partitioning for the destination table. See [Partitioned Tables](https://cloud.google.com/bigquery/docs/partitioned-tables). The supported types are `DAY`, `HOUR`, `MONTH`, and `YEAR`, which will generate one partition per day, hour, month, and year, respectively.

You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.

@param [String] type The partition type. The supported types are `DAY`,
  `HOUR`, `MONTH`, and `YEAR`, which will generate one partition per day,
  hour, month, and year, respectively.

@example

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table",
                                  skip_lookup: true

job = dataset.query_job "SELECT * FROM UNNEST(" \
                        "GENERATE_TIMESTAMP_ARRAY(" \
                        "'2018-10-01 00:00:00', " \
                        "'2018-10-10 00:00:00', " \
                        "INTERVAL 1 DAY)) AS dob" do |job|
  job.table = destination_table
  job.time_partitioning_type = "DAY"
end

job.wait_until_done!
job.done? #=> true

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1386
def time_partitioning_type= type
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! type: type
end
to_gapi()

@private Returns the Google API client library version of this job.

@return [Google::Apis::BigqueryV2::Job] (See
  {Google::Apis::BigqueryV2::Job})
# File lib/google/cloud/bigquery/query_job.rb, line 1561
def to_gapi
  @gapi
end
udfs=(value)

Sets user defined functions for the query.

@param [Array<String>, String] value User-defined function resources
  used in the query. May be either a code resource to load from a
  Google Cloud Storage URI (`gs://bucket/path`), or an inline
  resource that contains code for a user-defined function (UDF).
  Providing an inline code resource is equivalent to providing a URI
  for a file containing the same code. See [User-Defined
  Functions](https://cloud.google.com/bigquery/docs/reference/standard-sql/user-defined-functions).

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1164
def udfs= value
  @gapi.configuration.query.user_defined_function_resources = udfs_gapi_from value
end
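
A minimal sketch loading a UDF resource from Cloud Storage (the bucket path is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT 1;" do |job|
  job.udfs = "gs://my-bucket/my-udf.js" # or pass inline code as a String
end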
wait_until_done!()
# File lib/google/cloud/bigquery/query_job.rb, line 1552
def wait_until_done!
  raise "not implemented in #{self.class}"
end
write=(value)

Sets the write disposition for when the query results table already exists.

@param [String] value Specifies the action that occurs if the
  destination table already exists. The default value is `empty`.

The following values are supported:

* `truncate` - BigQuery overwrites the table data.
* `append` - BigQuery appends the data to the table.
* `empty` - A 'duplicate' error is returned in the job result if
  the table exists and contains data.

@!group Attributes

# File lib/google/cloud/bigquery/query_job.rb, line 1035
def write= value
  @gapi.configuration.query.write_disposition = Convert.write_disposition value
end
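
A minimal sketch of the `truncate` disposition (the dataset and table names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT * FROM my_table" do |job|
  job.table = dataset.table "my_results", skip_lookup: true
  job.write = "truncate" # overwrite any existing rows in my_results
end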

Protected Instance Methods

priority_value(str)
# File lib/google/cloud/bigquery/query_job.rb, line 1577
def priority_value str
  { "batch" => "BATCH", "interactive" => "INTERACTIVE" }[str.to_s.downcase]
end
table_ref_from(tbl)

Creates a table reference from a table object.

# File lib/google/cloud/bigquery/query_job.rb, line 1568
def table_ref_from tbl
  return nil if tbl.nil?
  Google::Apis::BigqueryV2::TableReference.new(
    project_id: tbl.project_id,
    dataset_id: tbl.dataset_id,
    table_id:   tbl.table_id
  )
end
udfs_gapi_from(array_or_str)
# File lib/google/cloud/bigquery/query_job.rb, line 1581
def udfs_gapi_from array_or_str
  Array(array_or_str).map do |uri_or_code|
    resource = Google::Apis::BigqueryV2::UserDefinedFunctionResource.new
    if uri_or_code.start_with? "gs://"
      resource.resource_uri = uri_or_code
    else
      resource.inline_code = uri_or_code
    end
    resource
  end
end