# Google BigQuery

Google BigQuery is a fully managed data warehouse for large-scale data analytics, offering fast SQL queries and machine learning capabilities on massive datasets.

- **Category:** databases
- **Auth:** OAUTH2, GOOGLE_SERVICE_ACCOUNT
- **Composio Managed App Available?** Yes
- **Tools:** 63
- **Triggers:** 0
- **Slug:** `GOOGLEBIGQUERY`
- **Version:** 20260316_00

## Tools

### Cancel BigQuery Job

**Slug:** `GOOGLEBIGQUERY_CANCEL_JOB`

Tool to cancel a running BigQuery job. This call returns immediately, and you need to poll for the job status to see if the cancel completed successfully. Note that cancelled jobs may still incur costs.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | Required. Job ID of the job to cancel. |
| `location` | string | No | The geographic location of the job. Must be specified when the job runs outside the `us` or `eu` multi-regions, for example in a single region such as `us-central1`. |
| `project_id` | string | Yes | Required. Project ID of the job to cancel. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

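Under the hood this tool maps onto the BigQuery `jobs.cancel` REST method, which returns immediately; the caller then polls `jobs.get` until the job reaches the `DONE` state. A minimal sketch of the request it issues (helper name and endpoint layout follow the public REST reference; the function itself is illustrative):

```python
from urllib.parse import urlencode

def cancel_job_request(project_id, job_id, location=None):
    """Build the REST call behind jobs.cancel. The POST returns immediately;
    poll jobs.get until status.state == "DONE" to confirm the cancel."""
    url = (f"https://bigquery.googleapis.com/bigquery/v2"
           f"/projects/{project_id}/jobs/{job_id}/cancel")
    if location:  # required outside the us/eu multi-regions
        url += "?" + urlencode({"location": location})
    return "POST", url

method, url = cancel_job_request("my-project", "job_123", location="us-central1")
```

Note that even a successfully cancelled job may have already incurred costs for work done before cancellation.
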
### Create Capacity Commitment

**Slug:** `GOOGLEBIGQUERY_CREATE_CAPACITY_COMMITMENT`

Tool to create a new capacity commitment resource in BigQuery Reservation. Use when you need to purchase compute capacity (slots) with a committed period of usage for BigQuery jobs. Supports various commitment plans (FLEX, MONTHLY, ANNUAL, THREE_YEAR) and editions (STANDARD, ENTERPRISE, ENTERPRISE_PLUS).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `plan` | string (enum: "COMMITMENT_PLAN_UNSPECIFIED", "FLEX", "FLEX_FLAT_RATE", "TRIAL", "MONTHLY", "MONTHLY_FLAT_RATE", "ANNUAL", "ANNUAL_FLAT_RATE", "THREE_YEAR", "NONE") | Yes | Required. Capacity commitment plan. Determines the commitment period and pricing. |
| `parent` | string | Yes | Required. Resource name of the parent reservation. Must be in the format 'projects/{project}/locations/{location}' (e.g., 'projects/myproject/locations/US'). |
| `edition` | string (enum: "EDITION_UNSPECIFIED", "STANDARD", "ENTERPRISE", "ENTERPRISE_PLUS") | No | Edition of the capacity commitment. |
| `slotCount` | string | Yes | Required. Number of slots in this commitment. Must be a positive integer represented as a string. |
| `renewalPlan` | string (enum: "COMMITMENT_PLAN_UNSPECIFIED", "FLEX", "FLEX_FLAT_RATE", "TRIAL", "MONTHLY", "MONTHLY_FLAT_RATE", "ANNUAL", "ANNUAL_FLAT_RATE", "THREE_YEAR", "NONE") | No | The plan this capacity commitment converts to after the committed period ends. |
| `capacityCommitmentId` | string | No | The optional capacity commitment ID. Capacity commitment name will be generated automatically if this field is empty. Must only contain lower case alphanumeric characters or dashes. The first and last character cannot be a dash. Max length is 64 characters. |
| `multiRegionAuxiliary` | boolean | No | Applicable only for commitments located within one of the BigQuery multi-regions (US or EU). If set to true, this commitment is placed in the organization's secondary region for disaster recovery. NOTE: this is a preview feature. |
| `enforceSingleAdminProjectPerOrg` | boolean | No | If true, fail the request if another project in the organization has a capacity commitment. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

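This tool maps onto `POST {parent}/capacityCommitments` on the BigQuery Reservation API v1, where `slotCount` is an int64 that the JSON API represents as a decimal string. A sketch of the request body, assuming that endpoint (the helper is illustrative):

```python
def capacity_commitment_body(plan, slot_count, edition=None, renewal_plan=None):
    """JSON body for POST {parent}/capacityCommitments (Reservation API v1).
    slotCount is an int64, carried as a decimal string in JSON."""
    body = {"plan": plan, "slotCount": str(slot_count)}
    if edition:
        body["edition"] = edition
    if renewal_plan:
        body["renewalPlan"] = renewal_plan
    return body

body = capacity_commitment_body("ANNUAL", 500, edition="ENTERPRISE")
```

The optional `capacityCommitmentId` travels as a query parameter on the URL, not in this body.
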
### Create BigQuery Connection

**Slug:** `GOOGLEBIGQUERY_CREATE_CONNECTION`

Tool to create a new BigQuery connection to external data sources using the BigQuery Connection API. Use when setting up connections to AWS, Azure, Cloud Spanner, Cloud SQL, Salesforce DataCloud, or Apache Spark.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `aws` | object | No | Connection properties specific to Amazon Web Services (AWS). |
| `azure` | object | No | Container for connection properties specific to Azure. |
| `spark` | object | No | Container for connection properties to execute stored procedures for Apache Spark. |
| `parent` | string | Yes | Required. Parent resource name in the format 'projects/{project_id}/locations/{location_id}'. Example: 'projects/my-project/locations/us-central1'. |
| `cloudSql` | object | No | Connection properties specific to Cloud SQL. |
| `kmsKeyName` | string | No | Optional. The Cloud KMS key that is used for encryption. Example: 'projects/{kms_project_id}/locations/{region}/keyRings/{key_region}/cryptoKeys/{key}'. |
| `description` | string | No | User provided description for the connection. |
| `cloudSpanner` | object | No | Connection properties specific to Cloud Spanner. |
| `connectionId` | string | No | Optional. Connection id that should be assigned to the created connection. If not specified, a random connection id will be generated. |
| `friendlyName` | string | No | User provided display name for the connection. |
| `cloudResource` | object | No | Container for connection properties for delegation of access to GCP resources. |
| `salesforceDataCloud` | object | No | Connection properties specific to Salesforce DataCloud. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

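The tool corresponds to `POST {parent}/connections` on the BigQuery Connection API v1, with exactly one source-specific block (`cloudSql`, `aws`, `azure`, etc.) in the body. A minimal sketch under that assumption; the helper, project names, and the abbreviated `cloudSql` block are illustrative (the real API also requires credentials for Cloud SQL connections):

```python
from urllib.parse import urlencode

def create_connection_request(parent, connection_id=None, **props):
    """Build the URL and body for POST {parent}/connections.
    Pass exactly one source block (cloudSql, aws, azure, ...) via props."""
    url = f"https://bigqueryconnection.googleapis.com/v1/{parent}/connections"
    if connection_id:
        url += "?" + urlencode({"connectionId": connection_id})
    return url, props

url, body = create_connection_request(
    "projects/my-project/locations/us-central1",
    connection_id="orders-db",
    friendlyName="Orders DB",
    cloudSql={"instanceId": "my-project:us-central1:orders",
              "database": "orders", "type": "POSTGRES"},
)
```
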
### Create Analytics Hub Data Exchange

**Slug:** `GOOGLEBIGQUERY_CREATE_DATA_EXCHANGE`

Tool to create a new Analytics Hub data exchange for sharing BigQuery datasets. Use when you need to set up a container for data sharing with descriptive information and listings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `icon` | string | No | Optional. Base64 encoded image representing the data exchange. Max Size: 3.0MiB. Expected image dimensions are 512x512 pixels. |
| `parent` | string | Yes | Required. The parent resource path of the data exchange in the format 'projects/{project}/locations/{location}'. Example: 'projects/my-project/locations/US'. |
| `description` | string | No | Optional. Description of the data exchange. Must not contain Unicode non-characters as well as C0 and C1 control codes except tabs (HT), new lines (LF), carriage returns (CR), and page breaks (FF). Max length: 2000 bytes. |
| `displayName` | string | Yes | Required. Human-readable display name of the data exchange. Must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), ampersands (&) and must not start or end with spaces. Max length: 63 bytes. |
| `documentation` | string | No | Optional. Documentation describing the data exchange. |
| `dataExchangeId` | string | Yes | Required. The ID of the data exchange. Must contain only Unicode letters, numbers (0-9), and underscores (_); must not contain characters that require URL-escaping, non-ASCII characters, or spaces. Max length: 100 bytes. |
| `primaryContact` | string | No | Optional. Email or URL of the primary point of contact of the data exchange. Max Length: 1000 bytes. |
| `sharingEnvironmentConfig` | object | No | Sharing environment configuration for data exchange behavior. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

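Assuming this tool fronts `POST {parent}/dataExchanges?dataExchangeId=...` on the Analytics Hub API v1, the request can be sketched as below; the helper and the byte-length guard on `displayName` (capped at 63 bytes per the table above) are illustrative:

```python
def data_exchange_request(parent, data_exchange_id, display_name, **optional):
    """Build URL and body for POST {parent}/dataExchanges (Analytics Hub v1)."""
    assert len(display_name.encode()) <= 63, "displayName is capped at 63 bytes"
    url = (f"https://analyticshub.googleapis.com/v1/{parent}"
           f"/dataExchanges?dataExchangeId={data_exchange_id}")
    return url, {"displayName": display_name, **optional}

url, body = data_exchange_request(
    "projects/my-project/locations/US", "sales_exchange", "Sales Data",
    description="Quarterly sales datasets",
)
```
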
### Create Analytics Hub Listing

**Slug:** `GOOGLEBIGQUERY_CREATE_DATAEXCHANGES_LISTINGS`

Tool to create a new listing in a BigQuery Analytics Hub data exchange. Use when you need to share a BigQuery dataset with specific subscribers or make it available for discovery. The dataset must exist and be in the same region as the data exchange.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent data exchange resource path in format 'projects/{projectId}/locations/{location}/dataExchanges/{dataExchangeId}'. The location must match the dataset location. |
| `listingId` | string | Yes | Required. The ID to use for the listing. Must contain only Unicode letters, numbers (0-9), and underscores (_); must not contain characters that require URL-escaping, non-ASCII characters, or spaces. Max length: 100 bytes. |
| `publisher` | object | No | Details of the publisher who owns the listing. |
| `categories` | array | No | Optional. Categories of the listing. Up to two categories are allowed. Helps subscribers discover relevant data. |
| `description` | string | No | Optional. Short description of the listing. Max 2000 bytes. Use this to explain what data the listing contains and how it can be used. |
| `displayName` | string | Yes | Required. Human-readable display name of the listing. Max 63 bytes. Supports Unicode letters, numbers, underscores, dashes, spaces, ampersands. |
| `dataProvider` | object | No | Details of the data provider who owns the source data. |
| `documentation` | string | No | Optional. Documentation describing the listing in detail. Can include usage instructions, schema details, and data lineage information. |
| `requestAccess` | string | No | Optional. Email or URL where users can request access to the listing. Max 1000 bytes. |
| `primaryContact` | string | No | Optional. Email or URL of the primary point of contact for the listing. Max 1000 bytes. |
| `bigqueryDataset` | object | Yes | Required. The BigQuery dataset source to be shared. This dataset must exist and be in the same region as the data exchange. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create BigQuery Dataset

**Slug:** `GOOGLEBIGQUERY_CREATE_DATASET`

Tool to create a new BigQuery dataset with explicit location, labels, and description using the BigQuery Datasets API. Use when the workflow needs to set up a staging/warehouse dataset and correctness of region is critical to avoid downstream job location mismatches. Surfaces 409 Already Exists errors cleanly without retrying.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `access` | array | No | Access control list (ACL) for the dataset. If not specified, the dataset inherits project-level permissions. |
| `labels` | object | No | Labels to organize and categorize the dataset. Labels are key-value pairs. Keys and values must be lowercase, max 63 characters. |
| `location` | string | Yes | Geographic location where the dataset should reside. This is CRITICAL to avoid location-related query/job errors. Examples: 'US', 'EU', 'us-central1', 'europe-west1'. |
| `dataset_id` | string | Yes | The dataset ID. Must be unique within the project. Use alphanumeric characters, underscores, or hyphens. |
| `project_id` | string | Yes | The project ID where the dataset will be created. This is used in the URL path, not in the request body. |
| `description` | string | No | A description of the dataset. Use this to document the dataset's purpose and contents. |
| `friendly_name` | string | No | A user-friendly name for the dataset. This is a descriptive label that appears in the BigQuery UI. |
| `defaultTableExpirationMs` | integer | No | Default lifetime of all tables in the dataset, in milliseconds. Tables will be deleted this many milliseconds after creation unless explicitly set otherwise. |
| `defaultPartitionExpirationMs` | integer | No | Default lifetime of all partitions in tables in the dataset, in milliseconds. Partitions will be deleted this many milliseconds after creation. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

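This tool corresponds to `POST /bigquery/v2/projects/{project_id}/datasets`, where the body carries a `datasetReference` plus the location and optional metadata. A sketch of that body (helper name is illustrative; `defaultTableExpirationMs` is an int64, serialized as a string in JSON):

```python
def dataset_insert_body(project_id, dataset_id, location, description=None,
                        labels=None, default_table_expiration_ms=None):
    """JSON body for POST /bigquery/v2/projects/{project_id}/datasets.
    Pinning `location` here avoids later job/dataset region mismatches."""
    body = {
        "datasetReference": {"projectId": project_id, "datasetId": dataset_id},
        "location": location,
    }
    if description:
        body["description"] = description
    if labels:
        body["labels"] = labels
    if default_table_expiration_ms:
        body["defaultTableExpirationMs"] = str(default_table_expiration_ms)
    return body

body = dataset_insert_body("my-project", "staging", "EU",
                           labels={"env": "dev"})
```

A 409 response means a dataset with that ID already exists in the project; this tool surfaces that directly instead of retrying.
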
### Create Analytics Hub Listing

**Slug:** `GOOGLEBIGQUERY_CREATE_LISTING`

Tool to create a new listing in a data exchange using Analytics Hub API. Use when publishing a BigQuery dataset to make it available for subscription by other users or organizations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string (enum: "json", "media", "proto") | No | Data format for response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. |
| `icon` | string | No | Base64 encoded image representing the listing. Max Size: 3.0MiB. Expected image dimensions are 512x512 pixels. |
| `xgafv` | string (enum: "1", "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. |
| `parent` | string | Yes | Required. The parent resource path of the listing. Format: 'projects/{project_id}/locations/{location}/dataExchanges/{data_exchange_id}'. |
| `callback` | string | No | JSONP callback parameter. |
| `listingId` | string | No | The ID of the listing to create. Must contain only Unicode letters, numbers (0-9), and underscores (_); must not contain characters that require URL-escaping, non-ASCII characters, or spaces. Max length: 100 bytes. If not provided, a random ID will be generated. |
| `publisher` | object | No | Details of the listing publisher. |
| `categories` | array | No | Categories of the listing. Up to two categories are allowed. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. |
| `uploadType` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). |
| `description` | string | No | Short description of the listing. Must not contain Unicode non-characters and C0 and C1 control codes except tabs (HT), new lines (LF), carriage returns (CR), and page breaks (FF). Max length: 2000 bytes. |
| `displayName` | string | Yes | Required. Human-readable display name of the listing. Must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), ampersands (&) and cannot start or end with spaces. Max length: 63 bytes. |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. |
| `prettyPrint` | boolean | No | Returns response with indentations and line breaks. |
| `access_token` | string | No | OAuth access token. |
| `dataProvider` | object | No | Details of the data provider. |
| `documentation` | string | No | Documentation describing the listing. |
| `requestAccess` | string | No | Email or URL where subscribers can request access to the listing. Max length: 1000 bytes. |
| `primaryContact` | string | No | Email or URL of the primary point of contact of the listing. Max Length: 1000 bytes. |
| `bigqueryDataset` | object | Yes | Required. Reference to the shared BigQuery dataset. Analytics Hub creates a linked dataset for subscribers when they subscribe to this listing. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). |
| `restrictedExportConfig` | object | No | Restricted export configuration for linked dataset. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create BigQuery Data Policy (v2beta1)

**Slug:** `GOOGLEBIGQUERY_CREATE_LOCATIONS_DATAPOLICIES`

Tool to create a new data policy under a project with specified location using the v2beta1 BigQuery Data Policy API. Use when you need to set up data masking rules or column-level security for sensitive data. The v2beta1 endpoint uses a nested request structure.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. Resource name of the project and location that the data policy will belong to. The format is `projects/{project_number}/locations/{location_id}`. Example: 'projects/934040047113/locations/us-central1'. |
| `dataPolicy` | object | Yes | Required. The data policy configuration containing the policy type, policy tag, and masking rules. |
| `dataPolicyId` | string | Yes | Required. User-assigned (human readable) ID of the data policy that needs to be unique within a project. Used as {data_policy_id} in part of the resource name. This will also be used as the display name. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create Analytics Hub Query Template

**Slug:** `GOOGLEBIGQUERY_CREATE_QUERY_TEMPLATE`

Tool to create a new query template in a BigQuery Analytics Hub Data Clean Room (DCR) data exchange. Use when you need to define predefined and approved queries for data clean room use cases. Query templates must be created in DCR data exchanges only.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent resource where this query template will be created. Format: projects/{project}/locations/{location}/dataExchanges/{dataExchange}. Note: Query templates require a Data Clean Room (DCR) data exchange. |
| `routine` | object | Yes | Required. The routine definition containing the query template logic. |
| `description` | string | No | Optional. Description of the query template explaining its purpose and usage. |
| `displayName` | string | Yes | Required. Human-readable name of the query template. This name will be shown to subscribers of the data exchange. |
| `queryTemplateId` | string | Yes | Required. The ID to use for the query template, which will become the final component of the template's resource name. Must contain only Unicode letters, numbers (0-9), and underscores (_); must not contain characters that require URL-escaping, non-ASCII characters, or spaces. Max length: 100 bytes. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

### Create BigQuery Reservation

**Slug:** `GOOGLEBIGQUERY_CREATE_RESERVATION`

Tool to create a new BigQuery reservation resource to guarantee compute capacity (slots) for query and pipeline jobs. Use when you need to reserve dedicated compute resources for predictable performance and cost management. Reservations can be configured with autoscaling, concurrency limits, and edition-based features.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent resource where the reservation will be created, in the format 'projects/{project}/locations/{location}' (e.g., 'projects/myproject/locations/US'). |
| `edition` | string (enum: "EDITION_UNSPECIFIED", "STANDARD", "ENTERPRISE", "ENTERPRISE_PLUS") | No | Edition of the reservation. |
| `autoscale` | object | No | Auto scaling settings for the reservation. |
| `concurrency` | string | No | Job concurrency target which sets a soft upper bound on the number of jobs that can run concurrently in this reservation. Default value is 0 which means that concurrency target will be automatically computed by the system. Must be a non-negative integer represented as a string. |
| `slotCapacity` | string | Yes | Required. Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery. Must be a positive integer represented as a string. You can increase baseline slots every few minutes, but decreases are limited to once an hour if slots exceed committed slots. |
| `reservationId` | string | No | Optional reservation ID. It must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. If not provided, a reservation ID will be generated automatically. |
| `ignoreIdleSlots` | boolean | No | If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, jobs will execute with only the slot capacity specified in slot_capacity field. |
| `multiRegionAuxiliary` | boolean | No | Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

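The tool maps onto `POST {parent}/reservations?reservationId=...` on the Reservation API v1; as with commitments, slot counts are int64 values carried as decimal strings. A sketch of the body (helper and the `autoscale` shape are illustrative assumptions based on the parameters above):

```python
def reservation_body(slot_capacity, edition=None, ignore_idle_slots=False,
                     autoscale_max=None):
    """JSON body for POST {parent}/reservations (Reservation API v1).
    slotCapacity is the baseline; autoscale adds burst capacity on top."""
    body = {"slotCapacity": str(slot_capacity),
            "ignoreIdleSlots": ignore_idle_slots}
    if edition:
        body["edition"] = edition
    if autoscale_max is not None:
        body["autoscale"] = {"maxSlots": str(autoscale_max)}
    return body

body = reservation_body(100, edition="ENTERPRISE", autoscale_max=400)
```
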
### Create BigQuery Reservation Assignment

**Slug:** `GOOGLEBIGQUERY_CREATE_RESERVATION_ASSIGNMENT`

Tool to create a BigQuery reservation assignment that allows a project, folder, or organization to submit jobs using slots from a specified reservation. Use when setting up resource allocation for BigQuery workloads. Note: A resource can only have one assignment per (job_type, location) combination.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent resource name of the assignment. Format: 'projects/{project_id}/locations/{location}/reservations/{reservation_id}'. Example: 'projects/myproject/locations/US/reservations/team1-prod'. |
| `jobType` | string (enum: "JOB_TYPE_UNSPECIFIED", "PIPELINE", "QUERY", "ML_EXTERNAL", "BACKGROUND") | Yes | Required. Which type of jobs will use the reservation. Typically 'QUERY' for SQL queries, 'PIPELINE' for data pipelines, 'ML_EXTERNAL' for ML jobs, or 'BACKGROUND' for background jobs. |
| `assignee` | string | Yes | Required. The resource which will use the reservation. Can be a project, folder, or organization. Format: 'projects/{project_id}', 'folders/{folder_id}', or 'organizations/{org_id}'. |
| `assignmentId` | string | No | Optional assignment ID. Assignment name will be generated automatically if this field is empty. Must only contain lower case alphanumeric characters or dashes. Max length is 64 characters. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

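Assuming the tool fronts `POST {parent}/assignments` on the Reservation API v1, the body is small: just the assignee resource and the job type. A sketch (helper and the format check are illustrative):

```python
def assignment_body(assignee, job_type="QUERY"):
    """JSON body for POST {parent}/assignments. A given assignee can hold
    only one assignment per (job_type, location) combination."""
    prefix = assignee.split("/")[0]
    assert prefix in {"projects", "folders", "organizations"}, \
        "assignee must be a project, folder, or organization resource name"
    return {"assignee": assignee, "jobType": job_type}

body = assignment_body("projects/analytics-prod")
```
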
### Create BigQuery Routine

**Slug:** `GOOGLEBIGQUERY_CREATE_ROUTINE`

Tool to create a new user-defined routine (function or procedure) in a BigQuery dataset. Use when you need to define SQL, JavaScript, Python, Java, or Scala functions/procedures for reusable logic, data transformations, or custom masking. Supports scalar functions, table-valued functions, procedures, and aggregate functions with comprehensive type definitions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `language` | string (enum: "LANGUAGE_UNSPECIFIED", "SQL", "JAVASCRIPT", "PYTHON", "JAVA", "SCALA") | No | Programming language of the routine. |
| `arguments` | array | No | Optional. Input/output arguments for the routine. |
| `dataset_id` | string | Yes | Required. Dataset ID where the routine will be created. Used in the URL path. |
| `project_id` | string | Yes | Required. Project ID where the routine will be created. Used in the URL path. |
| `returnType` | object | No | Data type of a BigQuery field or function argument. Supports simple types like INT64, and complex types like ARRAY and STRUCT. |
| `strictMode` | boolean | No | Optional. If TRUE (default), the procedure body is checked for errors like non-existent tables. Set to FALSE for recursive procedures to avoid validation errors. |
| `description` | string | No | Optional. A description of the routine. |
| `routineType` | string (enum: "ROUTINE_TYPE_UNSPECIFIED", "SCALAR_FUNCTION", "PROCEDURE", "TABLE_VALUED_FUNCTION", "AGGREGATE_FUNCTION") | Yes | Required. The type of routine: SCALAR_FUNCTION, PROCEDURE, TABLE_VALUED_FUNCTION, or AGGREGATE_FUNCTION. |
| `securityMode` | string (enum: "SECURITY_MODE_UNSPECIFIED", "DEFINER", "INVOKER") | No | Security mode of the routine. |
| `sparkOptions` | object | No | Options for user-defined Spark routines. |
| `definitionBody` | string | Yes | Required. The body of the routine. For SQL functions, this is the expression in the AS clause (excluding parentheses). For JavaScript, it's the evaluated string in the AS clause. |
| `returnTableType` | object | No | A table type for table-valued functions. |
| `determinismLevel` | string (enum: "DETERMINISM_LEVEL_UNSPECIFIED", "DETERMINISTIC", "NOT_DETERMINISTIC") | No | Determinism level for JavaScript UDFs. |
| `routineReference` | object | Yes | Required. Reference containing projectId, datasetId, and routineId for the new routine. |
| `importedLibraries` | array | No | Optional. For JavaScript routines, paths of imported JavaScript libraries. |
| `dataGovernanceType` | string (enum: "DATA_GOVERNANCE_TYPE_UNSPECIFIED", "DATA_MASKING") | No | Data governance type for the routine. |
| `remoteFunctionOptions` | object | No | Options for remote user-defined functions. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

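For the common case of a SQL scalar function, the body sent to `POST .../datasets/{dataset_id}/routines` pairs a `routineReference` (which must repeat the project and dataset from the URL path) with the routine type, language, and the AS-clause expression as `definitionBody`. A sketch, with illustrative names:

```python
def sql_function_body(project_id, dataset_id, routine_id, definition_body,
                      arguments=(), return_type=None):
    """JSON body for POST .../routines describing a SQL scalar function.
    definition_body is the AS-clause expression, without parentheses."""
    body = {
        "routineReference": {"projectId": project_id, "datasetId": dataset_id,
                             "routineId": routine_id},
        "routineType": "SCALAR_FUNCTION",
        "language": "SQL",
        "definitionBody": definition_body,
        "arguments": list(arguments),
    }
    if return_type:
        body["returnType"] = return_type
    return body

body = sql_function_body(
    "my-project", "utils", "add_one", "x + 1",
    arguments=[{"name": "x", "dataType": {"typeKind": "INT64"}}],
    return_type={"typeKind": "INT64"},
)
```
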
### Create BigQuery Table

**Slug:** `GOOGLEBIGQUERY_CREATE_TABLE`

Tool to create a new, empty table in a BigQuery dataset. Use when setting up data infrastructure for standard tables, external tables, views, or materialized views. Supports partitioning, clustering, and encryption configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `view` | object | No | Logical view definition. |
| `labels` | object | No | Labels to organize tables. Keys and values must be lowercase, max 63 characters, starting with a letter. |
| `schema` | object | No | Schema of a table defining its fields/columns. |
| `clustering` | object | No | Clustering configuration for a table. |
| `dataset_id` | string | Yes | The dataset ID where the table will be created. Used in the URL path. |
| `project_id` | string | Yes | The project ID where the table will be created. Used in the URL path. |
| `description` | string | No | A user-friendly description of the table. |
| `friendlyName` | string | No | A descriptive name for the table shown in the BigQuery UI. |
| `expirationTime` | string | No | Time when the table expires, in milliseconds since epoch. If not set, the table persists indefinitely or uses dataset default. |
| `tableReference` | object | Yes | Reference identifying the table. Must match project_id and dataset_id from the path. |
| `defaultCollation` | string | No | Default collation for new STRING fields. Options: 'und:ci' (case insensitive), '' (case sensitive, default). |
| `materializedView` | object | No | Materialized view definition and configuration. |
| `timePartitioning` | object | No | Time-based partitioning configuration for a table. |
| `rangePartitioning` | object | No | Range-based partitioning configuration for a table. |
| `requirePartitionFilter` | boolean | No | If true, queries over this table require a partition filter for partition elimination. |
| `encryptionConfiguration` | object | No | Encryption configuration for a table. |
| `externalDataConfiguration` | object | No | Configuration for external data sources. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

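For a standard table, the body sent to `POST .../datasets/{dataset_id}/tables` combines a `tableReference` (matching the path's project and dataset) with a schema, and optionally partitioning and clustering. A sketch for a daily time-partitioned, clustered table; the helper and field names are illustrative:

```python
def table_insert_body(project_id, dataset_id, table_id, fields,
                      partition_field=None, clustering_fields=None):
    """JSON body for POST .../tables: a standard table with an optional
    daily time partition and clustering columns."""
    body = {
        "tableReference": {"projectId": project_id, "datasetId": dataset_id,
                           "tableId": table_id},
        "schema": {"fields": fields},
    }
    if partition_field:
        body["timePartitioning"] = {"type": "DAY", "field": partition_field}
    if clustering_fields:
        body["clustering"] = {"fields": clustering_fields}
    return body

body = table_insert_body(
    "my-project", "warehouse", "events",
    fields=[{"name": "event_ts", "type": "TIMESTAMP", "mode": "REQUIRED"},
            {"name": "user_id", "type": "STRING"}],
    partition_field="event_ts", clustering_fields=["user_id"],
)
```
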
### Delete BigQuery Dataset

**Slug:** `GOOGLEBIGQUERY_DELETE_DATASET`

Tool to delete a BigQuery dataset specified by its dataset ID via the datasets.delete API. Before deletion, you must delete all tables in the dataset unless `delete_contents` is true. Use when cleaning up test datasets or removing unused data warehouses. Immediately after deletion, you can create another dataset with the same name.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | Required. Dataset ID of dataset being deleted. |
| `project_id` | string | Yes | Required. Project ID of the dataset being deleted. |
| `delete_contents` | boolean | No | If True, delete all the tables in the dataset. If False and the dataset contains tables, the request will fail. Default is False. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |

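The tool corresponds to `DELETE .../datasets/{dataset_id}`, where `delete_contents` becomes the `deleteContents` query parameter. Without it, the call fails if the dataset still contains tables. A sketch (helper name is illustrative):

```python
from urllib.parse import urlencode

def delete_dataset_request(project_id, dataset_id, delete_contents=False):
    """Build the REST call behind datasets.delete. Without deleteContents=true
    the request is rejected if the dataset still contains tables."""
    url = (f"https://bigquery.googleapis.com/bigquery/v2"
           f"/projects/{project_id}/datasets/{dataset_id}")
    if delete_contents:
        url += "?" + urlencode({"deleteContents": "true"})
    return "DELETE", url

method, url = delete_dataset_request("my-project", "tmp_staging",
                                     delete_contents=True)
```
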
### Delete BigQuery Job Metadata

**Slug:** `GOOGLEBIGQUERY_DELETE_JOB_METADATA`

Tool to delete the metadata of a BigQuery job. Use when you need to remove job metadata from the system. If this is a parent job with child jobs, metadata from all child jobs will be deleted as well.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | Required. Job ID of the job for which metadata is to be deleted. If this is a parent job which has child jobs, the metadata from all child jobs will be deleted as well. Direct deletion of the metadata of child jobs is not allowed. |
| `location` | string | No | The geographic location of the job. Required for jobs in certain regions. See details at: https://cloud.google.com/bigquery/docs/locations#specifying_your_location. |
| `project_id` | string | Yes | Required. Project ID of the job for which metadata is to be deleted. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Delete BigQuery ML Model

**Slug:** `GOOGLEBIGQUERY_DELETE_MODEL`

Tool to delete a BigQuery ML model from a dataset. Use when you need to remove a trained machine learning model permanently. The operation deletes the model and cannot be undone.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | Required. Model ID of the model to delete. |
| `dataset_id` | string | Yes | Required. Dataset ID of the model to delete. |
| `project_id` | string | Yes | Required. Project ID of the model to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Delete BigQuery Routine

**Slug:** `GOOGLEBIGQUERY_DELETE_ROUTINE`

Tool to delete a BigQuery routine by its ID. Use when you need to remove a stored procedure, user-defined function, or table function from a dataset. This operation is irreversible.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | Required. Dataset ID of the routine to delete. |
| `project_id` | string | Yes | Required. Project ID of the routine to delete. |
| `routine_id` | string | Yes | Required. Routine ID of the routine to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Delete BigQuery Table

**Slug:** `GOOGLEBIGQUERY_DELETE_TABLE`

Tool to delete a BigQuery table from a dataset. Use when you need to remove a table and all its data permanently. The operation deletes all data in the table and cannot be undone.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `table_id` | string | Yes | Required. Table ID of the table to delete. |
| `dataset_id` | string | Yes | Required. Dataset ID of the table to delete. |
| `project_id` | string | Yes | Required. Project ID of the table to delete. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery ML Model

**Slug:** `GOOGLEBIGQUERY_GET_BIGQUERY_MODEL`

Tool to retrieve a specific BigQuery ML model resource by model ID. Use when you need detailed information about a trained machine learning model including its configuration, training runs, hyperparameters, and evaluation metrics.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | Required. Model ID of the requested model. |
| `dataset_id` | string | Yes | Required. Dataset ID of the requested model. |
| `project_id` | string | Yes | Required. Project ID of the requested model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Connection IAM Policy

**Slug:** `GOOGLEBIGQUERY_GET_CONNECTION_IAM_POLICY`

Tool to get the IAM access control policy for a BigQuery connection resource. Returns an empty policy if the resource exists but has no policy set. Use this to check who has access to a specific connection before modifying permissions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `options` | object | No | Optional settings for GetIamPolicy request. Specify requestedPolicyVersion if you need a specific policy version format. |
| `resource` | string | Yes | REQUIRED: The resource for which the policy is being requested. Format: projects/{project}/locations/{location}/connections/{connection}. Example: projects/my-project/locations/us/connections/my-connection |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Dataset Metadata

**Slug:** `GOOGLEBIGQUERY_GET_DATASET`

Tool to retrieve BigQuery dataset metadata, including location, via the datasets.get API. Use this before creating jobs or queries when a workflow fails with location mismatches, to confirm the dataset's region and set the job location accordingly.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | Dataset ID of the requested dataset. |
| `project_id` | string | Yes | Project ID of the requested dataset. |
| `dataset_view` | string ("DATASET_VIEW_UNSPECIFIED" \| "METADATA" \| "ACL" \| "FULL") | No | Optional view specifying which dataset information to return. DATASET_VIEW_UNSPECIFIED defaults to FULL. METADATA returns only metadata. ACL returns only access control. FULL returns both metadata and ACL. |
| `access_policy_version` | integer | No | Version of access policy schema to fetch. Valid values: 0, 1, 3. Use version 3 for conditional access policy bindings. If unset, defaults to version 1. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
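A small sketch of reading the dataset's region out of a datasets.get response so later job calls can pass a matching `location`. The `location` field name follows the API; the sample payload and helper are illustrative.

```python
# Extract the region from dataset metadata for use as a job location.
# Hypothetical helper; the "location" key follows the datasets.get response.

def job_location_for_dataset(dataset_metadata: dict) -> str:
    location = dataset_metadata.get("location")
    if not location:
        raise KeyError("dataset metadata has no location field")
    return location

sample = {"id": "my-project:analytics", "location": "europe-west2"}
job_location_for_dataset(sample)  # → "europe-west2"
```
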

### Get BigQuery Job

**Slug:** `GOOGLEBIGQUERY_GET_JOB`

Tool to retrieve information about a specific BigQuery job. Returns job configuration, status, and statistics. Use this to check job status after running queries or to get details about job execution.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | Yes | Required. Job ID of the requested job. |
| `location` | string | No | The geographic location of the job. You must specify the location to run the job for the following scenarios: - If the location to run a job is not in the `us` or the `eu` multi-regional location - If the job's location is in a single region (for example, `us-central1`) For more information, see https://cloud.google.com/bigquery/docs/locations#specifying_your_location. |
| `project_id` | string | Yes | Required. Project ID of the requested job. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Query Results

**Slug:** `GOOGLEBIGQUERY_GET_QUERY_RESULTS`

Tool to get the results of a BigQuery query job via RPC. Use this to retrieve results after running a query, or to check job completion status and fetch paginated results.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for the response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. Optional. |
| `jobId` | string | Yes | Required. Job ID of the query job. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. Optional. |
| `callback` | string | No | JSONP callback parameter. Optional. |
| `location` | string | No | The geographic location of the job. You must specify the location to run the job for the following scenarios: - If the location to run a job is not in the `us` or the `eu` multi-regional location - If the job's location is in a single region (for example, `us-central1`). For more information, see https://cloud.google.com/bigquery/docs/locations#specifying_your_location. Optional. |
| `pageToken` | string | No | Page token, returned by a previous call, to request the next page of results. Optional. |
| `projectId` | string | Yes | Required. Project ID of the query job. |
| `quotaUser` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. Optional. |
| `timeoutMs` | integer | No | Optional: Specifies the maximum amount of time, in milliseconds, that the client is willing to wait for the query to complete. By default, this limit is 10 seconds (10,000 milliseconds). If the query is complete, the jobComplete field in the response is true. If the query has not yet completed, jobComplete is false. You can request a longer timeout period in the timeoutMs field. However, the call is not guaranteed to wait for the specified timeout; it typically returns after around 200 seconds (200,000 milliseconds), even if the query is not complete. If jobComplete is false, you can continue to wait for the query to complete by calling the getQueryResults method until the jobComplete field in the getQueryResults response is true. Optional. |
| `maxResults` | integer | No | Maximum number of results to read. Optional. |
| `startIndex` | string | No | Zero-based index of the starting row. Optional. |
| `uploadType` | string | No | Legacy upload protocol for media (e.g. "media", "multipart"). Optional. |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. Optional. |
| `prettyPrint` | boolean | No | Returns response with indentations and line breaks. Optional. |
| `access_token` | string | No | OAuth access token. Optional. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. "raw", "multipart"). Optional. |
| `formatOptions.useInt64Timestamp` | boolean | No | Optional. Output timestamp as usec int64. Default is false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Routine

**Slug:** `GOOGLEBIGQUERY_GET_ROUTINE`

Tool to retrieve a BigQuery routine (user-defined function or stored procedure) by its ID. Use to inspect routine definitions, arguments, return types, and metadata.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `read_mask` | string | No | If set, only the Routine fields in the field mask are returned in the response. If unset, all Routine fields are returned. |
| `dataset_id` | string | Yes | Required. Dataset ID of the requested routine. |
| `project_id` | string | Yes | Required. Project ID of the requested routine. |
| `routine_id` | string | Yes | Required. Routine ID of the requested routine. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Routine IAM Policy

**Slug:** `GOOGLEBIGQUERY_GET_ROUTINE_IAM_POLICY`

Tool to retrieve the IAM access control policy for a BigQuery routine resource. Returns an empty policy if the routine exists but has no policy set. Use this to check current access permissions before modifying them.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `options` | object | No | Optional settings for GetIamPolicy request. Specify requestedPolicyVersion if you need a specific policy version format. |
| `dataset_id` | string | Yes | Required. The ID of the dataset containing the routine. |
| `project_id` | string | Yes | Required. The ID of the project containing the routine. |
| `routine_id` | string | Yes | Required. The ID of the routine to get IAM policy for. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Service Account

**Slug:** `GOOGLEBIGQUERY_GET_SERVICE_ACCOUNT`

Tool to get the service account for a project used for interactions with Google Cloud KMS. Use when you need to retrieve the BigQuery service account email for KMS encryption configuration or key access permissions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for response. Optional query parameter. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. Optional query parameter. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. Optional query parameter. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. Optional query parameter. |
| `callback` | string | No | JSONP callback. Optional query parameter. |
| `project_id` | string | Yes | Required. ID of the project. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. Optional query parameter. |
| `uploadType` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). Optional query parameter. |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. Optional query parameter. |
| `prettyPrint` | boolean | No | Returns response with indentations and line breaks. Optional query parameter. |
| `access_token` | string | No | OAuth access token. Optional query parameter. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). Optional query parameter. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Get BigQuery Table IAM Policy

**Slug:** `GOOGLEBIGQUERY_GET_TABLE_IAM_POLICY`

Tool to retrieve the IAM access control policy for a BigQuery table resource. Returns an empty policy if the resource exists but has no policy set. Use this to check current access permissions before modifying them.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `resource` | string | Yes | REQUIRED: The resource for which the policy is being requested. Format: 'projects/{projectId}/datasets/{datasetId}/tables/{tableId}'. |
| `requested_policy_version` | integer | No | Optional. The maximum policy version that will be used to format the policy. Valid values are 0, 1, and 3. Policies with conditional role bindings must specify version 3. If unset, defaults to version 1. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
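A one-liner sketch of building the `resource` string this tool expects, following the path format given in the parameter description above; the helper name is hypothetical.

```python
# Build the IAM resource path for a BigQuery table, per the format
# 'projects/{projectId}/datasets/{datasetId}/tables/{tableId}'.

def table_iam_resource(project_id: str, dataset_id: str, table_id: str) -> str:
    return f"projects/{project_id}/datasets/{dataset_id}/tables/{table_id}"

table_iam_resource("my-project", "sales", "orders")
# → "projects/my-project/datasets/sales/tables/orders"
```
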

### Get BigQuery Table Schema

**Slug:** `GOOGLEBIGQUERY_GET_TABLE_SCHEMA`

Tool to fetch a BigQuery table's schema and metadata without querying row data. Use before generating SQL queries to avoid column name typos and confirm field types and nullable modes. This is especially useful when INFORMATION_SCHEMA access is restricted.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `table_id` | string | Yes | The table ID to retrieve schema and metadata for. |
| `dataset_id` | string | Yes | The dataset ID containing the table. |
| `project_id` | string | Yes | The project ID containing the dataset. |
| `selected_fields` | string | No | Comma-separated list of fields to return (e.g., 'schema,numRows,type'). If not specified, returns all fields. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### Insert Data into BigQuery Table

**Slug:** `GOOGLEBIGQUERY_INSERT_ALL`

Tool to stream data into BigQuery one record at a time without running a load job. Use when you need immediate data availability or when inserting small batches. Supports row-level deduplication via insertId and error handling via skipInvalidRows.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `rows` | array | Yes | Required. Array of rows to insert. Each row contains the data to be inserted as a JSON object. At least one row must be provided. |
| `traceId` | string | No | Optional. Unique request trace ID for debugging. Case-sensitive, up to 36 ASCII characters. A UUID is recommended. |
| `table_id` | string | Yes | Required. Table ID of the destination table. |
| `dataset_id` | string | Yes | Required. Dataset ID of the destination table. |
| `project_id` | string | Yes | Required. Project ID of the destination table. |
| `templateSuffix` | string | No | Optional. If specified, treats the destination table as a base template and inserts rows into an instance table named '{destination}{templateSuffix}'. Useful for table sharding patterns. |
| `skipInvalidRows` | boolean | No | Optional. If true, insert all valid rows even if some rows are invalid. If false (default), the entire request fails if any row is invalid. |
| `ignoreUnknownValues` | boolean | No | Optional. If true, accept rows with values that don't match the schema and ignore the unknown values. If false (default), treat unknown values as errors. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
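A minimal sketch of assembling the request body for this tool, showing the per-row `insertId` used for best-effort deduplication. The field names (`rows`, `insertId`, `json`, `skipInvalidRows`) follow the tabledata.insertAll API; using a UUID for `insertId` is one common choice, and the helper itself is hypothetical.

```python
# Assemble an insertAll-style request body with deduplication IDs.
import uuid

def build_insert_all_body(records: list, skip_invalid: bool = False) -> dict:
    return {
        "rows": [
            # insertId lets BigQuery deduplicate retried rows (best effort).
            {"insertId": str(uuid.uuid4()), "json": rec}
            for rec in records
        ],
        "skipInvalidRows": skip_invalid,
    }

body = build_insert_all_body([{"user": "ada", "score": 10}])
```
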

### Insert BigQuery Job

**Slug:** `GOOGLEBIGQUERY_INSERT_JOB`

Tool to start a new asynchronous BigQuery job (query, load, extract, or copy). Use when you need to run a query as a job, load data from Cloud Storage, extract table data to GCS, or copy tables. For dry-run validation without execution, set dryRun to true in configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | Project ID of project that will be billed for the job. This is used in the URL path. |
| `jobReference` | object | No | Job reference identifying the job. |
| `configuration` | object | Yes | Required. Job configuration. Specify exactly one of: query, load, extract, or copy in the configuration object. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |
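The `configuration` object must contain exactly one of `query`, `load`, `extract`, or `copy`. A small sketch of checking that rule before submitting; the configuration keys come from the jobs.insert API, while the validator itself is hypothetical.

```python
# Validate the "exactly one job type" rule for a jobs.insert configuration.

JOB_TYPES = ("query", "load", "extract", "copy")

def validate_job_configuration(configuration: dict) -> str:
    present = [k for k in JOB_TYPES if k in configuration]
    if len(present) != 1:
        raise ValueError(
            f"configuration must contain exactly one of {JOB_TYPES}, got {present}")
    return present[0]

config = {"query": {"query": "SELECT 1", "useLegacySql": False},
          "dryRun": True}  # dryRun validates the job without running it
validate_job_configuration(config)  # → "query"
```
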

### Insert BigQuery Job with Upload

**Slug:** `GOOGLEBIGQUERY_INSERT_JOB_WITH_UPLOAD`

Tool to start a new BigQuery load job with file upload. Uploads a file (CSV, JSON, etc.) and loads it into a BigQuery table in a single operation. Use when you need to upload data from a local file directly to BigQuery rather than loading from Cloud Storage.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project_id` | string | Yes | Project ID that will be billed for the job. Used in the URL path. |
| `jobReference` | object | No | Optional job reference for the upload job. |
| `configuration` | object | Yes | Job configuration containing load settings (schema, destination table, format, etc.). |
| `file_to_upload` | object | Yes | File to upload to BigQuery. The file content will be loaded according to the sourceFormat specified in configuration. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Analytics Hub Listings

**Slug:** `GOOGLEBIGQUERY_LIST_ANALYTICS_HUB_LISTINGS`

Tool to list all listings in a given Analytics Hub data exchange. Use when you need to discover available data listings within a specific data exchange that can be subscribed to for data sharing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. |
| `parent` | string | Yes | Required. The parent resource path of the listing. Format: 'projects/{project_id}/locations/{location}/dataExchanges/{data_exchange_id}' where project_id is your GCP project ID, location is the region (e.g., 'US', 'us-central1'), and data_exchange_id is the data exchange identifier. |
| `callback` | string | No | JSONP callback parameter. |
| `pageSize` | integer | No | The maximum number of results to return in a single response page. Use page tokens to iterate through the entire collection. |
| `pageToken` | string | No | Page token, returned by a previous call, to request the next page of results. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. |
| `uploadType` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. |
| `prettyPrint` | boolean | No | Returns response with indentations and line breaks. |
| `access_token` | string | No | OAuth access token. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List BigQuery Connections

**Slug:** `GOOGLEBIGQUERY_LIST_BIG_QUERY_CONNECTIONS`

Tool to list BigQuery connections in a given project and location. Use when you need to discover available external data source connections (Cloud SQL, AWS, Azure, Spark, etc.) configured for BigQuery.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. Parent resource name. Must be in the form: `projects/{project_id}/locations/{location_id}`. Example: 'projects/my-project/locations/us-central1' |
| `pageSize` | integer | No | Maximum number of connections to return per page. |
| `pageToken` | string | No | Page token for pagination. Use the nextPageToken from a previous response to get the next page of results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List BigQuery Capacity Commitments

**Slug:** `GOOGLEBIGQUERY_LIST_CAPACITY_COMMITMENTS`

Tool to list all capacity commitments for the admin project. Use when you need to view purchased compute capacity slots and their commitment details (plan, state, duration).

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. Resource name of the parent reservation. E.g., `projects/myproject/locations/US` |
| `page_size` | integer | No | The maximum number of items to return. |
| `page_token` | string | No | The next_page_token value returned from a previous List request, if any. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List Data Exchange Listings

**Slug:** `GOOGLEBIGQUERY_LIST_DATAEXCHANGES_LISTINGS`

Tool to list all listings in a given Analytics Hub data exchange using the v1beta1 API. Use when you need to discover available data listings within a specific data exchange that can be subscribed to for data sharing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. |
| `parent` | string | Yes | Required. The parent resource path of the listing. Format: 'projects/{project_id}/locations/{location}/dataExchanges/{data_exchange_id}' where project_id is your GCP project ID, location is the region (e.g., 'US', 'us-central1'), and data_exchange_id is the data exchange identifier. |
| `callback` | string | No | JSONP callback parameter. |
| `pageSize` | integer | No | The maximum number of results to return in a single response page. Use page tokens to iterate through the entire collection. |
| `pageToken` | string | No | Page token, returned by a previous call, to request the next page of results. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. |
| `uploadType` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. |
| `prettyPrint` | boolean | No | Returns response with indentations and line breaks. |
| `access_token` | string | No | OAuth access token. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List BigQuery Datasets

**Slug:** `GOOGLEBIGQUERY_LIST_DATASETS`

Tool to list datasets in a specific BigQuery project, including dataset locations. Use after identifying an accessible project to discover available datasets and their locations before querying. The dataset location is critical for avoiding location-related query/job errors.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `all` | boolean | No | Whether to list all datasets, including hidden ones. Defaults to false. |
| `filter` | string | No | Filter datasets by label in the format 'labels.key:value'. Multiple filters can be ANDed together by connecting with a space. |
| `page_token` | string | No | Page token for pagination. Use the nextPageToken from a previous response to get the next page of results. |
| `project_id` | string | Yes | The project ID containing the datasets. |
| `max_results` | integer | No | Maximum number of datasets to return per page. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful or not |

### List BigQuery Jobs

**Slug:** `GOOGLEBIGQUERY_LIST_JOBS`

Tool to list all jobs that you started in a BigQuery project. Job information is available for a six-month period after creation. Jobs are sorted in reverse chronological order by creation time. Use to monitor query execution, track job statuses, and retrieve job history.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `all_users` | boolean | No | Whether to display jobs owned by all users in the project. Default False. Requires the project Owner role to set this to true. |
| `page_token` | string | No | Page token, returned by a previous call, to request the next page of results. |
| `project_id` | string | Yes | Project ID of the jobs to list. |
| `projection` | string ("full" \| "minimal") | No | Restrict information returned to a set of selected fields. |
| `max_results` | integer | No | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. |
| `state_filter` | array | No | Filter for job state. Can include values like 'done', 'pending', 'running'. Multiple states can be specified. |
| `parent_job_id` | string | No | If set, show only child jobs of the specified parent. Otherwise, show all top-level jobs. |
| `max_creation_time` | string | No | Max value for job creation time, in milliseconds since the POSIX epoch. If set, only jobs created before or at this timestamp are returned. |
| `min_creation_time` | string | No | Min value for job creation time, in milliseconds since the POSIX epoch. If set, only jobs created after or at this timestamp are returned. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
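
`min_creation_time` and `max_creation_time` are milliseconds since the POSIX epoch, passed as strings. A small helper to build a job-listing window from timezone-aware datetimes (the field names here come straight from the input table above; the helper itself is illustrative):

```python
# Build epoch-millisecond strings for LIST_JOBS creation-time filters.
from datetime import datetime, timezone

def to_epoch_ms(dt: datetime) -> str:
    """Render an aware datetime as a millisecond-since-epoch string."""
    return str(int(dt.timestamp() * 1000))

window = {
    "project_id": "my-project",
    "state_filter": ["done"],
    "min_creation_time": to_epoch_ms(datetime(2024, 1, 1, tzinfo=timezone.utc)),
    "max_creation_time": to_epoch_ms(datetime(2024, 1, 2, tzinfo=timezone.utc)),
}
```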

### List BigQuery Data Transfer Locations

**Slug:** `GOOGLEBIGQUERY_LIST_LOCATIONS`

Tool to list information about supported locations for BigQuery Data Transfer Service. Use when you need to discover available regions/locations where BigQuery Data Transfer operations can be performed.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The resource that owns the locations collection, if applicable. Format: 'projects/{project_id}' where project_id is your GCP project ID. |
| `filter` | string | No | A filter to narrow down results to a preferred subset. The filtering language accepts strings like 'displayName=tokyo', and is documented in more detail in AIP-160. |
| `pageSize` | integer | No | The maximum number of results to return. If not set, the service selects a default. |
| `pageToken` | string | No | A page token received from the next_page_token field in the response. Send that page token to receive the subsequent page. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List Connections in Location

**Slug:** `GOOGLEBIGQUERY_LIST_LOCATIONS_CONNECTIONS`

Tool to list BigQuery connections in a given project and location using the v1beta1 API. Use when you need to discover available external data source connections (Cloud SQL, AWS, Azure, Spark, etc.) configured for BigQuery in a specific location.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. Parent resource name. Must be in the form: `projects/{project_id}/locations/{location_id}`. Example: 'projects/my-project/locations/us' |
| `pageToken` | string | No | Page token for pagination. Use the nextPageToken from a previous response to get the next page of results. |
| `maxResults` | integer | Yes | Required. Maximum number of connections to return per page. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
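
The `parent` path is the piece most likely to be malformed. A hypothetical helper that assembles and sanity-checks it before calling the tool (the validation pattern is an assumption, not an API rule):

```python
# Assemble the 'projects/{project_id}/locations/{location_id}' parent path.
import re

def connections_parent(project_id: str, location_id: str) -> str:
    """Build the parent resource name, rejecting path-breaking characters."""
    for part in (project_id, location_id):
        if not re.fullmatch(r"[A-Za-z0-9_.\-]+", part):
            raise ValueError(f"invalid resource component: {part!r}")
    return f"projects/{project_id}/locations/{location_id}"

parent = connections_parent("my-project", "us")
```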

### List BigQuery Location Data Policies

**Slug:** `GOOGLEBIGQUERY_LIST_LOCATIONS_DATAPOLICIES`

Tool to list all data policies in a specified parent project and location using the v2beta1 API. Use when you need to discover data masking policies and column-level security policies configured for BigQuery datasets.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `filter` | string | No | Filters the data policies by the policy tags they are associated with. Currently the filter only supports 'policy_tag'-based filtering and OR-based predicates. A sample filter is 'policy_tag: projects/1/locations/us/taxonomies/2/policyTags/3'. You may also use a wildcard, such as 'policy_tag: projects/1/locations/us/taxonomies/2*'. Note that OR predicates cannot be used with wildcard filters. |
| `parent` | string | Yes | Required. Resource name of the project for which to list data policies. Format is projects/{project}/locations/{location}. |
| `pageSize` | integer | No | The maximum number of data policies to return. Must be a value between 1 and 1000. If not set, defaults to 50. |
| `pageToken` | string | No | The nextPageToken value returned from a previous list request, if any. If not set, defaults to an empty string. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
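
The `filter` grammar allows multiple `policy_tag` predicates joined by OR, but OR cannot be combined with wildcard (`*`) filters. A hypothetical builder that enforces that documented constraint before the request is sent:

```python
# Build a policy_tag filter for LIST_LOCATIONS_DATAPOLICIES.

def policy_tag_filter(tags):
    """Join policy_tag predicates with OR; reject wildcards in OR lists."""
    if len(tags) > 1 and any(t.endswith("*") for t in tags):
        raise ValueError("OR predicates cannot be used with wildcard filters")
    return " OR ".join(f"policy_tag: {t}" for t in tags)

single = policy_tag_filter(["projects/1/locations/us/taxonomies/2*"])
pair = policy_tag_filter([
    "projects/1/locations/us/taxonomies/2/policyTags/3",
    "projects/1/locations/us/taxonomies/2/policyTags/4",
])
```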

### List BigQuery Models

**Slug:** `GOOGLEBIGQUERY_LIST_MODELS`

Tool to list all BigQuery ML models in a specified dataset. Requires the READER dataset role. Use this to discover available models before retrieving detailed information via the models.get method.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | Required. Dataset ID of the models to list. |
| `page_token` | string | No | Page token for pagination, returned by a previous call to request the next page of results. |
| `project_id` | string | Yes | Required. Project ID of the models to list. |
| `max_results` | integer | No | Maximum number of models to return per page. Use page tokens to iterate through the entire collection. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List Organization Data Exchanges

**Slug:** `GOOGLEBIGQUERY_LIST_ORGANIZATION_DATA_EXCHANGES`

Tool to list all data exchanges from projects in a given organization and location using Analytics Hub API. Use when you need to discover available data exchanges within an organization that can be used for data sharing.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. |
| `callback` | string | No | JSONP callback parameter. |
| `pageSize` | integer | No | The maximum number of results to return in a single response page. Use page tokens to iterate through the entire collection. |
| `pageToken` | string | No | Page token, returned by a previous call, to request the next page of results. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. |
| `uploadType` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. |
| `prettyPrint` | boolean | No | Returns response with indentations and line breaks. |
| `access_token` | string | No | OAuth access token. |
| `organization` | string | Yes | Required. The organization resource path of the projects containing DataExchanges. Format: 'organizations/{organization_id}/locations/{location}' where organization_id is your GCP organization ID and location is the region (e.g., 'US', 'us-central1'). |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List BigQuery Projects

**Slug:** `GOOGLEBIGQUERY_LIST_PROJECTS`

Tool to list BigQuery projects to which the user has been granted any project role. Returns projects with at least READ access. For enhanced capabilities, consider using the Resource Manager API.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `page_token` | string | No | Page token returned by a previous call, to request the next page of results. If not present, no further pages are available. |
| `max_results` | integer | No | Maximum number of projects to return per page. If not set, the service returns up to 50 results per page. The number of projects returned may be fewer than maxResults because projects are filtered to only those with the BigQuery API enabled. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List Analytics Hub Query Templates

**Slug:** `GOOGLEBIGQUERY_LIST_QUERY_TEMPLATES`

Tool to list all query templates in a given Analytics Hub data exchange. Use when you need to discover available query templates that define predefined and approved queries for data clean room use cases.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent resource path of the QueryTemplates. Format: projects/{project}/locations/{location}/dataExchanges/{dataExchange}. |
| `pageSize` | integer | No | The maximum number of results to return in a single response page. Use page tokens to iterate through the entire collection. |
| `pageToken` | string | No | Page token, returned by a previous call, to request the next page of results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List BigQuery Reservation Assignments

**Slug:** `GOOGLEBIGQUERY_LIST_RESERVATION_ASSIGNMENTS`

Tool to list BigQuery reservation assignments. Only explicitly created assignments will be returned (no expansion or merge happens). Use wildcard "-" in parent path to list assignments across all reservations in a location.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent resource name of the reservation. Format: 'projects/{project_id}/locations/{location}/reservations/{reservation_id}' or 'projects/{project_id}/locations/{location}/reservations/-' (wildcard to list assignments across all reservations). |
| `page_size` | integer | No | Maximum number of assignments to return per page. |
| `page_token` | string | No | Page token from a previous ListAssignments response to retrieve the next page of results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
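
The `parent` path accepts either a concrete reservation ID or `-` as a wildcard to list assignments across all reservations in a location. A small illustrative helper covering both forms:

```python
# Build the LIST_RESERVATION_ASSIGNMENTS parent path.

def assignments_parent(project_id, location, reservation_id=None):
    """Build the parent path; None means list across all reservations ('-')."""
    res = reservation_id if reservation_id is not None else "-"
    return f"projects/{project_id}/locations/{location}/reservations/{res}"

all_parent = assignments_parent("my-project", "US")
one_parent = assignments_parent("my-project", "US", "prod-reservation")
```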

### List BigQuery Reservation Groups

**Slug:** `GOOGLEBIGQUERY_LIST_RESERVATION_GROUPS`

Tool to list all BigQuery reservation groups for a project in a specified location. Use when you need to discover available reservation groups which serve as containers for reservations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `parent` | string | Yes | Required. The parent resource name containing project and location in the format `projects/{projectId}/locations/{location}` where location is the geographic location (e.g., us-central1, us-east1). |
| `page_size` | integer | No | The maximum number of items to return per page. If not specified, the server will determine the number of results to return. |
| `page_token` | string | No | The next_page_token value returned from a previous List request, if any. Use this to retrieve the next page of results. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List BigQuery Reservations

**Slug:** `GOOGLEBIGQUERY_LIST_RESERVATIONS`

Tool to list all BigQuery reservations for a project in a specified location. Use when you need to discover available reservations or view reservation details including slot capacity and autoscale configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. |
| `parent` | string | Yes | Required. The parent resource name containing project and location, e.g.: `projects/myproject/locations/US`. The location must be specified (e.g., US, EU, us-central1). |
| `callback` | string | No | JSONP callback parameter. |
| `page_size` | integer | No | The maximum number of items to return per page. If not specified, the server will determine the number of results to return. |
| `page_token` | string | No | The next_page_token value returned from a previous List request, if any. Use this to retrieve the next page of results. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. |
| `upload_type` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). |
| `access_token` | string | No | OAuth access token. |
| `pretty_print` | boolean | No | Returns response with indentations and line breaks. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List BigQuery Routines

**Slug:** `GOOGLEBIGQUERY_LIST_ROUTINES`

Tool to list all routines (user-defined functions and stored procedures) in a BigQuery dataset. Requires the READER dataset role. Use this to discover available routines before executing or inspecting them.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `filter` | string | No | If set, then only the Routines matching this filter are returned. The supported format is `routineType:{RoutineType}`, where `{RoutineType}` is a RoutineType enum. For example: `routineType:SCALAR_FUNCTION`. |
| `readMask` | string | No | If set, then only the Routine fields in the field mask, as well as project_id, dataset_id and routine_id, are returned in the response. If unset, then the following Routine fields are returned: etag, project_id, dataset_id, routine_id, routine_type, creation_time, last_modified_time, and language. |
| `pageToken` | string | No | Page token, returned by a previous call, to request the next page of results. |
| `dataset_id` | string | Yes | Required. Dataset ID of the routines to list. |
| `maxResults` | integer | No | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. |
| `project_id` | string | Yes | Required. Project ID of the routines to list. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List BigQuery Row Access Policies

**Slug:** `GOOGLEBIGQUERY_LIST_ROW_ACCESS_POLICIES`

Tool to list all row access policies on a specified BigQuery table. Use when you need to discover which row-level security policies are applied to a table and their filter predicates.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `table_id` | string | Yes | Required. Table ID of the table to list row access policies. |
| `page_size` | integer | No | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. |
| `dataset_id` | string | Yes | Required. Dataset ID of row access policies to list. |
| `page_token` | string | No | Page token, returned by a previous call, to request the next page of results. |
| `project_id` | string | Yes | Required. Project ID of the row access policies to list. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### List BigQuery Table Data

**Slug:** `GOOGLEBIGQUERY_LIST_TABLE_DATA`

Tool to list the content of a BigQuery table in rows via the REST API. Use this to retrieve actual data from a table without writing SQL queries. Returns paginated results with row data in the native BigQuery format.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `table_id` | string | Yes | Required. Table ID of the table to list. |
| `dataset_id` | string | Yes | Required. Dataset ID of the table to list. |
| `page_token` | string | No | To retrieve the next page of table data, set this field to the string provided in the pageToken field of the response body from your previous call to tabledata.list. |
| `project_id` | string | Yes | Required. Project ID of the table to list. |
| `max_results` | integer | No | Row limit of the table. Maximum number of rows to return per page. |
| `start_index` | string | No | Start row index of the table. |
| `selected_fields` | string | No | Subset of fields to return; supports selecting nested sub-fields. Example: selected_fields = 'a,e.d.f'. |
| `format_options_use_int64_timestamp` | boolean | No | Optional. Output timestamp as usec int64. Default is false. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
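
Besides token-based paging, table data can be addressed positionally: `start_index` is a string row offset combined with `max_results`. A sketch of the arithmetic that covers a table of known row count, page by page:

```python
# Compute (start_index, max_results) parameter sets for LIST_TABLE_DATA
# when scanning a table of a known total row count positionally.

def tabledata_pages(total_rows: int, page_size: int):
    """Yield parameter dicts covering total_rows in page_size chunks."""
    for start in range(0, total_rows, page_size):
        yield {"start_index": str(start),
               "max_results": min(page_size, total_rows - start)}

pages = list(tabledata_pages(250, 100))
```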

### List BigQuery Tables

**Slug:** `GOOGLEBIGQUERY_LIST_TABLES`

Tool to list tables in a BigQuery dataset via the REST API. Use this early in exploration to discover accessible tables without relying on INFORMATION_SCHEMA, especially when SQL-based metadata queries are blocked or restricted. This provides a deterministic inventory of tables even when dataset-level permissions prevent INFORMATION_SCHEMA access.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | The dataset ID to list tables from. |
| `page_token` | string | No | Page token for pagination. Use the nextPageToken from a previous response to get the next page of results. |
| `project_id` | string | Yes | The project ID containing the dataset. |
| `max_results` | integer | No | Maximum number of tables to return per page. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### Patch BigQuery Dataset

**Slug:** `GOOGLEBIGQUERY_PATCH_DATASET`

Tool to update an existing BigQuery dataset using RFC5789 PATCH semantics. Only replaces fields provided in the request, leaving other fields unchanged. Use when you need to modify dataset properties like description, labels, expiration settings, or access controls without affecting other configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `access` | array | No | Optional. An array of objects that define dataset access for one or more entities. |
| `labels` | object | No | The labels associated with this dataset. You can use these to organize and group your datasets. |
| `location` | string | No | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
| `dataset_id` | string | Yes | Required. Dataset ID of the dataset being updated. |
| `project_id` | string | Yes | Required. Project ID of the dataset being updated. |
| `description` | string | No | Optional. A user-friendly description of the dataset. |
| `friendlyName` | string | No | Optional. A descriptive name for the dataset. |
| `datasetReference` | object | No | Dataset reference for patching. |
| `defaultCollation` | string | No | Optional. Defines the default collation specification of future tables created in the dataset. Supported values: 'und:ci' (case insensitive) or '' (case sensitive). |
| `isCaseInsensitive` | boolean | No | Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. |
| `maxTimeTravelHours` | string | No | Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). |
| `defaultRoundingMode` | string ("ROUNDING_MODE_UNSPECIFIED" \| "ROUND_HALF_AWAY_FROM_ZERO" \| "ROUND_HALF_EVEN") | No | Optional. Defines the default rounding mode specification of new tables created within this dataset. |
| `linkedDatasetSource` | object | No | Linked dataset source configuration. |
| `storageBillingModel` | string ("STORAGE_BILLING_MODEL_UNSPECIFIED" \| "LOGICAL" \| "PHYSICAL") | No | Optional. Updates storage_billing_model for the dataset. |
| `defaultTableExpirationMs` | string | No | Optional. The default lifetime of all tables in the dataset, in milliseconds. Minimum value is 3600000 (one hour). Set to 0 to clear. |
| `externalDatasetReference` | object | No | External dataset reference configuration. |
| `defaultPartitionExpirationMs` | string | No | Default partition expiration in milliseconds. When new time-partitioned tables are created, they will inherit this value. |
| `defaultEncryptionConfiguration` | object | No | Encryption configuration for the dataset. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
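
Because PATCH only replaces the fields present in the request, the request body should carry just the changed fields. A hypothetical helper that drops unset values before invoking `GOOGLEBIGQUERY_PATCH_DATASET` (the helper name and None-means-omit convention are assumptions for illustration):

```python
# Build a PATCH_DATASET parameter set containing only changed fields.

def patch_dataset_params(project_id, dataset_id, **changes):
    """Keep the required IDs plus only the explicitly provided fields."""
    params = {"project_id": project_id, "dataset_id": dataset_id}
    params.update({k: v for k, v in changes.items() if v is not None})
    return params

body = patch_dataset_params(
    "my-project", "sales",
    description="Curated sales data",
    defaultTableExpirationMs=None,  # omitted, so it stays unchanged
    labels={"env": "prod"},
)
```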

### Patch BigQuery ML Model

**Slug:** `GOOGLEBIGQUERY_PATCH_MODEL`

Tool to update specific fields in an existing BigQuery ML model using PATCH semantics. Use when you need to modify model metadata like description, friendly name, labels, or expiration time without replacing the entire model resource.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `labels` | object | No | The labels associated with this model. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. |
| `model_id` | string | Yes | Required. Model ID of the model to patch. |
| `dataset_id` | string | Yes | Required. Dataset ID of the model to patch. |
| `project_id` | string | Yes | Required. Project ID of the model to patch. |
| `description` | string | No | Optional. A user-friendly description of this model. |
| `friendlyName` | string | No | Optional. A descriptive name for this model. |
| `expirationTime` | string | No | Optional. The time when this model expires, in milliseconds since the epoch. If not present, the model will persist indefinitely. |
| `encryptionConfiguration` | object | No | Encryption configuration for the model. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### Patch BigQuery Table

**Slug:** `GOOGLEBIGQUERY_PATCH_TABLE`

Tool to update specific fields in an existing BigQuery table using RFC5789 PATCH semantics. Only the fields provided in the request are updated; unspecified fields remain unchanged. Use when you need to modify table metadata like description, friendly name, labels, or expiration time without replacing the entire table resource.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `labels` | object | No | Labels to organize and categorize the table. Keys and values must be lowercase, max 63 characters. |
| `table_id` | string | Yes | The table ID to update. |
| `dataset_id` | string | Yes | The dataset ID containing the table to update. |
| `project_id` | string | Yes | The project ID containing the table to update. |
| `description` | string | No | User-friendly description of the table. Use this to document the table's purpose and contents. |
| `friendlyName` | string | No | A descriptive name for the table that appears in the BigQuery UI. |
| `expirationTime` | string | No | The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. |
| `autodetect_schema` | boolean | No | When true, the schema is autodetected; otherwise the original schema is kept. |
| `requirePartitionFilter` | boolean | No | If true, queries over this table require a partition filter that can be used for partition elimination. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |

### Query

**Slug:** `GOOGLEBIGQUERY_QUERY`

Tool to run a SQL query in BigQuery using the REST API. Use proper BigQuery SQL syntax, e.g., SELECT * FROM `project.dataset.table` WHERE column_name = 'value'. Results are returned under data.rows; an empty rows array means no matching data. Large result sets may be returned via remote_file_info instead of inline rows. Verify the exact project_id, dataset, table, and column names before running; wrong identifiers trigger invalidQuery or notFound errors.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `query` | string | Yes | Query to run on BigQuery. Use standard SQL syntax. |
| `location` | string | No | Geographic location where the query should run (e.g., 'US', 'EU', 'us-central1'). Defaults to 'US' multi-region. Must match the dataset's actual geographic location; mismatched regions cause query failure. |
| `project_id` | string | Yes | The project ID to run the query against. |
| `timeout_ms` | integer | No | Query timeout in milliseconds. Defaults to 10000 (10 seconds). |
| `max_results` | integer | No | Maximum number of rows to return. If not specified, returns all rows. |
| `use_legacy_sql` | boolean | No | Whether to use legacy SQL syntax. Defaults to False (standard SQL). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful. |
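
Since wrong identifiers produce invalidQuery or notFound errors, it can help to validate the backticked `project.dataset.table` reference before splicing it into SQL. A hedged sketch of such a formatter (the character class is a simplification; literal values should still be escaped or parameterized by the caller):

```python
# Format and sanity-check a fully qualified BigQuery table reference.
import re

_IDENT = re.compile(r"[A-Za-z0-9_\-]+")  # simplified; real ID rules vary by part

def table_ref(project_id, dataset_id, table_id):
    """Return a backticked `project.dataset.table` reference."""
    for part in (project_id, dataset_id, table_id):
        if not _IDENT.fullmatch(part):
            raise ValueError(f"invalid identifier: {part!r}")
    return f"`{project_id}.{dataset_id}.{table_id}`"

query = f"SELECT name FROM {table_ref('my-project', 'sales', 'orders')} LIMIT 10"
```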

### Search All BigQuery Reservation Assignments

**Slug:** `GOOGLEBIGQUERY_SEARCH_ALL_ASSIGNMENTS`

Tool to search all BigQuery reservation assignments for a specified resource in a particular region. Use when you need to find assignments for a project, folder, or organization. Returns assignments created on the resource or its closest ancestor, covering all JobTypes.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `alt` | string ("json" \| "media" \| "proto") | No | Data format for response. |
| `key` | string | No | API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. |
| `query` | string | No | Specify the resource name as the assignee in the query. Examples: `assignee=projects/myproject`, `assignee=folders/123`, `assignee=organizations/456`. |
| `xgafv` | string ("1" \| "2") | No | V1 error format. |
| `fields` | string | No | Selector specifying which fields to include in a partial response. |
| `parent` | string | Yes | Required. The resource name with location (project name could be the wildcard '-'), e.g.: `projects/-/locations/US`. |
| `callback` | string | No | JSONP callback parameter. |
| `page_size` | integer | No | The maximum number of items to return per page. |
| `page_token` | string | No | The next_page_token value returned from a previous List request, if any. |
| `quota_user` | string | No | Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. |
| `oauth_token` | string | No | OAuth 2.0 token for the current user. |
| `upload_type` | string | No | Legacy upload protocol for media (e.g. 'media', 'multipart'). |
| `access_token` | string | No | OAuth access token. |
| `pretty_print` | boolean | No | Returns response with indentations and line breaks. |
| `upload_protocol` | string | No | Upload protocol for media (e.g. 'raw', 'multipart'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
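
As a rough illustration, the inputs above can be assembled as follows. This is a minimal sketch: `build_search_all_assignments_request` is a hypothetical helper, and only the parameters documented in the table are used.

```python
def build_search_all_assignments_request(parent, query=None,
                                         page_size=None, page_token=None):
    """Assemble inputs for GOOGLEBIGQUERY_SEARCH_ALL_ASSIGNMENTS.

    `parent` must include the location, and the project may be the
    wildcard '-', e.g. 'projects/-/locations/US'.
    """
    request = {"parent": parent}
    if query is not None:
        request["query"] = query          # e.g. "assignee=projects/myproject"
    if page_size is not None:
        request["page_size"] = page_size  # max items per page
    if page_token is not None:
        request["page_token"] = page_token  # from a previous response
    return request


request = build_search_all_assignments_request(
    "projects/-/locations/US",
    query="assignee=projects/myproject",
    page_size=50,
)
```

Passing the wildcard `-` as the project lets one call cover assignments across projects in a single location, which is usually what you want when auditing reservations.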

### Set BigQuery Routine IAM Policy

**Slug:** `GOOGLEBIGQUERY_SET_ROUTINE_IAM_POLICY`

Tool to set the IAM access control policy for a BigQuery routine resource. Use this to grant or modify access permissions for users, service accounts, or groups. Include the etag from getIamPolicy to prevent concurrent modifications.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `policy` | object | Yes | The IAM policy to set, containing bindings (array of role-members mappings), optional etag and version. |
| `dataset_id` | string | Yes | Required. Dataset ID of the routine. |
| `project_id` | string | Yes | Required. Project ID of the routine. |
| `routine_id` | string | Yes | Required. Routine ID. |
| `update_mask` | string | No | Field mask for selective policy updates. Specify fields to update (e.g., 'bindings,etag'). |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
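
A hedged sketch of assembling this tool's inputs. The helper name, project/dataset IDs, member, and etag value are placeholders; in practice the etag comes from a prior getIamPolicy call.

```python
def build_set_routine_iam_policy_input(project_id, dataset_id, routine_id,
                                       policy, update_mask=None):
    """Assemble inputs for GOOGLEBIGQUERY_SET_ROUTINE_IAM_POLICY."""
    payload = {
        "project_id": project_id,
        "dataset_id": dataset_id,
        "routine_id": routine_id,
        "policy": policy,
    }
    if update_mask is not None:
        payload["update_mask"] = update_mask  # e.g. "bindings,etag"
    return payload


payload = build_set_routine_iam_policy_input(
    "my-project", "analytics", "score_fn",
    policy={
        "bindings": [{
            "role": "roles/bigquery.dataViewer",
            "members": ["user:analyst@example.com"],  # placeholder member
        }],
        "etag": "ETAG_FROM_GET_IAM_POLICY",  # pass through the real etag
    },
    update_mask="bindings,etag",
)
```

Including the etag makes the write conditional: if someone else changed the policy since you read it, the call fails instead of silently overwriting their change.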

### Test BigQuery Routine IAM Permissions

**Slug:** `GOOGLEBIGQUERY_TEST_ROUTINE_IAM_PERMISSIONS`

Tool to test which IAM permissions the caller has on a BigQuery routine. Returns the subset of requested permissions that the caller actually has. Use to verify access before performing operations.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | Required. Dataset ID of the routine to test IAM permissions for. |
| `project_id` | string | Yes | Required. Project ID of the routine to test IAM permissions for. |
| `routine_id` | string | Yes | Required. Routine ID to test IAM permissions for. |
| `permissions` | array | Yes | Required. The set of permissions to check for the resource. Array of permission strings like 'bigquery.routines.get', 'bigquery.routines.update', 'bigquery.routines.delete'. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
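
A minimal sketch of building the input, with a hypothetical helper and placeholder IDs:

```python
def build_test_routine_permissions_input(project_id, dataset_id,
                                         routine_id, permissions):
    """Assemble inputs for GOOGLEBIGQUERY_TEST_ROUTINE_IAM_PERMISSIONS."""
    if not permissions:
        raise ValueError("at least one permission string is required")
    return {
        "project_id": project_id,
        "dataset_id": dataset_id,
        "routine_id": routine_id,
        "permissions": list(permissions),
    }


check = build_test_routine_permissions_input(
    "my-project", "analytics", "score_fn",
    ["bigquery.routines.get", "bigquery.routines.update"],
)
```

Since the response contains only the subset of requested permissions the caller actually holds, diffing it against the requested list reveals exactly which permissions are missing before you attempt an operation.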

### Undelete BigQuery Dataset

**Slug:** `GOOGLEBIGQUERY_UNDELETE_DATASET`

Tool to undelete a BigQuery dataset within the time travel window. If a deletion time is specified, the dataset version deleted at that time is undeleted; otherwise, the most recently deleted version is restored.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `dataset_id` | string | Yes | Required. Dataset ID of the dataset being undeleted. |
| `project_id` | string | Yes | Required. Project ID of the dataset to be undeleted. |
| `deletion_time` | string | No | Optional. The exact time when the dataset was deleted (RFC3339 format). If not specified, it will undelete the most recently deleted version. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
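
The two modes described above (restore the latest version vs. a specific deleted version) can be sketched like this; the helper name and IDs are hypothetical:

```python
def build_undelete_dataset_input(project_id, dataset_id, deletion_time=None):
    """Assemble inputs for GOOGLEBIGQUERY_UNDELETE_DATASET.

    Omitting `deletion_time` restores the most recently deleted version.
    """
    payload = {"project_id": project_id, "dataset_id": dataset_id}
    if deletion_time is not None:
        payload["deletion_time"] = deletion_time  # RFC3339 timestamp
    return payload


# Restore the most recently deleted version.
latest = build_undelete_dataset_input("my-project", "analytics")

# Restore the version deleted at a specific moment (placeholder timestamp).
pinned = build_undelete_dataset_input(
    "my-project", "analytics", deletion_time="2024-05-01T12:00:00Z"
)
```

Note that undelete only works within the dataset's time travel window, so a pinned `deletion_time` older than that window cannot be restored.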

### Update BigQuery Connection

**Slug:** `GOOGLEBIGQUERY_UPDATE_CONNECTION`

Tool to update a specified BigQuery connection using the BigQuery Connection API. Use when you need to modify connection properties such as friendly name, description, or connection-specific settings. For security reasons, credentials are automatically reset if connection properties are included in the update mask.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `aws` | object | No | Connection properties specific to Amazon Web Services. |
| `name` | string | Yes | Required. Name of the connection to update in the format 'projects/{project_id}/locations/{location_id}/connections/{connection_id}'. |
| `azure` | object | No | Connection properties specific to Azure. |
| `spark` | object | No | Connection properties for Apache Spark. |
| `cloudSql` | object | No | Connection properties specific to Cloud SQL. |
| `kmsKeyName` | string | No | Optional. The Cloud KMS key that is used for encryption. Example: 'projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]'. |
| `updateMask` | string | Yes | Required. Update mask for the connection fields to be updated. Comma-separated list of field paths (e.g., 'friendlyName,description'). For security reasons, credentials are reset if connection properties are in the mask. |
| `description` | string | No | User provided description. |
| `cloudSpanner` | object | No | Connection properties specific to Cloud Spanner. |
| `friendlyName` | string | No | User provided display name for the connection. |
| `cloudResource` | object | No | Connection properties for delegation of access to GCP resources. |
| `salesforceDataCloud` | object | No | Connection properties specific to Salesforce DataCloud. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
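
A sketch of assembling this tool's inputs, using only the parameters documented above. The helper name and connection name are placeholders; the validation set mirrors the table's optional fields.

```python
def build_update_connection_input(name, update_mask, **fields):
    """Assemble inputs for GOOGLEBIGQUERY_UPDATE_CONNECTION.

    Only fields named in `updateMask` are changed; per the tool notes,
    credentials are reset if connection properties appear in the mask.
    """
    allowed = {"friendlyName", "description", "kmsKeyName", "aws", "azure",
               "spark", "cloudSql", "cloudSpanner", "cloudResource",
               "salesforceDataCloud"}
    unknown = set(fields) - allowed
    if unknown:
        raise ValueError(f"unsupported fields: {sorted(unknown)}")
    return {"name": name, "updateMask": update_mask, **fields}


payload = build_update_connection_input(
    "projects/my-project/locations/us/connections/my-conn",
    update_mask="friendlyName,description",
    friendlyName="Orders DB",
    description="Cloud SQL connection for the orders database",
)
```

Keeping the mask limited to `friendlyName,description`, as here, updates metadata without touching connection properties, which avoids the credential reset.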

### Update BigQuery Dataset

**Slug:** `GOOGLEBIGQUERY_UPDATE_DATASET`

Tool to update information in an existing BigQuery dataset using the PUT method. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource. Use when you need to modify dataset properties like description, access controls, or default settings.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `access` | array | No | Optional. An array of objects that define dataset access for one or more entities. If unspecified, BigQuery adds default dataset access for projectReaders, projectWriters, projectOwners, and the dataset creator. |
| `labels` | object | No | Optional. Labels to organize and categorize the dataset. Labels are key-value pairs. Keys and values must be lowercase, max 63 characters. |
| `location` | string | Yes | Required. Geographic location where the dataset resides. This cannot be changed after dataset creation. Examples: 'US', 'EU', 'us-central1', 'europe-west1'. |
| `dataset_id` | string | Yes | Required. Dataset ID of the dataset being updated. This is used in the URL path. |
| `project_id` | string | Yes | Required. Project ID of the dataset being updated. This is used in the URL path. |
| `description` | string | No | Optional. A user-friendly description of the dataset. Use this to document the dataset's purpose and contents. |
| `friendlyName` | string | No | Optional. A descriptive name for the dataset. This is a user-friendly label that appears in the BigQuery UI. |
| `datasetReference` | object | Yes | Required. Dataset reference containing the dataset ID and project ID. This must match the path parameters. |
| `defaultCollation` | string | No | Optional. Defines the default collation specification of future tables created in the dataset. Supported values: 'und:ci' (undetermined locale, case insensitive) or '' (empty string for case-sensitive). |
| `isCaseInsensitive` | boolean | No | Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. By default, this is FALSE. |
| `maxTimeTravelHours` | integer | No | Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). The default value is 168 hours if this is not set. |
| `defaultRoundingMode` | string ("ROUNDING_MODE_UNSPECIFIED" \| "ROUND_HALF_AWAY_FROM_ZERO" \| "ROUND_HALF_EVEN") | No | Optional. Defines the default rounding mode specification of new tables created within this dataset. |
| `linkedDatasetSource` | object | No | A dataset source type which refers to another BigQuery dataset. |
| `storageBillingModel` | string ("STORAGE_BILLING_MODEL_UNSPECIFIED" \| "LOGICAL" \| "PHYSICAL") | No | Optional. Updates storage_billing_model for the dataset. LOGICAL uses logical bytes, PHYSICAL uses physical bytes. |
| `defaultTableExpirationMs` | integer | No | Optional. The default lifetime of all tables in the dataset, in milliseconds. The minimum lifetime value is 3600000 milliseconds (one hour). To clear an existing default expiration, set to 0. |
| `externalDatasetReference` | object | No | Configures the access a dataset defined in an external metadata storage. |
| `defaultPartitionExpirationMs` | integer | No | Optional. The default partition expiration, expressed in milliseconds. When new time-partitioned tables are created in this dataset, the table will inherit this value. |
| `defaultEncryptionConfiguration` | object | No | Encryption configuration for the dataset. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
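
Because update replaces the entire dataset resource (unlike patch), the body should carry every field you want to keep. A minimal sketch, with a hypothetical helper and placeholder IDs; `datasetReference` is derived from the path parameters since the two must match:

```python
def build_update_dataset_input(project_id, dataset_id, location, **optional):
    """Assemble inputs for GOOGLEBIGQUERY_UPDATE_DATASET.

    `datasetReference` must match the path parameters, so it is built
    from them here rather than accepted separately.
    """
    payload = {
        "project_id": project_id,
        "dataset_id": dataset_id,
        "location": location,  # cannot change after dataset creation
        "datasetReference": {
            "projectId": project_id,
            "datasetId": dataset_id,
        },
    }
    payload.update(optional)
    return payload


payload = build_update_dataset_input(
    "my-project", "analytics", "US",
    description="Curated analytics tables",
    defaultTableExpirationMs=7 * 24 * 60 * 60 * 1000,  # one week
)
```

Any optional field omitted from a full update reverts to its default, so fetch the current dataset first and merge your changes into it.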

### Update BigQuery Routine

**Slug:** `GOOGLEBIGQUERY_UPDATE_ROUTINE`

Tool to update an existing BigQuery routine (function or stored procedure). This replaces the entire routine resource with the provided definition. Use when modifying routine logic, arguments, return types, or other configuration. Ensure all required fields are provided as this is a full replacement operation.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `language` | string | No | The language of the routine. Options: 'SQL', 'JAVASCRIPT', 'PYTHON', 'JAVA', 'SCALA'. Defaults to 'SQL' if remote_function_options is absent. |
| `arguments` | array | No | Optional. The list of arguments for the routine. |
| `dataset_id` | string | Yes | The dataset ID of the routine to update. Used in the URL path. |
| `project_id` | string | Yes | The project ID of the routine to update. Used in the URL path. |
| `returnType` | object | No | Data type specification for BigQuery routine arguments and return values. |
| `routine_id` | string | Yes | The routine ID of the routine to update. Used in the URL path. |
| `strictMode` | boolean | No | Optional. For procedures, enables additional error checking. Default is TRUE. |
| `description` | string | No | Optional. A description of the routine. |
| `routineType` | string | Yes | REQUIRED. The type of routine. Options: 'SCALAR_FUNCTION', 'PROCEDURE', 'TABLE_VALUED_FUNCTION', 'AGGREGATE_FUNCTION'. |
| `securityMode` | string | No | Optional. The security mode of the routine. Options: 'DEFINER', 'INVOKER'. |
| `sparkOptions` | object | No | Options for a user-defined Spark routine. |
| `definitionBody` | string | Yes | REQUIRED. The body of the routine. For SQL functions, this is the expression (without parentheses). For JavaScript, this is the evaluated string. |
| `returnTableType` | object | No | Table type returned by table-valued functions. |
| `determinismLevel` | string | No | Optional. The determinism level of JavaScript UDFs. Options: 'DETERMINISTIC', 'NOT_DETERMINISTIC'. |
| `routineReference` | object | Yes | REQUIRED. The routine reference in the request body. Must match the path parameters. |
| `importedLibraries` | array | No | Optional. For JavaScript routines, the paths of imported JavaScript libraries. |
| `dataGovernanceType` | string | No | Optional. If set to 'DATA_MASKING', the function is validated as a masking function. |
| `remoteFunctionOptions` | object | No | Options for a remote user-defined function. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
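
A sketch of assembling the required fields for a full routine replacement. The helper, project/dataset IDs, and routine are illustrative; `routineReference` is derived from the path parameters since the two must match:

```python
def build_update_routine_input(project_id, dataset_id, routine_id,
                               routine_type, definition_body, **optional):
    """Assemble inputs for GOOGLEBIGQUERY_UPDATE_ROUTINE.

    Because update is a full replacement, include every field the
    routine should keep, not just the ones being changed.
    """
    payload = {
        "project_id": project_id,
        "dataset_id": dataset_id,
        "routine_id": routine_id,
        "routineReference": {
            "projectId": project_id,
            "datasetId": dataset_id,
            "routineId": routine_id,
        },
        "routineType": routine_type,        # e.g. 'SCALAR_FUNCTION'
        "definitionBody": definition_body,  # SQL expression, no parentheses
    }
    payload.update(optional)
    return payload


payload = build_update_routine_input(
    "my-project", "analytics", "add_one",
    routine_type="SCALAR_FUNCTION",
    definition_body="x + 1",
    language="SQL",
    arguments=[{"name": "x", "dataType": {"typeKind": "INT64"}}],
)
```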

### Update BigQuery Table

**Slug:** `GOOGLEBIGQUERY_UPDATE_TABLE`

Tool to update an existing BigQuery table. The update method replaces the entire Table resource, whereas the patch method only replaces fields that are provided. Use when you need to modify table properties like schema, description, labels, partitioning, or clustering configuration.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `labels` | object | No | Optional. Labels to organize and group your tables. Keys and values must be lowercase, max 63 characters. |
| `schema` | object | No | Schema of a table. |
| `table_id` | string | Yes | Required. Table ID of the table to update. |
| `clustering` | object | No | Configures table clustering. |
| `dataset_id` | string | Yes | Required. Dataset ID of the table to update. |
| `project_id` | string | Yes | Required. Project ID of the table to update. |
| `description` | string | No | Optional. A user-friendly description of this table. |
| `friendly_name` | string | No | Optional. A descriptive name for this table. |
| `expirationTime` | string | No | Optional. The time when this table expires, in milliseconds since the epoch. |
| `table_reference` | object | Yes | Required. Reference identifying the table. Must match the path parameters. |
| `defaultCollation` | string | No | Optional. Defines the default collation specification of new STRING fields in the table. |
| `timePartitioning` | object | No | Configures time-based partitioning for the table. |
| `autodetect_schema` | boolean | No | Optional. When true, the schema is autodetected; otherwise, the original schema is kept. |
| `rangePartitioning` | object | No | Configures range-based partitioning for the table. |
| `requirePartitionFilter` | boolean | No | Optional. If set to true, queries over this table require a partition filter. |
| `encryptionConfiguration` | object | No | Encryption configuration for the table. |
| `externalDataConfiguration` | object | No | Configuration for external data sources. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether or not the action execution was successful |
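
A minimal sketch of assembling this tool's inputs, with a hypothetical helper and placeholder IDs; `table_reference` is derived from the path parameters since the two must match:

```python
def build_update_table_input(project_id, dataset_id, table_id, **optional):
    """Assemble inputs for GOOGLEBIGQUERY_UPDATE_TABLE.

    Because update replaces the whole Table resource, carry over every
    property the table should keep, not only the fields being changed.
    """
    payload = {
        "project_id": project_id,
        "dataset_id": dataset_id,
        "table_id": table_id,
        "table_reference": {
            "projectId": project_id,
            "datasetId": dataset_id,
            "tableId": table_id,
        },
    }
    payload.update(optional)
    return payload


payload = build_update_table_input(
    "my-project", "analytics", "events",
    description="Raw event stream",
    timePartitioning={"type": "DAY", "field": "event_ts"},
    requirePartitionFilter=True,
)
```

Setting `requirePartitionFilter` together with `timePartitioning`, as here, forces queries to prune partitions, which is a common cost-control pattern for large event tables.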
