MCP Reference: cloud-sql

A Model Context Protocol (MCP) server acts as a proxy between an AI application and an external service that provides context, data, or capabilities to a Large Language Model (LLM). MCP servers connect AI applications to external systems such as databases and web services, translating their responses into a format that the AI application can understand.

Server Setup

You must enable MCP servers and set up authentication before use. For more information about using Google and Google Cloud remote MCP servers, see Google Cloud MCP servers overview.

Cloud SQL Admin API for MCP

Server Endpoints

An MCP service endpoint is the network address and communication interface (usually a URL) of the MCP server that an AI application (the Host for the MCP client) uses to establish a secure, standardized connection. It is the point of contact for the LLM to request context, call a tool, or access a resource. Google MCP endpoints can be global or regional.

The cloud-sql MCP server has the following MCP endpoint:

  • https://sqladmin.googleapis.com/mcp

MCP Tools

An MCP tool is a function or executable capability that an MCP server exposes to an LLM or AI application to perform an action in the real world.

The cloud-sql MCP server has the following tools:

list_instances

List all Cloud SQL instances in the project.

get_instance

Get the details of a Cloud SQL instance.

create_instance

Initiates the creation of a Cloud SQL instance.

  • The tool returns a long-running operation. Use the get_operation tool to poll its status until the operation completes.
  • The instance creation operation can take several minutes. Use a command line tool to pause for 30 seconds before rechecking the status.
  • After you use the create_instance tool to create an instance, you can use the create_user tool to create an IAM user account for the user currently logged in to the project.
  • IMPORTANT: Set ipv4_enabled to 'false' if creating a Private Service Connect or a Private Service Access instance.
  • The value of data_api_access is set to ALLOW_DATA_API by default. This setting lets you execute SQL statements using the execute_sql tool and the executeSql API.
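
As an illustrative sketch, the bullets above can be reflected when assembling a create_instance request body; the helper name and all field spellings other than ipv4_enabled are assumptions, not the server's published schema:

```python
# Hypothetical helper: assembles a request body for the create_instance
# tool. Field names other than ipv4_enabled are illustrative.

def build_create_instance_request(name, project, private_connectivity=False,
                                  **settings):
    """Builds a create_instance request body.

    Per the note above, ipv4_enabled is set to false when creating a
    Private Service Connect or Private Service Access instance.
    """
    body = {"name": name, "project": project, **settings}
    if private_connectivity:
        body["ipv4_enabled"] = False
    return body

request = build_create_instance_request(
    "my-instance", "my-project",
    private_connectivity=True, region="us-central1")
```

The returned body would then be sent as the arguments of a tools/call request, and the resulting long-running operation polled with get_operation.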

Unless otherwise specified, a newly created instance uses the default instance configuration of a development environment.

The following is the default configuration for an instance in a development environment:

 {
  "tier": "db-perf-optimized-N-2",
  "data_disk_size_gb": 100,
  "region": "us-central1",
  "database_version": "POSTGRES_18",
  "edition": "ENTERPRISE_PLUS",
  "availability_type": "ZONAL",
  "tags": [{"environment": "dev"}]
} 

The following configuration is recommended for an instance in a production environment:

 {
  "tier": "db-perf-optimized-N-8",
  "data_disk_size_gb": 250,
  "region": "us-central1",
  "database_version": "POSTGRES_18",
  "edition": "ENTERPRISE_PLUS",
  "availability_type": "REGIONAL",
  "tags": [{"environment": "prod"}]
} 

The following instance configuration is recommended for SQL Server:

 {
  "tier": "db-perf-optimized-N-8",
  "data_disk_size_gb": 250,
  "region": "us-central1",
  "database_version": "SQLSERVER_2022_STANDARD",
  "edition": "ENTERPRISE",
  "availability_type": "REGIONAL",
  "tags": [{"environment": "prod"}]
} 
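
The three documented configurations can be wrapped in a small selection helper; a sketch, where the helper name and its parameters are illustrative:

```python
# The dicts below mirror the three JSON configuration blocks above.

DEV_CONFIG = {
    "tier": "db-perf-optimized-N-2",
    "data_disk_size_gb": 100,
    "region": "us-central1",
    "database_version": "POSTGRES_18",
    "edition": "ENTERPRISE_PLUS",
    "availability_type": "ZONAL",
    "tags": [{"environment": "dev"}],
}

PROD_CONFIG = {
    **DEV_CONFIG,
    "tier": "db-perf-optimized-N-8",
    "data_disk_size_gb": 250,
    "availability_type": "REGIONAL",
    "tags": [{"environment": "prod"}],
}

SQLSERVER_CONFIG = {
    **PROD_CONFIG,
    "database_version": "SQLSERVER_2022_STANDARD",
    "edition": "ENTERPRISE",
}

def instance_config(environment="dev", sql_server=False):
    """Returns a copy of the documented configuration for the given case."""
    if sql_server:
        return dict(SQLSERVER_CONFIG)
    return dict(PROD_CONFIG if environment == "prod" else DEV_CONFIG)
```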

execute_sql

Execute any valid SQL statement, including data definition language (DDL), data control language (DCL), data query language (DQL), or data manipulation language (DML) statements, on a Cloud SQL instance.

To support the execute_sql tool, a Cloud SQL instance must meet the following requirements:

  • The value of data_api_access must be set to ALLOW_DATA_API .
  • For BUILT_IN users, password_secret_version must be set.
  • For IAM users on a MySQL instance, the database flag cloudsql_iam_authentication must be set to on . For a PostgreSQL instance, the database flag cloudsql.iam_authentication must be set to on .
  • After you use the create_instance tool to create an instance, you can use the create_user tool to create an IAM user account for the user currently logged in to the project.

The execute_sql tool has the following limitations:

  • If a SQL statement returns a response larger than 10 MB, then the response will be truncated.
  • The execute_sql tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a DEADLINE_EXCEEDED error.
  • The execute_sql tool isn't supported for SQL Server.

If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the get_instance tool to check the value of the IAM database authentication flag for the instance.

If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the get_instance tool to check the data_api_access setting.

When you receive authentication errors:

  1. Check if the currently logged-in user account exists as an IAM user on the instance using the list_users tool.
  2. If the IAM user account doesn't exist, then use the create_user tool to create the IAM user account for the logged-in user.
  3. If the currently logged-in user doesn't have the proper database user roles, then you can use the update_user tool to grant database roles to the user. For example, the cloudsqlsuperuser role can provide an IAM user with many of the required permissions.
  4. Check if the currently logged in user has the correct IAM permissions assigned for the project. You can use gcloud projects get-iam-policy [PROJECT_ID] command to check if the user has the proper IAM roles or permissions assigned for the project.

    • The user must have cloudsql.instance.login permission to do automatic IAM database authentication.
    • The user must have cloudsql.instances.executeSql permission to execute SQL statements using the execute_sql tool or executeSql API.
    • Common IAM roles that contain the required permissions: Cloud SQL Instance User ( roles/cloudsql.instanceUser ) or Cloud SQL Admin ( roles/cloudsql.admin )

When you receive an ExecuteSqlResponse , always check the message and status fields within the response body. A successful HTTP status code doesn't guarantee that every SQL statement succeeded. The message and status fields indicate whether there were any partial errors or warnings during SQL statement execution.
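
The check above can be sketched as follows; the response shape (top-level message and status fields) is assumed from the description, not taken from a published schema:

```python
def check_execute_sql_response(response):
    """Raises if an ExecuteSqlResponse body reports a partial failure.

    A successful HTTP status alone is not sufficient: the message and
    status fields in the body must also be inspected, as described above.
    The field layout here is an assumption for illustration.
    """
    status = response.get("status", "")
    message = response.get("message", "")
    if status and status not in ("OK", "SUCCESS"):
        raise RuntimeError(f"SQL execution reported {status}: {message}")
    return message
```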

execute_sql_readonly

Execute any valid read-only SQL statement on a Cloud SQL instance.

To support the execute_sql_readonly tool, a Cloud SQL instance must meet the following requirements:

  • The value of data_api_access must be set to ALLOW_DATA_API .
  • For a MySQL instance, the database flag cloudsql_iam_authentication must be set to on . For a PostgreSQL instance, the database flag cloudsql.iam_authentication must be set to on .
  • An IAM user account or IAM service account ( CLOUD_IAM_USER or CLOUD_IAM_SERVICE_ACCOUNT ) is required to call the execute_sql_readonly tool. The tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication.

After you use the create_instance tool to create an instance, you can use the create_user tool to create an IAM user account for the user currently logged in to the project.

The execute_sql_readonly tool has the following limitations:

  • If a SQL statement returns a response larger than 10 MB, then the response will be truncated.
  • The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a DEADLINE_EXCEEDED error.
  • The tool isn't supported for SQL Server.

If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the get_instance tool to check the value of the IAM database authentication flag for the instance.

If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the get_instance tool to check the data_api_access setting.

When you receive authentication errors:

  1. Check if the currently logged-in user account exists as an IAM user on the instance using the list_users tool.
  2. If the IAM user account doesn't exist, then use the create_user tool to create the IAM user account for the logged-in user.
  3. If the currently logged-in user doesn't have the proper database user roles, then you can use the update_user tool to grant database roles to the user. For example, the cloudsqlsuperuser role can provide an IAM user with many of the required permissions.
  4. Check if the currently logged in user has the correct IAM permissions assigned for the project. You can use gcloud projects get-iam-policy [PROJECT_ID] command to check if the user has the proper IAM roles or permissions assigned for the project.

    • The user must have cloudsql.instance.login permission to do automatic IAM database authentication.
    • The user must have cloudsql.instances.executeSql permission to execute SQL statements using the execute_sql_readonly tool or executeSql API.
    • Common IAM roles that contain the required permissions: Cloud SQL Instance User ( roles/cloudsql.instanceUser ) or Cloud SQL Admin ( roles/cloudsql.admin )

When you receive an ExecuteSqlResponse , always check the message and status fields within the response body. A successful HTTP status code doesn't guarantee that every SQL statement succeeded. The message and status fields indicate whether there were any partial errors or warnings during SQL statement execution.

get_operation

Get the status of a long-running operation. A long-running operation can take several minutes to complete. If an operation takes an extended amount of time, then use a command line tool to pause for 30 seconds before rechecking the status of the operation.
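
The poll-and-pause pattern can be sketched generically; get_status stands in for a wrapper around the get_operation tool, and the injectable sleep makes the 30-second pause testable:

```python
import time

def wait_for_operation(get_status, interval_seconds=30, max_polls=20,
                       sleep=time.sleep):
    """Polls a long-running operation until its status is DONE.

    get_status is any callable returning the operation's current status
    string (for example, a wrapper around the get_operation tool). The
    30-second interval matches the guidance above.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "DONE":
            return status
        sleep(interval_seconds)
    raise TimeoutError("operation did not complete in time")
```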

create_user

Create a database user for a Cloud SQL instance.

  • This tool returns a long-running operation. Use the get_operation tool to poll its status until the operation completes.
  • When you use the create_user tool, specify the type of user: CLOUD_IAM_USER , CLOUD_IAM_SERVICE_ACCOUNT , or BUILT_IN .
  • By default, the newly created user is assigned the cloudsqlsuperuser role unless you specify other database roles explicitly in the request.
  • You can use a newly created user with the execute_sql tool if the user is the currently logged-in IAM user. The execute_sql tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication.

The create_user tool has the following limitations:

  • To create a built-in user with a password, use the password_secret_version field to provide the password through Google Cloud Secret Manager. The value of password_secret_version must be the resource name of the secret version, such as projects/12345/locations/us-central1/secrets/my-password-secret/versions/1 or projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest . The caller must have the secretmanager.secretVersions.access permission on the secret version.
  • The create_user tool doesn't support creating a user for SQL Server.

To create an IAM user in PostgreSQL:

  • The database username must be the IAM user's email address, in all lowercase. For example, to create a user for the PostgreSQL IAM user example-user@example.com , you can use the following request:
 {
  "name": "example-user@example.com",
  "type": "CLOUD_IAM_USER",
  "instance":"test-instance",
  "project": "test-project"
} 

The created database username for the IAM user is example-user@example.com .

To create an IAM service account in PostgreSQL:

  • The database username must be created without the .gserviceaccount.com suffix, even though the full email address for the account is service-account-name@project-id.iam.gserviceaccount.com . For example, to create an IAM service account user for PostgreSQL, you can use the following request format:
 {
   "name": "test@test-project.iam",
   "type": "CLOUD_IAM_SERVICE_ACCOUNT",
   "instance": "test-instance",
   "project": "test-project"
} 

The created database username for the IAM service account is test@test-project.iam .

To create an IAM user or IAM service account in MySQL:

  • When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, example-user@example.com becomes example-user .
  • For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance.
  • For example, to create a user for the MySQL IAM user example-user@example.com , use the following request:
 {
   "name": "example-user@example.com",
   "type": "CLOUD_IAM_USER",
   "instance": "test-instance",
   "project": "test-project"
} 

The created database username for the IAM user is example-user .

  • For example, to create the MySQL IAM service account service-account-name@project-id.iam.gserviceaccount.com , use the following request:
 {
   "name": "service-account-name@project-id.iam.gserviceaccount.com",
   "type": "CLOUD_IAM_SERVICE_ACCOUNT",
   "instance": "test-instance",
   "project": "test-project"
} 

The created database username for the IAM service account is service-account-name .
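
The naming rules above can be summarized in one helper; a sketch, with the database engine passed as a plain string:

```python
def database_username(email, engine, service_account=False):
    """Derives the database username Cloud SQL creates for an IAM identity.

    Per the rules above:
    - MySQL truncates the @ and domain name from the email address.
    - PostgreSQL users keep the full email address, lowercased.
    - PostgreSQL service accounts drop the .gserviceaccount.com suffix.
    """
    email = email.lower()
    if engine == "mysql":
        return email.split("@", 1)[0]
    if service_account and email.endswith(".gserviceaccount.com"):
        return email[: -len(".gserviceaccount.com")]
    return email
```

Because MySQL discards the domain, two identities that differ only in domain collapse to the same database username, which is why they can't coexist on one instance.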

update_user

Update a database user for a Cloud SQL instance. A common use case for the update_user tool is to grant a user the cloudsqlsuperuser role, which can provide a user with many required permissions.

This tool only supports updating users to assign database roles.

  • This tool returns a long-running operation. Use the get_operation tool to poll its status until the operation completes.
  • Before calling the update_user tool, always check the existing configuration of the user such as the user type with list_users tool.
  • As a special case for MySQL, if the list_users tool returns a full email address for the iamEmail field, for example {name=test-account, iamEmail=test-account@project-id.iam.gserviceaccount.com} , then use the full email address from the iamEmail field as the name field of your update_user request. For example, name=test-account@project-id.iam.gserviceaccount.com .

Key parameters for updating user roles:

  • database_roles : A list of database roles to be assigned to the user.
  • revokeExistingRoles : A boolean field (default: false) that controls how existing roles are handled.

How role updates work:

  1. If revokeExistingRoles is true:

    • Any existing roles granted to the user but NOT in the provided database_roles list will be REVOKED.
    • Revoking only applies to non-system roles. System roles like cloudsqliamuser etc won't be revoked.
    • Any roles in the database_roles list that the user does NOT already have will be GRANTED.
    • If database_roles is empty, then ALL existing non-system roles are revoked.
  2. If revokeExistingRoles is false (default):

    • Any roles in the database_roles list that the user does NOT already have will be GRANTED.
    • Existing roles NOT in the database_roles list are KEPT.
    • If database_roles is empty, then there is no change to the user's roles.

Examples:

  • Existing Roles: [roleA, roleB]

    • Request: database_roles: [roleB, roleC], revokeExistingRoles: true
    • Result: Revokes roleA , Grants roleC . User roles become [roleB, roleC] .
    • Request: database_roles: [roleB, roleC], revokeExistingRoles: false
    • Result: Grants roleC . User roles become [roleA, roleB, roleC] .
    • Request: database_roles: [], revokeExistingRoles: true
    • Result: Revokes roleA , Revokes roleB . User roles become [] .
    • Request: database_roles: [], revokeExistingRoles: false
    • Result: No change. User roles remain [roleA, roleB] .
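
The grant/revoke semantics can be modeled directly; a sketch, using cloudsqliamuser as a stand-in for the set of system roles:

```python
SYSTEM_ROLES = {"cloudsqliamuser"}  # system roles are never revoked

def resolve_roles(existing, requested, revoke_existing=False):
    """Returns the user's roles after an update_user call, per the rules above.

    Roles in requested but not already held are granted. When
    revoke_existing is true, held non-system roles absent from requested
    are revoked; otherwise existing roles are kept.
    """
    granted = [r for r in requested if r not in existing]
    if revoke_existing:
        kept = [r for r in existing if r in requested or r in SYSTEM_ROLES]
    else:
        kept = list(existing)
    return kept + granted
```

Run against the worked examples above, the helper reproduces each listed result.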

clone_instance

Create a Cloud SQL instance as a clone of a source instance.

  • This tool returns a long-running operation. Use the get_operation tool to poll its status until the operation completes.
  • The clone operation can take several minutes. Use a command line tool to pause for 30 seconds before rechecking the status.

update_instance

Partially updates the configuration settings of a Cloud SQL instance.

  • This tool returns a long-running operation. Use the get_operation tool to poll its status until the operation completes.
  • Some update operations, such as changing the edition or the instance tier, might cause the instance to restart, resulting in downtime. Before you proceed with such operations, get confirmation from the user.
list_users

List all database users for a Cloud SQL instance.

create_backup

Takes a backup of a Cloud SQL instance. Always populate the project and instance fields in the request. The location (region) and description of the backup are optional; if you provide them, populate the corresponding request fields.

restore_backup

Restores a backup to a Cloud SQL instance.

The target_instance and target_project must be provided and populated in the request.

The backup identifier can be provided in several ways:

  1. A backup_run_id (which is an integer).
  2. A backup URI of the format projects/{project-id}/backups/{backup-uid} .
  3. A backup URI of the format projects/{project-id}/locations/{location}/backupVaults/{backupvault}/dataSources/{datasource}/backups/{backup-uid} .

Use the identifier to populate the backup_id field in the request.

The source_project must be populated in the request. If the identifier is a backup_run_id, the source_project will be provided. If the identifier is a backup URI, the source_project may need to be extracted from the URI. Do not confuse the extracted source_project with the target_project, which will be provided in other ways.

In addition, if the identifier is a backup_run_id, the source_instance must be provided and populated in the request.
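
Classifying the identifier and extracting the source project can be sketched as follows; the returned dictionary shape is illustrative:

```python
import re

# Matches both documented backup URI formats:
#   projects/{project-id}/backups/{backup-uid}
#   projects/{project-id}/locations/{location}/backupVaults/{backupvault}/
#     dataSources/{datasource}/backups/{backup-uid}
_BACKUP_URI = re.compile(
    r"projects/([^/]+)/(?:locations/[^/]+/backupVaults/[^/]+/"
    r"dataSources/[^/]+/)?backups/[^/]+$")

def parse_backup_id(backup_id):
    """Classifies a backup identifier and extracts the source project if present.

    For a backup_run_id, the source_project (and source_instance) must be
    supplied separately; for a URI, the project is read from the URI.
    """
    if isinstance(backup_id, int) or str(backup_id).isdigit():
        return {"kind": "backup_run_id", "source_project": None}
    m = _BACKUP_URI.match(backup_id)
    if not m:
        raise ValueError(f"unrecognized backup identifier: {backup_id}")
    return {"kind": "backup_uri", "source_project": m.group(1)}
```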

Do not try to create the instance before the restore, the restore itself will create the instance if needed.

Confirm the parameters with the user before executing the restore.

import_data

Import data into a Cloud SQL instance.

If the file path doesn't start with gs:// , then the file is assumed to be stored locally. If the file is local, then you must upload it to Cloud Storage before you make the actual import_data call. To upload the file to Cloud Storage, you can use the gcloud or gsutil commands.

Before you upload the file to Cloud Storage, consider whether you want to use an existing bucket or create a new bucket in the provided project.

After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket.

This can be accomplished as follows:

  1. Use the get_instance tool to get the email address of the instance service account. From the output of the tool, get the value of the serviceAccountEmailAddress field.
  2. Grant the instance service account the storage.objectAdmin role on the provided Cloud Storage bucket. Use a command like gcloud storage buckets add-iam-policy-binding or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to propagate to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, then wait a few minutes and try again.

After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, .sql for a SQL file or .csv for a CSV file.

The following is a sample SQL importContext for MySQL.

 {
  "uri": "gs://sample-gcs-bucket/sample-file.sql",
  "kind": "sql#importContext",
  "fileType": "SQL"
} 

There is no database parameter for MySQL because the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of importContext .

For PostgreSQL, the database field is required. The following is a sample PostgreSQL importContext with the database field specified.

 {
  "uri": "gs://sample-gcs-bucket/sample-file.sql",
  "kind": "sql#importContext",
  "fileType": "SQL",
  "database": "sample-db"
} 
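
Assembling an importContext per the rules above can be sketched like this; inferring the file type from the extension follows the guidance above, and the helper name is an assumption:

```python
def build_import_context(uri, database=None, engine="postgres"):
    """Builds an importContext body for the import_data tool.

    The database field is required for PostgreSQL and omitted for MySQL,
    where the database name is expected inside the SQL file itself.
    """
    if not uri.startswith("gs://"):
        raise ValueError("upload local files to Cloud Storage first "
                         "(a gs:// URI is required)")
    file_type = "CSV" if uri.endswith(".csv") else "SQL"
    ctx = {"uri": uri, "kind": "sql#importContext", "fileType": file_type}
    if engine == "postgres":
        if database is None:
            raise ValueError("PostgreSQL imports require a database")
        ctx["database"] = database
    return ctx
```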

The import_data tool returns a long-running operation. Use the get_operation tool to poll its status until the operation completes.

pre_check_major_version_upgrade

Checks if a Cloud SQL for PostgreSQL instance is ready for a major version upgrade to the specified target version.

The target_database_version MUST be provided in the request (e.g., POSTGRES_15 ).

This tool helps identify potential issues before attempting the actual upgrade, reducing the risk of failure or downtime.

This tool is only supported for PostgreSQL primary instances and does not run on read replicas.

The precheck typically evaluates:

  • Database schema compatibility with the target version.
  • Cloud SQL limitations and unsupported features.
  • Instance resource constraints (e.g., number of relations).
  • Compatibility of current database settings and extensions.
  • Overall instance health and readiness.

This tool returns a long-running operation. Use the get_operation tool with the operation name returned by this call to poll its status.

IMPORTANT: Once the operation status is DONE, the detailed precheck results are available within the Operation resource. You will need to inspect the response from get_operation . The findings are located in the pre_check_major_version_upgrade_context.pre_check_response field.

The findings are structured, indicating:

  • INFO: General information.
  • WARNING: Potential issues that don't block the upgrade but should be reviewed.
  • ERROR: Critical issues that MUST be resolved before attempting the upgrade.

Each finding should include a message and any required actions. Addressing any reported issues is crucial before proceeding with the major version upgrade. If pre_check_response is empty or missing, it indicates that no issues were identified during the precheck.

Running this precheck does not impact the instance's availability.
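
Triaging the precheck findings can be sketched as follows; the finding shape (a severity plus a message) is assumed from the description above:

```python
def triage_precheck(findings):
    """Groups precheck findings by severity and decides if the upgrade may proceed.

    An empty or missing findings list means no issues were identified;
    any ERROR finding blocks the upgrade until resolved, while WARNING
    findings should be reviewed but are not blocking.
    """
    by_severity = {"INFO": [], "WARNING": [], "ERROR": []}
    for finding in findings or []:
        by_severity.setdefault(finding["severity"], []).append(finding["message"])
    return {"ok_to_upgrade": not by_severity["ERROR"], "findings": by_severity}
```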

Get MCP tool specifications

To get the MCP tool specifications for all tools in an MCP server, use the tools/list method. The following example demonstrates how to use curl to list all tools and their specifications currently available within the MCP server.

Curl Request

 curl --location 'https://sqladmin.googleapis.com/mcp' \
   --header 'content-type: application/json' \
   --header 'accept: application/json, text/event-stream' \
   --data '{
     "method": "tools/list",
     "jsonrpc": "2.0",
     "id": 1
   }'