This page lists known issues with Cloud SQL for PostgreSQL, along with
ways you can avoid or recover from these issues.
If you are experiencing issues with your instance, make sure you also review
the information in Diagnosing Issues.
Instance connection issues
Expired SSL/TLS certificates
If your instance is configured to use SSL, go to the Cloud SQL Instances page in the Google Cloud console and open the instance. Open its Connections page, select the Security tab, and make sure that your server certificate is valid. If it has expired, you must
add a new certificate and rotate to it.
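If you manage certificates with `gcloud`, the rotation flow looks roughly like the following sketch; `INSTANCE_NAME` is a placeholder, and command availability can vary by `gcloud` version:

```bash
# Check when the current server CA certificate expires
gcloud sql instances describe INSTANCE_NAME \
  --format="value(serverCaCert.expirationTime)"

# Create an upcoming server CA certificate, then rotate to it
gcloud sql ssl server-ca-certs create --instance=INSTANCE_NAME
gcloud sql ssl server-ca-certs rotate --instance=INSTANCE_NAME
```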
Cloud SQL Auth Proxy version
If you are connecting using the Cloud SQL Auth Proxy, make sure you are using the
most recent version. For more information, see Keeping the Cloud SQL Auth Proxy up to date.
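To confirm which version you have, both generations of the proxy can print their version; the binary names below assume the default download names:

```bash
# v2 binary
./cloud-sql-proxy --version

# v1 (legacy) binary
./cloud_sql_proxy -version
```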
Not authorized to connect
If you try to connect to an instance that doesn't exist in the project,
the error message says only that you are not authorized to access that
instance.
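To rule out a typo or the wrong project, you can list the instances that exist in the project you're targeting; `PROJECT_ID` is a placeholder:

```bash
gcloud sql instances list --project=PROJECT_ID
```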
Can't create a Cloud SQL instance
If you see the `Failed to create subnetwork. Router status is temporarily
unavailable. Please try again later. Help Token: [token-ID]` error
message, try to create the Cloud SQL instance again.
The following only works with the default user ('postgres'): `gcloud sql connect --user`
If you try to connect using this command with any other user, the error
message says `FATAL: database 'user' does not exist`. The
workaround is to connect using the default user ('postgres'), then use
the `\c` psql command to reconnect as the different user.
PostgreSQL connections hang when IAM db proxy authentication is enabled.
When the Cloud SQL Auth Proxy is started using TCP sockets with the
`-enable_iam_login` flag, a PostgreSQL client hangs during the TCP
connection. One workaround is to use `sslmode=disable` in the PostgreSQL
connection string. For example:
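```bash
psql "host=127.0.0.1 dbname=postgres user=me@google.com sslmode=disable"
```

Another workaround is to start the Cloud SQL Auth Proxy using Unix sockets. This turns off PostgreSQL SSL encryption and lets the Cloud SQL Auth Proxy do the SSL encryption instead.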
Administrative issues
Only one long-running Cloud SQL import or export operation can run at a time on an instance. When you start an operation, make sure you don't need to perform other operations on the instance. Also, after you start the operation, you can cancel it.
PostgreSQL imports data in a single transaction. Therefore, if you cancel the import operation, then Cloud SQL doesn't persist data from the import.
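To find and cancel an in-flight operation, the `gcloud sql operations` commands cover both steps; `INSTANCE_NAME` and `OPERATION_ID` are placeholders:

```bash
# List recent operations on the instance to find the running one
gcloud sql operations list --instance=INSTANCE_NAME --limit=10

# Cancel an in-progress import or export operation
gcloud sql operations cancel OPERATION_ID
```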
Issues with importing and exporting data
If your Cloud SQL instance uses PostgreSQL 17, but your databases use PostgreSQL 16 or earlier, then you can't use Cloud SQL to import these databases into your instance. To do this, use Database Migration Service.
If you use Database Migration Service to import a PostgreSQL 17 database into Cloud SQL, then it's imported as a PostgreSQL 16 database.
For PostgreSQL versions 15 and later, if the target database is created from `template0`, then importing data might fail and you might see a `permission denied for schema public` error message. To resolve this issue, provide public schema privileges to the `cloudsqlsuperuser` user by running the `GRANT ALL ON SCHEMA public TO cloudsqlsuperuser` SQL command.
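For example, you can run the grant from a shell with `psql`; the connection details here are placeholders:

```bash
psql "host=127.0.0.1 dbname=DATABASE_NAME user=postgres" \
  -c "GRANT ALL ON SCHEMA public TO cloudsqlsuperuser;"
```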
Exporting many large objects can cause the instance to become unresponsive
If your database contains many large objects (blobs), exporting the database
can consume so much memory that the instance becomes unresponsive. This can
happen even if the blobs are empty.
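To gauge exposure before an export, you can count the large objects in a database; `pg_largeobject_metadata` is a standard PostgreSQL catalog, and the connection details are placeholders:

```bash
psql "host=127.0.0.1 dbname=DATABASE_NAME user=postgres" \
  -c "SELECT count(*) FROM pg_largeobject_metadata;"
```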
Cloud SQL doesn't support customized tablespaces, but it does support data migration from customized tablespaces to the default tablespace, `pg_default`, in the destination instance. For example, if you own a tablespace named `dbspace` located at `/home/data`, then after migration all the data inside `dbspace` is migrated to `pg_default`. However, Cloud SQL doesn't create a tablespace named `dbspace` on its disk.
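To see which custom tablespaces a source database defines before migrating, you can query the standard `pg_tablespace` catalog; the connection details are placeholders:

```bash
psql "host=127.0.0.1 dbname=postgres user=postgres" \
  -c "SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;"
```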
If you're trying to import and export data from a large database (for example,
a database that has 500 GB of data or greater), then the import and export
operations might take a long time to complete. In addition, other operations
(for example, the backup operation) aren't available for you to perform
while the import or export is occurring. A potential option to improve the
performance of the import and export process is to restore a previous backup using `gcloud` or the API.
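For reference, a restore with `gcloud` looks like the following sketch; `BACKUP_ID` and the instance name are placeholders:

```bash
# List available backups for the instance
gcloud sql backups list --instance=INSTANCE_NAME

# Restore a specific backup to the instance
gcloud sql backups restore BACKUP_ID --restore-instance=INSTANCE_NAME
```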
Cloud Storage supports a maximum single-object size of five terabytes.
If you have databases larger than 5 TB, the export operation to
Cloud Storage fails. In this case, you need to break down your
export files into smaller segments.
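One way to stay under the limit is to export subsets of the data to separate Cloud Storage objects, for example one object per database or per large table. A sketch with `gcloud`, where all names are placeholders and flag availability can vary by version:

```bash
# Export one large table to its own compressed object;
# --offload runs a serverless export to reduce load on the instance
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/orders.sql.gz \
  --database=DATABASE_NAME --table=orders --offload
```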
Transaction logs and disk growth
Logs are purged once daily, not continuously. When the number of days of log
retention is configured to be the same as the number of backups, a day of
logging might be lost, depending on when the backup occurs. For example, setting
log retention to seven days and backup retention to seven backups means that
between six and seven days of logs will be retained.
We recommend setting the number of backups to at least one more than the days of
log retention to guarantee at least the specified number of days of log retention.
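For example, to keep seven days of logs, you might retain eight backups. A sketch with `gcloud`, where `INSTANCE_NAME` is a placeholder:

```bash
gcloud sql instances patch INSTANCE_NAME \
  --retained-backups-count=8 \
  --retained-transaction-log-days=7
```

Note: Replica instances see a storage increase when replication is suspended and later resumed. The increase occurs because the primary instance sends the replica the transaction logs for the period when replication was suspended; these logs bring the replica up to the current state of the primary instance.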
Issues related to Cloud Monitoring or Cloud Logging
Instances with the following region names are displayed incorrectly in certain
contexts, as follows:
`us-central1` is displayed as `us-central`
`europe-west1` is displayed as `europe`
`asia-east1` is displayed as `asia`
This issue occurs in the following contexts:
Alerting in Cloud Monitoring
Metrics Explorer
Cloud Logging
You can mitigate the issue for Alerting in Cloud Monitoring, and for Metrics
Explorer, by using Resource metadata labels.
Use the system metadata label `region` instead of the `cloudsql_database` monitored resource label `region`.
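For example, a Monitoring filter along these lines matches on the system metadata label rather than the resource label; the exact syntax is a sketch and may need adjusting for your query editor:

```
resource.type="cloudsql_database"
metadata.system_labels.region="us-central1"
```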
Issue related to deleting a PostgreSQL database
When you delete a database that was created in the Google Cloud console by using your `psql` client, you might encounter the following error:
ERROR: must be owner of database [DATABASE_NAME]
This is a permission error, since the owner of a database created using a `psql` client doesn't have Cloud SQL `superuser` attributes.
Databases created using the Google Cloud console are owned by `cloudsqlsuperuser`, and databases created using a `psql` client are owned
by the users connected to that database. Since Cloud SQL is a managed service,
customers cannot create or have access to users with `superuser` attributes.
For more information, see Superuser restrictions and privileges.
Due to this limitation, databases created using the Google Cloud console can
only be deleted using the Google Cloud console, and databases created using
a `psql` client can only be deleted by connecting as the owner of the
database.
To find the owner of a database, use the following command:
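```sql
SELECT d.datname as Name,
       pg_catalog.pg_get_userbyid(d.datdba) as Owner
FROM pg_catalog.pg_database d
WHERE d.datname = 'DATABASE_NAME';
```

Replace the following: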
`DATABASE_NAME`: the name of the database that you want to
find owner information for.
If the owner of your database is `cloudsqlsuperuser`, then use the
Google Cloud console to delete your database. If the owner of the database
is a `psql` client database user, then connect as the database owner and
run the `DROP DATABASE` command.
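A sketch of the second case, connecting as a hypothetical owner `myuser` through a local proxy; the connection details are placeholders:

```bash
psql "host=127.0.0.1 dbname=postgres user=myuser" \
  -c 'DROP DATABASE "DATABASE_NAME";'
```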
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Known issues\n\n\u003cbr /\u003e\n\n[MySQL](/sql/docs/mysql/known-issues \"View this page for the MySQL database engine\") \\| PostgreSQL \\| [SQL Server](/sql/docs/sqlserver/known-issues \"View this page for the SQL Server database engine\")\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nThis page lists known issues with Cloud SQL for PostgreSQL, along with\nways you can avoid or recover from these issues.\nIf you are experiencing issues with your instance, make sure you also review the information in [Diagnosing Issues](/sql/docs/postgres/diagnose-issues).\n\n### Instance connection issues\n\n- Expired SSL/TLS certificates\n\n\n If your instance is configured to use SSL, go to the\n [Cloud SQL Instances page](https://console.cloud.google.com/sql/instances)\n in the Google Cloud console and open the instance. Open its **Connections** page, select the\n **Security** tab and make sure that your server certificate is valid. If it has expired, you must\n add a new certificate and rotate to it.\n\n \u003cbr /\u003e\n\n- Cloud SQL Auth Proxy version\n\n If you are connecting using the Cloud SQL Auth Proxy, make sure you are using the\n most recent version. For more information, see\n [Keeping the Cloud SQL Auth Proxy up to date](/sql/docs/postgres/sql-proxy#keep-current).\n- Not authorized to connect\n\n If you try to connect to an instance that does not exist in that project,\n the error message only says that you are not authorized to access that\n instance.\n- Can't create a Cloud SQL instance\n\n If you see the `Failed to create subnetwork. Router status is temporarily\n unavailable. Please try again later. Help Token: [token-ID]` error\n message, try to create the Cloud SQL instance again.\n\n\u003c!-- --\u003e\n\n- The following only works with the default user ('postgres'):\n `gcloud sql connect --user`\n\n If you try to connect using this command with any other user, the error\n message says \u003cvar translate=\"no\"\u003eFATAL: database 'user' does not exist\u003c/var\u003e. The\n workaround is to connect using the default user ('postgres'), then use\n the `\"\\c\"` psql command to reconnect as the different user.\n\n\u003c!-- --\u003e\n\n- PostgreSQL connections hang when IAM db proxy authentication is enabled.\n\n When the [Cloud SQL Auth Proxy is started using TCP sockets](/sql/docs/postgres/connect-auth-proxy#start-proxy) and with the `-enable_iam_login` flag,\n then a PostgreSQL client hangs during TCP connection. One workaround\n is to use `sslmode=disable` in the PostgreSQL connection\n string. For example: \n\n ```bash\n psql \"host=127.0.0.1 dbname=postgres user=me@google.com sslmode=disable\"\n ```\n\n Another workaround is to [start the Cloud SQL Auth Proxy using Unix sockets](/sql/docs/postgres/connect-auth-proxy#start-proxy).\n This turns off PostgreSQL SSL encryption and lets the Cloud SQL Auth Proxy do the SSL\n encryption instead.\n\n### Administrative issues\n\n- Only one long-running Cloud SQL import or export operation can run at a time on an instance. 
When you start an operation, make sure you don't need to perform other operations on the instance. Also, when you start the operation, you can [cancel it](/sql/docs/postgres/import-export/cancel-import-export).\n\n PostgreSQL imports data in a single transaction. Therefore, if you cancel the import operation, then Cloud SQL doesn't persist data from the import.\n\n### Issues with importing and exporting data\n\n- If your Cloud SQL instance uses PostgreSQL 17, but your databases use PostgreSQL 16 and earlier, then you can't use Cloud SQL to import these databases into your instance. To do this, use [Database Migration Service](/database-migration/docs).\n\n- If you use Database Migration Service to import a PostgreSQL 17 database into Cloud SQL, then it's imported as a PostgreSQL 16 database.\n\n- For PostgreSQL versions 15 and later, if the target database is created from `template0`, then importing data might fail and you might see a `permission denied for schema public` error message. To resolve this issue, provide public schema privileges to the `cloudsqlsuperuser` user by running the `GRANT ALL ON SCHEMA public TO cloudsqlsuperuser` SQL command.\n\n- Exporting many large objects cause instance to become unresponsive\n\n If your database contains many large objects (blobs), exporting the database\n can consume so much memory that the instance becomes unresponsive. This can\n happen even if the blobs are empty.\n\n \u003cbr /\u003e\n\n- Cloud SQL doesn't support customized tablespaces but it does support data migration from customized tablespaces to the default tablespace, `pg_default`, in destination instance. For example, if you own a tablespace named `dbspace` is located at `/home/data`, after migration, all the data inside `dbspace` is migrated to the `pg_default`. But Cloud SQL will not create a tablespace named \"dbspace\" on its disk.\n\n- If you're trying to import and export data from a large database (for example,\n a database that has 500 GB of data or greater), then the import and export\n operations might take a long time to complete. In addition, other operations\n (for example, the backup operation) aren't available for you to perform\n while the import or export is occurring. A potential option to improve the\n performance of the import and export process is to [restore a previous backup](/sql/docs/postgres/backup-recovery/restoring#projectid) using `gcloud`\n or the API.\n\n\u003c!-- --\u003e\n\n- Cloud Storage supports a [maximum single-object size up five terabytes](/storage-transfer/docs/known-limitations-transfer#object-limit). If you have databases larger than 5TB, the export operation to Cloud Storage fails. In this case, you need to break down your export files into smaller segments.\n\n### Transaction logs and disk growth\n\nLogs are purged once daily, not continuously. When the number of days of log\nretention is configured to be the same as the number of backups, a day of\nlogging might be lost, depending on when the backup occurs. For example, setting\nlog retention to seven days and backup retention to seven backups means that\nbetween six and seven days of logs will be retained.\n\nWe recommend setting the number of backups to at least one more than the days of\nlog retention to guarantee a minimum of specified days of log retention.\n| **Note:** Replica instances see a storage increase when replication is suspended and then resumed later. 
The increase is caused when the primary instance sends the replica the transaction logs for the period of time when replication was suspended. The transaction logs updates the replica to the current state of the primary instance.\n\n\u003cbr /\u003e\n\n### Issues related to Cloud Monitoring or Cloud Logging\n\nInstances with the following region names are displayed incorrectly in certain\ncontexts, as follows:\n\n- `us-central1` is displayed as `us-central`\n- `europe-west1` is displayed as `europe`\n- `asia-east1` is displayed as `asia`\n\nThis issue occurs in the following contexts:\n\n- Alerting in Cloud Monitoring\n- Metrics Explorer\n- Cloud Logging\n\nYou can mitigate the issue for Alerting in Cloud Monitoring, and for Metrics\nExplorer, by using\n[Resource metadata labels](https://cloud.google.com/monitoring/api/v3/metric-model#meta-labels).\nUse the system metadata label `region` instead of the\n[cloudsql_database](https://cloud.google.com/monitoring/api/resources#tag_cloudsql_database)\nmonitored resource label `region`.\n\n### Issue related to deleting a PostgreSQL database\n\nWhen you delete a database created in Google Cloud console using your\n`psql` client, you may encounter the following error: \n\n ERROR: must be owner of database [DATABASE_NAME]\n\nThis is a permission error since the owner of a database created using a\n`psql` client doesn't have Cloud SQL `superuser` attributes.\nDatabases created using the Google Cloud console are owned by\n`cloudsqlsuperuser` and databases created using a `psql` client are owned\nby users connected to that database. Since Cloud SQL is a managed service,\ncustomers cannot create or have access to users with `superuser` attributes.\nFor more information, see\n[Superuser restrictions and privileges](/sql/docs/postgres/users#superuser-restrictions-and-privileges).\n\nDue to this limitation, databases created using the Google Cloud console can\nonly be deleted using the Google Cloud console, and databases created using\na `psql` client can only be deleted by connecting as the owner of the\ndatabase.\n\nTo find the owner of a database, use the following command: \n\n SELECT d.datname as Name,\n pg_catalog.pg_get_userbyid(d.datdba) as Owner\n FROM pg_catalog.pg_database d\n WHERE d.datname = '\u003cvar translate=\"no\"\u003eDATABASE_NAME\u003c/var\u003e';\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eDATABASE_NAME\u003c/var\u003e: the name of the database that you want to find owner information for.\n\nIf the owner of your database is `cloudsqlsuperuser`, then use\nGoogle Cloud console to delete your database. If the owner of the database\nis a `psql` client database user, then connect as the database owner and\nrun the `DROP DATABASE` command."]]