What is Database Migration Service?
Database Migration Service is a service that makes it easier for you to migrate your data to Google Cloud. It helps you lift and shift your PostgreSQL workloads into Cloud SQL.
Which sources are supported?
Amazon RDS 9.6.10+, 10.5+, 11.1+, 12, 13, 14, 15, 16, 17.
Amazon Aurora 10.11+, 11.6+, 12.4+, 13.3+, 14.6+, 15.2+, 16, 17.
Self-managed PostgreSQL (on premises or on any cloud VM that you fully control) 9.4, 9.5, 9.6, 10, 11, 12, 13, 14, 15, 16, 17.
Cloud SQL for PostgreSQL 9.6, 10, 11, 12, 13, 14, 15, 16, 17.
Microsoft Azure Database for PostgreSQL Flexible Server 11+.
Which destinations are supported?
Cloud SQL for PostgreSQL 9.6, 10, 11, 12, 13, 14, 15, 16, 17.
Is there cross-version support?
Database Migration Service supports PostgreSQL-to-Cloud SQL migrations across any major
version, where the destination is the same or higher version than the source database.
Which data, schema, and metadata components are migrated?
Database Migration Service migrates schema, data, and metadata from the source to the destination. All of the following data, schema, and metadata components are migrated as part of the database migration:
Data Migration
All schemas and all tables from the selected database.
Schema Migration
Naming
Primary key
Data type
Ordinal position
Default value
Nullability
Auto-increment attributes
Secondary indexes
Metadata Migration
Stored Procedures
Functions
Triggers
Views
Foreign key constraints
Which changes are replicated during continuous migration?
Only DML changes are automatically replicated during the migration. Managing DDL so that the source and destination databases remain compatible is your responsibility, and can be done in two ways:
1. Stop writes to the source and run the DDL commands on both the source and the destination. Before running DDL commands on the destination, grant the cloudsqlexternalsync role to the Cloud SQL user applying the DDL changes. To enable querying or changing the data, grant the cloudsqlexternalsync role to the relevant Cloud SQL users. (See the sketch after this list.)
2. Use pglogical.replicate_ddl_command to run DDL on the source and destination at a consistent point. The user running this command must have the same username on both the source and the destination, and should be the superuser or the owner of the artifact being migrated (for example, the table, sequence, view, or database).
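For option 1, a minimal sketch of the grant plus a sample DDL change, assuming a hypothetical Cloud SQL user named migration_user and a hypothetical table my_schema.my_table:

  -- On the Cloud SQL destination: allow this user to apply DDL during external sync.
  GRANT cloudsqlexternalsync TO migration_user;

  -- With writes to the source stopped, run the same DDL on source and destination:
  ALTER TABLE my_schema.my_table ADD COLUMN surname varchar(20);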
Here are a few examples of using pglogical.replicate_ddl_command.
To add a column to a database table, run the following command:

  select pglogical.replicate_ddl_command('ALTER TABLE [schema].[table] add column surname varchar(20)', '{default}');

To change the name of a database table, run the following command:

  select pglogical.replicate_ddl_command('ALTER TABLE [schema].[table] RENAME TO [table_name]','{default}');

To create a database table, run the following commands:

  select pglogical.replicate_ddl_command(command := 'CREATE TABLE [schema].[table] (id INTEGER PRIMARY KEY, name VARCHAR);', replication_sets := ARRAY['default']);
  select pglogical.replication_set_add_table('default', '[schema].[table]');
What isn't migrated?
To add users to the Cloud SQL destination instance, navigate to the instance and add users from the Users tab, or add them from the PostgreSQL client. Learn more about creating and managing PostgreSQL users.
Large objects can't be replicated because PostgreSQL's logical decoding facility doesn't support decoding changes to large objects. For tables that have a column of type oid referencing large objects, the rows are still synced, and new rows are replicated. However, trying to access the large object on the destination database (reading it using lo_get, exporting it using lo_export, or checking the catalog pg_largeobject for the given oid) fails with a message saying that the large object doesn't exist.
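Before migrating, it can help to inventory columns of type oid that might reference large objects. A catalog query sketch (it flags all user-table oid columns, whether or not they actually point at large objects):

  SELECT n.nspname AS schema_name, c.relname AS table_name, a.attname AS column_name
  FROM pg_attribute a
  JOIN pg_class c ON a.attrelid = c.oid
  JOIN pg_namespace n ON c.relnamespace = n.oid
  WHERE a.atttypid = 'oid'::regtype
    AND a.attnum > 0
    AND NOT a.attisdropped
    AND c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema');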
For tables that don't have primary keys, Database Migration Service supports migration of the initial snapshot and INSERT statements during the change data capture (CDC) phase. You should migrate UPDATE and DELETE statements manually.
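To find the tables that will need this manual handling up front, one possible catalog query (a sketch, not part of Database Migration Service):

  SELECT n.nspname AS schema_name, c.relname AS table_name
  FROM pg_class c
  JOIN pg_namespace n ON c.relnamespace = n.oid
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    AND NOT EXISTS (
      SELECT 1 FROM pg_constraint p
      WHERE p.conrelid = c.oid AND p.contype = 'p'
    );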
Database Migration Service doesn't migrate data from materialized views, just the view schema. To populate the views, run the following command: REFRESH MATERIALIZED VIEW view_name.
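For example, you can list the materialized views on the destination from the pg_matviews catalog view and refresh each one (my_schema.my_view is a placeholder):

  SELECT schemaname, matviewname FROM pg_matviews;
  REFRESH MATERIALIZED VIEW my_schema.my_view;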
The SEQUENCE states (for example, last_value) on the new Cloud SQL destination might vary from the source SEQUENCE states.
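If your application depends on exact sequence values, you can re-align a sequence on the destination after migration. A sketch, assuming a hypothetical sequence my_table_id_seq feeding my_table.id:

  -- Compare the state on the source and the destination.
  SELECT last_value FROM my_table_id_seq;

  -- Re-align the destination sequence with the migrated data.
  SELECT setval('my_table_id_seq', (SELECT MAX(id) FROM my_table));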
Which networking methods are used?
To create a migration in Database Migration Service, connectivity must be established between the source and the Cloud SQL destination instance. A variety of methods are supported; choose the one that works best for the specific workload.
IP allowlist
Description: Works by configuring the source database server to accept connections from the public IP of the Cloud SQL instance. If you choose this method, Database Migration Service guides you through the setup process during migration creation.
Pros: Easy to configure. Recommended for short-lived migration scenarios (POC or small database migrations).
Cons: Firewall configuration may require assistance from IT. Exposes the source database on a public IP. The connection isn't encrypted by default; enabling SSL on the source database is required to encrypt the connection.
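On a self-managed source, the allowlist typically amounts to a pg_hba.conf entry for the Cloud SQL instance's outgoing IP, plus a configuration reload. An illustrative sketch (203.0.113.5 and migration_user are placeholders):

  # pg_hba.conf on the source server
  # TYPE  DATABASE  USER            ADDRESS          METHOD
  host    all       migration_user  203.0.113.5/32   md5

Then reload the configuration, for example with SELECT pg_reload_conf();.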
Reverse SSH tunnel through cloud-hosted VM
Description: Establishes connectivity from the destination to the source through a secure reverse SSH tunnel. Requires a bastion host VM in the Google Cloud project and a machine (for example, a laptop on the network) that has connectivity to the source. Database Migration Service collects the required information at migration creation time and auto-generates the script for setting it up.
Pros: Easy to configure. Doesn't require any custom firewall configuration. Recommended for short-lived migration scenarios (POC or small database migrations).
Cons: You own and manage the bastion VM. May incur additional costs.
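Database Migration Service generates the actual setup script for you, but conceptually the tunnel is a standard SSH remote port forward. A simplified illustration only (SOURCE_DB_HOST, tunnel_user, and BASTION_IP are placeholders, not the generated script):

  # Run from a machine that can reach the source database:
  # expose the source's port 5432 on the bastion VM for the migration to use.
  ssh -N -R 5432:SOURCE_DB_HOST:5432 tunnel_user@BASTION_IP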
VPC peering
Description: Works by configuring the VPCs to communicate with one another. Only applicable if both the source and destination are hosted in Google Cloud. Recommended for long-running or high-volume migrations.
Pros: Google Cloud solution. Easy to configure. High bandwidth.
Cons: Only available when the source is hosted in Google Cloud.
VPN
Description: Sets up an IPsec VPN tunnel connecting the internal network and the Google Cloud VPC through a secure connection over the public internet. Use Google Cloud VPN or any VPN solution that is set up for the internal network.
Pros: Robust and scalable connectivity solution. Medium-to-high bandwidth. Security built in. Offered as a Google Cloud solution or by third parties.
What are the known limitations?
See Known limitations: /database-migration/docs/postgres/known-limitations.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eDatabase Migration Service simplifies migrating PostgreSQL workloads to Cloud SQL, supporting various source types like Amazon RDS, Aurora, self-managed PostgreSQL, and others.\u003c/p\u003e\n"],["\u003cp\u003eThe service migrates data, schema, and metadata, including schemas, tables, stored procedures, functions, triggers, and views, with DML changes automatically updated during continuous migration.\u003c/p\u003e\n"],["\u003cp\u003ePostgreSQL-to-Cloud SQL migrations are supported across any major version, provided the destination is the same or higher than the source.\u003c/p\u003e\n"],["\u003cp\u003eSeveral networking methods are available, including IP allowlist, reverse SSH tunnel, VPC peering, VPN, and Cloud Interconnect, each with its own set of pros and cons.\u003c/p\u003e\n"],["\u003cp\u003eLarge objects, data from materialized views, and \u003ccode\u003eUPDATE\u003c/code\u003e/\u003ccode\u003eDELETE\u003c/code\u003e statements for tables without primary keys are not migrated by the service.\u003c/p\u003e\n"]]],[],null,["# Database Migration Service for PostgreSQL FAQ\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n[MySQL](/database-migration/docs/mysql/faq \"View this page for the MySQL version of Database Migration Service.\") \\| PostgreSQL \\| [PostgreSQL to AlloyDB](/database-migration/docs/postgresql-to-alloydb/faq \"View this page for the PostgreSQL to AlloyDB version of Database Migration Service.\")\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n- [What is Database Migration Service?](#whatisdms)\n- [Which sources are supported?](#sources)\n- [Which destinations are supported?](#destinations)\n- [Is there cross-version support?](#crossversion)\n- [Which data, schema, and metadata components are migrated?](#migrated)\n- [Which changes are replicated during continuous migration?](#replicated)\n- [What isn't migrated?](#notmigrated)\n- [Which networking methods are used?](#networking)\n- [What are the known limitations?](#limitations)\n\n\u003cbr /\u003e\n\nWhat is Database Migration Service?\n: Database Migration Service is a service that makes it easier for you to migrate your data to Google Cloud. 
Database Migration Service helps you lift and shift your PostgreSQL workloads into Cloud SQL.\n\nWhich sources are supported?\n:\n\n\n - Amazon RDS 9.6.10+, 10.5+, 11.1+, 12, 13, 14, 15, 16, 17.\n - Amazon Aurora 10.11+, 11.6+, 12.4+, 13.3+, 14.6+, 15.2+, 16, 17.\n - Self-managed PostgreSQL (on premises or on any cloud VM that you fully control) 9.4, 9.5, 9.6, 10, 11, 12, 13, 14, 15, 16, 17.\n - Cloud SQL for PostgreSQL 9.6, 10, 11, 12, 13, 14, 15, 16, 17.\n - Microsoft Azure Database for PostgreSQL Flexible Server: 11+\n\n\nWhich destinations are supported?\n:\n\n\n - Cloud SQL for PostgreSQL 9.6, 10, 11, 12, 13, 14, 15, 16, 17.\n\n\nIs there cross-version support?\n:\n\n Database Migration Service supports PostgreSQL-to-Cloud SQL migrations across any major\n version, where the destination is the same or higher version than the source database.\n\nWhich data, schema, and metadata components are migrated?\n\n: Database Migration Service migrates schema, data, and metadata from the source to the destination. All of the following data, schema, and metadata components are migrated as part of the database migration: \u003cbr /\u003e\n\n Data Migration\n\n - All schemas and all tables from the selected database.\n\n Schema Migration\n\n \u003c!-- --\u003e\n\n - Naming\n - Primary key\n - Data type\n - Ordinal position\n - Default value\n - Nullability\n - Auto-increment attributes\n - Secondary indexes\n\n Metadata Migration\n\n \u003c!-- --\u003e\n\n - Stored Procedures\n - Functions\n - Triggers\n - Views\n - Foreign key constraints\n\nWhich changes are replicated during continuous migration?\n:\n\n Only DML changes are automatically updated during the migration. Managing DDL so that the source and\n destination database(s) remain compatible is the responsibility of the user, and can be achieved in\n two ways:\n\n 1. Stop writes to the source and run the DDL commands in both the source and the destination. Before running DDL commands on the destination, grant the `cloudsqlexternalsync` role to the Cloud SQL user applying the DDL changes. To enable querying or changing the data, grant the `cloudsqlexternalsync` role to the relevant Cloud SQL users.\n 2. Use the `pglogical.replicate_ddl_command` to run DDL on the source and destination at a consistent\n point. The user running this command must have the same username on both the source and the destination, and should be the superuser or the owner of the artifact being migrated (for example, the table, sequence, view, or database).\n\n Here are a few examples of using the `pglogical.replicate_ddl_command`.\n\n To add a column to a database table, run the following command:\n\n `select pglogical.replicate_ddl_command('ALTER TABLE `\u003cvar translate=\"no\"\u003e[schema].[table]\u003c/var\u003e` add column surname varchar(20)', '{default}');`\n\n To change the name of a database table, run the following command:\n\n `select pglogical.replicate_ddl_command('ALTER TABLE `\u003cvar translate=\"no\"\u003e[schema].[table]\u003c/var\u003e` RENAME TO `\u003cvar translate=\"no\"\u003e[table_name]\u003c/var\u003e`','{default}');`\n\n To create a database table, run the following commands:\n 1. `select pglogical.replicate_ddl_command(command := 'CREATE TABLE `\u003cvar translate=\"no\"\u003e[schema].[table]\u003c/var\u003e` (id INTEGER PRIMARY KEY, name VARCHAR);', replication_sets := ARRAY['default'']);`\n 2. 
`select pglogical.replication_set_add_table('default', '`\u003cvar translate=\"no\"\u003e[schema].[table]\u003c/var\u003e`');`\n\nWhat isn't migrated?\n\n: To add users to the Cloud SQL destination instance, navigate to the instance and add users\n from the **Users** tab, or add them from the PostgreSQL client. Learn more about [creating\n and managing PostgreSQL users](https://cloud.google.com/sql/docs/postgres/create-manage-users).\n\n [Large objects](https://www.postgresql.org/docs/current/largeobjects.html) can't be\n replicated because PostgreSQL's logical decoding facility doesn't\n support decoding changes to large objects. For tables that have [column type oid](https://www.postgresql.org/docs/current/datatype-oid.html) referencing large\n objects, the rows are still synced, and new rows are replicated. However, trying to access\n the large object on the destination database\n (read using [lo_get](https://www.postgresql.org/docs/current/lo-funcs.html),\n export using [lo_export](https://www.postgresql.org/docs/current/lo-funcs.html), or check\n the catalog `pg_largeobject` for the given oid), fails with a message saying that the large\n object doesn't exist.\n\n For tables that don't have primary keys, Database Migration Service supports migration of the **initial snapshot and `INSERT` statements during the change data capture (CDC) phase** . You should migrate `UPDATE` and `DELETE` statements manually.\n\n Database Migration Service doesn't migrate data from materialized views, just the view schema. To populate the views, run the following command: `REFRESH MATERIALIZED VIEW `\u003cvar translate=\"no\"\u003eview_name\u003c/var\u003e.\n\n The `SEQUENCE` states (for example, `last_value`) on the new Cloud SQL destination might vary from the source `SEQUENCE` states.\n\nWhich networking methods are used?\n: To create a migration in Database Migration Service, connectivity must be established\n between the source and the Cloud SQL destination instance. There are a variety of methods supported.\n Choose the one that works best for the specific workload.\n\n\nWhat are the known limitations?\n: See [Known limitations](/database-migration/docs/postgres/known-limitations)."]]