This page provides an overview of ways to migrate data into and out of AlloyDB for PostgreSQL.
Importing data
You can migrate data into AlloyDB through two methods:
- Manually importing files containing your data.
- Using Google Cloud Database Migration Service to migrate the contents of a whole, active PostgreSQL instance into a new AlloyDB cluster.
Import archived data from files
AlloyDB lets you import data from files located in a Cloud Storage bucket. AlloyDB supports a variety of file formats for data import, including the following:
- CSV, with the contents of one table per file, loaded using psql (see the example after this list).
- DMP, a binary archive of a whole PostgreSQL database, imported using pg_restore.
- SQL, a plain-text reconstruction of a PostgreSQL database, processed with psql.
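For example, you can stage a CSV file on a client machine and load it into an existing table with psql's \copy meta-command. This is a minimal sketch; the table name orders and the file orders.csv are hypothetical, and the table's columns are assumed to match the CSV layout:
  -- Run at the psql prompt while connected to the target AlloyDB database.
  -- "orders" and "orders.csv" are hypothetical names.
  \copy orders FROM 'orders.csv' CSV HEADER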
Migrate an active instance with Database Migration Service
As an alternative to manually importing data-dump files, you can use Database Migration Service for AlloyDB . This service lets you migrate the entire contents of an active PostgreSQL instance—including all of its databases and metadata—into a new AlloyDB cluster. Database Migration Service can help you transition your existing applications to using AlloyDB as their new datastore with minimal downtime.
Supported locales for imported data
AlloyDB supports the following locales:
- The full set of ICU-provided locales.
- A limited set of locales provided by libc:
  - C.UTF-8
  - en_US.utf8
If the database you want to import into AlloyDB defaults to a libc locale other than the two listed here, you can still import your data, but that default won't carry over. To ensure that SQL queries using ORDER BY sort their results properly, you might need to take additional steps after importing your data.
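To check which default libc locale a source database uses before migrating, you can query the pg_database catalog. This is a minimal sketch, run while connected to the source database:
  -- Show the default collation (LC_COLLATE) and character-classification
  -- (LC_CTYPE) locales of the current database.
  SELECT datname, datcollate, datctype
  FROM pg_database
  WHERE datname = current_database();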
We recommend allowing your new database to use the default locale for AlloyDB, C.UTF-8, provided by libc. Then, associate collations with any columns that your application's queries might involve in ORDER BY clauses, naming an appropriate ICU-based collation for each one. You can do this through ALTER TABLE DDL queries, such as the following:
  ALTER TABLE TABLE_NAME
    ALTER COLUMN COLUMN_NAME
    SET DATA TYPE COLUMN_DATA_TYPE
    COLLATE "COLLATION_NAME";
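For example, the following sketch applies one of the ICU-based collations that PostgreSQL defines, de-DE-x-icu, to a text column; the table name books and the column name title are hypothetical:
  -- Hypothetical example: collate the "title" column of a "books" table
  -- using the ICU-based German (Germany) collation.
  ALTER TABLE books
    ALTER COLUMN title
    SET DATA TYPE text
    COLLATE "de-DE-x-icu";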
 
 
A new AlloyDB cluster defines hundreds of collations based on ICU-provided locales, and you can add more using the PostgreSQL CREATE COLLATION facility. To see the full list of ICU-based collation names defined on an AlloyDB cluster, run the following query on any of its instances:
  SELECT collname
  FROM pg_collation
  WHERE collprovider = 'i';
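If none of the predefined collations suits a column, you can define your own ICU-based collation with standard PostgreSQL syntax. The following sketch creates a nondeterministic, case-insensitive collation; the collation name and ICU locale string are illustrative:
  -- "case_insensitive" and the locale string are illustrative choices.
  CREATE COLLATION case_insensitive (
    provider = icu,
    locale = 'und-u-ks-level2',
    deterministic = false
  );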
 
 
In addition to ICU-based collations, AlloyDB supports the PostgreSQL built-in collation named ucs_basic. This collation uses the standard order of Unicode code points to enable especially efficient sorting. We recommend its use with columns whose appropriate sort order matches that of the Unicode code-point list.
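For example, you can apply ucs_basic directly in a query's ORDER BY clause; the table and column names in this sketch are hypothetical:
  -- Sort by raw Unicode code-point order using the built-in collation.
  SELECT sku
  FROM products
  ORDER BY sku COLLATE "ucs_basic";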
Exporting data
You can use command-line utilities to export your AlloyDB data into files stored in a Cloud Storage bucket, in a variety of formats:
- CSV, exporting one table per file, using psql (see the example after this list).
- DMP, using pg_dump to create a portable, binary archive of your whole database.
- SQL, using pg_dump to create a list of DDL and SQL statements to reconstruct your database.
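For example, the CSV case can be handled with psql's \copy meta-command, writing one table to a local file that you can then upload to your Cloud Storage bucket. The table and file names in this sketch are hypothetical:
  -- Run at the psql prompt while connected to the AlloyDB database.
  \copy orders TO 'orders.csv' CSV HEADER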

