Troubleshoot an error
The migration job process might encounter errors during runtime.
- Some errors, such as a bad password on the source database, are recoverable, meaning they can be fixed and the migration job resumes automatically.
- Some are unrecoverable, such as a lost binlog position, meaning the migration job needs to be restarted from the beginning.
When an error occurs, the migration job status changes to Failed, and the substatus reflects the last status before failure.
To troubleshoot an error, navigate to the failed migration job to view the error and follow the steps outlined in the error message.
To view more details about the error, navigate to Cloud Monitoring using the link on the migration job. The logs are filtered to the specific migration job.
In the following table, you can find some examples of issues and how they can be solved:
| Error | The issue might be | Things to try |
| --- | --- | --- |
| `Error processing table {table_name}: MySQL Error 1118 (42000): Row size too large (>8126).` | `VARCHAR` columns might have rows that exceed the maximum row size allowed by InnoDB (the default storage engine used in MySQL). | Set the `innodb_strict_mode` flag to `off`. See Error 1118: row size too large. |
| `ERROR 1064 (42000) at {line_number}: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near {reserved_word} at {line_number}.` | An identifier in your schema is a reserved word in the destination version. For example, in MySQL 5.7 you can use the word `function` in column names, but in MySQL 8.0 `function` is a reserved word, so the destination database can't parse your source. | Use backticks (`` ` ``) to escape the syntax. When complete, retry the migration job. |
| `ERROR 1109 (42S02): Unknown table in <schema name here>` | Database Migration Service doesn't migrate the `mysql`, `performance_schema`, `information_schema`, `ndbinfo`, or `sys` system schemas. Your migration job might fail if the source database contains objects that reference tables from any of these schemas. | See ERROR 1109 (42S02): Unknown table in <schema name here>. |
| `Unknown table 'COLUMN_STATISTICS' in information_schema (1109)` (`mysqldump` only) | MySQL databases in versions earlier than 8 don't have the `COLUMN_STATISTICS` table. The `mysqldump` utility in versions 8 and later tries to export this table by default. The export fails because the table doesn't exist. | Add the `--column-statistics=0` flag to your `mysqldump` command. |
| `Specified key was too long; max key length is 767 bytes.` | The source database might have the `innodb_large_prefix` variable set. | Set the `innodb_large_prefix` flag to `ON` when creating the destination instance, or update the existing destination instance with the flag. |
| `Table definition has changed.` | There were DDL changes during the dump process. | Don't perform DDL changes during the dump process. |
| `Access denied; you need (at least one of) the SUPER privilege(s) for this operation.` | An event, view, function, or procedure in the source database uses a superuser@localhost definer (such as root@localhost), which Cloud SQL doesn't support. | See the document on migrating a database with `DEFINER` clauses. |
| `Definer user 'x' does not exist. Please create the user on the replica.` | A user with the `DEFINER` clause in the source database doesn't exist on the replica database. | See the document on migrating a database with `DEFINER` clauses. You may need to create the user on the replica. |
| `ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost` | A user with the `DEFINER` clause in the source database doesn't exist on the replica and is cross-referenced in object definitions in the source database. | See the document on migrating a database with `DEFINER` clauses. You may need to create one or more users on the replica. |
| `Lost connection to MySQL server during query when dumping table.` | The source instance became unreachable, or the source has tables with large blobs or long strings that require a larger `max_allowed_packet` value. | See Lost connection to MySQL server during query when dumping table. |
| `Got packet bigger than 'max_allowed_packet' bytes when dumping table.` | The packet was larger than allowed by settings. | Create a migration job using a manual dump with the `max_allowed_packet` option to dump the data, and migrate with the dump file. |
| `mysqld check failed: data disk is full` | The data disk of the destination instance is full. | Increase the disk size of the destination instance. |
| Failure reading the provided dump file. | Database Migration Service can't find the dump file provided. | Check the dump path to ensure a proper file is there, or change the path. If you change the path, then use a `PATCH` request to ensure the job uses it. Then restart the migration job. |
| `The destination instance contains existing data or user defined entities (for example databases, tables, or functions). You can only migrate to empty instances.` | The destination instance isn't empty. | Clear your destination instance and retry the migration job. |
| `Unable to connect to source database server.` | There is a connectivity issue between Database Migration Service and the source database server. | Verify that the source and destination instances can communicate, and that you've completed the required prerequisites for the migration job. |
| `Timeout waiting for no write traffic on source.` | Migrating from a managed source database without `SUPERUSER` privileges requires a brief downtime at the beginning of the migration. | Stop write traffic on the source while the migration starts. |
| `ERROR 1146 (42S02) at line {line_number}: Table '{table_name}' doesn't exist.` | The value of the `lower_case_table_names` flag might differ between the source database and the destination Cloud SQL instance. | Check the value of the `lower_case_table_names` flag for the source database and the value of the flag for the destination Cloud SQL instance. |
| `ERROR 1109 (42S02) at line {line_number}: Unknown table '{table}' in {database}.` | An object in the source database references a table in a system schema that isn't migrated. | See ERROR 1109 (42S02): Unknown table in <schema name here>. |
| `ERROR 1045: Access denied for user '{user_name}'@'{replica_IP}' (using password: YES)". Check if MySQL replication user and password are correct. Not attempting further retries.` | The replication credentials or SSL settings are incorrect. | Confirm whether the username, password, and SSL settings are correct for the source instance by using the `mysql` client. If the source instance is Cloud SQL, then see Requiring SSL/TLS to verify whether SSL/TLS is required for TCP connections. |
| High replication lag. | The write load is too high for the replica to handle. Replication lag takes place when the SQL thread on a replica can't keep up with the I/O thread. Typical causes: slow queries are performed on the replica; a table lacks a unique, primary key, so every update on it causes full table scans on the replica; queries such as `DELETE ... WHERE field < 50000000` pile up a huge number of updates on the replica with row-based replication. | Configure parallel replication if the replica is MySQL 5.7 or later; set the `innodb_flush_log_at_trx_commit` flag on the replica to 2 (see Tips for working with flags); increase the size of the replica; reduce the load on the source database; index replica tables that don't have primary or unique keys; index tables and fix slow queries on the source database (this requires restarting the migration job). |
| `'Character set '#255' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file' on query.` | The source database uses the `utf8mb4_0900_as_ci` collation, which isn't supported in Cloud SQL for MySQL 5.7. | If the migration fails during the full dump phase, check the source table definitions for unsupported character sets or collations. If the migration fails during the CDC phase, check the replica status, find the binlog that causes the migration to fail, and examine the source database and application to locate the SQL statements that use them. If possible, update to supported character sets or collations. |
Error 1118: row size too large
Tables with columns that store variable-length strings can have rows that exceed the default InnoDB maximum row size.
Things to try
Adjust source table schema
This issue can reoccur whenever you perform any INSERT statements into the tables that exceed the maximum size row limit. To avoid future problems, it's best if you adjust your tables before you retry the migration:
- Change your table `ROW_FORMAT` to `DYNAMIC` or `COMPRESSED` by executing the following query:
  `ALTER TABLE TABLE_NAME ROW_FORMAT=FORMAT_NAME;`
  Replace the following:
  - TABLE_NAME: the name of the table whose rows exceed the maximum row size limit
  - FORMAT_NAME: `DYNAMIC` or `COMPRESSED`
  For example: `ALTER TABLE mytable ROW_FORMAT=DYNAMIC;`
- Convert row data to `BLOB` or `TEXT`. One way you can achieve this operation is with the `CONVERT()` function.
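To find tables that might be affected before you retry, you can list InnoDB tables that still use a row format subject to the stricter row size limits. This query is a sketch for illustration, not part of the official procedure:

```sql
-- List InnoDB tables not using the DYNAMIC or COMPRESSED row format.
SELECT TABLE_SCHEMA, TABLE_NAME, ROW_FORMAT
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB'
  AND ROW_FORMAT NOT IN ('Dynamic', 'Compressed')
  AND TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys');
```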
Disable InnoDB strict mode
If it's not possible for you to adjust the source table schema, you can temporarily disable InnoDB validation to complete the migration job. Keep in mind that the issue can reoccur during future database write attempts, so it's best to adjust your table schema when feasible.
To temporarily disable InnoDB validation for the purposes of completing your migration job, follow these steps:
- Re-create your migration job.
- At the end of the Define a destination step, click Show optional configurations.
- In the Flags section, select the `innodb_strict_mode` flag and set it to `Off`.
- After your migration is complete, set the `innodb_strict_mode` flag to `On`. See Configure database flags in the Cloud SQL for MySQL documentation.

Alternatively, set the flag directly on the destination instance:

- In Cloud SQL, navigate to your destination instance and set the `innodb_strict_mode` flag to `Off`. See Configure database flags in the Cloud SQL for MySQL documentation.
- Re-create your migration job.
- After your migration is complete, set the `innodb_strict_mode` flag to `On`.
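If you manage your instances from the command line, the same flag change can be made with gcloud. This is a sketch; DESTINATION_INSTANCE is a placeholder, and note that `--database-flags` replaces the instance's entire flag list, so include any other flags the instance already uses:

```shell
# Temporarily disable InnoDB strict mode on the destination instance.
gcloud sql instances patch DESTINATION_INSTANCE \
    --database-flags=innodb_strict_mode=off

# After the migration completes, re-enable strict mode.
gcloud sql instances patch DESTINATION_INSTANCE \
    --database-flags=innodb_strict_mode=on
```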
Clear extra data from your existing destination instance
When you migrate to an existing destination instance, you receive the following error message: "The destination instance contains existing data or user defined entities (for example databases, tables, or functions). You can only migrate to empty instances. Clear your destination instance and retry the migration job."
This issue can occur if your destination instance contains extra data. You can only migrate to existing instances that are empty. See Known limitations .
Things to try
Clear extra data from your destination instance and start the migration job again by performing the following steps:
- Stop the migration job.
- At this point, your destination Cloud SQL instance is in read-only mode. Promote the destination instance to gain write access.
- Connect to your destination Cloud SQL instance.
- Remove extra data from your destination instance databases. Your destination can only contain system configuration data. Destination databases can't contain user data (such as tables). There are different SQL statements you can run on your databases to find non-system data, for example:
  `SELECT schema_name FROM information_schema.SCHEMATA WHERE schema_name NOT IN ('information_schema', 'sys', 'performance_schema', 'mysql');`
- Start the migration job .
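If the query in the previous step returns a schema that isn't system configuration data, you can drop it before restarting the job. This is a sketch; `extra_db` is a hypothetical schema name returned by the query, and `DROP DATABASE` is irreversible, so verify the name first:

```sql
-- Remove a leftover user schema from the destination instance.
DROP DATABASE extra_db;
```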
Terraform configuration drift
When you migrate to an existing destination database, Database Migration Service modifies certain settings of your destination database to execute the migration job. For databases provisioned with Terraform, this interaction might cause a configuration drift where the actual destination database configuration differs from the configuration set in your Terraform files.
Things to try
Don't attempt to re-apply your Terraform configuration while the migration job is running. You can safely adjust the necessary configuration after your destination database is promoted. Database Migration Service performs the following modifications to your destination Cloud SQL instance:
- Backup configuration is set to default values.
- Point-in-time recovery is reset to default values.
ERROR 1109 (42S02): Unknown table in <schema name here>
Migration jobs fail with the following message: `ERROR 1109 (42S02): Unknown table in <schema name here>`, for example: `ERROR 1109 (42S02) at line X: Unknown table 'GLOBAL_STATUS' in information_schema`.
The issue might be
Database Migration Service doesn't migrate the `mysql`, `performance_schema`, `information_schema`, `ndbinfo`, or `sys` system schemas (see Known limitations). Your migration job might fail if the source database contains objects that reference tables from any of these schemas.
Things to try
Check your source database for objects that reference tables from the system schemas. On your source database, execute the following queries:

```sql
-- Query to check routine or function definitions.
SELECT ROUTINE_SCHEMA, ROUTINE_NAME
FROM information_schema.routines
WHERE ROUTINE_SCHEMA NOT IN ('information_schema', 'mysql', 'ndbinfo', 'performance_schema', 'sys')
  AND ROUTINE_DEFINITION LIKE '%OBJECT_NAME_REPORTED_IN_THE_ERROR_MESSAGE%';

-- Query to check view definitions.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.views
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'ndbinfo', 'performance_schema', 'sys')
  AND VIEW_DEFINITION LIKE '%OBJECT_NAME_REPORTED_IN_THE_ERROR_MESSAGE%';

-- Query to check trigger definitions.
SELECT TRIGGER_SCHEMA, TRIGGER_NAME
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA NOT IN ('information_schema', 'mysql', 'ndbinfo', 'performance_schema', 'sys')
  AND EVENT_OBJECT_TABLE = 'OBJECT_NAME_REPORTED_IN_THE_ERROR_MESSAGE';

-- Query to check constraint definitions.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'ndbinfo', 'performance_schema', 'sys')
  AND REFERENCED_TABLE_NAME = 'OBJECT_NAME_REPORTED_IN_THE_ERROR_MESSAGE';
```
Unknown table 'COLUMN_STATISTICS' in information_schema
When running the `mysqldump` utility version 8 or later to export a MySQL database version earlier than 8, you encounter this error: `Unknown table 'COLUMN_STATISTICS' in information_schema (1109)`.
The issue might be
MySQL databases in versions earlier than 8 don't have the `COLUMN_STATISTICS` table. The `mysqldump` utility in versions 8 and later tries to export this table by default. The export fails because the table doesn't exist.
Things to try
Add the `--column-statistics=0` flag to your `mysqldump` command to exclude the `COLUMN_STATISTICS` table from the export. For more information, see Exporting a MySQL database using mysqldump.
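A sketch of the adjusted command; the host, user, and database names are placeholders, and `--single-transaction` (for a consistent InnoDB dump) is an optional addition, not required by the fix itself:

```shell
# mysqldump 8+ against a pre-8 source: skip COLUMN_STATISTICS.
mysqldump --column-statistics=0 \
    --host=SOURCE_HOST --user=USER_NAME --password \
    --single-transaction \
    SOURCE_DATABASE > source_database_dump.sql
```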
Specified key was too long; max key length is 767 bytes
You see the error Specified key was too long; max key length is 767 bytes.
The issue might be
The source database may have the `innodb_large_prefix` variable set. This allows index key prefixes longer than 767 bytes. The default value is `OFF` for MySQL 5.6.
Things to try
Set the innodb_large_prefix
flag to ON
when creating the destination database, or update the existing destination database with the flag.
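Before changing the destination, you can confirm the setting on the source. This check is a sketch and only applies to MySQL 5.6 and 5.7; in MySQL 8.0 the variable was removed because the larger prefix length became the default behavior:

```sql
-- On the source (MySQL 5.6/5.7): check whether large index prefixes are enabled.
SHOW VARIABLES LIKE 'innodb_large_prefix';
```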
Table definition has changed
You see the error Table definition has changed
.
The issue might be
There were DDL changes during the dump process.
Things to try
Don't modify tables or perform any other DDL changes during the dump process. You can use a script to verify that DDL operations are stopped.
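One way to sketch such a check is to poll the source for in-flight DDL statements while the dump runs. This is an illustration only, not an official script; the regular expression is an assumption and may need tuning for your workload:

```sql
-- Show any currently running DDL statements on the source.
SELECT ID, USER, DB, INFO
FROM information_schema.PROCESSLIST
WHERE INFO REGEXP '^(ALTER|CREATE|DROP|RENAME|TRUNCATE)';
```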
Access denied; you need (at least one of) the SUPER privilege(s) for this operation
You see the error Access denied; you need (at least one of) the SUPER privilege(s) for this operation
.
The issue might be
There could be an event, a view, a function, or a procedure in the source database that uses a superuser@localhost definer (such as root@localhost). Cloud SQL doesn't support this.
Things to try
Refer to this
document on migrating a database with DEFINER
clauses.
Error message: Definer user 'x' does not exist. Please create the user on the replica.
You see the error Definer user 'x' does not exist. Please create the user on the replica.
The issue might be
The root cause is that a user in the source database with the DEFINER
clause doesn't exist on the replica database.
Things to try
Refer to this
document on migrating a database with DEFINER
clauses. You may need
to create the user in the replica database.
Error message: ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost
You see the error ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost
.
The issue might be
The root cause is that a user in the source database with the DEFINER
clause doesn't exist on the replica database and that user
is cross-referenced in the object definitions in the source database.
Things to try
Refer to this
document on migrating a database with DEFINER
clauses. You may need
to create one or more users in the replica database.
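To see which definers your source objects reference before migrating, you can query `information_schema`. This is a sketch; extend it with events or other object types as needed:

```sql
-- List the distinct DEFINER values referenced by views, routines, and triggers.
SELECT DEFINER FROM information_schema.VIEWS
UNION
SELECT DEFINER FROM information_schema.ROUTINES
UNION
SELECT DEFINER FROM information_schema.TRIGGERS;
```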
Lost connection to MySQL server during query when dumping table
You see the error Lost connection to MySQL server during query when dumping table
.
The issue might be
- The source database instance may have become unreachable from the destination instance.
- The source database may have tables with large blobs or long strings which require setting the
max_allowed_packet
to a larger number on the source database.
Things to try
- Verify the source database instance is up and reachable.
- Configure the `max-allowed-packet` flag in the migration job, and then restart the migration job. Or, generate a manual dump with the `max_allowed_packet` option to dump the data and migrate with the dump file.
- Increasing `max_allowed_packet` will most likely require tweaking the `net_read_timeout` and `net_write_timeout` settings on the source database (generally, increase them until the connection error stops).
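If you have the required privileges on the source, these settings can be raised at runtime. The values below are an illustrative sketch, not recommendations:

```sql
-- Allow larger packets and longer network timeouts on the source.
SET GLOBAL max_allowed_packet = 1073741824;  -- 1 GiB (the maximum value)
SET GLOBAL net_read_timeout = 600;           -- seconds
SET GLOBAL net_write_timeout = 600;          -- seconds
```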
Got packet bigger than 'max_allowed_packet' bytes when dumping table
You see the error Got packet bigger than 'max_allowed_packet' bytes when dumping table
.
The issue might be
The packet was larger than allowed by settings.
Things to try
Create a migration job using a manual dump
with the max_allowed_packet
option to dump the data and migrate with the dump file.
No data is being replicated
Initial data migration was successful, but no data is being replicated.
The issue might be
A possible root cause could be your source database has replication flags which result in some or all database changes not being replicated over.
Things to try
Make sure that replication flags such as `binlog-do-db`, `binlog-ignore-db`, `replicate-do-db`, or `replicate-ignore-db` are not set in a conflicting way.
Run the command `show master status` on the source database to see the current settings.
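As a sketch of what to look for: on the source, non-empty `Binlog_Do_DB` or `Binlog_Ignore_DB` columns in the status output mean binlog filtering is active; on the replica, the corresponding fields appear in the replication status:

```sql
-- On the source: check the Binlog_Do_DB and Binlog_Ignore_DB columns.
SHOW MASTER STATUS;

-- On the replica (MySQL 8.0.22+; use SHOW SLAVE STATUS on older versions):
-- check the Replicate_Do_DB and Replicate_Ignore_DB fields.
SHOW REPLICA STATUS;
```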
Initial data migration was successful but data replication stopped working after a while
The initial data migration was successful but data replication stopped working after a while.
The issue might be
There can be many root causes for this issue.
Things to try
- Check the replication metrics for your destination instance in the Cloud Monitoring UI.
- Errors from the MySQL I/O thread or SQL thread can be found in Cloud Logging in the mysql.err log files.
- The error can also be found when connecting to the destination instance. Run the command `SHOW REPLICA STATUS`, and check for these fields in the output:
  - `Replica_IO_Running`
  - `Replica_SQL_Running`
  - `Last_IO_Error`
  - `Last_SQL_Error`
Note: `SHOW REPLICA STATUS` is an alias introduced in MySQL 8.0.22. For previous versions (MySQL 5.7, early MySQL 8.0), use `SHOW SLAVE STATUS`. For more information, see the status statement in the MySQL documentation.
If you get the error `fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'` in `Last_IO_Error`, this could be due to an insufficient binlog retention setting on the source instance.
If you get the error `error connecting to master 'USER_NAME@SOURCE_HOST:SOURCE_PORT' - retry-time: RETRY_TIME retries: RETRIES`, this could be due to the destination instance failing to reconnect to the source because of a connectivity or authentication issue.
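If your source is Amazon RDS or Aurora and `Last_IO_Error` shows binlog error 1236, the retention window can be extended with the RDS-provided stored procedure. This is a sketch; 168 hours (7 days) is an illustrative value:

```sql
-- On an Amazon RDS/Aurora MySQL source: retain binary logs for 7 days.
CALL mysql.rds_set_configuration('binlog retention hours', 168);
-- Verify the current setting.
CALL mysql.rds_show_configuration;
```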
mysqld check failed: data disk is full
You see the error mysqld check failed: data disk is full
.
The issue might be
The data disk of the destination instance is probably full.
Things to try
Increase the disk size of the destination instance.
Failure connecting to source database
Failure connecting to source database.
The issue might be
There was a connectivity issue between the source database instance and the destination instance.
Things to try
Follow the steps in the debugging connectivity article .
Migration from a managed database (Amazon RDS/Aurora) doesn't start
The migration job fails to start.
The issue might be
Migrating from a managed source database without SUPERUSER
privileges
requires a brief downtime in the beginning of the migration.
Things to try
- For Amazon RDS, follow the steps in this article .
- For Amazon Aurora, follow the steps in this article .
Binlog is configured incorrectly on the source database
You see an error indicating a problem with the binary logs.
The issue might be
This can happen for continuous MySQL migration jobs if the binlog configuration is incorrect on the source database.
Things to try
Make sure you follow the definitions here .
Failure reading the provided dump file
You see an error indicating a problem with the dump file.
The issue might be
DMS is unable to find the dump file provided.
Things to try
- Check the dump path to ensure a proper file is there, or change the path.
- If you change the path, then use a `PATCH` request to ensure the job uses it.
- Restart the migration job.
Unable to resume replication as binlog position was lost
The binlog position is lost.
The issue might be
This error can occur when the replication process is paused for a long time, which causes the binlog position to be lost. Migration jobs shouldn't be paused for periods of time that approach the binlog retention period.
Things to try
Restart the migration job.
Failure running migration job due to incompatible source and destination database versions
The source and destination database versions are not a supported combination.
The issue might be
The source database version provided is incompatible with the destination database version.
Things to try
Make sure that the destination database version is the same as, or one major version above, the source database version, then create a new migration job.
Can't connect to the source database server
You see the error Unable to connect to source database server
.
The issue might be
Database Migration Service can't establish a connection to the source database server.
Things to try
Verify that the source and destination database instances can communicate with each other. Then, make sure that you've completed all of the required prerequisites that appeared when you defined the settings for your migration job .
Cloud SQL destination instance disk usage drops to zero
The disk usage suddenly drops to zero during migration.
The issue might be
There may be a failure when importing the full dump data. When this happens, the migration process tries to load the data again. It first wipes the existing data from the destination instance, which is why you see the disk usage drop to zero, and then tries to reload the data.
Things to try
Go to the Logs Explorer and select your destination instance from the resource list.
Look for a log message similar to: `DUMP_STAGE(RETRY): Attempt .../...: import failed: error..."; Clearing database and trying again."`
Find the message after the `import failed:` text and try to resolve the underlying issue.