- Resource: MigrationWorkflow
- MigrationTask
- AssessmentTaskDetails
- AssessmentFeatureHandle
- TranslationTaskDetails
- TeradataOptions
- BteqOptions
- DatasetReference
- TranslationFileMapping
- FileEncoding
- IdentifierSettings
- IdentifierCase
- IdentifierRewriteMode
- TokenType
- Filter
- TranslationConfigDetails
- ObjectNameMappingList
- ObjectNameMapping
- NameMappingKey
- Type
- NameMappingValue
- Dialect
- BigQueryDialect
- HiveQLDialect
- RedshiftDialect
- TeradataDialect
- Mode
- OracleDialect
- SparkSQLDialect
- SnowflakeDialect
- NetezzaDialect
- AzureSynapseDialect
- VerticaDialect
- SQLServerDialect
- PostgresqlDialect
- PrestoDialect
- MySQLDialect
- DB2Dialect
- SQLiteDialect
- GreenplumDialect
- SourceEnv
- SourceTargetLocationMapping
- SourceLocation
- TargetLocation
- TranslationDetails
- SourceTargetMapping
- SourceSpec
- Literal
- TargetSpec
- SourceEnvironment
- MetadataCaching
- State
- MigrationTaskOrchestrationResult
- AssessmentOrchestrationResultDetails
- TranslationTaskResult
- GcsReportLogMessage
- MigrationTaskResult
- State
- Methods
Resource: MigrationWorkflow
A migration workflow which specifies what needs to be done for an EDW migration.
JSON representation

    {
      "name": string,
      "displayName": string,
      "tasks": {
        string: {
          object (MigrationTask)
        },
        ...
      },
      "state": enum (State),
      "createTime": string,
      "lastUpdateTime": string
    }

Fields

| Field | Description |
|---|---|
| `name` | Output only. Immutable. Identifier. The unique identifier for the migration workflow. The ID is server-generated. |
| `displayName` | The display name of the workflow. This can be set to give a workflow a descriptive name. There is no guarantee or enforcement of uniqueness. |
| `tasks` | The tasks in a workflow in a named map. The name (i.e. key) has no meaning and is merely a convenient way to address a specific task in a workflow. |
| `state` | Output only. The status of the workflow. |
| `createTime` | Output only. Time when the workflow was created. |
| `lastUpdateTime` | Output only. Time when the workflow was last updated. |
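For illustration, a MigrationWorkflow resource returned by the API could look like the following sketch. The project, location, workflow ID, task key, and timestamps are placeholder values, not taken from a real workflow.

```json
{
  "name": "projects/my-project/locations/us/workflows/1234567890",
  "displayName": "teradata-sql-translation",
  "tasks": {
    "translation-task": {
      "id": "task-1",
      "type": "Translation_Teradata2BQ",
      "state": "RUNNING"
    }
  },
  "state": "RUNNING",
  "createTime": "2024-05-01T10:00:00Z",
  "lastUpdateTime": "2024-05-01T10:05:00Z"
}
```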
MigrationTask
A single task for a migration which has details about the configuration of the task.
JSON representation

    {
      "id": string,
      "type": string,
      "details": {
        "@type": string,
        field1: ...,
        ...
      },
      "state": enum (State),
      "processingError": {
        object (ErrorInfo)
      },
      "createTime": string,
      "lastUpdateTime": string,
      "orchestrationResult": {
        object (MigrationTaskOrchestrationResult)
      },
      "resourceErrorDetails": [
        {
          object (ResourceErrorDetail)
        }
      ],
      "resourceErrorCount": integer,
      "metrics": [
        {
          object (TimeSeries)
        }
      ],
      "taskResult": {
        object (MigrationTaskResult)
      },
      "totalProcessingErrorCount": integer,
      "totalResourceErrorCount": integer,

      // Union field task_details can be only one of the following:
      "assessmentTaskDetails": {
        object (AssessmentTaskDetails)
      },
      "translationTaskDetails": {
        object (TranslationTaskDetails)
      },
      "translationConfigDetails": {
        object (TranslationConfigDetails)
      },
      "translationDetails": {
        object (TranslationDetails)
      }
      // End of list of possible types for union field task_details.
    }
Fields

| Field | Description |
|---|---|
| `id` | `string`. Output only. Immutable. The unique identifier for the migration task. The ID is server-generated. |
| `type` | `string`. The type of the task. This must be one of the supported task types: Translation_Teradata2BQ, Translation_Redshift2BQ, Translation_Bteq2BQ, Translation_Oracle2BQ, Translation_HiveQL2BQ, Translation_SparkSQL2BQ, Translation_Snowflake2BQ, Translation_Netezza2BQ, Translation_AzureSynapse2BQ, Translation_Vertica2BQ, Translation_SQLServer2BQ, Translation_Presto2BQ, Translation_MySQL2BQ, Translation_Postgresql2BQ, Translation_SQLite2BQ, Translation_Greenplum2BQ. |
| `details` | `object`. DEPRECATED! Use one of the `task_details` fields below. The details of the task. The type URL must be one of the supported task details messages and correspond to the task's type. |
| `state` | `enum (State)`. Output only. The current state of the task. |
| `processingError` | `object (ErrorInfo)`. Output only. An explanation that may be populated when the task is in FAILED state. |
| `createTime` | `string (Timestamp format)`. Output only. Time when the task was created. |
| `lastUpdateTime` | `string (Timestamp format)`. Output only. Time when the task was last updated. |
| `orchestrationResult` (deprecated) | `object (MigrationTaskOrchestrationResult)`. Output only. Deprecated: use the `taskResult` field instead. Additional information about the orchestration. |
| `resourceErrorDetails[]` | `object (ResourceErrorDetail)`. Output only. Provides details on errors and issues encountered while processing the task. The presence of error details does not mean that the task failed. |
| `resourceErrorCount` | `integer`. The number of resources with errors. Note: this is not the total number of errors, as each resource can have more than one error. It is used to indicate truncation when `resourceErrorCount` is higher than the size of `resourceErrorDetails`. |
| `metrics[]` | `object (TimeSeries)`. Output only. The metrics for the task. |
| `taskResult` | `object (MigrationTaskResult)`. Output only. The result of the task. |
| `totalProcessingErrorCount` | `integer`. Output only. Count of all the processing errors in this task and its subtasks. |
| `totalResourceErrorCount` | `integer`. Output only. Count of all the resource errors in this task and its subtasks. |

Union field `task_details`. The details of the task. `task_details` can be only one of the following:

| Field | Description |
|---|---|
| `assessmentTaskDetails` | `object (AssessmentTaskDetails)`. Task configuration for Assessment. |
| `translationTaskDetails` | `object (TranslationTaskDetails)`. Task configuration for Batch SQL Translation. |
| `translationConfigDetails` | `object (TranslationConfigDetails)`. Task configuration for CW Batch/Offline SQL Translation. |
| `translationDetails` | `object (TranslationDetails)`. Task details for unified SQL Translation. |
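As a sketch of how the `task_details` union is used, a translation task entry might look like the following. The bucket names and paths are placeholder assumptions, and TranslationConfigDetails is described later on this page.

```json
{
  "id": "task-1",
  "type": "Translation_Teradata2BQ",
  "state": "PENDING",
  "translationConfigDetails": {
    "sourceDialect": { "teradataDialect": { "mode": "SQL" } },
    "targetDialect": { "bigqueryDialect": {} },
    "gcsSourcePath": "gs://my-bucket/input/",
    "gcsTargetPath": "gs://my-bucket/output/"
  }
}
```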
AssessmentTaskDetails
Assessment task config.
JSON representation

    {
      "inputPath": string,
      "outputDataset": string,
      "querylogsPath": string,
      "dataSource": string,
      "featureHandle": {
        object (AssessmentFeatureHandle)
      }
    }

Fields

| Field | Description |
|---|---|
| `inputPath` | Required. The Cloud Storage path for assessment input files. |
| `outputDataset` | Required. The BigQuery dataset for output. |
| `querylogsPath` | Optional. An optional Cloud Storage path to write the query logs (which is then used as an input path on the translation task). |
| `dataSource` | Required. The data source or data warehouse type (for example, TERADATA or REDSHIFT) from which the input data is extracted. |
| `featureHandle` | Optional. A collection of additional feature flags for this assessment. |
AssessmentFeatureHandle
User-definable feature flags for assessment tasks.
JSON representation

    { "addShareableDataset": boolean }

Fields

| Field | Description |
|---|---|
| `addShareableDataset` | Optional. Whether to create a dataset containing non-PII data in addition to the output dataset. |
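A possible assessment task configuration, combining the two messages above. The bucket, output dataset, and data source values are placeholders for illustration only.

```json
{
  "inputPath": "gs://my-bucket/assessment/input/",
  "outputDataset": "my-project.assessment_output",
  "querylogsPath": "gs://my-bucket/assessment/querylogs/",
  "dataSource": "TERADATA",
  "featureHandle": { "addShareableDataset": false }
}
```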
TranslationTaskDetails
The translation task config to capture necessary settings for a translation task and subtask.
JSON representation

    {
      "inputPath": string,
      "outputPath": string,
      "filePaths": [
        {
          object (TranslationFileMapping)
        }
      ],
      "schemaPath": string,
      "fileEncoding": enum (FileEncoding),
      "identifierSettings": {
        object (IdentifierSettings)
      },
      "specialTokenMap": {
        string: enum (TokenType),
        ...
      },
      "filter": {
        object (Filter)
      },
      "translationExceptionTable": string
    }

Fields

| Field | Description |
|---|---|
| `inputPath` | `string`. The Cloud Storage path for translation input files. |
| `outputPath` | `string`. The Cloud Storage path for translation output files. |
| `filePaths[]` | `object (TranslationFileMapping)`. Cloud Storage files to be processed for translation. |
| `schemaPath` | `string`. The Cloud Storage path to DDL files used as table schemas to assist semantic translation. |
| `fileEncoding` | `enum (FileEncoding)`. The file encoding type. |
| `identifierSettings` | `object (IdentifierSettings)`. The settings for SQL identifiers. |
| `specialTokenMap` | `map (key: string, value: enum (TokenType))`. The map capturing special tokens to be replaced during translation. The key is the special token string; the value is its data type. This is used to translate SQL query templates that contain special tokens as placeholders, which would otherwise make a query invalid to parse. The map annotates those special tokens with type information so the parser can parse them into a proper structure. |
| `filter` | `object (Filter)`. The filter applied to translation details. |
| `translationExceptionTable` | `string`. Specifies the exact name of the BigQuery table ("dataset.table") to be used for surfacing raw translation errors. If the table does not exist, it will be created. If it already exists with the same schema, it will be reused. If the table exists with a different schema, an error will be thrown. |
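To make the `specialTokenMap` and `filter` fields concrete, here is a hypothetical configuration. All paths, token names, and the exception table name are illustrative assumptions.

```json
{
  "inputPath": "gs://my-bucket/sql/input/",
  "outputPath": "gs://my-bucket/sql/output/",
  "schemaPath": "gs://my-bucket/sql/ddl/",
  "fileEncoding": "UTF_8",
  "identifierSettings": {
    "outputIdentifierCase": "LOWER",
    "identifierRewriteMode": "NONE"
  },
  "specialTokenMap": {
    "${run_date}": "DATE",
    "${batch_id}": "INT64"
  },
  "filter": {
    "inputFileExclusionPrefixes": ["gs://my-bucket/sql/input/archive/"]
  },
  "translationExceptionTable": "my_dataset.translation_errors"
}
```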
TeradataOptions
This type has no fields.
Teradata SQL specific translation task related settings.
BteqOptions
BTEQ translation task related settings.
JSON representation

    {
      "projectDataset": {
        object (DatasetReference)
      },
      "defaultPathUri": string,
      "fileReplacementMap": {
        string: string,
        ...
      }
    }

Fields

| Field | Description |
|---|---|
| `projectDataset` | Specifies the project and dataset in BigQuery that will be used for external table creation during the translation. |
| `defaultPathUri` | The Cloud Storage location to be used as the default path for files that are not otherwise specified in the file replacement map. |
| `fileReplacementMap` | Maps the local paths used in BTEQ scripts (the keys) to the paths in Cloud Storage that should be used in their stead in the translation (the values). |
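A sketch of BteqOptions that maps local BTEQ file paths to Cloud Storage. The project, dataset, bucket, and file paths are placeholder assumptions.

```json
{
  "projectDataset": {
    "projectId": "my-project",
    "datasetId": "bteq_staging"
  },
  "defaultPathUri": "gs://my-bucket/bteq/data/",
  "fileReplacementMap": {
    "/local/exports/sales.csv": "gs://my-bucket/bteq/data/sales.csv"
  }
}
```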
DatasetReference
Reference to a BigQuery dataset.
JSON representation

    {
      "datasetId": string,
      "projectId": string,
      "datasetIdAlternative": [ string ],
      "projectIdAlternative": [ string ]
    }

Fields

| Field | Description |
|---|---|
| `datasetId` | A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters. |
| `projectId` | The ID of the project containing this dataset. |
| `datasetIdAlternative[]` | The alternative field that will be used when the service is not able to translate the received data to the `datasetId` field. |
| `projectIdAlternative[]` | The alternative field that will be used when the service is not able to translate the received data to the `projectId` field. |
TranslationFileMapping
Mapping between an input and output file to be translated in a subtask.
JSON representation

    { "inputPath": string, "outputPath": string }

Fields

| Field | Description |
|---|---|
| `inputPath` | The Cloud Storage path of a file to translate in a subtask. |
| `outputPath` | The Cloud Storage path to write back the translated output of the corresponding input file. |
FileEncoding
The file encoding types.
Enums

| Enum | Description |
|---|---|
| `FILE_ENCODING_UNSPECIFIED` | File encoding setting is not specified. |
| `UTF_8` | File encoding is UTF_8. |
| `ISO_8859_1` | File encoding is ISO_8859_1. |
| `US_ASCII` | File encoding is US_ASCII. |
| `UTF_16` | File encoding is UTF_16. |
| `UTF_16LE` | File encoding is UTF_16LE. |
| `UTF_16BE` | File encoding is UTF_16BE. |
IdentifierSettings
Settings related to SQL identifiers.
JSON representation

    {
      "outputIdentifierCase": enum (IdentifierCase),
      "identifierRewriteMode": enum (IdentifierRewriteMode)
    }

Fields

| Field | Description |
|---|---|
| `outputIdentifierCase` | The setting to control the identifier case of output queries. |
| `identifierRewriteMode` | Specifies the rewrite mode for SQL identifiers. |
IdentifierCase
The identifier case type.
Enums

| Enum | Description |
|---|---|
| `IDENTIFIER_CASE_UNSPECIFIED` | The identifier case is not specified. |
| `ORIGINAL` | Identifiers keep their original case. |
| `UPPER` | Identifiers are converted to upper case. |
| `LOWER` | Identifiers are converted to lower case. |
IdentifierRewriteMode
The SQL identifier rewrite mode.
Enums

| Enum | Description |
|---|---|
| `IDENTIFIER_REWRITE_MODE_UNSPECIFIED` | SQL identifier rewrite mode is unspecified. |
| `NONE` | SQL identifiers won't be rewritten. |
| `REWRITE_ALL` | All SQL identifiers will be rewritten. |
TokenType
The special token data type.
Enums

| Enum | Description |
|---|---|
| `TOKEN_TYPE_UNSPECIFIED` | Token type is not specified. |
| `STRING` | Token type as string. |
| `INT64` | Token type as integer. |
| `NUMERIC` | Token type as numeric. |
| `BOOL` | Token type as boolean. |
| `FLOAT64` | Token type as float. |
| `DATE` | Token type as date. |
| `TIMESTAMP` | Token type as timestamp. |
Filter
The filter applied to fields of translation details.
JSON representation

    { "inputFileExclusionPrefixes": [ string ] }

Fields

| Field | Description |
|---|---|
| `inputFileExclusionPrefixes[]` | The list of prefixes used to exclude processing for input files. |
TranslationConfigDetails
The translation config to capture necessary settings for a translation task and subtask.
JSON representation

    {
      "sourceDialect": {
        object (Dialect)
      },
      "targetDialect": {
        object (Dialect)
      },
      "sourceEnv": {
        object (SourceEnv)
      },
      "sourceTargetLocationMapping": [
        {
          object (SourceTargetLocationMapping)
        }
      ],
      "requestSource": string,
      "targetTypes": [ string ],

      // Union field source_location can be only one of the following:
      "gcsSourcePath": string,
      // End of list of possible types for union field source_location.

      // Union field target_location can be only one of the following:
      "gcsTargetPath": string,
      // End of list of possible types for union field target_location.

      // Union field output_name_mapping can be only one of the following:
      "nameMappingList": {
        object (ObjectNameMappingList)
      }
      // End of list of possible types for union field output_name_mapping.
    }

Fields

| Field | Description |
|---|---|
| `sourceDialect` | `object (Dialect)`. The dialect of the input files. |
| `targetDialect` | `object (Dialect)`. The target dialect for the engine to translate the input to. |
| `sourceEnv` | `object (SourceEnv)`. The default source environment values for the translation. |
| `sourceTargetLocationMapping[]` | `object (SourceTargetLocationMapping)`. The mapping from source location paths to target location paths. |
| `requestSource` | `string`. The indicator to show the translation request initiator. |
| `targetTypes[]` | `string`. The types of output to generate, e.g. sql, metadata, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types. |

Union field `source_location`. The chosen path where the source input files will be found. `source_location` can be only one of the following:

| Field | Description |
|---|---|
| `gcsSourcePath` | `string`. The Cloud Storage path for a directory of files to translate in a task. |

Union field `target_location`. The chosen path where the output files will be written. `target_location` can be only one of the following:

| Field | Description |
|---|---|
| `gcsTargetPath` | `string`. The Cloud Storage path to write back the corresponding input files to. |

Union field `output_name_mapping`. The mapping of full SQL object names from their current state to the desired output. `output_name_mapping` can be only one of the following:

| Field | Description |
|---|---|
| `nameMappingList` | `object (ObjectNameMappingList)`. The mapping of objects to their desired output names in list form. |
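Putting the pieces together, a CW batch translation configuration could look like the following sketch. The dialect choice, bucket paths, database and schema names, and the name mapping entry are all illustrative assumptions.

```json
{
  "sourceDialect": { "teradataDialect": { "mode": "SQL" } },
  "targetDialect": { "bigqueryDialect": {} },
  "sourceEnv": {
    "defaultDatabase": "warehouse_db",
    "schemaSearchPath": ["sales", "reporting"]
  },
  "gcsSourcePath": "gs://my-bucket/translation/input/",
  "gcsTargetPath": "gs://my-bucket/translation/output/",
  "nameMappingList": {
    "nameMap": [
      {
        "source": {
          "type": "RELATION",
          "database": "warehouse_db",
          "schema": "sales",
          "relation": "orders"
        },
        "target": {
          "database": "my-project",
          "schema": "sales_dataset",
          "relation": "orders"
        }
      }
    ]
  }
}
```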
ObjectNameMappingList
Represents a map of name mappings using a list of key:value proto messages of existing name to desired output name.
JSON representation

    {
      "nameMap": [
        {
          object (ObjectNameMapping)
        }
      ]
    }

Fields

| Field | Description |
|---|---|
| `nameMap[]` | The elements of the object name map. |
ObjectNameMapping
Represents a key-value pair of NameMappingKey to NameMappingValue to represent the mapping of SQL names from the input value to desired output.
JSON representation

    {
      "source": {
        object (NameMappingKey)
      },
      "target": {
        object (NameMappingValue)
      }
    }

Fields

| Field | Description |
|---|---|
| `source` | The name of the object in the source that is being mapped. |
| `target` | The desired target name of the object that is being mapped. |
NameMappingKey
The potential components of a full name mapping that will be mapped during translation in the source data warehouse.
JSON representation

    {
      "type": enum (Type),
      "database": string,
      "schema": string,
      "relation": string,
      "attribute": string
    }

Fields

| Field | Description |
|---|---|
| `type` | The type of object that is being mapped. |
| `database` | The database name (BigQuery project ID equivalent in the source data warehouse). |
| `schema` | The schema name (BigQuery dataset equivalent in the source data warehouse). |
| `relation` | The relation name (BigQuery table or view equivalent in the source data warehouse). |
| `attribute` | The attribute name (BigQuery column equivalent in the source data warehouse). |
Type
The type of the object that is being mapped.
Enums

| Enum | Description |
|---|---|
| `TYPE_UNSPECIFIED` | Unspecified name mapping type. |
| `DATABASE` | The object being mapped is a database. |
| `SCHEMA` | The object being mapped is a schema. |
| `RELATION` | The object being mapped is a relation. |
| `ATTRIBUTE` | The object being mapped is an attribute. |
| `RELATION_ALIAS` | The object being mapped is a relation alias. |
| `ATTRIBUTE_ALIAS` | The object being mapped is an attribute alias. |
| `FUNCTION` | The object being mapped is a function. |
NameMappingValue
The potential components of a full name mapping that will be mapped during translation in the target data warehouse.
JSON representation

    {
      "database": string,
      "schema": string,
      "relation": string,
      "attribute": string
    }

Fields

| Field | Description |
|---|---|
| `database` | The database name (BigQuery project ID equivalent in the target data warehouse). |
| `schema` | The schema name (BigQuery dataset equivalent in the target data warehouse). |
| `relation` | The relation name (BigQuery table or view equivalent in the target data warehouse). |
| `attribute` | The attribute name (BigQuery column equivalent in the target data warehouse). |
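For example, an ObjectNameMapping entry that renames a single column during translation might look like this. All object names here are invented placeholders.

```json
{
  "source": {
    "type": "ATTRIBUTE",
    "database": "warehouse_db",
    "schema": "sales",
    "relation": "orders",
    "attribute": "ord_dt"
  },
  "target": {
    "database": "my-project",
    "schema": "sales_dataset",
    "relation": "orders",
    "attribute": "order_date"
  }
}
```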
Dialect
The possible dialect options for translation.
JSON representation

    {
      // Union field dialect_value can be only one of the following:
      "bigqueryDialect": { object (BigQueryDialect) },
      "hiveqlDialect": { object (HiveQLDialect) },
      "redshiftDialect": { object (RedshiftDialect) },
      "teradataDialect": { object (TeradataDialect) },
      "oracleDialect": { object (OracleDialect) },
      "sparksqlDialect": { object (SparkSQLDialect) },
      "snowflakeDialect": { object (SnowflakeDialect) },
      "netezzaDialect": { object (NetezzaDialect) },
      "azureSynapseDialect": { object (AzureSynapseDialect) },
      "verticaDialect": { object (VerticaDialect) },
      "sqlServerDialect": { object (SQLServerDialect) },
      "postgresqlDialect": { object (PostgresqlDialect) },
      "prestoDialect": { object (PrestoDialect) },
      "mysqlDialect": { object (MySQLDialect) },
      "db2Dialect": { object (DB2Dialect) },
      "sqliteDialect": { object (SQLiteDialect) },
      "greenplumDialect": { object (GreenplumDialect) }
      // End of list of possible types for union field dialect_value.
    }

Fields

Union field `dialect_value`. The possible dialect options that this message represents. `dialect_value` can be only one of the following:

| Field | Description |
|---|---|
| `bigqueryDialect` | `object (BigQueryDialect)`. The BigQuery dialect. |
| `hiveqlDialect` | `object (HiveQLDialect)`. The HiveQL dialect. |
| `redshiftDialect` | `object (RedshiftDialect)`. The Redshift dialect. |
| `teradataDialect` | `object (TeradataDialect)`. The Teradata dialect. |
| `oracleDialect` | `object (OracleDialect)`. The Oracle dialect. |
| `sparksqlDialect` | `object (SparkSQLDialect)`. The SparkSQL dialect. |
| `snowflakeDialect` | `object (SnowflakeDialect)`. The Snowflake dialect. |
| `netezzaDialect` | `object (NetezzaDialect)`. The Netezza dialect. |
| `azureSynapseDialect` | `object (AzureSynapseDialect)`. The Azure Synapse dialect. |
| `verticaDialect` | `object (VerticaDialect)`. The Vertica dialect. |
| `sqlServerDialect` | `object (SQLServerDialect)`. The SQL Server dialect. |
| `postgresqlDialect` | `object (PostgresqlDialect)`. The Postgresql dialect. |
| `prestoDialect` | `object (PrestoDialect)`. The Presto dialect. |
| `mysqlDialect` | `object (MySQLDialect)`. The MySQL dialect. |
| `db2Dialect` | `object (DB2Dialect)`. The DB2 dialect. |
| `sqliteDialect` | `object (SQLiteDialect)`. The SQLite dialect. |
| `greenplumDialect` | `object (GreenplumDialect)`. The Greenplum dialect. |
BigQueryDialect
This type has no fields.
The dialect definition for BigQuery.
HiveQLDialect
This type has no fields.
The dialect definition for HiveQL.
RedshiftDialect
This type has no fields.
The dialect definition for Redshift.
TeradataDialect
The dialect definition for Teradata.
JSON representation

    {
      "mode": enum (Mode)
    }

Fields

| Field | Description |
|---|---|
| `mode` | Which Teradata sub-dialect mode the user specifies. |
Mode
The sub-dialect options for Teradata.
Enums

| Enum | Description |
|---|---|
| `MODE_UNSPECIFIED` | Unspecified mode. |
| `SQL` | Teradata SQL mode. |
| `BTEQ` | BTEQ mode (which includes SQL). |
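For instance, a source dialect for translating BTEQ scripts (which also covers plain Teradata SQL) would be expressed as:

```json
{ "teradataDialect": { "mode": "BTEQ" } }
```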
OracleDialect
This type has no fields.
The dialect definition for Oracle.
SparkSQLDialect
This type has no fields.
The dialect definition for SparkSQL.
SnowflakeDialect
This type has no fields.
The dialect definition for Snowflake.
NetezzaDialect
This type has no fields.
The dialect definition for Netezza.
AzureSynapseDialect
This type has no fields.
The dialect definition for Azure Synapse.
VerticaDialect
This type has no fields.
The dialect definition for Vertica.
SQLServerDialect
This type has no fields.
The dialect definition for SQL Server.
PostgresqlDialect
This type has no fields.
The dialect definition for Postgresql.
PrestoDialect
This type has no fields.
The dialect definition for Presto.
MySQLDialect
This type has no fields.
The dialect definition for MySQL.
DB2Dialect
This type has no fields.
The dialect definition for DB2.
SQLiteDialect
This type has no fields.
The dialect definition for SQLite.
GreenplumDialect
This type has no fields.
The dialect definition for Greenplum.
SourceEnv
Represents the default source environment values for the translation.
JSON representation

    {
      "defaultDatabase": string,
      "schemaSearchPath": [ string ],
      "metadataStoreDataset": string
    }

Fields

| Field | Description |
|---|---|
| `defaultDatabase` | The default database name used to fully qualify SQL objects when their database name is missing. |
| `schemaSearchPath[]` | The schema search path. When SQL objects are missing the schema name, the translation engine searches through this list to find the value. |
| `metadataStoreDataset` | Optional. Expects a valid BigQuery dataset ID that exists, e.g., project-123.metadata_store_123. If specified, translation will search and read the required schema information from a metadata store in this dataset. If the metadata store doesn't exist, translation will parse the metadata file and upload the schema info to a temp table in the dataset to speed up future translation jobs. |
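As an example of how unqualified names get resolved, a sourceEnv like the following (placeholder names) makes the engine qualify objects missing a database name against `warehouse_db`, search the `sales` schema first and then `reporting` for objects missing a schema name, and read schema information from the given metadata store dataset.

```json
{
  "defaultDatabase": "warehouse_db",
  "schemaSearchPath": ["sales", "reporting"],
  "metadataStoreDataset": "my-project.metadata_store_123"
}
```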
SourceTargetLocationMapping
Represents one mapping from a source location path to an optional target location path.
JSON representation

    {
      "sourceLocation": {
        object (SourceLocation)
      },
      "targetLocation": {
        object (TargetLocation)
      }
    }

Fields

| Field | Description |
|---|---|
| `sourceLocation` | The path to the location of the source data. |
| `targetLocation` | The path to the location of the target data. |
SourceLocation
Represents one path to the location that holds source data.
JSON representation

    {
      // Union field location can be only one of the following:
      "gcsPath": string
      // End of list of possible types for union field location.
    }

Fields

Union field `location`. The location of the source data. `location` can be only one of the following:

| Field | Description |
|---|---|
| `gcsPath` | `string`. The Cloud Storage path for a directory of files. |
TargetLocation
Represents one path to the location that holds target data.
JSON representation

    {
      // Union field location can be only one of the following:
      "gcsPath": string
      // End of list of possible types for union field location.
    }

Fields

Union field `location`. The location of the target data. `location` can be only one of the following:

| Field | Description |
|---|---|
| `gcsPath` | `string`. The Cloud Storage path for a directory of files. |
TranslationDetails
The translation details to capture the necessary settings for a translation job.
JSON representation

    {
      "sourceTargetMapping": [
        {
          object (SourceTargetMapping)
        }
      ],
      "targetBaseUri": string,
      "sourceEnvironment": {
        object (SourceEnvironment)
      },
      "targetReturnLiterals": [ string ],
      "targetTypes": [ string ]
    }

Fields

| Field | Description |
|---|---|
| `sourceTargetMapping[]` | The mapping from source to target SQL. |
| `targetBaseUri` | The base URI for all writes to persistent storage. |
| `sourceEnvironment` | The default source environment values for the translation. |
| `targetReturnLiterals[]` | The list of literal targets that will be directly returned to the response. Each entry consists of the constructed path, EXCLUDING the base path. Not providing a `targetBaseUri` will prevent writing to persistent storage. |
| `targetTypes[]` | The types of output to generate, e.g. sql, metadata, lineage_from_sql_scripts, etc. If not specified, a default set of targets will be generated. Some additional target types may be slower to generate. See the documentation for the set of available target types. |
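A minimal TranslationDetails sketch that reads input from a base URI and writes results back to persistent storage. The URIs, environment values, and target types are placeholder assumptions.

```json
{
  "sourceTargetMapping": [
    {
      "sourceSpec": { "baseUri": "gs://my-bucket/unified/input/" },
      "targetSpec": { "relativePath": "output" }
    }
  ],
  "targetBaseUri": "gs://my-bucket/unified/output/",
  "sourceEnvironment": {
    "defaultDatabase": "warehouse_db",
    "schemaSearchPath": ["sales"]
  },
  "targetTypes": ["sql", "metadata"]
}
```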
SourceTargetMapping
Represents one mapping from a source SQL to a target SQL.
JSON representation

    {
      "sourceSpec": {
        object (SourceSpec)
      },
      "targetSpec": {
        object (TargetSpec)
      }
    }

Fields

| Field | Description |
|---|---|
| `sourceSpec` | The source SQL or the path to it. |
| `targetSpec` | The target SQL or the path for it. |
SourceSpec
Represents one path to the location that holds source data.
JSON representation

    {
      "encoding": string,

      // Union field source can be only one of the following:
      "baseUri": string,
      "literal": {
        object (Literal)
      }
      // End of list of possible types for union field source.
    }

Fields

| Field | Description |
|---|---|
| `encoding` | `string`. Optional. The optional field to specify the encoding of the SQL bytes. |

Union field `source`. The specific source SQL. `source` can be only one of the following:

| Field | Description |
|---|---|
| `baseUri` | `string`. The base URI for all files to be read in as sources for translation. |
| `literal` | `object (Literal)`. Source literal. |
Literal
Literal data.
JSON representation

    {
      "relativePath": string,

      // Union field literal_data can be only one of the following:
      "literalString": string,
      "literalBytes": string
      // End of list of possible types for union field literal_data.
    }

Fields

| Field | Description |
|---|---|
| `relativePath` | `string`. Required. The identifier of the literal entry. |

Union field `literal_data`. The literal SQL contents. `literal_data` can be only one of the following:

| Field | Description |
|---|---|
| `literalString` | `string`. Literal string data. |
| `literalBytes` | `string (bytes format)`. Literal byte data. |
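A SourceSpec can also carry SQL inline instead of pointing at Cloud Storage. The following SourceTargetMapping sketch uses a literal entry; the identifier, query text, and target path are placeholders.

```json
{
  "sourceSpec": {
    "literal": {
      "relativePath": "queries/report.sql",
      "literalString": "SELECT order_id FROM sales.orders;"
    }
  },
  "targetSpec": { "relativePath": "queries" }
}
```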
TargetSpec
Represents one path to the location that holds target data.
JSON representation

    { "relativePath": string }

Fields

| Field | Description |
|---|---|
| `relativePath` | The relative path for the target data. |
SourceEnvironment
Represents the default source environment values for the translation.
JSON representation

    {
      "defaultDatabase": string,
      "schemaSearchPath": [ string ],
      "metadataStoreDataset": string,
      "metadataCaching": {
        object (MetadataCaching)
      }
    }

Fields

| Field | Description |
|---|---|
| `defaultDatabase` | The default database name used to fully qualify SQL objects when their database name is missing. |
| `schemaSearchPath[]` | The schema search path. When SQL objects are missing the schema name, the translation engine searches through this list to find the value. |
| `metadataStoreDataset` | Optional. Expects a valid BigQuery dataset ID that exists, e.g., project-123.metadata_store_123. If specified, translation will search and read the required schema information from a metadata store in this dataset. If the metadata store doesn't exist, translation will parse the metadata file and upload the schema info to a temp table in the dataset to speed up future translation jobs. |
| `metadataCaching` | Optional. Metadata caching settings. If specified, translation will cache the metadata; otherwise, metadata will be parsed from the metadata file. The cache is stored on the service side, so enabling this feature will store data from the provided metadata file on the service side for up to 7 days. |
MetadataCaching
Metadata caching settings.
JSON representation

    { "maxCacheAge": string }

Fields

| Field | Description |
|---|---|
| `maxCacheAge` | Optional. The maximum age of the metadata cache. If the cache is older than this value, it will be refreshed. A cache is not kept for longer than 7 days. Providing no value, or a value larger than 7 days, results in using the cache if available (i.e. the same as setting the value to 7 days). Setting the duration to 0 or a negative value forces a cache refresh. |
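For example, to keep cached metadata for at most one day, a sourceEnvironment could include the snippet below. This assumes `maxCacheAge` uses the standard protobuf Duration string encoding (seconds with an `s` suffix); the value shown is illustrative.

```json
{
  "metadataCaching": { "maxCacheAge": "86400s" }
}
```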
State
Possible states of a migration task.
Enums

| Enum | Description |
|---|---|
| `STATE_UNSPECIFIED` | The state is unspecified. |
| `PENDING` | The task is waiting for orchestration. |
| `ORCHESTRATING` | The task is assigned to an orchestrator. |
| `RUNNING` | The task is running, i.e. its subtasks are ready for execution. |
| `PAUSED` | The task is paused. Assigned subtasks can continue, but no new subtasks will be scheduled. |
| `SUCCEEDED` | The task finished successfully. |
| `FAILED` | The task finished unsuccessfully. |
MigrationTaskOrchestrationResult
Additional information from the orchestrator when it is done with the task orchestration.
JSON representation

    {
      // Union field details can be only one of the following:
      "assessmentDetails": {
        object (AssessmentOrchestrationResultDetails)
      },
      "translationTaskResult": {
        object (TranslationTaskResult)
      }
      // End of list of possible types for union field details.
    }

Fields

Union field `details`. Details specific to the task type. `details` can be only one of the following:

| Field | Description |
|---|---|
| `assessmentDetails` | `object (AssessmentOrchestrationResultDetails)`. Details specific to assessment task types. |
| `translationTaskResult` | `object (TranslationTaskResult)`. Details specific to translation task types. |
AssessmentOrchestrationResultDetails
Details for an assessment task orchestration result.
JSON representation

    {
      "outputTablesSchemaVersion": string,
      "reportUri": string,
      "additionalReportUris": {
        string: string,
        ...
      }
    }

Fields

| Field | Description |
|---|---|
| `outputTablesSchemaVersion` | Optional. The version used for the output table schemas. |
| `reportUri` | Optional. The URI of the Data Studio report. |
| `additionalReportUris` | Optional. Mapping with additional report URIs. This gives a mapping of report names to their URIs. The possible values for the keys are documented in the user guide. |
TranslationTaskResult
Translation specific result details from the migration task.
JSON representation

    {
      "translatedLiterals": [
        {
          object (Literal)
        }
      ],
      "consoleUri": string
    }

Fields

| Field | Description |
|---|---|
| `translatedLiterals[]` | The list of the translated literals. |
| `consoleUri` | The Cloud Console URI for the migration workflow. |
GcsReportLogMessage
A record in the aggregate CSV report for a migration workflow.
JSON representation

    {
      "severity": string,
      "category": string,
      "filePath": string,
      "filename": string,
      "sourceScriptLine": integer,
      "sourceScriptColumn": integer,
      "message": string,
      "scriptContext": string,
      "action": string,
      "effect": string,
      "objectName": string
    }

Fields

| Field | Description |
|---|---|
| `severity` | Severity of the translation record. |
| `category` | Category of the error/warning. Example: SyntaxError |
| `filePath` | The file path in which the error occurred. |
| `filename` | The file name in which the error occurred. |
| `sourceScriptLine` | Specifies the row from the source text where the error occurred (0 based, -1 for messages without line location). Example: 2 |
| `sourceScriptColumn` | Specifies the column from the source text where the error occurred (0 based, -1 for messages without column location). Example: 6 |
| `message` | Detailed message of the record. |
| `scriptContext` | The script context (obfuscated) in which the error occurred. |
| `action` | The action associated with the error/warning. |
| `effect` | Effect of the error/warning. Example: COMPATIBILITY |
| `objectName` | Name of the affected object in the log message. |
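A single report record, rendered here as JSON for readability, might look like the following; all values are invented for illustration.

```json
{
  "severity": "WARNING",
  "category": "SyntaxError",
  "filePath": "gs://my-bucket/translation/input/report.sql",
  "filename": "report.sql",
  "sourceScriptLine": 2,
  "sourceScriptColumn": 6,
  "message": "Unsupported clause was removed during translation",
  "effect": "COMPATIBILITY",
  "objectName": "sales.orders"
}
```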
MigrationTaskResult
The migration task result.
JSON representation

    {
      // Union field details can be only one of the following:
      "assessmentDetails": {
        object (AssessmentOrchestrationResultDetails)
      },
      "translationTaskResult": {
        object (TranslationTaskResult)
      }
      // End of list of possible types for union field details.
    }

Fields

Union field `details`. Details specific to the task type. `details` can be only one of the following:

| Field | Description |
|---|---|
| `assessmentDetails` | `object (AssessmentOrchestrationResultDetails)`. Details specific to assessment task types. |
| `translationTaskResult` | `object (TranslationTaskResult)`. Details specific to translation task types. |
State
Possible migration workflow states.
Enums

| Enum | Description |
|---|---|
| `STATE_UNSPECIFIED` | Workflow state is unspecified. |
| `DRAFT` | Workflow is in draft status, i.e. tasks are not yet eligible for execution. |
| `RUNNING` | Workflow is running (i.e. tasks are eligible for execution). |
| `PAUSED` | Workflow is paused. Tasks currently in progress may continue, but no further tasks will be scheduled. |
| `COMPLETED` | Workflow is complete. There should not be any task in a non-terminal state, but if there are (e.g. due to forced termination), they will not be scheduled. |
Methods

| Method | Description |
|---|---|
| `create` | Creates a migration workflow. |
| `delete` | Deletes a migration workflow by name. |
| `get` | Gets a previously created migration workflow. |
| `list` | Lists previously created migration workflows. |
| `start` | Starts a previously created migration workflow. |