CustomTrainingJob(
    display_name: str,
    script_path: str,
    container_uri: str,
    requirements: typing.Optional[typing.Sequence[str]] = None,
    model_serving_container_image_uri: typing.Optional[str] = None,
    model_serving_container_predict_route: typing.Optional[str] = None,
    model_serving_container_health_route: typing.Optional[str] = None,
    model_serving_container_command: typing.Optional[typing.Sequence[str]] = None,
    model_serving_container_args: typing.Optional[typing.Sequence[str]] = None,
    model_serving_container_environment_variables: typing.Optional[typing.Dict[str, str]] = None,
    model_serving_container_ports: typing.Optional[typing.Sequence[int]] = None,
    model_description: typing.Optional[str] = None,
    model_instance_schema_uri: typing.Optional[str] = None,
    model_parameters_schema_uri: typing.Optional[str] = None,
    model_prediction_schema_uri: typing.Optional[str] = None,
    explanation_metadata: typing.Optional[google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata] = None,
    explanation_parameters: typing.Optional[google.cloud.aiplatform_v1.types.explanation.ExplanationParameters] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    training_encryption_spec_key_name: typing.Optional[str] = None,
    model_encryption_spec_key_name: typing.Optional[str] = None,
    staging_bucket: typing.Optional[str] = None,
)
Class to launch a Custom Training Job in Vertex AI using a script.
Takes a training implementation as a Python script and executes that script in Cloud Vertex AI Training.
Properties
create_time
Time this resource was created.
display_name
Display name of this resource.
encryption_spec
Customer-managed encryption key options for this Vertex AI resource.
If this is set, then all resources created by this Vertex AI resource will be encrypted with the provided encryption key.
end_time
Time when the TrainingJob resource entered the PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED, or PIPELINE_STATE_CANCELLED state.
error
Detailed error info for this TrainingJob resource. Only populated when the TrainingJob's state is PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.
gca_resource
The underlying resource proto representation.
has_failed
Returns True if training has failed, False otherwise.
labels
User-defined labels containing metadata about this resource.
Read more about labels at https://goo.gl/xmQnxf
name
Name of this resource.
network
The full name of the Google Compute Engine network to which this CustomTrainingJob should be peered. Takes the format projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name.
Private services access must already be configured for the network. If left unspecified, the CustomTrainingJob is not peered with any network.
resource_name
Fully qualified resource name.
start_time
Time when the TrainingJob entered the PIPELINE_STATE_RUNNING state for the first time.
state
Current training state.
update_time
Time this resource was last updated.
web_access_uris
The web access URIs of the backing custom job (Dict[str, str]).
Methods
CustomTrainingJob
CustomTrainingJob(
    display_name: str,
    script_path: str,
    container_uri: str,
    requirements: typing.Optional[typing.Sequence[str]] = None,
    model_serving_container_image_uri: typing.Optional[str] = None,
    model_serving_container_predict_route: typing.Optional[str] = None,
    model_serving_container_health_route: typing.Optional[str] = None,
    model_serving_container_command: typing.Optional[typing.Sequence[str]] = None,
    model_serving_container_args: typing.Optional[typing.Sequence[str]] = None,
    model_serving_container_environment_variables: typing.Optional[typing.Dict[str, str]] = None,
    model_serving_container_ports: typing.Optional[typing.Sequence[int]] = None,
    model_description: typing.Optional[str] = None,
    model_instance_schema_uri: typing.Optional[str] = None,
    model_parameters_schema_uri: typing.Optional[str] = None,
    model_prediction_schema_uri: typing.Optional[str] = None,
    explanation_metadata: typing.Optional[google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata] = None,
    explanation_parameters: typing.Optional[google.cloud.aiplatform_v1.types.explanation.ExplanationParameters] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
    labels: typing.Optional[typing.Dict[str, str]] = None,
    training_encryption_spec_key_name: typing.Optional[str] = None,
    model_encryption_spec_key_name: typing.Optional[str] = None,
    staging_bucket: typing.Optional[str] = None,
)
Constructs a Custom Training Job from a Python script.
job = aiplatform.CustomTrainingJob(
    display_name='test-train',
    script_path='test_script.py',
    requirements=['pandas', 'numpy'],
    container_uri='gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest',
    model_serving_container_image_uri='gcr.io/my-trainer/serving:1',
    model_serving_container_predict_route='predict',
    model_serving_container_health_route='metadata',
    labels={'key': 'value'},
)
Usage with Dataset:
ds = aiplatform.TabularDataset(
    'projects/my-project/locations/us-central1/datasets/12345')
job.run(
    ds,
    replica_count=1,
    model_display_name='my-trained-model',
    model_labels={'key': 'value'},
)
Usage without Dataset:
job.run(replica_count=1, model_display_name='my-trained-model')
To ensure your model gets saved in Vertex AI, write your saved model to os.environ["AIP_MODEL_DIR"] in your provided training script.
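For illustration, a minimal training script might end with something like the following sketch. The scikit-learn model and the /gcs/ FUSE-style path rewrite are assumptions made for the example, not part of the CustomTrainingJob API:
# test_script.py -- illustrative sketch only
import os
import pickle

from sklearn.linear_model import LogisticRegression  # assumed to be listed in `requirements`

# A trivial stand-in for real training; actual scripts would load data,
# e.g. from the AIP_TRAINING_DATA_URI environment variable.
model = LogisticRegression()
model.fit([[0.0], [1.0]], [0, 1])

# AIP_MODEL_DIR is a Cloud Storage URI such as gs://bucket/path/model/.
# Vertex AI training containers typically mount GCS under /gcs/ via Cloud
# Storage FUSE, so rewriting the scheme allows plain file I/O (assumption).
model_dir = os.environ["AIP_MODEL_DIR"].replace("gs://", "/gcs/")
os.makedirs(model_dir, exist_ok=True)
with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
    pickle.dump(model, f)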
display_name
str
Required. The user-defined name of this TrainingPipeline.
script_path
str
Required. Local path to training script.
container_uri
str
Required. URI of the training container image in GCR.
requirements
Sequence[str]
List of Python package dependencies of the script.
model_serving_container_image_uri
str
If the training produces a managed Vertex AI Model, the URI of the Model serving container suitable for serving the model produced by the training script.
model_serving_container_predict_route
str
If the training produces a managed Vertex AI Model, an HTTP path to send prediction requests to the container; the container must support this path. If not specified, a default HTTP path will be used by Vertex AI.
model_serving_container_health_route
str
If the training produces a managed Vertex AI Model, an HTTP path to send health check requests to the container; the container must support this path. If not specified, a standard HTTP path will be used by Vertex AI.
model_serving_container_command
Sequence[str]
The command with which the container is run. Not executed within a shell. The Docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
model_serving_container_args
Sequence[str]
The arguments to the command. The Docker image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
model_serving_container_environment_variables
Dict[str, str]
The environment variables that are to be present in the container. Should be a dictionary where keys are environment variable names and values are environment variable values for those names.
model_serving_container_ports
Sequence[int]
Declaration of ports that are exposed by the container. This field is primarily informational; it gives Vertex AI information about the network connections the container uses. Listing a port here, or omitting it, has no impact on whether the port is actually exposed; any port listening on the default "0.0.0.0" address inside a container will be accessible from the network.
model_description
str
The description of the Model.
model_instance_schema_uri
str
Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. The schema is defined as an OpenAPI 3.0.2 Schema Object (https://tinyurl.com/y538mdwt#schema-object). AutoML Models always have this field populated by AI Platform. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
model_parameters_schema_uri
str
Optional. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. The schema is defined as an OpenAPI 3.0.2 Schema Object (https://tinyurl.com/y538mdwt#schema-object). AutoML Models always have this field populated by AI Platform; if no parameters are supported, it is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
model_prediction_schema_uri
str
Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations, and BatchPredictionJob.output_config. The schema is defined as an OpenAPI 3.0.2 Schema Object (https://tinyurl.com/y538mdwt#schema-object). AutoML Models always have this field populated by AI Platform. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has read access.
explanation_metadata
explain.ExplanationMetadata
Optional. Metadata describing the Model's input and output for explanation. explanation_metadata is optional while explanation_parameters must be specified when used. For more details, see Ref docs http://tinyurl.com/1igh60kt
explanation_parameters
explain.ExplanationParameters
Optional. Parameters to configure explaining for Model's predictions. For more details, see Ref docs http://tinyurl.com/1an4zake
project
str
Project to run training in. Overrides project set in aiplatform.init.
location
str
Location to run training in. Overrides location set in aiplatform.init.
credentials
auth_credentials.Credentials
Custom credentials to use to run call training service. Overrides credentials set in aiplatform.init.
labels
Dict[str, str]
Optional. The labels with user-defined metadata to organize TrainingPipelines. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
training_encryption_spec_key_name
Optional[str]
Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect the training pipeline. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. If set, this TrainingPipeline will be secured by this key. Note: the Model trained by this TrainingPipeline is also secured by this key if model_to_upload is not set separately. Overrides encryption_spec_key_name set in aiplatform.init.
model_encryption_spec_key_name
Optional[str]
Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. If set, the trained Model will be secured by this key. Overrides encryption_spec_key_name set in aiplatform.init.
staging_bucket
str
Bucket used to stage source and training artifacts. Overrides staging_bucket set in aiplatform.init.
cancel
cancel() -> None
Starts asynchronous cancellation on the TrainingJob. The server makes a best effort to cancel the job, but success is not guaranteed. On successful cancellation, the TrainingJob is not deleted; instead it becomes a job with state set to CANCELLED.
Raises: RuntimeError
delete
delete(sync: bool = True) -> None
Deletes this Vertex AI resource. WARNING: This deletion is permanent.
sync
bool
Whether to execute this deletion synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed.
done
done() -> bool
Method indicating whether a job has completed.
get
get(
    resource_name: str,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
) -> google.cloud.aiplatform.training_jobs._TrainingJob
Get Training Job for the given resource_name.
resource_name
str
Required. A fully-qualified resource name or ID.
project
str
Optional project to retrieve training job from. If not set, project set in aiplatform.init will be used.
location
str
Optional location to retrieve training job from. If not set, location set in aiplatform.init will be used.
credentials
auth_credentials.Credentials
Custom credentials to use to retrieve this training job. Overrides credentials set in aiplatform.init.
Raises: ValueError
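For illustration, retrieving an existing job by its fully-qualified resource name (the resource name below is a placeholder):
job = aiplatform.CustomTrainingJob.get(
    resource_name='projects/my-project/locations/us-central1/trainingPipelines/12345')
print(job.state)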
get_model
get_model(sync=True) -> google.cloud.aiplatform.models.Model
Vertex AI Model produced by this training, if one was produced.
sync
bool
Whether to execute this method synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed.
Raises: RuntimeError
Returns: model
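For example (a minimal sketch; a Model is produced only if the job was configured with a serving container image):
model = job.get_model()
print(model.resource_name)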
list
list(
    filter: typing.Optional[str] = None,
    order_by: typing.Optional[str] = None,
    project: typing.Optional[str] = None,
    location: typing.Optional[str] = None,
    credentials: typing.Optional[google.auth.credentials.Credentials] = None,
) -> typing.List[google.cloud.aiplatform.base.VertexAiResourceNoun]
List all instances of this TrainingJob resource.
Example Usage:
aiplatform.CustomTrainingJob.list(
    filter='display_name="experiment_a27"',
    order_by='create_time desc',
)
filter
str
Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported.
order_by
str
Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: display_name, create_time, update_time
project
str
Optional. Project to retrieve list from. If not set, project set in aiplatform.init will be used.
location
str
Optional. Location to retrieve list from. If not set, location set in aiplatform.init will be used.
credentials
auth_credentials.Credentials
Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init.
run
run(
    dataset: typing.Optional[typing.Union[google.cloud.aiplatform.datasets.image_dataset.ImageDataset, google.cloud.aiplatform.datasets.tabular_dataset.TabularDataset, google.cloud.aiplatform.datasets.text_dataset.TextDataset, google.cloud.aiplatform.datasets.video_dataset.VideoDataset]] = None,
    annotation_schema_uri: typing.Optional[str] = None,
    model_display_name: typing.Optional[str] = None,
    model_labels: typing.Optional[typing.Dict[str, str]] = None,
    model_id: typing.Optional[str] = None,
    parent_model: typing.Optional[str] = None,
    is_default_version: typing.Optional[bool] = True,
    model_version_aliases: typing.Optional[typing.Sequence[str]] = None,
    model_version_description: typing.Optional[str] = None,
    base_output_dir: typing.Optional[str] = None,
    service_account: typing.Optional[str] = None,
    network: typing.Optional[str] = None,
    bigquery_destination: typing.Optional[str] = None,
    args: typing.Optional[typing.List[typing.Union[str, float, int]]] = None,
    environment_variables: typing.Optional[typing.Dict[str, str]] = None,
    replica_count: int = 1,
    machine_type: str = "n1-standard-4",
    accelerator_type: str = "ACCELERATOR_TYPE_UNSPECIFIED",
    accelerator_count: int = 0,
    boot_disk_type: str = "pd-ssd",
    boot_disk_size_gb: int = 100,
    reduction_server_replica_count: int = 0,
    reduction_server_machine_type: typing.Optional[str] = None,
    reduction_server_container_uri: typing.Optional[str] = None,
    training_fraction_split: typing.Optional[float] = None,
    validation_fraction_split: typing.Optional[float] = None,
    test_fraction_split: typing.Optional[float] = None,
    training_filter_split: typing.Optional[str] = None,
    validation_filter_split: typing.Optional[str] = None,
    test_filter_split: typing.Optional[str] = None,
    predefined_split_column_name: typing.Optional[str] = None,
    timestamp_split_column_name: typing.Optional[str] = None,
    timeout: typing.Optional[int] = None,
    restart_job_on_worker_restart: bool = False,
    enable_web_access: bool = False,
    enable_dashboard_access: bool = False,
    tensorboard: typing.Optional[str] = None,
    sync=True,
    create_request_timeout: typing.Optional[float] = None,
    disable_retries: bool = False,
) -> typing.Optional[google.cloud.aiplatform.models.Model]
Runs the custom training job.
Distributed Training Support: If replica_count = 1, one chief replica is provisioned. If replica_count > 1, the remainder is provisioned as a worker replica pool; i.e., replica_count = 10 results in 1 chief and 9 workers. All replicas have the same machine_type, accelerator_type, and accelerator_count.
If training on a Vertex AI dataset, you can use one of the following split configurations:
Data fraction splits:
Any of training_fraction_split, validation_fraction_split and test_fraction_split may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.
Data filter splits:
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters are to match nothing, they can be set as '-' (the minus sign). If using filter splits, all of training_filter_split, validation_filter_split and test_filter_split must be provided. Supported only for unstructured Datasets.
Predefined splits:
Assigns input data to training, validation, and test sets based on the value of a provided key. If using predefined splits, predefined_split_column_name must be provided. Supported only for tabular Datasets.
Timestamp splits:
Assigns input data to training, validation, and test sets based on a provided timestamp. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.
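For instance, a fraction-split run over a tabular dataset might look like the following sketch (the dataset resource name and the 80/10/10 fractions are illustrative placeholders):
ds = aiplatform.TabularDataset(
    'projects/my-project/locations/us-central1/datasets/12345')
model = job.run(
    dataset=ds,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    replica_count=1,
    model_display_name='my-trained-model',
)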
annotation_schema_uri
str
Google Cloud Storage URI pointing to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used, in respectively training, validation or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.
model_display_name
str
If the script produces a managed Vertex AI Model, the display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. If not provided upon creation, the job's display_name is used.
model_labels
Dict[str, str]
Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
model_id
str
Optional. The ID to use for the Model produced by this job, which will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.
parent_model
str
Optional. The resource name or model ID of an existing model. The new model uploaded by this job will be a version of parent_model. Only set this field when training a new version of an existing model.
is_default_version
bool
Optional. When set to True, the newly uploaded model version will automatically have alias "default" included. Subsequent uses of the model produced by this job without a version specified will use this "default" version. When set to False, the "default" alias will not be moved. Actions targeting the model version produced by this job will need to specifically reference this version by ID or alias. New model uploads, i.e. version 1, will always be "default" aliased.
model_version_aliases
Sequence[str]
Optional. User provided version aliases so that the model version uploaded by this job can be referenced via alias instead of auto-generated version ID. A default version alias will be created for the first version of the model. The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9].
model_version_description
str
Optional. The description of the model version being uploaded by this job.
base_output_dir
str
GCS output directory of job. If not provided a timestamped directory in the staging directory will be used. Vertex AI sets the following environment variables when it runs your training code:
- AIP_MODEL_DIR: a Cloud Storage URI of a directory intended for saving model artifacts, i.e. <base_output_dir>/model/
- AIP_CHECKPOINT_DIR: a Cloud Storage URI of a directory intended for saving checkpoints, i.e. <base_output_dir>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR: a Cloud Storage URI of a directory intended for saving TensorBoard logs, i.e. <base_output_dir>/logs/
service_account
str
Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account.
network
str
The full name of the Compute Engine network to which the job should be peered. For example, projects/12345/global/networks/myVPC. Private services access must already be configured for the network. If left unspecified, the network set in aiplatform.init will be used. Otherwise, the job is not peered with any network.
bigquery_destination
str
Provide this field if dataset is a BigQuery dataset. The BigQuery project location to which the training data is to be written. In the given project a new dataset is created with name dataset_<dataset-id>_<annotation-type>_<timestamp>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data will be written into that dataset. Three tables will be created in the dataset, training, validation and test, exposed through the following environment variables:
- AIP_DATA_FORMAT = "bigquery"
- AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_*.training"
- AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_*.validation"
- AIP_TEST_DATA_URI = "bigquery_destination.dataset_*.test"
args
List[Union[str, int, float]]
Command line arguments to be passed to the Python script.
environment_variables
Dict[str, str]
Environment variables to be passed to the container. Should be a dictionary where keys are environment variable names and values are environment variable values for those names. At most 10 environment variables can be specified. The name of each environment variable must be unique. environment_variables = { 'MY_KEY': 'MY_VALUE' }
replica_count
int
The number of worker replicas. If replica_count = 1, one chief replica is provisioned. If replica_count > 1, the remainder is provisioned as a worker replica pool.
machine_type
str
The type of machine to use for training.
accelerator_type
str
Hardware accelerator type. One of ACCELERATOR_TYPE_UNSPECIFIED, NVIDIA_TESLA_K80, NVIDIA_TESLA_P100, NVIDIA_TESLA_V100, NVIDIA_TESLA_P4, NVIDIA_TESLA_T4
accelerator_count
int
The number of accelerators to attach to a worker replica.
boot_disk_type
str
Type of the boot disk; default is pd-ssd. Valid values: pd-ssd (Persistent Disk Solid State Drive) or pd-standard (Persistent Disk Hard Disk Drive).
boot_disk_size_gb
int
Size in GB of the boot disk; default is 100GB. Boot disk size must be within the range [100, 64000].
reduction_server_replica_count
int
The number of reduction server replicas, default is 0.
reduction_server_machine_type
str
Optional. The type of machine to use for reduction server.
reduction_server_container_uri
str
Optional. The URI of the reduction server container image. For details see: https://cloud.google.com/vertex-ai/docs/training/distributed-training#reduce_training_time_with_reduction_server
training_fraction_split
float
Optional. The fraction of the input data that is to be used to train the Model. This is ignored if Dataset is not provided.
validation_fraction_split
float
Optional. The fraction of the input data that is to be used to validate the Model. This is ignored if Dataset is not provided.
test_fraction_split
float
Optional. The fraction of the input data that is to be used to evaluate the Model. This is ignored if Dataset is not provided.
training_filter_split
str
Optional. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. This is ignored if Dataset is not provided.
validation_filter_split
str
Optional. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. This is ignored if Dataset is not provided.
test_filter_split
str
Optional. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. This is ignored if Dataset is not provided.
predefined_split_column_name
str
Optional. The key is a name of one of the Dataset's data columns. The value of the key (either the label's value or value in the column) must be one of {training, validation, test}, and it defines to which set the given piece of data is assigned. If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline. Supported only for tabular and time series Datasets.
timestamp_split_column_name
str
Optional. The key is a name of one of the Dataset's data columns. The values of the key (the values in the column) must be in RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z). If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline. Supported only for tabular and time series Datasets.
timeout
int
The maximum job running time in seconds. The default is 7 days.
restart_job_on_worker_restart
bool
Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
enable_web_access
bool
Whether you want Vertex AI to enable interactive shell access to training containers. https://cloud.google.com/vertex-ai/docs/training/monitor-debug-interactive-shell
enable_dashboard_access
bool
Whether you want Vertex AI to enable access to the customized dashboard in training containers.
tensorboard
str
Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}. The training script should write Tensorboard logs to the directory given by the Vertex AI environment variable AIP_TENSORBOARD_LOG_DIR. service_account is required when tensorboard is provided. For more information on configuring your service account please visit: https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training
create_request_timeout
float
Optional. The timeout for the create request in seconds.
disable_retries
bool
Indicates if the job should retry for internal errors after the job starts running. If True, overrides restart_job_on_worker_restart to False.
sync
bool
Whether to execute this method synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed.
Returns: model
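A hedged sketch of a non-blocking run, using only calls documented on this page (the display name is a placeholder):
model = job.run(
    replica_count=1,
    model_display_name='my-trained-model',
    sync=False,
)
job.wait_for_resource_creation()  # returns once the TrainingJob resource exists
job.wait()  # blocks until all pending futures are complete
if not job.has_failed:
    model = job.get_model()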
submit
submit(
    dataset: typing.Optional[typing.Union[google.cloud.aiplatform.datasets.image_dataset.ImageDataset, google.cloud.aiplatform.datasets.tabular_dataset.TabularDataset, google.cloud.aiplatform.datasets.text_dataset.TextDataset, google.cloud.aiplatform.datasets.video_dataset.VideoDataset]] = None,
    annotation_schema_uri: typing.Optional[str] = None,
    model_display_name: typing.Optional[str] = None,
    model_labels: typing.Optional[typing.Dict[str, str]] = None,
    model_id: typing.Optional[str] = None,
    parent_model: typing.Optional[str] = None,
    is_default_version: typing.Optional[bool] = True,
    model_version_aliases: typing.Optional[typing.Sequence[str]] = None,
    model_version_description: typing.Optional[str] = None,
    base_output_dir: typing.Optional[str] = None,
    service_account: typing.Optional[str] = None,
    network: typing.Optional[str] = None,
    bigquery_destination: typing.Optional[str] = None,
    args: typing.Optional[typing.List[typing.Union[str, float, int]]] = None,
    environment_variables: typing.Optional[typing.Dict[str, str]] = None,
    replica_count: int = 1,
    machine_type: str = "n1-standard-4",
    accelerator_type: str = "ACCELERATOR_TYPE_UNSPECIFIED",
    accelerator_count: int = 0,
    boot_disk_type: str = "pd-ssd",
    boot_disk_size_gb: int = 100,
    reduction_server_replica_count: int = 0,
    reduction_server_machine_type: typing.Optional[str] = None,
    reduction_server_container_uri: typing.Optional[str] = None,
    training_fraction_split: typing.Optional[float] = None,
    validation_fraction_split: typing.Optional[float] = None,
    test_fraction_split: typing.Optional[float] = None,
    training_filter_split: typing.Optional[str] = None,
    validation_filter_split: typing.Optional[str] = None,
    test_filter_split: typing.Optional[str] = None,
    predefined_split_column_name: typing.Optional[str] = None,
    timestamp_split_column_name: typing.Optional[str] = None,
    timeout: typing.Optional[int] = None,
    restart_job_on_worker_restart: bool = False,
    enable_web_access: bool = False,
    enable_dashboard_access: bool = False,
    tensorboard: typing.Optional[str] = None,
    sync=True,
    create_request_timeout: typing.Optional[float] = None,
    disable_retries: bool = False,
) -> typing.Optional[google.cloud.aiplatform.models.Model]
Submits the custom training job without blocking until completion.
Distributed Training Support: If replica_count = 1, one chief replica is provisioned. If replica_count > 1, the remainder is provisioned as a worker replica pool; i.e., replica_count = 10 results in 1 chief and 9 workers. All replicas have the same machine_type, accelerator_type, and accelerator_count.
If training on a Vertex AI dataset, you can use one of the following split configurations:
Data fraction splits:
Any of training_fraction_split, validation_fraction_split and test_fraction_split may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.
Data filter splits:
Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters are to match nothing, they can be set as '-' (the minus sign). If using filter splits, all of training_filter_split, validation_filter_split and test_filter_split must be provided. Supported only for unstructured Datasets.
Predefined splits:
Assigns input data to training, validation, and test sets based on the value of a provided key. If using predefined splits, predefined_split_column_name must be provided. Supported only for tabular Datasets.
Timestamp splits:
Assigns input data to training, validation, and test sets based on a provided timestamp. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.
annotation_schema_uri
str
Google Cloud Storage URI pointing to a YAML file describing the annotation schema. The schema is defined as an OpenAPI 3.0.2 Schema Object. The schema files that can be used here are found in gs://google-cloud-aiplatform/schema/dataset/annotation/; note that the chosen schema must be consistent with the metadata of the Dataset specified by dataset_id. Only Annotations that both match this schema and belong to DataItems not ignored by the split method are used, in respectively training, validation or test role, depending on the role of the DataItem they are on. When used in conjunction with annotations_filter, the Annotations used for training are filtered by both annotations_filter and annotation_schema_uri.
model_display_name
str
If the script produces a managed Vertex AI Model, the display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. If not provided upon creation, the job's display_name is used.
model_labels
Dict[str, str]
Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
model_id
str
Optional. The ID to use for the Model produced by this job, which will become the final component of the model resource name. This value may be up to 63 characters, and valid characters are [a-z0-9_-]. The first character cannot be a number or hyphen.
parent_model
str
Optional. The resource name or model ID of an existing model. The new model uploaded by this job will be a version of parent_model. Only set this field when training a new version of an existing model.
is_default_version
bool
Optional. When set to True, the newly uploaded model version will automatically have alias "default" included. Subsequent uses of the model produced by this job without a version specified will use this "default" version. When set to False, the "default" alias will not be moved. Actions targeting the model version produced by this job will need to specifically reference this version by ID or alias. New model uploads, i.e. version 1, will always be "default" aliased.
model_version_aliases
Sequence[str]
Optional. User provided version aliases so that the model version uploaded by this job can be referenced via alias instead of auto-generated version ID. A default version alias will be created for the first version of the model. The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9].
model_version_description
str
Optional. The description of the model version being uploaded by this job.
base_output_dir
str
GCS output directory of job. If not provided a timestamped directory in the staging directory will be used. Vertex AI sets the following environment variables when it runs your training code:
- AIP_MODEL_DIR: a Cloud Storage URI of a directory intended for saving model artifacts, i.e. <base_output_dir>/model/
- AIP_CHECKPOINT_DIR: a Cloud Storage URI of a directory intended for saving checkpoints, i.e. <base_output_dir>/checkpoints/
- AIP_TENSORBOARD_LOG_DIR: a Cloud Storage URI of a directory intended for saving TensorBoard logs, i.e. <base_output_dir>/logs/
service_account
str
Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account.
network
str
The full name of the Compute Engine network to which the job should be peered. For example, projects/12345/global/networks/myVPC. Private services access must already be configured for the network. If left unspecified, the network set in aiplatform.init will be used. Otherwise, the job is not peered with any network.
bigquery_destination
str
Provide this field if dataset is a BigQuery dataset. The BigQuery project location to which the training data is to be written. In the given project a new dataset is created with name dataset_<dataset-id>_<annotation-type>_<timestamp>, where the timestamp is in YYYY_MM_DDThh_mm_ss_sssZ format. All training input data will be written into that dataset. Three tables will be created in the dataset, training, validation and test, exposed through the following environment variables:
- AIP_DATA_FORMAT = "bigquery"
- AIP_TRAINING_DATA_URI = "bigquery_destination.dataset_*.training"
- AIP_VALIDATION_DATA_URI = "bigquery_destination.dataset_*.validation"
- AIP_TEST_DATA_URI = "bigquery_destination.dataset_*.test"
args
List[Union[str, int, float]]
Command line arguments to be passed to the Python script.
environment_variables
Dict[str, str]
Environment variables to be passed to the container. Should be a dictionary where keys are environment variable names and values are environment variable values for those names. At most 10 environment variables can be specified. The name of each environment variable must be unique. environment_variables = { 'MY_KEY': 'MY_VALUE' }
replica_count
int
The number of worker replicas. If replica_count = 1, one chief replica is provisioned. If replica_count > 1, the remainder is provisioned as a worker replica pool.
machine_type
str
The type of machine to use for training.
accelerator_type
str
Hardware accelerator type. One of ACCELERATOR_TYPE_UNSPECIFIED, NVIDIA_TESLA_K80, NVIDIA_TESLA_P100, NVIDIA_TESLA_V100, NVIDIA_TESLA_P4, NVIDIA_TESLA_T4
accelerator_count
int
The number of accelerators to attach to a worker replica.
boot_disk_type
str
Type of the boot disk; default is pd-ssd. Valid values: pd-ssd (Persistent Disk Solid State Drive) or pd-standard (Persistent Disk Hard Disk Drive).
boot_disk_size_gb
int
Size in GB of the boot disk; default is 100GB. Boot disk size must be within the range [100, 64000].
reduction_server_replica_count
int
The number of reduction server replicas, default is 0.
reduction_server_machine_type
str
Optional. The type of machine to use for reduction server.
reduction_server_container_uri
str
Optional. The URI of the reduction server container image. For details see: https://cloud.google.com/vertex-ai/docs/training/distributed-training#reduce_training_time_with_reduction_server
training_fraction_split
float
Optional. The fraction of the input data that is to be used to train the Model. This is ignored if Dataset is not provided.
validation_fraction_split
float
Optional. The fraction of the input data that is to be used to validate the Model. This is ignored if Dataset is not provided.
test_fraction_split
float
Optional. The fraction of the input data that is to be used to evaluate the Model. This is ignored if Dataset is not provided.
training_filter_split
str
Optional. A filter on DataItems of the Dataset. DataItems that match this filter are used to train the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. This is ignored if Dataset is not provided.
validation_filter_split
str
Optional. A filter on DataItems of the Dataset. DataItems that match this filter are used to validate the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. This is ignored if Dataset is not provided.
test_filter_split
str
Optional. A filter on DataItems of the Dataset. DataItems that match this filter are used to test the Model. A filter with same syntax as the one used in DatasetService.ListDataItems may be used. If a single DataItem is matched by more than one of the FilterSplit filters, then it is assigned to the first set that applies to it in the training, validation, test order. This is ignored if Dataset is not provided.
predefined_split_column_name
str
Optional. The key is a name of one of the Dataset's data columns. The value of the key (either the label's value or value in the column) must be one of {training, validation, test}, and it defines to which set the given piece of data is assigned. If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline. Supported only for tabular and time series Datasets.
timestamp_split_column_name
str
Optional. The key is a name of one of the Dataset's data columns. The values of the key (the values in the column) must be in RFC 3339 date-time format, where time-offset = "Z" (e.g. 1985-04-12T23:20:50.52Z). If for a piece of data the key is not present or has an invalid value, that piece is ignored by the pipeline. Supported only for tabular and time series Datasets.
timeout
int
The maximum job running time in seconds. The default is 7 days.
restart_job_on_worker_restart
bool
Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
enable_web_access
bool
Whether you want Vertex AI to enable interactive shell access to training containers. https://cloud.google.com/vertex-ai/docs/training/monitor-debug-interactive-shell
enable_dashboard_access
bool
Whether you want Vertex AI to enable access to the customized dashboard in training containers.
tensorboard
str
Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}. The training script should write Tensorboard logs to the directory given by the Vertex AI environment variable AIP_TENSORBOARD_LOG_DIR. service_account is required when tensorboard is provided. For more information on configuring your service account please visit: https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training
create_request_timeout
float
Optional. The timeout for the create request in seconds.
disable_retries
bool
Indicates if the job should retry for internal errors after the job starts running. If True, overrides restart_job_on_worker_restart to False.
sync
bool
Whether to execute this method synchronously. If False, this method will be executed in concurrent Future and any downstream object will be immediately returned and synced when the Future has completed.
Returns: model
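As with run, a hedged sketch of scheduling the job without blocking (the values are placeholders; submit accepts the same arguments as run):
job.submit(
    replica_count=1,
    model_display_name='my-trained-model',
)
job.wait_for_resource_creation()
print(job.resource_name, job.state)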
to_dict
to_dict() -> typing.Dict[str, typing.Any]
Returns the resource proto as a dictionary.
wait
wait()
Helper method that blocks until all futures are complete.
wait_for_resource_creation
wait_for_resource_creation() -> None
Waits until resource has been created.