Reference documentation and code samples for the Google Cloud AI Platform V1 Client class Model.

A trained machine learning Model.

Generated from protobuf message google.cloud.aiplatform.v1.Model

Namespace: Google \ Cloud \ AIPlatform \ V1

Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ name
string
The resource name of the Model.
↳ version_id
string
Output only. Immutable. The version ID of the model. A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
↳ version_aliases
array
User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}). The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
↳ version_create_time
Google\Protobuf\Timestamp
Output only. Timestamp when this version was created.
↳ version_update_time
Google\Protobuf\Timestamp
Output only. Timestamp when this version was most recently updated.
↳ display_name
string
Required. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters.
↳ description
string
The description of the Model.
↳ version_description
string
The description of this version.
↳ default_checkpoint_id
string
The default checkpoint id of a model version.
↳ predict_schemata
PredictSchemata
The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.
↳ metadata_schema_uri
string
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object. AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user has read-only access.
↳ metadata
Google\Protobuf\Value
Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema. Unset if the Model does not have any additional information.
↳ supported_export_formats
array< Model\ExportFormat >
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
↳ training_pipeline
string
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
↳ pipeline_job
string
Optional. This field is populated if the model is produced by a pipeline job.
↳ container_spec
ModelContainerSpec
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel , and all binaries it contains are copied and stored internally by Vertex AI. Not required for AutoML Models.
↳ artifact_uri
string
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.
↳ supported_deployment_resources_types
array
Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain). Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.
↳ supported_input_storage_formats
array
Output only. The formats this Model supports in BatchPredictionJob.input_config. If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:
- jsonl: The JSON Lines format, where each instance is a single line. Uses GcsSource.
- csv: The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
- tf-record: The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
- tf-record-gzip: Similar to tf-record, but the file is gzipped. Uses GcsSource.
- bigquery: Each instance is a single row in BigQuery. Uses BigQuerySource.
- file-list: Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
↳ supported_output_storage_formats
array
Output only. The formats this Model supports in BatchPredictionJob.output_config. If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
- jsonl: The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
- csv: The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
- bigquery: Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
↳ create_time
Google\Protobuf\Timestamp
Output only. Timestamp when this Model was uploaded into Vertex AI.
↳ update_time
Google\Protobuf\Timestamp
Output only. Timestamp when this Model was most recently updated.
↳ deployed_models
array< DeployedModelRef >
Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.
↳ explanation_spec
ExplanationSpec
The default explanation specification for this Model. The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model , or explanation_spec of BatchPredictionJob . If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob .
↳ etag
string
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
↳ labels
array| Google\Protobuf\Internal\MapField
The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
↳ data_stats
Model\DataStats
Stats of data used for training or evaluating the Model. Only populated when the Model is trained by a TrainingPipeline with data_input_config .
↳ encryption_spec
EncryptionSpec
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
↳ model_source_info
ModelSourceInfo
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden.
↳ original_model_info
Model\OriginalModelInfo
Output only. If this Model is a copy of another Model, this contains info about the original.
↳ metadata_artifact
string
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.
↳ base_model_source
Model\BaseModelSource
Optional. User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models.
↳ satisfies_pzs
bool
Output only. Reserved for future use.
↳ satisfies_pzi
bool
Output only. Reserved for future use.
↳ checkpoints
array
Optional. Output only. The checkpoints of the model.
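The parameters above are passed to the constructor as a single associative `data` array. The following is a hedged sketch only, assuming the fully-qualified class `Google\Cloud\AIPlatform\V1\Model` from the `google/cloud-aiplatform` package; the field values are illustrative, and output-only fields (version_id, create_time, ...) are populated by the service, not by you:

```php
<?php
// Sketch: requires the google/cloud-aiplatform package.
use Google\Cloud\AIPlatform\V1\Model;

$model = new Model([
    'display_name'    => 'my-classifier',          // required, up to 128 UTF-8 chars
    'description'     => 'Demo model',
    'version_aliases' => ['default', 'champion'],  // format: [a-z][a-zA-Z0-9-]{0,126}[a-z0-9]
    'artifact_uri'    => 'gs://my-bucket/model/',  // not required for AutoML Models
    'labels'          => ['team' => 'ml', 'env' => 'dev'],
]);

echo $model->getDisplayName();
```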
getName
The resource name of the Model.
string
setName
The resource name of the Model.
var
string
$this
getVersionId
Output only. Immutable. The version ID of the model.
A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
string
setVersionId
Output only. Immutable. The version ID of the model.
A new version is committed when a new model version is uploaded or trained under an existing model id. It is an auto-incrementing decimal number in string representation.
var
string
$this
getVersionAliases
User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}).

The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
setVersionAliases
User-provided version aliases so that a model version can be referenced via alias (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_alias}) instead of the auto-generated version ID (i.e. projects/{project}/locations/{location}/models/{model_id}@{version_id}).

The format is [a-z][a-zA-Z0-9-]{0,126}[a-z0-9] to distinguish from version_id. A default version alias will be created for the first version of the model, and there must be exactly one default version alias for a model.
var
string[]
$this
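For illustration, a small sketch of reading and replacing version aliases with the accessors documented above (method names are as documented; the class path `Google\Cloud\AIPlatform\V1\Model` is assumed from the `google/cloud-aiplatform` package):

```php
<?php
use Google\Cloud\AIPlatform\V1\Model;

$model = new Model(['version_aliases' => ['default']]);

// getVersionAliases() returns a repeated string field; setVersionAliases()
// replaces the whole list, so copy it out, modify, and write back.
$aliases = iterator_to_array($model->getVersionAliases());
$aliases[] = 'candidate-2024-06';
$model->setVersionAliases($aliases);
```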
getVersionCreateTime
Output only. Timestamp when this version was created.
hasVersionCreateTime
clearVersionCreateTime
setVersionCreateTime
Output only. Timestamp when this version was created.
$this
getVersionUpdateTime
Output only. Timestamp when this version was most recently updated.
hasVersionUpdateTime
clearVersionUpdateTime
setVersionUpdateTime
Output only. Timestamp when this version was most recently updated.
$this
getDisplayName
Required. The display name of the Model.
The name can be up to 128 characters long and can consist of any UTF-8 characters.
string
setDisplayName
Required. The display name of the Model.
The name can be up to 128 characters long and can consist of any UTF-8 characters.
var
string
$this
getDescription
The description of the Model.
string
setDescription
The description of the Model.
var
string
$this
getVersionDescription
The description of this version.
string
setVersionDescription
The description of this version.
var
string
$this
getDefaultCheckpointId
The default checkpoint id of a model version.
string
setDefaultCheckpointId
The default checkpoint id of a model version.
var
string
$this
getPredictSchemata
The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.
hasPredictSchemata
clearPredictSchemata
setPredictSchemata
The schemata that describe formats of the Model's predictions and explanations as given and returned via PredictionService.Predict and PredictionService.Explain.
$this
getMetadataSchemaUri
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object.

AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user has read-only access.
string
setMetadataSchemaUri
Immutable. Points to a YAML file stored on Google Cloud Storage describing additional information about the Model that is specific to it. Unset if the Model does not have any additional information. The schema is defined as an OpenAPI 3.0.2 Schema Object.

AutoML Models always have this field populated by Vertex AI; if no additional metadata is needed, this field is set to an empty string. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user has read-only access.
var
string
$this
getMetadata
Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema.

Unset if the Model does not have any additional information.
hasMetadata
clearMetadata
setMetadata
Immutable. Additional information about the Model; the schema of the metadata can be found in metadata_schema.

Unset if the Model does not have any additional information.
$this
getSupportedExportFormats
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
setSupportedExportFormats
Output only. The formats in which this Model may be exported. If empty, this Model is not available for export.
$this
getTrainingPipeline
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
string
setTrainingPipeline
Output only. The resource name of the TrainingPipeline that uploaded this Model, if any.
var
string
$this
getPipelineJob
Optional. This field is populated if the model is produced by a pipeline job.
string
setPipelineJob
Optional. This field is populated if the model is produced by a pipeline job.
var
string
$this
getContainerSpec
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel , and all binaries it contains are copied and stored internally by Vertex AI.
Not required for AutoML Models.
hasContainerSpec
clearContainerSpec
setContainerSpec
Input only. The specification of the container that is to be used when deploying this Model. The specification is ingested upon ModelService.UploadModel , and all binaries it contains are copied and stored internally by Vertex AI.
Not required for AutoML Models.
$this
getArtifactUri
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.
string
setArtifactUri
Immutable. The path to the directory containing the Model artifact and any of its supporting files. Not required for AutoML Models.
var
string
$this
getSupportedDeploymentResourcesTypes
Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain).

Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.
setSupportedDeploymentResourcesTypes
Output only. When this Model is deployed, its prediction resources are described by the prediction_resources field of the Endpoint.deployed_models object. Because not all Models support all resource configuration types, the configuration types this Model supports are listed here. If no configuration types are listed, the Model cannot be deployed to an Endpoint and does not support online predictions (PredictionService.Predict or PredictionService.Explain).

Such a Model can serve predictions by using a BatchPredictionJob, if it has at least one entry each in supported_input_storage_formats and supported_output_storage_formats.
var
int[]
$this
getSupportedInputStorageFormats
Output only. The formats this Model supports in BatchPredictionJob.input_config.

If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:
- jsonl: The JSON Lines format, where each instance is a single line. Uses GcsSource.
- csv: The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
- tf-record: The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
- tf-record-gzip: Similar to tf-record, but the file is gzipped. Uses GcsSource.
- bigquery: Each instance is a single row in BigQuery. Uses BigQuerySource.
- file-list: Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
setSupportedInputStorageFormats
Output only. The formats this Model supports in BatchPredictionJob.input_config.

If PredictSchemata.instance_schema_uri exists, the instances should be given as per that schema. The possible formats are:
- jsonl: The JSON Lines format, where each instance is a single line. Uses GcsSource.
- csv: The CSV format, where each instance is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsSource.
- tf-record: The TFRecord format, where each instance is a single record in tfrecord syntax. Uses GcsSource.
- tf-record-gzip: Similar to tf-record, but the file is gzipped. Uses GcsSource.
- bigquery: Each instance is a single row in BigQuery. Uses BigQuerySource.
- file-list: Each line of the file is the location of an instance to process; uses the gcs_source field of the InputConfig object.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
var
string[]
$this
getSupportedOutputStorageFormats
Output only. The formats this Model supports in BatchPredictionJob.output_config.

If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
- jsonl: The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
- csv: The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
- bigquery: Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
setSupportedOutputStorageFormats
Output only. The formats this Model supports in BatchPredictionJob.output_config.

If both PredictSchemata.instance_schema_uri and PredictSchemata.prediction_schema_uri exist, the predictions are returned together with their instances. In other words, the prediction has the original instance data first, followed by the actual prediction content (as per the schema). The possible formats are:
- jsonl: The JSON Lines format, where each prediction is a single line. Uses GcsDestination.
- csv: The CSV format, where each prediction is a single comma-separated line. The first line in the file is the header, containing comma-separated field names. Uses GcsDestination.
- bigquery: Each prediction is a single row in a BigQuery table. Uses BigQueryDestination.

If this Model doesn't support any of these formats it means it cannot be used with a BatchPredictionJob. However, if it has supported_deployment_resources_types, it could serve online predictions by using PredictionService.Predict or PredictionService.Explain.
var
string[]
$this
getCreateTime
Output only. Timestamp when this Model was uploaded into Vertex AI.
hasCreateTime
clearCreateTime
setCreateTime
Output only. Timestamp when this Model was uploaded into Vertex AI.
$this
getUpdateTime
Output only. Timestamp when this Model was most recently updated.
hasUpdateTime
clearUpdateTime
setUpdateTime
Output only. Timestamp when this Model was most recently updated.
$this
getDeployedModels
Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.
setDeployedModels
Output only. The pointers to DeployedModels created from this Model. Note that Model could have been deployed to Endpoints in different Locations.
$this
getExplanationSpec
The default explanation specification for this Model.
The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model , or explanation_spec of BatchPredictionJob . If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob .
hasExplanationSpec
clearExplanationSpec
setExplanationSpec
The default explanation specification for this Model.
The Model can be used for requesting explanation after being deployed if it is populated. The Model can be used for batch explanation if it is populated. All fields of the explanation_spec can be overridden by explanation_spec of DeployModelRequest.deployed_model , or explanation_spec of BatchPredictionJob . If the default explanation specification is not set for this Model, this Model can still be used for requesting explanation by setting explanation_spec of DeployModelRequest.deployed_model and for batch explanation by setting explanation_spec of BatchPredictionJob .
$this
getEtag
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
string
setEtag
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
var
string
$this
getLabels
The labels with user-defined metadata to organize your Models.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
setLabels
The labels with user-defined metadata to organize your Models.
Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
$this
getDataStats
Stats of data used for training or evaluating the Model.
Only populated when the Model is trained by a TrainingPipeline with data_input_config .
hasDataStats
clearDataStats
setDataStats
Stats of data used for training or evaluating the Model.
Only populated when the Model is trained by a TrainingPipeline with data_input_config .
$this
getEncryptionSpec
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
hasEncryptionSpec
clearEncryptionSpec
setEncryptionSpec
Customer-managed encryption key spec for a Model. If set, this Model and all sub-resources of this Model will be secured by this key.
$this
getModelSourceInfo
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden.
hasModelSourceInfo
clearModelSourceInfo
setModelSourceInfo
Output only. Source of a model. It can be an AutoML training pipeline, a custom training pipeline, BigQuery ML, or a model saved and tuned from Genie or Model Garden.
$this
getOriginalModelInfo
Output only. If this Model is a copy of another Model, this contains info about the original.
hasOriginalModelInfo
clearOriginalModelInfo
setOriginalModelInfo
Output only. If this Model is a copy of another Model, this contains info about the original.
$this
getMetadataArtifact
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.
string
setMetadataArtifact
Output only. The resource name of the Artifact that was created in MetadataStore when creating the Model. The Artifact resource name pattern is projects/{project}/locations/{location}/metadataStores/{metadata_store}/artifacts/{artifact}.
var
string
$this
getBaseModelSource
Optional. User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models.
hasBaseModelSource
clearBaseModelSource
setBaseModelSource
Optional. User input field to specify the base model source. Currently it only supports specifying the Model Garden models and Genie models.
$this
getSatisfiesPzs
Output only. Reserved for future use.
bool
setSatisfiesPzs
Output only. Reserved for future use.
var
bool
$this
getSatisfiesPzi
Output only. Reserved for future use.
bool
setSatisfiesPzi
Output only. Reserved for future use.
var
bool
$this
getCheckpoints
Optional. Output only. The checkpoints of the model.
setCheckpoints
Optional. Output only. The checkpoints of the model.
$this