API documentation for the vertexai.preview package.
Classes
VertexModel
Mixin class that can be used to add Vertex AI remote execution to a custom model.
Modules
generative_models
Classes for working with the Gemini models.
language_models
Classes for working with language models.
vision_models
Classes for working with vision models.
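A minimal sketch of how these preview modules are typically imported; it assumes the google-cloud-aiplatform SDK is installed and only shows the module-level imports described above:
from vertexai.preview import generative_models  # classes for the Gemini models
from vertexai.preview import language_models    # classes for language models
from vertexai.preview import vision_models      # classes for vision models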
Functions
end_run
end_run(state: google.cloud.aiplatform_v1.types.execution.Execution.State = State.COMPLETE)
Ends the current experiment run.
aiplatform.start_run('my-run')
...
aiplatform.end_run()
from_pretrained
from_pretrained(*, model_name: typing.Optional[str] = None, custom_job_name: typing.Optional[str] = None, foundation_model_name: typing.Optional[str] = None) -> typing.Union[sklearn.base.BaseEstimator, tf.Module, torch.nn.Module]
Pulls a model from Model Registry or from a CustomJob ID for retraining.
The returned model is wrapped with a Vertex wrapper for running remote jobs on Vertex, unless an unwrapped model was registered to Model Registry.
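A minimal usage sketch, assuming a scikit-learn model was previously registered under the hypothetical name 'my-sklearn-model' and that vertexai.init() has already been called for the project:
from vertexai import preview

# Pull the registered model back from Model Registry; the returned object
# is the original sklearn/TensorFlow/PyTorch model, wrapped for Vertex
# remote execution unless it was registered unwrapped.
model = preview.from_pretrained(model_name='my-sklearn-model')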
get_experiment_df
get_experiment_df(experiment: typing.Optional[str] = None) -> pd.DataFrame
Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.
Example:
aiplatform.init(experiment='exp-1')
aiplatform.start_run(run='run-1')
aiplatform.log_params({'learning_rate': 0.1})
aiplatform.log_metrics({'accuracy': 0.9})
aiplatform.start_run(run='run-2')
aiplatform.log_params({'learning_rate': 0.2})
aiplatform.log_metrics({'accuracy': 0.95})
aiplatform.get_experiment_df()
Will result in the following DataFrame:
experiment_name | run_name | param.learning_rate | metric.accuracy
exp-1 | run-1 | 0.1 | 0.9
exp-1 | run-2 | 0.2 | 0.95
experiment
Name of the Experiment to filter results by. If not set, returns results for the currently active experiment.
init
init(*, remote: typing.Optional[bool] = None, autolog: typing.Optional[bool] = None, cluster: typing.Optional[vertexai.preview._workflow.shared.configs.PersistentResourceConfig] = None)
Updates preview global parameters for Vertex remote execution.
remote
Optional. A global flag to indicate whether or not a method will be executed remotely. Default is False. The method-level remote flag has higher priority than this global flag.
autolog
Optional. Whether or not to turn on the autologging feature for remote execution. To learn more about the autologging feature, see https://cloud.google.com/vertex-ai/docs/experiments/autolog-data.
cluster
Optional. If passed, check if the cluster exists. If not, create a default one (single node, "n1-standard-4", no GPU) with the given name. Then use the cluster to run CustomJobs. Default is None. Example usage:
from vertexai.preview.shared.configs import PersistentResourceConfig
cluster = PersistentResourceConfig(
    name="my-cluster-1",
    resource_pools=[
        ResourcePool(replica_count=1,),
        ResourcePool(
            machine_type="n1-standard-8",
            replica_count=2,
            accelerator_type="NVIDIA_TESLA_P100",
            accelerator_count=1,
        ),
    ]
)
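A minimal sketch of enabling remote execution and autologging globally; the flag values are illustrative and assume vertexai.init() has already been called with a project and staging bucket:
from vertexai import preview

# Execute wrapped methods remotely on Vertex AI by default and autolog
# parameters and metrics to Vertex AI Experiments.
preview.init(remote=True, autolog=True)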
log_classification_metrics
log_classification_metrics(*, labels: typing.Optional[typing.List[str]] = None, matrix: typing.Optional[typing.List[typing.List[int]]] = None, fpr: typing.Optional[typing.List[float]] = None, tpr: typing.Optional[typing.List[float]] = None, threshold: typing.Optional[typing.List[float]] = None, display_name: typing.Optional[str] = None) -> google.cloud.aiplatform.metadata.schema.google.artifact_schema.ClassificationMetrics
Creates an artifact for classification metrics and logs it to the ExperimentRun. Currently supports confusion matrices and ROC curves.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
classification_metrics = my_run.log_classification_metrics(
    display_name='my-classification-metrics',
    labels=['cat', 'dog'],
    matrix=[[9, 1], [1, 9]],
    fpr=[0.1, 0.5, 0.9],
    tpr=[0.1, 0.7, 0.9],
    threshold=[0.9, 0.5, 0.1],
)
labels
Optional. List of label names for the confusion matrix. Must be set if 'matrix' is set.
matrix
Optional. Values for the confusion matrix. Must be set if 'labels' is set.
fpr
Optional. List of false positive rates for the ROC curve. Must be set if 'tpr' or 'threshold' is set.
tpr
Optional. List of true positive rates for the ROC curve. Must be set if 'fpr' or 'threshold' is set.
threshold
Optional. List of thresholds for the ROC curve. Must be set if 'fpr' or 'tpr' is set.
display_name
Optional. The user-defined name for the classification metric artifact.
log_metrics
log_metrics(metrics: typing.Dict[str, typing.Union[float, int, str]])
Log single or multiple Metrics with specified key and value pairs.
Metrics with the same key will be overwritten.
aiplatform.start_run('my-run', experiment='my-experiment')
aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})
metrics
Required. Metrics key/value pairs.
log_params
log_params(params: typing.Dict[str, typing.Union[float, int, str]])
Log single or multiple parameters with specified key and value pairs.
Parameters with the same key will be overwritten.
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
params
Required. Parameter key/value pairs.
log_time_series_metrics
log_time_series_metrics(metrics: typing.Dict[str, float], step: typing.Optional[int] = None, wall_time: typing.Optional[google.protobuf.timestamp_pb2.Timestamp] = None)
Logs time series metrics to this Experiment Run.
Requires that the experiment or experiment run has a backing Vertex Tensorboard resource.
my_tensorboard = aiplatform.Tensorboard(...)
aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
aiplatform.start_run('my-run')
# increments steps as logged
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss})
# explicitly log steps
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss}, step=i)
metrics
Required. Dictionary where keys are metric names and values are metric values.
step
Optional. Step index of this data point within the run. If not provided, the latest step amongst all time series metrics already logged will be used.
wall_time
Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, this will be generated based on the value from time.time().
register
register(model: typing.Union[sklearn.base.BaseEstimator, tf.Module, torch.nn.Module], use_gpu: bool = False) -> google.cloud.aiplatform.models.Model
Registers a model and returns a Model representing the registered Model resource.
model
Required. An OSS model. Supported frameworks: sklearn, tensorflow, pytorch.
use_gpu
Optional. Whether to use GPU for model serving. Defaults to False.
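A minimal sketch of registering a locally trained scikit-learn model; the synthetic training data is illustrative, and vertexai.init() is assumed to have been called with a project and staging bucket:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from vertexai import preview

# Train a small scikit-learn model locally on synthetic data.
X_train, y_train = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Register the trained model; the returned object is a
# google.cloud.aiplatform.models.Model resource.
registered_model = preview.register(model, use_gpu=False)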
remote
remote(cls_or_method: typing.Any) -> typing.Any
Takes a class or method and adds Vertex remote execution support.
Example:
LogisticRegression = vertexai.preview.remote(LogisticRegression)
model = LogisticRegression()
model.fit.vertex.remote_config.staging_bucket = REMOTE_JOB_BUCKET
model.fit.vertex.remote = True
model.fit(X_train, y_train)
cls_or_method
Required. A class or method to which Vertex remote execution support will be added.
start_run
start_run(run: str, *, tensorboard: typing.Optional[typing.Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None, resume=False) -> google.cloud.aiplatform.metadata.experiment_run_resource.ExperimentRun
Starts a run and assigns it to the current session.
aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1})
Use as context manager. Run will be ended on context exit:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run') as my_run:
    my_run.log_params({'learning_rate': 0.1})
Resume a previously started run:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run', resume=True) as my_run:
    my_run.log_params({'learning_rate': 0.1})
run
Required. Name of the run to assign to the current session.
resume
Whether to resume this run. If False, a new run will be created.

