As part of the data science team, you may want to try different modeling approaches during the experimentation phase. To guarantee reproducibility, each approach has different parameters that you would otherwise need to track manually. Autologging in the Agent Platform SDK for Python, a one-line SDK capability built on MLflow, automatically tracks the metrics and parameters associated with your Agent Platform Experiments and experiment runs.
Notebook: Agent Platform Experiments Autologging
In the "Agent Platform Experiments: Autologging" notebook, you'll learn how to use Agent Platform Experiments to:
- Enable autologging in the Agent Platform SDK for Python.
- Train a scikit-learn model and see the resulting experiment run, with metrics and parameters autologged to Agent Platform Experiments, without setting an experiment run.
- Train a TensorFlow model and check the metrics and parameters autologged to Agent Platform Experiments when you manually set an experiment run with `aiplatform.start_run()` and `aiplatform.end_run()`.
- Disable autologging in the Agent Platform SDK for Python, train a PyTorch model, and check that no parameters or metrics are logged.
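The steps above can be sketched roughly as follows. This is a minimal, non-runnable sketch, not the notebook's code: it assumes the SDK is imported as `aiplatform` (only `aiplatform.start_run()` and `aiplatform.end_run()` appear in the list above), that autologging is toggled with an `autolog()` call, and that the project, location, experiment, and run names are placeholders you would replace with your own.

```python
import aiplatform  # Agent Platform SDK for Python (import path assumed)
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# Placeholder project, location, and experiment names (assumptions).
aiplatform.init(
    project="your-project-id",
    location="your-region",
    experiment="autolog-demo",
)

# 1. Enable autologging with one line (method name assumed).
aiplatform.autolog()

# 2. Train a scikit-learn model; metrics and parameters are captured
#    automatically, without explicitly setting an experiment run.
X, y = make_regression(n_samples=100, n_features=4, noise=0.1, random_state=0)
LinearRegression().fit(X, y)

# 3. For a TensorFlow model, bound the experiment run manually instead:
aiplatform.start_run("tf-run")  # placeholder run name
# ... train a TensorFlow model here ...
aiplatform.end_run()

# 4. Disable autologging (flag assumed); subsequent training, e.g. with
#    PyTorch, logs no parameters or metrics.
aiplatform.autolog(disable=True)
```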

