Exam Professional Machine Learning Engineer topic 1 question 172 discussion - ExamTopics


AI-Generated Summary

Problem:

The question describes an ML pipeline whose tradeoffs must be investigated across combinations of three input parameters: the input dataset, the maximum tree depth of the boosted tree regressor, and the optimizer learning rate. The goal is to compare pipeline performance using F1 score, training time, and model complexity, while keeping the approach reproducible and tracking all runs on a single platform.

Options:

  • A. BigQueryML: Use BigQueryML's hyperparameter tuning with grid search.
  • B. Vertex AI Pipeline with Bayesian Optimization: Create a Vertex AI pipeline with a custom model training job and employ Bayesian optimization with F1 score as the target.
  • C. Vertex AI Workbench Notebooks: Create separate notebooks for different datasets, running local training jobs with varying parameters, and storing results in a BigQuery table.
  • D. Vertex AI Experiments and Pipelines: Create an experiment in Vertex AI Experiments and utilize a Vertex AI pipeline with a custom model training job, submitting multiple runs with different parameter values to the experiment.

Suggested Answer:

The suggested answer is D, leveraging Vertex AI Experiments and Pipelines. This approach allows for organized experimentation and tracking of multiple runs with various parameter combinations within a unified platform.


You created an ML pipeline with multiple input parameters. You want to investigate the tradeoffs between different parameter combinations. The parameter options are:

  • Input dataset
  • Max tree depth of the boosted tree regressor
  • Optimizer learning rate

You need to compare the pipeline performance of the different parameter combinations measured in F1 score, time to train, and model complexity. You want your approach to be reproducible, and track all pipeline runs on the same platform. What should you do?

  • A. 1. Use BigQueryML to create a boosted tree regressor, and use the hyperparameter tuning capability. 2. Configure the hyperparameter syntax to select different input datasets, max tree depths, and optimizer learning rates. Choose the grid search option.
  • B. 1. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline’s parameters to include those you are investigating. 2. In the custom training step, use the Bayesian optimization method with F1 score as the target to maximize.
  • C. 1. Create a Vertex AI Workbench notebook for each of the different input datasets. 2. In each notebook, run different local training jobs with different combinations of the max tree depth and optimizer learning rate parameters. 3. After each notebook finishes, append the results to a BigQuery table.
  • D. 1. Create an experiment in Vertex AI Experiments. 2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline’s parameters to include those you are investigating. 3. Submit multiple runs to the same experiment, using different values for the parameters.
Suggested Answer: D 🗳️
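
For reference, the sketch below shows how multiple runs of the same Vertex AI pipeline could be submitted to a single Vertex AI Experiments experiment using the google-cloud-aiplatform SDK, which is the workflow option D describes. It is a minimal illustration only: the project ID, region, bucket, pipeline template path, dataset URIs, and experiment name are all placeholders, and the compiled pipeline is assumed to log F1 score, training time, and model-complexity metrics for each run.

```python
# Minimal sketch (hypothetical names throughout): submit several runs of the
# same compiled Vertex AI pipeline to one Vertex AI Experiments experiment,
# varying the input dataset, max tree depth, and learning rate per run.
from google.cloud import aiplatform

EXPERIMENT = "boosted-tree-tradeoffs"          # placeholder experiment name

aiplatform.init(
    project="my-project",                      # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-bucket/staging",   # placeholder bucket
    experiment=EXPERIMENT,                     # creates or reuses the experiment
)

# Parameter combinations under investigation.
param_sets = [
    {"input_dataset": "gs://my-bucket/data_v1.csv", "max_tree_depth": 6,  "learning_rate": 0.10},
    {"input_dataset": "gs://my-bucket/data_v2.csv", "max_tree_depth": 10, "learning_rate": 0.05},
    {"input_dataset": "gs://my-bucket/data_v1.csv", "max_tree_depth": 10, "learning_rate": 0.10},
]

for i, params in enumerate(param_sets):
    job = aiplatform.PipelineJob(
        display_name=f"tradeoff-run-{i}",
        template_path="gs://my-bucket/pipeline.json",  # compiled pipeline spec (placeholder)
        parameter_values=params,               # pipeline parameters being compared
    )
    # Associating each pipeline run with the same experiment records its
    # parameters and logged metrics (F1, training time, model complexity)
    # in one place, keeping every run reproducible and comparable.
    job.submit(experiment=EXPERIMENT)
```

Once the runs finish, they can be compared side by side on the Experiments page in the Google Cloud console or pulled into a DataFrame with aiplatform.get_experiment_df(EXPERIMENT), assuming the training component logs the metrics of interest to the experiment.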
