Mastering Configuration Management in Machine Learning with Hydra
Welcome to “Mastering Configuration Management in Machine Learning with Hydra”! This comprehensive tutorial is designed to take you from the basics of Hydra to advanced techniques for managing configurations in your ML projects. We will also explore the integration of Hydra with high-performance computing environments and popular machine learning frameworks. Whether you are a machine learning novice or a seasoned practitioner, this tutorial will equip you with the knowledge and skills to supercharge your machine learning workflow.
· I. Introduction
· II. Hydra Basics
∘ Installation of Hydra
∘ Anatomy of a Hydra Application
∘ Understanding Hydra’s Main Components
· III. Hierarchical Configurations
∘ Defining and Understanding Hierarchical Configuration Files
· IV. Configuration Groups
∘ Understanding the Concept of Configuration Groups
∘ Defining Different Setups: Development, Staging, Production
∘ Showcasing the Impact on Reproducibility and Debugging
· V. Dynamic Configurations
∘ Explanation of Dynamic Configurations
∘ Creating Rules for Dynamic Adjustment of Hyperparameters
∘ Implementing Dynamic Configurations in a Machine Learning Context
· VI. Environment Variables
∘ The Need for Environment Variables in Hydra
∘ Handling Sensitive or Frequently Changing Data
∘ Using Environment Variables in Hydra: A Step-by-Step Guide
· VII. Configuring Logging
∘ The Importance of Logging in Machine Learning Experiments
∘ Using Hydra to Configure Python’s Logging Framework
∘ How to Create Log Files for Different Modules with Varying Levels of Verbosity
· VIII. Multirun and Sweeps
∘ Introduction to Hydra’s Multirun Feature
∘ Designing and Configuring Hyperparameter Sweeps
∘ Applying Multirun and Sweeps to Machine Learning Projects
· IX. Error Handling
∘ Importance of Error Handling in Configuration Management
∘ Using Hydra for Advanced Error Handling
∘ Customizing Behavior for Missing or Incorrect Configurations
· X. Command Line Overrides
∘ Understanding Command Line Overrides in Hydra
∘ Modifying Configurations at Runtime Using Command Line Arguments
∘ Practical Examples of Using Command Line Overrides in Machine Learning Experiments
· XI. Using Hydra on a Slurm-Based HPC Cluster
∘ Hydra and SLURM: A Brief Overview
∘ Installation
∘ Configuration
∘ Running Your Application
∘ Advanced Topics: Parallel Runs with Slurm
· XII. Hydra with Containerization (Docker/Kubernetes)
∘ Hydra with Docker
∘ Hydra with Kubernetes
· XIII. Integration with ML Frameworks
∘ Hydra with PyTorch
· XIV. Conclusion
· XV. Appendix: Useful Hydra Commands and Tips
∘ Commonly Used Hydra Commands
∘ Tips and Tricks
Managing configurations can be complex, from model hyperparameters to experiment settings. Keeping track of all these details can quickly become overwhelming. That’s where Facebook’s Hydra configuration library comes into play. Hydra is an open-source Python framework that simplifies the management of configurations in your applications, ensuring better reproducibility and modularity.
Hydra provides a powerful and flexible mechanism for managing configurations for complex applications. This makes it easier for developers and researchers to maintain and optimize machine learning projects.
In this tutorial, we introduce the basics of Hydra and guide you through its advanced features. By the end of this tutorial, you will be empowered to manage your project configurations effectively and efficiently.
Installation of Hydra
Hydra is a Python library and can be installed easily with pip:
pip install hydra-core
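To confirm the installation, you can print the installed version (a quick check; hydra-core exposes it as hydra.__version__):

import hydra
print(hydra.__version__)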
Anatomy of a Hydra Application
A Hydra application has a script and one or more configuration files. Configuration files are written in YAML and stored in a directory structure. This creates a hierarchical configuration.
# my_app.py
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_name="config")
def my_app(cfg: DictConfig):
    # cfg.pretty() is deprecated in recent Hydra; OmegaConf.to_yaml is the current equivalent
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    my_app()
The accompanying YAML file might look like this:
# config.yaml
db:
  driver: mysql
  user: test
  password: test
The Python script my_app.py uses the @hydra.main() decorator to indicate that it is a Hydra application. The config_name parameter specifies which configuration file to use. Hydra assumes the file is YAML, so there is no need to include the extension.
Understanding Hydra’s Main Components
Hydra comprises configurations, interpolations, and overrides.
Configurations are the settings of your application specified in one or more YAML files.
Interpolations are references to other parts of your configuration. For example, in the YAML file below, the value of full interpolates name and surname.
name: John
surname: Doe
full: ${name} ${surname}

db:
  user: ${surname}.${name}
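To see these interpolations resolve without writing a full application, you can load the snippet directly with OmegaConf, the configuration library that Hydra builds on. A minimal sketch:

from omegaconf import OmegaConf

cfg = OmegaConf.create("""
name: John
surname: Doe
full: ${name} ${surname}
db:
  user: ${surname}.${name}
""")
print(cfg.full)     # John Doe
print(cfg.db.user)  # Doe.John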
Overrides allow you to modify your configuration at runtime without changing your YAML files. You can specify overrides on the command line when running your application, as the following demonstrates:
python my_app.py db.user=root
In the command above, we’re overriding the user value under db in the configuration.
In the following sections, we’ll look at advanced features and how to use them in your ML projects.
Hydra offers an intuitive way to structure your configuration files hierarchically, mirroring your project’s directory structure. Hierarchical configurations are instrumental when managing complex projects, making your configurations easier to maintain, extend, and reuse.
Defining and Understanding Hierarchical Configuration Files
The hierarchy of configurations is defined by the directory structure of your configuration files.
For instance, a project’s layout could be structured as follows:
config.yaml
preprocessing/
  - standard.yaml
  - minmax.yaml
model/
  - linear.yaml
  - svm.yaml
Here, the standard.yaml and minmax.yaml files could contain different settings for data preprocessing, while the linear.yaml and svm.yaml files could hold configurations for different model types. In config.yaml, you specify which preprocessing and model configurations to use by default:
defaults:
  - preprocessing: standard
  - model: linear
Hydra automatically merges the specified configurations, so you can still override the default choice when launching the application, as shown in the following code snippet:
python my_app.py preprocessing=minmax model=svm
The command above runs the application with the minmax preprocessing and svm model configurations.
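If you want to inspect the merged result programmatically, Hydra’s compose API is one way to do it. A minimal sketch, assuming the layout above lives in a conf/ directory next to your script (the exact initialize arguments vary slightly between Hydra versions):

from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(config_path="conf"):
    cfg = compose(config_name="config", overrides=["preprocessing=minmax", "model=svm"])
    print(OmegaConf.to_yaml(cfg))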
Configuration groups in Hydra provide a way to manage sets of configurations that can be swapped easily. This feature is handy for maintaining various settings, environments, and setups, such as development, testing, staging, and production.
Understanding the Concept of Configuration Groups
A configuration group is a directory containing alternative configurations. When defining a configuration group, you specify a default configuration in your main configuration file (config.yaml), but you can easily override it when running your application.
Defining Different Setups: Development, Staging, Production
Consider a machine learning project where you have distinct settings for development, staging, and production environments. You can create a configuration group for each environment:
config.yaml
env/
  - development.yaml
  - staging.yaml
  - production.yaml
Each YAML file in the env directory would contain the settings specific to that environment. For example, the development.yaml file might define verbose logging and debugging settings, while the production.yaml file might contain optimized performance and error logging settings.
In config.yaml, you specify the default environment:

defaults:
  - env: development
With this configuration, Hydra will automatically apply the settings from development.yaml when running your application.
Showcasing the Impact on Reproducibility and Debugging
Configuration groups are a powerful tool for enhancing reproducibility in your projects. You can ensure your application behaves consistently across different environments by defining specific development, staging, and production setups.
Additionally, configuration groups can significantly simplify debugging. You can quickly reproduce and isolate issues by using different configuration groups for various stages of your project. For instance, if an issue arises in the staging environment, you can switch to the staging configuration to reproduce the problem without affecting your development or production settings.
Switching between environments is as easy as specifying a different configuration group when launching your application:
python my_app.py env=production
This command runs the application with the settings defined in production.yaml.
In addition to static configuration management, Hydra allows for dynamic configurations. Dynamic configurations are incredibly valuable in scenarios where some parameters depend on others or must be computed at runtime.
Explanation of Dynamic Configurations
Dynamic configurations in Hydra are enabled through two main features: interpolations and the OmegaConf library.
Interpolations are references to other parts of your configuration, allowing values to be filled in dynamically. They are denoted by ${} in your configuration files. For instance:
name: Alice
greeting: Hello, ${name}!
In this example, the greeting value will dynamically include the name value.
OmegaConf is the flexible configuration library that Hydra builds on. It supports interpolations and variable substitutions, but note that plain interpolations are textual: an arithmetic expression like the one below is not evaluated automatically, so area would resolve to the string "10 * 20" rather than 200:

dimensions:
  width: 10
  height: 20
area: ${dimensions.width} * ${dimensions.height}

In the above example, the intent is for area to be computed dynamically from the width and height under dimensions.
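OmegaConf will not evaluate that * on its own, but you can teach it to: register a custom resolver and reference it in the interpolation. A minimal sketch (the resolver name mul is our own choice, not a built-in):

from omegaconf import OmegaConf

# Register a resolver that multiplies its two arguments
OmegaConf.register_new_resolver("mul", lambda x, y: x * y)

cfg = OmegaConf.create("""
dimensions:
  width: 10
  height: 20
area: ${mul:${dimensions.width},${dimensions.height}}
""")
print(cfg.area)  # 200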
Creating Rules for Dynamic Adjustment of Hyperparameters
In machine learning, dynamic configurations can be beneficial for adjusting hyperparameters. For instance, suppose we want the learning rate to depend on the batch size. We could define a rule for this in our configuration file (written here with the custom mul resolver sketched above, since plain arithmetic in interpolations is not evaluated):

training:
  batch_size: 32
  learning_rate: ${mul:0.001,${training.batch_size}}

Here learning_rate is adjusted dynamically based on batch_size: if you increase the batch size, the learning rate increases proportionally.
Implementing Dynamic Configurations in a Machine Learning Context
Let’s consider a more complex machine learning scenario where the size of the first layer in our neural network depends on the input size of our data (again using the hypothetical mul resolver, since a bare * is not evaluated):

data:
  input_size: 100
model:
  layer1: ${mul:${data.input_size},2}
  layer2: 50

Here, the size of the first layer (layer1) is dynamically set to twice the input_size. If we change the input_size, layer1 adjusts automatically.
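Putting this together, here is a hedged end-to-end sketch of how the resolved size could feed an actual PyTorch layer (reusing the hypothetical mul resolver; replace=True makes re-registration safe on repeated runs):

import torch.nn as nn
from omegaconf import OmegaConf

OmegaConf.register_new_resolver("mul", lambda x, y: x * y, replace=True)

cfg = OmegaConf.create("""
data:
  input_size: 100
model:
  layer1: ${mul:${data.input_size},2}
  layer2: 50
""")

# The layer width follows input_size automatically: 100 -> 200
layer1 = nn.Linear(cfg.data.input_size, cfg.model.layer1)
print(layer1)  # Linear(in_features=100, out_features=200, bias=True)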
Dynamic configurations enable higher flexibility and adaptability for applications.
Hydra supports the use of environment variables within configuration files, providing additional flexibility and security. This functionality can be beneficial for handling sensitive or frequently changing data.
The Need for Environment Variables in Hydra
Environment variables are a common way to pass configuration information to your application. They are handy in the following situations:
- Sensitive Data: Passwords, secret keys, and access tokens should not be hard-coded into your application or configuration files. Instead, these can be stored securely as environment variables.
- Frequently Changing Data: If specific parameters change frequently or depend on the system environment (e.g., file paths that differ between development and production environments), managing them as environment variables is more convenient.
- Portability and Scalability: Environment variables can make your applications easier to move between different environments (e.g., from a local development environment to a cloud-based production environment).
Handling Sensitive or Frequently Changing Data
Sensitive information like database credentials should never be stored directly in your configuration files. Instead, you can keep these as environment variables and reference them in your Hydra configurations using interpolations. This practice enhances security by preventing sensitive data from being exposed in your code or version control system.
Similarly, frequently changing data, such as file or directory paths that vary between environments, can be managed as environment variables. This approach reduces the need for manual modifications when moving between environments.
Using Environment Variables in Hydra: A Step-by-Step Guide
To use an environment variable in Hydra, follow these steps:
1. Define an environment variable in your shell. For example, on a Unix-based system, use the export command:

export DATABASE_URL=mysql://user:password@localhost/db
2. Reference the environment variable in your Hydra configuration file. Recent versions of OmegaConf spell the resolver ${oc.env:VARIABLE} (older versions used ${env:VARIABLE}):

database:
  url: ${oc.env:DATABASE_URL}
In this example, the url field in the database configuration will be set to the value of the DATABASE_URL environment variable.
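A small self-contained sketch of the resolver in action (the variable is set from Python here purely for demonstration; in practice it comes from your shell):

import os
from omegaconf import OmegaConf

os.environ["DATABASE_URL"] = "mysql://user:password@localhost/db"

cfg = OmegaConf.create("""
database:
  url: ${oc.env:DATABASE_URL}
""")
print(cfg.database.url)  # mysql://user:password@localhost/db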
Remember, never store sensitive information directly in your configuration files or code. Always use environment variables or another secure method for handling sensitive data.
Logging is an essential part of machine learning experiments. It provides visibility into the performance and behavior of your models and algorithms over time. Configuring proper logging mechanisms can help with model debugging, optimization, and understanding the learning process.
Hydra has built-in support for configuring Python’s logging module, making it easy to control the verbosity of logs, set up different handlers, and format your log messages.
The Importance of Logging in Machine Learning Experiments
Logging for machine learning can serve various purposes:
- Model Debugging: Logs can contain valuable information about model behavior, which can help diagnose and fix issues.
- Performance Tracking: Logging the metrics over time helps to observe the model’s learning process, detect overfitting or underfitting, and adjust the hyperparameters accordingly.
- Auditing and Reproducibility: Logs document the details of the training process, making it easier to reproduce results and understand what has been done in the past.
Using Hydra to Configure Python’s Logging Framework
Python’s built-in logging module is robust and highly configurable, and Hydra can help manage this complexity.
To configure logging with Hydra, create a hydra.yaml file in your configuration directory and define your logging settings under the hydra.job_logging key:
hydra:
  job_logging:
    root:
      level: INFO
    handlers:
      console:
        level: INFO
        formatter: basic
      file:
        level: DEBUG
        formatter: basic
        filename: ./logs/${hydra:job.name}.log
In this configuration:
- The root logger is set to the INFO level, capturing INFO, WARNING, ERROR, and CRITICAL messages.
- There are two handlers: one for console output and one for writing to a file. The console handler only logs INFO and higher-level messages, while the file handler logs DEBUG and higher-level messages.
- The filename of the file handler uses interpolation to dynamically create a log file for each job based on the job’s name.
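On the application side, no extra wiring is needed: code running under @hydra.main uses Python’s standard logging module, and Hydra applies the handlers configured above. A minimal sketch, assuming the configuration above is in place:

import logging
import hydra

log = logging.getLogger(__name__)

@hydra.main(config_name="config")
def my_app(cfg):
    log.info("Shown on the console and written to the log file")
    log.debug("Written to the log file only (the console handler is INFO)")

if __name__ == "__main__":
    my_app()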
How to Create Log Files for Different Modules with Varying Levels of Verbosity
You can set different log levels for different modules in your application. Suppose you have two modules, moduleA and moduleB, and you want moduleA to log DEBUG and higher-level messages but moduleB to log only ERROR and higher-level messages. Here’s how to configure it:
hydra:
  job_logging:
    root:
      level: INFO
    loggers:
      moduleA:
        level: DEBUG
      moduleB:
        level: ERROR
    handlers:
      console:
        level: INFO
        formatter: basic
      file:
        level: DEBUG
        formatter: basic
        filename: ./logs/${hydra:job.name}.log
This way, you can control the amount of log output from different parts of your application.
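In application code, the logger names just have to match the keys under loggers. A short sketch, assuming the configuration above:

import logging

log_a = logging.getLogger("moduleA")
log_b = logging.getLogger("moduleB")

log_a.debug("Visible: moduleA allows DEBUG and above")
log_b.warning("Suppressed: moduleB only allows ERROR and above")
log_b.error("Visible: ERROR passes moduleB's threshold")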
Machine learning often involves running experiments with different sets of hyperparameters to find the optimal solution. Enter Hydra’s multirun feature: it allows you to run your application multiple times with different configurations, which is ideal for hyperparameter tuning.
Introduction to Hydra’s Multirun Feature
To use multirun, pass the -m or --multirun flag when running your application, then specify the parameters you want to vary across runs using the key=value syntax:
python my_app.py --multirun training.batch_size=32,64,128
This will run your application three times: once with training.batch_size=32, once with training.batch_size=64, and once with training.batch_size=128.
Designing and Configuring Hyperparameter Sweeps
A hyperparameter sweep is a series of runs with different hyperparameters.
Hydra supports different types of sweeps:
- Range Sweeps: A numeric range with an optional step, using Hydra’s range syntax. For example, epoch=range(1,10)
- Interval Sweeps: A continuous interval, such as learning_rate=interval(0.0001,0.01), sampled by sweepers that support it (e.g., the Optuna sweeper)
- Choice Sweeps: A comma-separated list of values to choose from. For example, optimizer=adam,sgd,rmsprop or learning_rate=0.01,0.001,0.0001
- Grid Sweeps: Define multiple parameters to sweep over; the default (basic) sweeper runs your application for all combinations of the parameters.
These sweep types can be combined and used in complex ways to explore your model’s hyperparameter space thoroughly.
Applying Multirun and Sweeps to Machine Learning Projects
Let’s consider a simple machine learning project where you want to tune the learning rate and batch size. You can use the multirun feature to configure and run this hyperparameter sweep easily:
This command will run your application for each batch size and learning rate combination, totaling nine runs (3 batch sizes * 3 learning rates).
Hydra’s multirun feature can significantly simplify the process of running hyperparameter sweeps, helping you find the best configuration for your machine learning models.
Proper error handling is a crucial aspect of configuration management. It provides valuable information when things go wrong, helping to prevent or quickly diagnose issues that could affect the success of your machine learning projects. Hydra can be used to facilitate advanced error handling.
Importance of Error Handling in Configuration Management
Error handling in configuration management serves various purposes:
- Error Prevention: By validating configurations before they’re used, you can catch and correct errors early, preventing them from causing larger issues downstream.
- Fast Debugging: When errors do occur, detailed error messages can help you quickly identify the cause and fix the issue.
- Robustness: Comprehensive error handling makes your code more robust and reliable, improving its ability to handle unexpected situations.
Using Hydra for Advanced Error Handling
Hydra provides several features for advanced error handling:
- Strict Validation: Hydra performs strict validation of your configurations by default. If you try to access a field not defined in your configuration, Hydra will raise an error. This can help catch typos or missing fields early.
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="conf", config_name="config")
def my_app(cfg: DictConfig):
    print(cfg.field_that_does_not_exist)  # Raises an error

if __name__ == "__main__":
    my_app()
- Detailed Error Messages: Hydra produces detailed error messages when an error occurs. These messages often include the exact location of the error in your configuration, making diagnosing and fixing the issue easier.
Customizing Behavior for Missing or Incorrect Configurations
While Hydra’s default behavior is to raise an error for missing or incorrect configurations, you can customize this behavior based on your needs. For example:
- Optional Fields: You can use the OmegaConf.select method to access a field in a way that won’t raise an error if the field is missing:

from omegaconf import OmegaConf

value = OmegaConf.select(cfg, "field_that_may_or_may_not_exist", default="default_value")
- Relaxing Strict Checking: Hydra configurations are in “struct” mode by default, which rejects unknown fields. If you need to add or tolerate fields that are not declared in your configuration, you can temporarily disable struct mode with OmegaConf’s open_dict context manager:

from omegaconf import open_dict

with open_dict(cfg):
    cfg.extra_field = "value"  # would raise an error under struct mode
By utilizing Hydra’s error-handling capabilities, you can make your configuration management process more robust and easier to debug.
Command line overrides are a powerful feature that allows you to modify configurations at runtime. This can be particularly useful in machine learning experiments, where you often need to adjust hyperparameters, switch between different models, or change the dataset.
Understanding Command Line Overrides
You can override any part of your configuration from the command line by passing key=value pairs when running your application:
python my_app.py db.driver=postgresql db.user=my_user
With this command, your application runs with db.driver set to postgresql and db.user set to my_user, overriding any values defined in the configuration files or defaults.
Modifying Configurations at Runtime Using Command Line Arguments
Command line overrides can be used to modify configurations in various ways:
- Changing Single Values: As shown in the previous example, you can change the value of a single field in your configuration.
- Changing Nested Values: You can also change the value of a nested field using dot notation:
python my_app.py training.optimizer.lr=0.01
- Adding New Fields: To add a field that doesn’t exist in your configuration, prefix it with + (Hydra rejects unknown keys otherwise):
python my_app.py +new_field=new_value
- Removing Fields: You can remove a field from your configuration with the ~ prefix (quote it so your shell doesn’t interpret it):
python my_app.py '~field_to_remove'
- Changing Lists: You can change the value of a list field (quoting the brackets for the shell):
python my_app.py 'data.transforms=[transform1,transform2]'
Practical Examples of Using Command Line Overrides in Machine Learning Experiments
Command line overrides are especially useful in machine learning, where you often need to adjust configurations for different experiments:
- Hyperparameter Tuning: Easily adjust hyperparameters for different runs:
python train.py model.lr=0.01 model.batch_size=64
- Model Selection: Switch between different models:
python train.py model.type=resnet50
- Data Selection: Change the dataset or split used for training:
python train.py data.dataset=cifar10 data.split=train
Using command line overrides can greatly increase the flexibility and ease of your machine-learning experiments.
High-Performance Computing (HPC) clusters are commonly used to handle large-scale machine-learning tasks. These clusters often use the Simple Linux Utility for Resource Management (Slurm) to manage job scheduling. Let’s see how we can use Hydra on a Slurm-based HPC cluster.
Hydra and SLURM: A Brief Overview
Hydra includes a plugin called hydra-submitit-launcher
, which enables seamless integration with Slurm job scheduling. With this plugin, you can submit your Hydra applications as Slurm jobs, allowing you to leverage the power of HPC clusters for your machine-learning experiments.
Installation
To use the Submitit launcher with Hydra, you’ll first need to install it:
pip install hydra-submitit-launcher
Configuration
Once you’ve installed the launcher, you can configure it in your Hydra configuration files. Here’s an example configuration (the field names follow the plugin’s SlurmQueueConf schema; consult the documentation for your installed version, as they may differ slightly):

defaults:
  - override hydra/launcher: submitit_slurm

hydra:
  launcher:
    timeout_min: 60
    nodes: 1
    gpus_per_node: 2
    tasks_per_node: 1
    mem_gb: 10
    cpus_per_task: 10
    submitit_folder: /path/to/your/log/folder
Above, we set the time limit for our jobs to 60 minutes, use one node with 2 GPUs, and dedicate 10 GB of memory and 10 CPUs per task. Adjust these settings based on the resources available in your cluster.
Running Your Application
You can now run your Hydra application in multirun mode:

python my_app.py --multirun

With the Submitit launcher configured, Hydra submits the runs to Slurm instead of executing them locally; note that launchers are engaged in multirun mode.
Advanced Topics: Parallel Runs with Slurm
Hydra’s multirun feature combined with the Submitit launcher allows you to run multiple jobs in parallel. For instance, you can perform a hyperparameter sweep across several Slurm nodes:
python my_app.py --multirun model.lr=0.01,0.001,0.0001
This would submit three Slurm jobs, each with a different learning rate.
Containerization using tools like Docker and Kubernetes is widely used in machine learning due to its consistency, reproducibility, and scalability benefits. This section will guide you on using Hydra in conjunction with Docker or Kubernetes, showing how to dynamically generate Dockerfiles or Kubernetes manifests based on the configuration.
Hydra with Docker
When using Docker, you often need to create Dockerfiles with different configurations. Hydra can simplify this process:
1. Dockerfile
Create a Dockerfile with placeholders for configuration options. Here’s a simplified example:
FROM python:3.8
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "my_app.py", "${CMD_ARGS}"]
In this Dockerfile, ${CMD_ARGS} is a placeholder for command-line arguments that Hydra will provide.
2. Hydra Configuration
In your Hydra config file, define the configuration options to pass to Docker. For example:
docker:
  image: python:3.8
  cmd_args: db.driver=postgresql db.user=my_user
3. Docker Run Script
Finally, create a script that uses Hydra to generate the Docker run command:
@hydra.main(config_path="config.yaml")
def main(cfg):
cmd = f'docker run -it {cfg.docker.image} python my_app.py {cfg.docker.cmd_args}'
os.system(cmd)
if __name__ == "__main__":
main()
Run this script, and Hydra will launch a Docker container with the configuration options you specified.
Hydra with Kubernetes
Using Hydra with Kubernetes is a bit more complex, but the basic idea is similar. First, you would create a Kubernetes manifest with placeholders for configuration options, then use Hydra to generate the Kubernetes apply command.
Consider using the Hydra-KubeExecutor plugin to integrate Hydra and Kubernetes directly.
Hydra can significantly simplify the process of managing configurations in machine learning projects. This section shows how to integrate Hydra with popular machine learning frameworks like PyTorch, TensorFlow, or scikit-learn. You’ll learn how to use configuration files to manage the different stages of a machine learning pipeline, from data preprocessing to model training and evaluation.
Hydra with PyTorch
When using PyTorch (or any other ML framework), you can use Hydra to manage configurations for your model, dataset, optimizer, and other components. Here’s a simplified example:
@hydra.main(config_path="config.yaml")
def main(cfg):
# Load dataset
dataset = load_dataset(cfg.data)
# Initialize model
model = MyModel(cfg.model) # Initialize optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=cfg.optim.lr) # Train and evaluate model
train(model, dataset, optimizer, cfg.train)
evaluate(model, dataset, cfg.eval)if __name__ == "__main__":
main()
In this example, config.yaml would contain separate sections for data, model, optim, train, and eval. This structure keeps your configurations organized and modular, allowing you to easily adjust the configurations for different components of your machine learning pipeline.
For example, you could define different model architectures, datasets, or training regimes in separate configuration files, then select the ones you want to use with command line overrides.
Here are example configuration groups for PyTorch. In a real project, each group option lives in its own file inside the corresponding directory (model/, dataset/, optimizer/), following the hierarchical layout introduced earlier:

# config.yaml
defaults:
  - model: resnet50
  - dataset: imagenet
  - optimizer: sgd

# model/resnet50.yaml
num_layers: 50

# model/alexnet.yaml
num_layers: 8

# dataset/imagenet.yaml
root: /path/to/imagenet

# dataset/cifar10.yaml
root: /path/to/cifar10

# optimizer/sgd.yaml
lr: 0.01
momentum: 0.9

# optimizer/adam.yaml
lr: 0.001
With these configurations, you can easily switch between ResNet-50 and AlexNet, or between ImageNet and CIFAR-10, simply by changing the command line arguments when you run your application.
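For example, using the group and option names above:

python train.py model=alexnet dataset=cifar10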
In this tutorial, we dove deep into Hydra, a powerful tool for configuration management in Python applications, including ML projects. We covered the basics, hierarchical configurations, configuration groups, and dynamic configurations. Also, we learned how to handle environment variables and use Hydra for logging, error handling, and command line overrides.
We also explored some of the more advanced features of Hydra, such as multirun and sweeps, which are particularly useful for managing machine learning experiments. Finally, we saw how Hydra can be used on an HPC cluster, with Docker and Kubernetes, and integrated with PyTorch. Throughout this tutorial, we’ve seen that Hydra can greatly simplify managing configurations, making your code more flexible, robust, and maintainable.
Mastering a tool like Hydra takes practice. So keep experimenting, trying new things, and pushing the boundaries of what you can do with your configurations.
Here are some commonly used Hydra commands, tips, and tricks for working with Hydra effectively in machine-learning projects.
Commonly Used Hydra Commands
- Running an application with Hydra:
python my_app.py
- Using command line overrides:
python my_app.py db.driver=postgresql
- Running an application with multirun:
python my_app.py --multirun training.batch_size=32,64,128
Tips and Tricks
1. Leverage Hierarchical Configurations: Hierarchical configurations can help you manage complex configurations and avoid duplication. Use them to define standard settings that can be shared across different parts of your application.
2. Use Command Line Overrides: Command line overrides are a powerful tool for adjusting configurations at runtime. Use them to change hyperparameters, switch models, or change datasets for different experiments.
3. Implement Error Handling: Hydra provides advanced error handling capabilities. Use them to make your code more robust and easier to debug.
4. Use Multirun for Hyperparameter Sweeps: Hydra’s multirun feature can significantly simplify the process of running hyperparameter sweeps. Use it to explore the hyperparameter space of your model.
5. Keep Exploring: Hydra has many more features to discover. Check out the Hydra documentation and GitHub for more ideas and examples.