Introduction
In the same way that it is impossible to write bug-free code on the first try, it is impossible to train the right model on the first try.
Anyone with some experience in Machine Learning and Deep Learning knows that you often have to spend a lot of time choosing the right hyperparameters for your models. Common examples are the learning rate, the batch size, and the number of output classes, but a project can have hundreds of such parameters.
Changing the hyperparameters yields different results (better or worse), and at some point keeping track of all the experiments becomes very hard.
Here’s what I did for a very long time: I used to write down all these hyperparameters by hand in an Excel sheet and note the result of each experiment, the loss value for example, next to them. Later I “evolved” and started writing YAML configuration files for the hyperparameters, in which I put the various values I wanted to test, along with custom Python functions that would read those values and pass them to the training function. A YAML file is basically a hierarchically structured file of keys and values, like the following:
data:
  path: "data/ESC-50"
  sample_rate: 8000
  train_folds: [1, 2, 3]
  val_folds: [4]
  test_folds: [5]
  batch_size: 8
model:
  base_filters: 32
  num_classes: 50
optim:
  lr: 3e-4
  seed: 0
trainer:
  max_epochs: 10
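To make the manual approach concrete, here is a minimal sketch of the kind of helper I am describing, assuming PyYAML is installed; the function names are hypothetical, and the config is inlined as a string so the example is self-contained (in practice you would read it from a file):

```python
import yaml

# A subset of the config above, inlined for the sake of the example.
CONFIG_TEXT = """
data:
  path: "data/ESC-50"
  sample_rate: 8000
  batch_size: 8
optim:
  lr: 3e-4
trainer:
  max_epochs: 10
"""

def load_config(text: str) -> dict:
    """Parse the YAML text into a nested dictionary."""
    return yaml.safe_load(text)

def train(cfg: dict) -> None:
    """Hypothetical training entry point that pulls hyperparameters from cfg."""
    # Note: PyYAML parses a bare "3e-4" as a string (its float resolver
    # requires a decimal point), so we cast explicitly.
    lr = float(cfg["optim"]["lr"])
    batch_size = cfg["data"]["batch_size"]
    epochs = cfg["trainer"]["max_epochs"]
    print(f"Training with lr={lr}, batch_size={batch_size}, epochs={epochs}")

if __name__ == "__main__":
    train(load_config(CONFIG_TEXT))
```

This works, but every new hyperparameter means another line of parsing glue, and overriding a single value for one run means editing the file or duplicating it.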
I later discovered Hydra, an open-source framework that makes this whole process easier and faster.
Let’s get started!
Suppose we are developing a simple Machine Learning project using PyTorch. As usual, we create a class for the dataset, instantiate the dataloaders, create the model, and train. In this example, I will use PyTorch Lightning to better organize the code; it provides a Trainer object, similar to what you have in Keras. If you are used to PyTorch, you will pick up Lightning in no time.