The Missing Library in your Machine Learning Workflow


A quick guide to using Optuna for hyperparameter optimization

Sound engineers can create the perfect blend in audio by tuning the sliders and knobs to the right positions on audio mixers.

Just like tuning audio, machine learning models are also tuned to achieve their best performance.

Before we go into how we can use Optuna for tuning hyperparameters, here’s a quick intro to the topic.

For example, let’s say you’re building a motorized toy car for a racing competition from scratch. You have control over the car’s specifications, such as the size of the tires, the motor speed, the torque, etc., that will determine how it performs on a race track. If the goal is to win the race, you would configure those settings for optimal performance.

Similarly, most machine learning algorithms have settings (hyperparameters) that you can tune or tweak to obtain the best performance from a model.

I say most because simple algorithms, such as simple linear regression, do not have them.

Below are some examples of model hyperparameters that are configured before model training.
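The original examples aren't reproduced here; as an illustration, here are a few common hyperparameters from scikit-learn estimators (the values below are arbitrary choices, not recommendations):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# n_estimators (number of trees) and max_depth are chosen before training,
# not learned from the data
rf = RandomForestClassifier(n_estimators=100, max_depth=10)

# C (regularization strength) and kernel are hyperparameters of an SVM
svm = SVC(C=1.0, kernel="rbf")
```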

Now that you understand what hyperparameters are (hopefully), it’s time to understand how to optimize them.

Hyperparameter optimization is the idea of finding the right set of hyperparameters that yields an optimized model which minimizes or maximizes an objective function.

Depending on the metric you want to optimize, the objective function returns either the model’s loss or its accuracy: loss is something we want to minimize, while accuracy is something we want to maximize.

Machine Learning models aren’t able to learn the right set of hyperparameters to use by themselves. This is why it’s fundamental to tune them to the right settings so that they can achieve higher predictive power.

There are various algorithms and tools that can be used to perform hyperparameter tuning.

The most common one that many intro courses teach is GridSearchCV, an exhaustive approach where every possible combination of parameters is used to fit a model in search of the best performance. This approach is very expensive and takes a lot of time.

Optuna is “an automatic hyperparameter optimization software framework, particularly designed for machine learning.”

It’s also framework agnostic, meaning it works with any machine learning or deep learning framework.

In this article, we’ll be exploring what Optuna does and try it out with sklearn in a simple example.

As always, here’s where you can find the code for this article:

First, install optuna with pip.
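For example:

```shell
pip install optuna
```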

Let’s first understand three terms in Optuna:

- Study: an optimization session, i.e., a collection of trials.
- Trial: a single call of the objective function with one set of parameter values.
- Parameter: a variable whose value is sampled and optimized during the study.

Let’s start with an example.

We have this quadratic function below, and we want to optimize it.

If you forgot your calculus, optimizing a function means finding an input to the function that results in the minimum or maximum output from the function.

To do that by hand, you would take the derivative of the function, set it to zero, and solve for the input.

Let’s now optimize it with Optuna and see the results.

First, we start by defining an objective function.

We can suggest values that we want Optuna to sample from for our hyperparameter in the function.

In our case, x is a float, and we give Optuna a range from -10 to 10 to sample from.

Then we call the optimize function on it and set the number of trials.

After the study is done optimizing, you can get the best parameter from study.best_params.

You can also get the best value from study.best_value, which is zero for our function.

I coded a custom function below to print out all the useful info of a study, including the best trial of the study.

Here’s what we get when using this function on our study.

Now we have a quadratic function with three parameters (x, y, z).

With a bit of calculus or basic math intuition, you can easily figure out the value for x, y, and z that will make this equation equal to zero.

The answer: x = 1, y = 2, z = 3

Let’s use Optuna to optimize this function.

From 100 trials, it seems Optuna can’t find the best value for our variables.

Here comes the best part about Optuna.

It keeps the history of all trials, and we can keep optimizing our study until we are satisfied.

Let’s optimize with 500 more trials.

With 600 trials in total, Optuna is able to get closer to the right values.

Now let’s look at Optuna’s built-in functions for visualizing the optimizations.

With this function, we can see at which trial Optuna obtained the best value.

Now let’s try Optuna on a dataset and use it with sklearn to optimize for the right classifier.

We can load the dataset using the sklearn dataset package.
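For example:

```python
from sklearn.datasets import load_wine

# 178 samples, 13 numeric features, 3 wine classes
X, y = load_wine(return_X_y=True)
print(X.shape)  # (178, 13)
print(set(y))   # {0, 1, 2}
```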

We have a target we want to classify: the different types of wine.

Now our goal is to predict the class and optimize for accuracy.

Below you see an example of integrating Optuna with sklearn.

First, we sample which classifier algorithm to use: the Support Vector Classifier or the Random Forest algorithm.

Then, depending on which algorithm was sampled, Optuna samples from that algorithm’s respective hyperparameters.

At the end, the score is calculated, and the accuracy is the value we want to optimize.

Since higher accuracy is better, we create the study with direction set to "maximize" so Optuna knows to maximize the objective.

Running 100 trials on the study, we get an accuracy of 96.6%, and Optuna tells us the Random Forest algorithm should be used, with 20 estimators and a maximum depth of 24.

Let’s plot the optimization history.

It seems the best value was already obtained around the 18th trial.

Let’s also plot the importance of the hyperparameters.

Not all hyperparameters influence the result equally, so it’s useful to focus the search on the important ones.

This was a short guide to using Optuna. If you want to dive deeper into this tool, check out the resources below.

Follow the Bitgrit Data Science Publication for more articles like this!

Follow Bitgrit’s socials 📱 to stay updated on workshops and upcoming competitions!
