# Home assignment (Guillaume & Michael)

- Author: Romain Tavenard (@rtavenar)
- License: CC-BY-NC-SA

A home assignment from a course on Machine Learning for Time Series at ENSAI. Lecture notes for this course are available online.

## Problem statement

Design a recurrent neural network model for multi-step-ahead forecasting that optimizes a soft-DTW-based loss. Compare the performance of this model with:

- a similar model trained with an MSE loss;
- a fully-connected model trained with the same soft-DTW-based loss.

Evaluation should be fair, *i.e.* not performed on data that was used for training the models.

Bonus points for:

- early stopping;
- comparison to other realistic baselines;
- any other meaningful innovation...

## Deadline

The deadline for this home assignment is March 1st, 2021.
You should use the link on Moodle to hand in your assignment.
A single ipynb file should be provided, with execution traces (hence the file `soft_dtw.py` should not be modified).
This assignment is to be done by **groups of two**: each group should come up with its own implementation.

# Load the data

We use the Trace dataset from `tslearn.datasets`.
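Once the series are loaded, they need to be cut into (input window, multi-step target) pairs for the forecasting task. As a minimal sketch (the window and horizon lengths below are illustrative, not the assignment's required values, and a sine wave stands in for a Trace series):

```python
import numpy as np

def make_windows(series, input_len, horizon):
    """Build (input window, multi-step target) pairs from a 1-D series."""
    X, y = [], []
    for t in range(len(series) - input_len - horizon + 1):
        X.append(series[t:t + input_len])                       # past values
        y.append(series[t + input_len:t + input_len + horizon])  # future values
    return np.array(X), np.array(y)

# Toy example on a sine wave (stand-in for a Trace series)
series = np.sin(np.linspace(0, 10, 100))
X, y = make_windows(series, input_len=20, horizon=5)
print(X.shape, y.shape)  # (76, 20) (76, 5)
```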

# LSTM RNN

We first build our recurrent neural network model, based on LSTM cells and trained with a soft-DTW loss.
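To make the loss concrete, here is a minimal NumPy sketch of the soft-DTW forward recursion. The actual assignment uses the provided `soft_dtw.py` (which also supplies the gradients needed for training); this illustrative version only computes the value:

```python
import numpy as np

def soft_min(values, gamma):
    """Smoothed minimum: -gamma * log(sum(exp(-v / gamma))), stabilized."""
    z = -np.asarray(values) / gamma
    m = z.max()
    return -gamma * (m + np.log(np.exp(z - m).sum()))

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW between two 1-D sequences, with squared Euclidean cost."""
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i, j] = cost + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma
            )
    return R[n, m]

x = np.array([1.0, 2.0, 3.0])
print(soft_dtw(x, x, gamma=0.01))  # close to 0 for identical sequences
```

As `gamma` goes to 0, the smoothed minimum tends to the hard minimum and soft-DTW tends to classical DTW; larger `gamma` makes the loss smoother (and differentiable everywhere), which is what makes it usable for gradient-based training.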

It is designed for multi-step-ahead forecasting and, like all other models we will build, includes an "early stopping" method. The idea of early stopping is to avoid overfitting the training data, so as to keep good out-of-sample performance.

For early stopping, we introduce the option to use a validation set to monitor the model's performance. Every 5 epochs, the model checks whether the validation error is still decreasing; if it has increased, training stops. We use a 5-epoch interval to smooth out the loss, since it does not necessarily decrease at every epoch.
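The early-stopping logic described above can be sketched as follows. Here `val_error` is a hypothetical stand-in for "train for one epoch, then return the current validation error" in the real training loop:

```python
def train_with_early_stopping(val_error, max_epochs=200, check_every=5):
    """Run training epochs, checking validation error every `check_every`
    epochs and stopping as soon as it no longer decreases."""
    best = float("inf")
    for epoch in range(1, max_epochs + 1):
        err = val_error(epoch)          # train one epoch + evaluate (stand-in)
        if epoch % check_every == 0:
            if err >= best:             # validation error stopped decreasing
                return epoch            # stop here
            best = err
    return max_epochs

# Toy validation curve: decreases, then overfitting kicks in after epoch 40
curve = lambda e: 1.0 / e if e <= 40 else 1.0 / 40 + 0.01 * (e - 40)
print(train_with_early_stopping(curve))  # stops at epoch 45
```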

Then we train our model, using the validation set for early stopping.

We perform a simple plot inspection to check our forecast.

Now we build a similar model, based on an MSE loss function.

# FULLY CONNECTED NETWORK

Here we define a fully-connected model, based on a soft-DTW loss. This model contains one hidden layer.
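As a sketch of the architecture (weights random, dimensions illustrative rather than the ones actually used in the notebook), the forward pass of such a one-hidden-layer network mapping an input window to a multi-step forecast could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
input_len, hidden, horizon = 20, 32, 5

# One hidden layer with tanh activation, followed by a linear output layer
W1 = rng.normal(scale=0.1, size=(input_len, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, horizon))
b2 = np.zeros(horizon)

def forward(X):
    """Map a batch of windows (batch, input_len) to forecasts (batch, horizon)."""
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

X = rng.normal(size=(8, input_len))   # a toy batch of 8 windows
print(forward(X).shape)  # (8, 5)
```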

# MODELS COMPARISON

Finally, we compare our three models in this section, using three metrics: MSE, soft-DTW and MAE.
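For reference, the MSE and MAE metrics on a batch of forecasts can be computed as below (soft-DTW itself comes from the provided `soft_dtw.py` and is omitted here):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over all forecasts and horizons."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean absolute error over all forecasts and horizons."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([[0.0, 1.0, 2.0]])
y_pred = np.array([[0.0, 2.0, 4.0]])
print(mse(y_true, y_pred), mae(y_true, y_pred))  # 5/3 and 1.0
```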

Based on these three evaluation metrics, the fully-connected model seems to perform slightly better, even though the differences between models are very small. We also find that early stopping worsens performance on all three metrics. This suggests that either the early-stopping criterion was too strict, or that the test data is very similar to the training data; the latter would make overfitting less harmful. A possible fix is to increase the number of epochs between successive checks of the validation loss.

The forecasts produced by the recurrent models used here are smoother (i.e. less noisy), since the prediction is obtained by taking the mean of a normal distribution at each time step.

We conclude our comparison by noting that the recurrent models make it possible to add confidence bounds to the forecasts, which could be an interesting feature.