We'll go back to the simple DNN that we saw back in week two, train it on this data, and see what happens. We get a chart like this, which at least to the eyeball looks really good, but it has a very large MAE, so something must be wrong. Indeed, if we zoom into the results, we can see in a bit more detail how the forecast behaves against the original data. Our clue to the problem could be our window size. Remember, earlier we set it to 20, so our training windows are 20 time slices' worth of data. And given that each time slice is a month in real time, our window is a little under two years. But if you remember this chart, we can see that the seasonality of sunspots is far greater than two years. It's closer to 11 years, and some science suggests it might even be 22 years, with different cycles interleaving with each other. So what would happen if we retrain with a window size of 132, which is 11 years' worth of data? Now, while this chart looks similar, we can see from the MAE that it actually got worse, so increasing the window size didn't work. Why do you think that would be? Well, by looking back at the data, we realize that while it is seasonal over roughly 11 years, we don't need a full season in our window. Zooming in on the data again, we'll see something like this, where it's just a typical time series: values later on are somewhat related to earlier ones, but there's a lot of noise. So maybe we don't need a huge window of time in order to train. Maybe we should go with something a little closer to our initial 20; let's try 30. So if we look back at this code, we can change our window size to 30. But then look at the split time. The dataset has around 3,500 items of data, but we're splitting it into training and validation at 1,000, which means only 1,000 items for training and 2,500 for validation. That's a really bad split; there's not enough training data. So let's make it 3,000 instead.
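To make the window-size and split-time arithmetic concrete, here is a minimal NumPy sketch. The series here is made-up stand-in data (the real notebook loads the sunspot CSV and uses a tf.data windowing pipeline), but the split and windowing logic it illustrates is the same.

```python
import numpy as np

# Hypothetical stand-in for the sunspot series: ~3,500 monthly values.
rng = np.random.default_rng(0)
series = np.sin(np.arange(3500) * 2 * np.pi / 132) + rng.random(3500) * 0.1

split_time = 3000   # 3,000 items for training, ~500 left for validation
window_size = 30    # 30 months, a little under two and a half years

x_train, x_valid = series[:split_time], series[split_time:]

def make_windows(data, window_size):
    """Slide a window over the series; each window's last value is the label."""
    windows = np.lib.stride_tricks.sliding_window_view(data, window_size + 1)
    return windows[:, :-1], windows[:, -1]

features, labels = make_windows(x_train, window_size)
# 3,000 training points yield 3000 - 30 = 2,970 (feature, label) pairs
```

Note how a split at 1,000 would have left only 970 training windows, which is why the split is moved before retraining.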
And then when we retrain, we'll get this. Our MAE has improved to 15, but can we make it even better? Well, one thing we can try is to edit the neural network's design and hyperparameters. If you remember, we had three layers of 10, 10, and 1 neurons. Our input shape is now larger at 30, so maybe try different values here, like 30, 15, and 1, and retrain. Surprisingly, this was a small step backwards, with our MAE increasing, and the gain wasn't worth the extra compute time for the extra neurons. So let's switch back to 10, 10, 1 and instead look at the learning rate, tweaking it a little. Now, after retraining, I can see my MAE has decreased a bit, which is good.
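The 10-10-1 architecture and the learning-rate tweak might look something like this in Keras. The learning rate and momentum values here are assumed starting points for experimentation, not the course's exact numbers.

```python
import tensorflow as tf

window_size = 30  # the input shape now matches the 30-step window

# The 10, 10, 1 architecture discussed above.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, input_shape=[window_size], activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
])

model.compile(
    loss="mse",
    # The learning rate is the hyperparameter being tweaked between runs;
    # 1e-5 here is an assumed value to adjust up or down.
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),
    metrics=["mae"],  # track MAE, the metric used to compare runs
)
```

Swapping the layer widths to 30, 15, 1 is a one-line change to the first two `Dense` layers, which makes this kind of A/B comparison cheap to run.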