Introduction

In this tutorial, we will first present the difference between Batch and Stochastic Gradient Descent algorithms. Then we will cover overfitting and validation. Overall, we will:

  • discuss how GDA works,
  • compare BGD and SGD,
  • revisit the problem of overfitting,
  • apply different validation techniques to overcome the problem.

Getting Started

To follow along, you will need:

  • tidyverse
  • caret
  • gridExtra
  • ISLR
  • plotly
  • house dataset.

Now, call tidyverse and other packages:
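The original code chunk is not shown; a minimal chunk for this step might look like the following (assuming the packages are already installed):

    # load the packages used throughout the tutorial
    library(tidyverse)
    library(caret)
    library(gridExtra)
    library(ISLR)
    library(plotly)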

Best Linear Fit

When fitting regressions, we try to find the parameters that fit the data best. For the sake of simplicity, let’s assume a simple linear regression model

\[\hat Y^i = \theta_0 + \theta_1 X^i\] while the actual values include an additional error term:

\[Y^i = \theta_0 + \theta_1 X^i + \epsilon^i\] Notice that \(Y^i\) is the actual observation and \(\hat Y^i\) is the estimated one. We also write the estimate as \(\hat Y^i = h_\theta(x^i)\).

All of these lines seem to miss the point. The first cannot capture the slope (its slope is 0), the second misses both the slope and the intercept, and the last two have the right slope but the wrong intercept.

The distances from the line to the points are large. The sum of signed distances may cancel out (some are negative, some are positive), but the sum of squared distances will be very high.

What is so special about the line below?

The line passes through the points so that the distance from each point to the line is as small as possible.

How to determine the best line mathematically?

We can define the best line mathematically as the one that minimizes the sum of squared errors. In the first four plots, the distances from the points to the line are large, so the sum of squared errors (SSE) is high. For the line above, the sum of squared errors is minimal.

\[MSE = J(\theta) = \frac{1}{M} \sum_{i=1}^M \big(h_\theta(x^i) - Y^i\big)^2 = \frac{1}{M}\, \big(h_\theta(X) - Y\big)^T \big(h_\theta(X) - Y\big)\]

So we can define the problem as finding the optimal \(\theta\) that minimizes this loss. Once the problem is defined this way, we can use the Gradient Descent Algorithm to find the optimal parameters.

To use gradient descent, we need the gradient of the loss function:

\[\frac{\partial J(\theta)}{\partial \theta_j} = -\frac{1}{M} \sum_{i=1}^M \big(Y^i - h_\theta(x^i)\big)\, x_j^i = \frac{1}{M} \sum_{i=1}^M \big(h_\theta(x^i) - Y^i\big)\, x_j^i \quad \forall j,\] or, in matrix form, \[\nabla J(\theta) = -\frac{1}{M} X^T \big(Y - h_\theta(X)\big).\]

You may find implementations that drop the \(\frac 1 M\) term or another constant factor (such as the 2 that comes from differentiating the square). This is fine because constant multipliers can be absorbed into the learning rate, \(\alpha\).

Gradient Descent Algorithm

In this section, we will estimate the parameters that best fit the data using the gradient descent algorithm. Let’s start by defining the loss function and its gradient:
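The original chunk is not shown; below is a sketch of the loss and its gradient in matrix form. The design matrix X is assumed to carry a leading column of ones, and the function names (loss, grad) are our own:

    # theta: c(theta0, theta1), X: M x 2 design matrix (first column = 1), Y: length-M response
    loss <- function(theta, X, Y) {
      e <- X %*% theta - Y                  # residuals h_theta(X) - Y
      as.numeric(t(e) %*% e) / length(Y)    # mean squared error
    }

    grad <- function(theta, X, Y) {
      e <- X %*% theta - Y
      as.numeric(t(X) %*% e) / length(Y)    # (1/M) X^T (h_theta(X) - Y); constant factor absorbed into alpha
    }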

For presentation purposes, we will generate a dataset that exhibits a linear relationship, i.e. \(Y_i = 1 - 2X_i + \epsilon_i\).
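A sketch of such a simulation (the sample size of 100 is mentioned further below; the range of X, the noise level, and the seed are assumptions, so the printed values that follow come from the author's own draw):

    set.seed(1)                          # assumed seed, for reproducibility
    M    <- 100                          # number of observations
    X    <- runif(M, 0, 5)               # predictor values (range assumed)
    Y    <- 1 - 2 * X + rnorm(M)         # Y_i = 1 - 2 X_i + eps_i
    Xmat <- cbind(1, X)                  # design matrix with an intercept column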

Now, let’s guess the optimal \(\theta\) vector and calculate the MSE loss for each guess. For example, we can try \(\theta = (0,1)\), \(\theta = (1,1.5)\) and \(\theta = (1,-2)\):
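With the loss() sketch above, the three candidate losses can be computed in one line (the exact numbers below depend on the simulated data):

    thetas <- list(l1 = c(0, 1), l2 = c(1, 1.5), l3 = c(1, -2))
    sapply(thetas, loss, X = Xmat, Y = Y)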

##            l1       l2        l3
## [1,] 59.05332 77.26366 0.8994777

The first guess misses both the intercept and the slope, and the second has the right intercept but a positive slope with the wrong sign. The third, which matches the true parameters, yields the minimum loss.

Now, let’s plot the loss function that we will try to minimize using GDA:

The function has a single minimum. We can locate it by eye, but finding it algorithmically is not as straightforward, so we use gradient descent.

The contour plot is a bird’s-eye view of the 3D surface above and will come in handy when we visualize the optimization:

Gradient Descent algorithm aims to find the parameters, here \(\theta_0\) and \(\theta_1\), that minimize the function. To this end, we start with an arbitrary \(\theta\) vector and calculate the gradient.

The gradient descent algorithm is as follows:

  1. Determine a step size (learning rate), \(\alpha\), and pick an initial point, \(\theta^0 = (\theta_0^0,\theta_1^0)\). Also derive the gradient of the loss function, \(\nabla J(\theta)\).
  2. Calculate the gradient at the current point, \(\nabla J(\theta^i)\).
  3. Multiply the gradient by the step size, \(\alpha\cdot\nabla J(\theta^i)\).
  4. Move against the gradient by that amount. The point where you arrive becomes the new point, \(\theta^{i+1} = \theta^i - \alpha\cdot\nabla J(\theta^i)\).
  5. Repeat steps 2-4 until the loss doesn’t change much, i.e. \(|J(\theta^{i+1}) - J(\theta^{i})| < \epsilon\) (a short R sketch of this loop follows the list).
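Here is that sketch, reusing the loss() and grad() functions defined earlier (the stopping tolerance and the bookkeeping of the parameter history are assumptions):

    gradient_descent <- function(theta, X, Y, alpha = 0.1, epochs = 20, eps = 1e-6) {
      history <- matrix(theta, nrow = 1)                        # record the starting point
      for (i in seq_len(epochs)) {
        theta_new <- theta - alpha * grad(theta, X, Y)          # steps 3-4: move against the gradient
        history   <- rbind(history, theta_new)
        if (abs(loss(theta_new, X, Y) - loss(theta, X, Y)) < eps) {
          theta <- theta_new
          break                                                 # step 5: loss barely changed, stop
        }
        theta <- theta_new
      }
      list(theta = theta, history = history)
    }

A call like gradient_descent(c(4, 1.5), Xmat, Y, alpha = 0.1, epochs = 20) follows the same updates as the walk-through below.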

Step-by-Step Gradient Descent

Let’s pick a starting point (initial parameters), \(\theta^0 = (4,1.5)\), and a step size \(\alpha=0.1\).

Now we calculate the gradient at the point \(\theta^0\). Assuming the calculation is done correctly, it results in the vector below:
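Using the grad() sketch from before, this is a one-liner (the exact vector below depends on the simulated data):

    theta0 <- c(4, 1.5)
    alpha  <- 0.1
    grad(theta0, Xmat, Y)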

## [1]  3.02984 10.81065

Now, the algorithm tells us to update the parameters using the rule below. We repeat it iteratively, so the output of one step becomes the input of the next:

\[\theta^1 = \theta^0 - \alpha \nabla J(\theta^0)\] \[\theta^2 = \theta^1 - \alpha \nabla J(\theta^1)\]

Therefore, the new (updated) theta will be

##               th1        th2
## theta.i0 4.000000  1.5000000
## theta.i1 3.697016  0.4189355
## theta.i2 3.424330 -0.3312578

The visual of the parameter updates is:

If we repeat the above 20 times (epochs):

Therefore, the \(\theta\) vector estimated after 20 iterations is:

## [1]  1.379446 -2.028778

where the correct answer is \((1,-2)\).

The plot above visualizes the optimization process. Now, let’s see how each parameter pair (intercept and slope) performs on the data. To save space, we will plot the parameter results of the 5th, 10th, 15th and 20th steps:

## [1] "Finished in 20 epochs"

BGD vs SGD

As we discussed before, SGD is a modification of BGD that reduces the computational cost; it was later shown to be effective at finding the optimal parameters as well. Below we will compare the two algorithms’ behaviour over 20 epochs. Since SGD updates the parameters at each data point and our data has 100 observations, we will not plot the entire history of parameter updates. Instead, we will plot 5 parameter updates per epoch:

## [1] "Finished in 20 epochs"
## [1] "maximum epoch 20 was reached"

As we can see in the plots, BGD approaches the optimal point smoothly, whereas SGD behaves more erratically. Still, the trend of the SGD parameter updates is clear: they also approach the optimum.
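For reference, here is a minimal SGD sketch under the same setup: one pass over the shuffled observations per epoch, each update using the gradient of a single point (the learning rate is an assumption):

    sgd <- function(theta, X, Y, alpha = 0.01, epochs = 20) {
      history <- matrix(theta, nrow = 1)
      for (e in seq_len(epochs)) {
        for (i in sample(nrow(X))) {                              # visit observations in random order
          xi    <- X[i, , drop = FALSE]
          gi    <- as.numeric(t(xi) %*% (xi %*% theta - Y[i]))    # single-point gradient (M = 1)
          theta <- theta - alpha * gi
        }
        history <- rbind(history, theta)                          # record parameters once per epoch
      }
      list(theta = theta, history = history)
    }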

Overfitting

In previous weeks we discussed the problem of overfitting; let’s revisit it here. Below we will fit several models of increasing complexity: linear, quadratic, cubic, quartic, and higher-degree polynomials. As model complexity increases, the fit captures the movements of the data more closely. However, higher-degree models can learn the training data too well, which causes poor performance on the test set.

To illustrate this problem, let’s split the data into train and test sets with an 80/20 split:
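A sketch of the split; the file name house.csv and the object names are assumptions (the column names price and sqft_living appear in the model summaries later on):

    house <- read_csv("house.csv")       # the house dataset listed under Getting Started
    set.seed(123)                        # assumed seed
    idx   <- createDataPartition(house$price, p = 0.8, list = FALSE)   # caret helper
    train <- house[idx, ]
    test  <- house[-idx, ]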

Now, let’s define a function that fits a model of a given degree, predicts, and adds the fitted line to a scatter plot. The function also takes testData as an input: by default it trains on the training set and plots the fit on the training set, but if we pass another dataset it will still train on the training set and evaluate the fit on that dataset:
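A hypothetical version of such a helper (the function name and the plotting details are our own):

    fit_poly_plot <- function(degree, testData = train) {
      fit <- lm(price ~ poly(sqft_living, degree), data = train)   # always trained on the training set
      testData$pred <- predict(fit, newdata = testData)
      ggplot(testData, aes(x = sqft_living, y = price)) +
        geom_point(alpha = 0.3) +
        geom_line(aes(y = pred), colour = "red") +
        ggtitle(paste0("degree = ", degree))
    }

For example, fit_poly_plot(2) fits the quadratic model and plots it on the training set, while fit_poly_plot(2, testData = test) evaluates the same fit on the test set.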

Now, we apply degrees from 1 to 4 and plot:

It seems that increasing the degree helps the fitted line follow the observed values more closely.

What about this:

It looks like we are overfitting. Such behaviour is specific to this dataset, and we don’t expect it to occur every time.

For example, let’s take this model and test it on another set:

If we increase the complexity of the model, it will learn the training data ridiculously well, and it will then expect the same behaviour in a new dataset.

Model Complexity vs Bias

There is a tradeoff between model complexity and bias. The best model is not the one that learns the training data best, but the one that performs best on another set. Here we will measure performance using RMSE:
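A sketch of that comparison; the range of degrees (1 to 15) and the helper name are assumptions:

    rmse <- function(actual, predicted) sqrt(mean((actual - predicted)^2))

    losses <- map_dfr(1:15, function(d) {
      fit <- lm(price ~ poly(sqft_living, d), data = train)
      tibble(degree     = d,
             train_rmse = rmse(train$price, predict(fit)),
             test_rmse  = rmse(test$price,  predict(fit, newdata = test)))
    })

    losses %>%
      pivot_longer(-degree, names_to = "set", values_to = "RMSE") %>%
      ggplot(aes(degree, RMSE, colour = set)) + geom_line()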

As model complexity increases, the training loss decreases steadily and can, in principle, approach zero. On another set, however, the loss decreases at first and then starts increasing beyond a certain level. That minimum is the sweet spot; here it is the polynomial regression with degree = 5.

Validation

A way to deal with this problem is to validate the performance of the models on a dataset separated from the training set. The common practice is to roughly split your data into train/validation/test with a 70/10/20 or 60/20/20 split, depending on the dataset size.

Since we have already split the dataset into train (80%) and test (20%), we will split the training set further so that the overall proportions are roughly 60/20/20 (train/validation/test).
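A sketch of the second split (object names and the seed are assumptions; the exact row counts printed below depend on the data):

    set.seed(123)
    val_idx    <- createDataPartition(train$price, p = 0.25, list = FALSE)  # 25% of train = 20% of all data
    validation <- train[val_idx, ]
    train2     <- train[-val_idx, ]
    c(nrow(train2), nrow(validation), nrow(test))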

## [1] 1149  383  384

The best model seems to be polynomial regression with \(p=9\).

Let’s see how it performs on the test set:

As we can see, the behaviour is slightly better than with \(p=13\), but the spurious movement in the right tail did not vanish.

Parameter Tuning with Cross-Validation

Leave-One-Out

The example above shows that a single validation split is not the ultimate solution. The validation set in the previous example contained the same outlier in the right tail, which is why the model with \(p=9\) performed best; but that outlier may not always be there.

A better approach is to split the data into \(k\) pieces. Each time, we leave one piece out for validation and train on the rest. With \(k=10\), the procedure results in 10 models, each producing a different loss, and we pick the one that produces the minimum loss. The figure below illustrates this approach:

Source: Wikipedia

CV in Caret

To apply different cross-validation techniques, we will use the caret package. It has built-in functions for validating on a training set, but we still need to split the data into train and test after reading it:

The syntax is the same among different methods:
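What changes from one subsection to the next is only the trainControl() specification; for instance (the fold and repeat counts are assumptions):

    ctrl_loocv <- trainControl(method = "LOOCV")                                 # leave-one-out
    ctrl_cv    <- trainControl(method = "cv", number = 10)                       # 10-fold CV
    ctrl_rcv   <- trainControl(method = "repeatedcv", number = 10, repeats = 3)  # repeated 10-fold CV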

LOOCV

We will use the train function, which takes lm as the method and form = price ~ poly(sqft_living, 10) as the formula:
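A sketch of the call; the finalModel summarized below is the model refit on the full training set (note that LOOCV refits the model once per training row, so it can take a while):

    fit_loocv <- train(price ~ poly(sqft_living, 10), data = train,
                       method = "lm", trControl = ctrl_loocv)
    summary(fit_loocv$finalModel)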

## 
## Call:
## lm(formula = .outcome ~ ., data = dat)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1508281  -185928    -1532   146390  1556338 
## 
## Coefficients:
##                           Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 794798       8211  96.796  < 2e-16 ***
## `poly(sqft_living, 10)1`  19681431     321389  61.239  < 2e-16 ***
## `poly(sqft_living, 10)2`   4386368     321389  13.648  < 2e-16 ***
## `poly(sqft_living, 10)3`  -1056697     321389  -3.288  0.00103 ** 
## `poly(sqft_living, 10)4`    981317     321389   3.053  0.00230 ** 
## `poly(sqft_living, 10)5`    400651     321389   1.247  0.21273    
## `poly(sqft_living, 10)6`   -636126     321389  -1.979  0.04796 *  
## `poly(sqft_living, 10)7`    563077     321389   1.752  0.07997 .  
## `poly(sqft_living, 10)8`   1319828     321389   4.107 4.23e-05 ***
## `poly(sqft_living, 10)9`   1610866     321389   5.012 6.01e-07 ***
## `poly(sqft_living, 10)10`   860661     321389   2.678  0.00749 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 321400 on 1521 degrees of freedom
## Multiple R-squared:  0.7252, Adjusted R-squared:  0.7234 
## F-statistic: 401.4 on 10 and 1521 DF,  p-value: < 2.2e-16

The summary above shows the final model, refit on the full training set after the leave-one-out validation.

k-fold CV
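Only the resampling control changes; a sketch using the 10-fold control defined above:

    fit_cv <- train(price ~ poly(sqft_living, 10), data = train,
                    method = "lm", trControl = ctrl_cv)
    summary(fit_cv$finalModel)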

## 
## Call:
## lm(formula = .outcome ~ ., data = dat)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1508281  -185928    -1532   146390  1556338 
## 
## Coefficients:
##                           Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 794798       8211  96.796  < 2e-16 ***
## `poly(sqft_living, 10)1`  19681431     321389  61.239  < 2e-16 ***
## `poly(sqft_living, 10)2`   4386368     321389  13.648  < 2e-16 ***
## `poly(sqft_living, 10)3`  -1056697     321389  -3.288  0.00103 ** 
## `poly(sqft_living, 10)4`    981317     321389   3.053  0.00230 ** 
## `poly(sqft_living, 10)5`    400651     321389   1.247  0.21273    
## `poly(sqft_living, 10)6`   -636126     321389  -1.979  0.04796 *  
## `poly(sqft_living, 10)7`    563077     321389   1.752  0.07997 .  
## `poly(sqft_living, 10)8`   1319828     321389   4.107 4.23e-05 ***
## `poly(sqft_living, 10)9`   1610866     321389   5.012 6.01e-07 ***
## `poly(sqft_living, 10)10`   860661     321389   2.678  0.00749 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 321400 on 1521 degrees of freedom
## Multiple R-squared:  0.7252, Adjusted R-squared:  0.7234 
## F-statistic: 401.4 on 10 and 1521 DF,  p-value: < 2.2e-16

Repeated CV

Again we use the train function with lm as the method and price ~ poly(sqft_living, 10) as the formula, this time with repeated k-fold cross-validation:
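A sketch with the repeated-CV control defined above (10 folds repeated 3 times is our assumption):

    fit_rcv <- train(price ~ poly(sqft_living, 10), data = train,
                     method = "lm", trControl = ctrl_rcv)
    summary(fit_rcv$finalModel)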

## 
## Call:
## lm(formula = .outcome ~ ., data = dat)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1508281  -185928    -1532   146390  1556338 
## 
## Coefficients:
##                           Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 794798       8211  96.796  < 2e-16 ***
## `poly(sqft_living, 10)1`  19681431     321389  61.239  < 2e-16 ***
## `poly(sqft_living, 10)2`   4386368     321389  13.648  < 2e-16 ***
## `poly(sqft_living, 10)3`  -1056697     321389  -3.288  0.00103 ** 
## `poly(sqft_living, 10)4`    981317     321389   3.053  0.00230 ** 
## `poly(sqft_living, 10)5`    400651     321389   1.247  0.21273    
## `poly(sqft_living, 10)6`   -636126     321389  -1.979  0.04796 *  
## `poly(sqft_living, 10)7`    563077     321389   1.752  0.07997 .  
## `poly(sqft_living, 10)8`   1319828     321389   4.107 4.23e-05 ***
## `poly(sqft_living, 10)9`   1610866     321389   5.012 6.01e-07 ***
## `poly(sqft_living, 10)10`   860661     321389   2.678  0.00749 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 321400 on 1521 degrees of freedom
## Multiple R-squared:  0.7252, Adjusted R-squared:  0.7234 
## F-statistic: 401.4 on 10 and 1521 DF,  p-value: < 2.2e-16

Fine-Tuning with CV

Now, let’s fit polynomial regressions of different degrees and compare them:
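A sketch of fitting degrees 1 to 10 on shared cross-validation folds and comparing them with caret's resamples(); the fold setup, the seed, and the model names are assumptions, so the exact numbers below come from the author's run:

    set.seed(42)
    folds    <- createFolds(train$price, k = 10, returnTrain = TRUE)   # shared folds for a fair comparison
    ctrl_cmp <- trainControl(method = "cv", index = folds)

    models <- lapply(1:10, function(d)
      train(price ~ poly(sqft_living, d), data = train,
            method = "lm", trControl = ctrl_cmp))
    names(models) <- sprintf("Model%02d", 1:10)

    summary(resamples(models), metric = "RMSE")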

##             Min.  1st Qu.   Median      Mean  3rd Qu.       Max. NA's
## Model01 284612.7 321858.5 342107.3  346685.8 368302.6   422991.1    0
## Model02 269984.3 302626.6 324639.4  329238.2 353629.5   392286.2    0
## Model03 269124.3 303032.2 332235.5  330894.6 353798.8   397854.9    0
## Model04 271373.2 315258.8 330550.3  332041.2 347136.2   397091.3    0
## Model05 280040.9 315571.1 328601.5  331596.8 340613.5   412780.4    0
## Model06 248981.5 314619.6 335604.0  339024.4 366154.7   449801.3    0
## Model07 250551.2 311934.9 335346.4  368024.1 366977.2  1446515.8    0
## Model08 278307.1 320561.3 332965.0  342464.2 362189.6   496181.1    0
## Model09 286268.1 309831.3 332083.0  970463.3 343815.6 15151456.8    0
## Model10 280660.7 317794.4 336441.7 1535824.7 361751.4 31376473.0    0

Judging by the minimum RMSE across resamples, the 6th model achieves the lowest loss, and we can use it as the best fit (note that by mean or median RMSE, Model02 performs best).