knitr::opts_chunk$set(eval=params$answers, message=FALSE)
library(ISLR)
library(tidyverse)
library(glmnet)
library(caret)
library(gridExtra)

For this lab you need the packages ISLR, tidyverse (which provides ggplot2 and dplyr), glmnet, caret, and gridExtra.
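
If one of these packages is not yet installed, install it first (this only needs to be done once per machine):

install.packages(c("ISLR", "tidyverse", "glmnet", "caret", "gridExtra"))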

The data set for this lab is College; check its help page for a description of its contents.

The lab is structured as follows:

  1. Explore, normalize and standardize the data

  2. Split data in train/test set

  3. Train filter, wrapper and embedded methods, and compare test MSE

  4. Introduce two-way interaction effects, and train the lasso

1 Data preparation

  1. Display a summary of the data set. Check for impossible values (e.g., percentages outside the range 0-100), and delete these cases from the data (overwrite the original data).
summary(College)

College <- College %>% 
  filter(Grad.Rate <= 100,
         PhD <= 100)
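Before overwriting the data you can inspect the cases that will be removed; a quick check (run before the filter above, and assuming Grad.Rate and PhD are the only offending variables) is:

# Show the cases with impossible percentage values
College %>% filter(Grad.Rate > 100 | PhD > 100)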
  1. Since we are going to fit linear models, we want the variables in the data to have approximately normal distributions. To visually check for skew and outliers, standardize all variables in the data set (first convert the dichotomous factor Private into a numeric variable), and display the boxplots of the standardized variables in a single plot array.
College %>% 
  mutate(Private = as.numeric(Private),
         across(everything(), scale)) %>% 
  pivot_longer(everything(), values_to = "z-score", names_to = "variable") %>% 
  ggplot(aes(`z-score`, variable)) +
  geom_boxplot() +
  theme_minimal()
  1. Apply a normalizing log transformation to the variables with heavy skew (for variables with left skew, first reverse the scores). Make sure that, before you transform the variables, their minimum score is at least 1, in order to avoid taking the log of 0, which is minus infinity. Display the boxplots of the transformed variables to check if the transformation had the desired effect, and then save the resulting data (overwrite the previous version of College).
College <- College %>% 
  mutate(Private = as.numeric(Private),
         # reverse the left-skewed percentage variables so all skew is to the right
         across(c(PhD, Top25perc, Terminal), ~ 101 - .),
         # log-transform the skewed variables; + 1 keeps the log argument at 1 or above
         across(c(2, 3, 4, 5, 7, 8, 11, 12, 13, 14, 17), ~ log(. + 1)),
         # standardize all variables
         across(everything(), scale)) 
  
College  %>% 
  pivot_longer(everything(), values_to = "z-score", names_to = "variable") %>% 
  ggplot(aes(`z-score`, variable)) +
  geom_boxplot() +
  theme_minimal()
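If you prefer a numeric check over eyeballing the boxplots, a rough sample-skewness function (a sketch, not part of the lab) can be mapped over the columns; values near 0 indicate little skew:

# Rough sample skewness: third central moment divided by the cubed SD
skewness <- function(x) {
  x <- as.numeric(x)  # drop the matrix structure left by scale()
  mean((x - mean(x))^3) / sd(x)^3
}
round(sapply(College, skewness), 2)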
  1. Display 16 scatterplots with Grad.Rate on the y-axis and each of the 16 numeric predictors on the x-axis (so exclude Private). Check the scatterplots for non-linearity.
# 4 x 4 plot array: each numeric predictor (columns 2-17) against Grad.Rate (column 18)
par(mfrow = c(4, 4))
for (i in 2:17) plot(College[, c(i, 18)], cex = .7)

2 Train/test set

  1. Create the vector train with the row numbers for the train set, containing 75% of the cases. Set the seed to 1 for reproducibility of the results.
set.seed(1)
train <- createDataPartition(y = College$Grad.Rate, p = .75, list = FALSE)
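
As an optional sanity check, you can verify that the partition indeed contains roughly 75% of the rows:

# createDataPartition() with list = FALSE returns a one-column matrix of row numbers
nrow(train) / nrow(College)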

2.1 Filter method

Filter methods select a set of features before training the model. Here we select features from the training set based on their correlation with Grad.Rate; as selection criterion we use an absolute correlation larger than 0.3.

  1. Display the last column of the correlation matrix of the training set (the 18th column, pertaining to the outcome variable Grad.Rate). Which features satisfy the criterion?
cor(College[train, ])[-18, 18] %>%  round(3) 
  1. Make a character vector vars containing the names of the features that satisfy our selection criterion.
vars <- names(which(abs(cor(College[train, ])[-18, 18]) > .3))
  1. Fit the linear model to the training data with the selected features as predictors of Grad.Rate, and save the object as train_filter.
train_filter <- lm(Grad.Rate ~ ., data = College[train, c(vars, "Grad.Rate")])
  1. Display the summary of train_filter. Although all predictors have an absolute correlation of at least .3 with Grad.Rate, some of them are not significant predictors. Why is that?
summary(train_filter)
  1. Obtain and save the predictions for the test set.
filter_pred <- predict(train_filter, newdata = College[-train, ])
  1. Compute and display the test MSE. Save this value as filter_mse.
(filter_mse <- mean((College[-train, "Grad.Rate"] - filter_pred)^2))
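
For reference, the test MSE used throughout this lab is the mean squared prediction error over the \(n_{test}\) cases in the test set:

\[\text{MSE}_{test} = \frac{1}{n_{test}} \sum_{i \in test} \left(y_i - \hat{y}_i\right)^2\]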

2.2 Wrapper method

Wrapper methods select features while training the model. Here we use the backward step-wise selection procedure. Check the help page of the function step() for assistance on how to conduct a backward step-wise selection.

  1. Fit the linear model with all features as predictors, and apply the backward step-wise step() procedure. Save the result under the name train_step.
# step() without a scope argument performs backward selection by default
train_step <- step(lm(Grad.Rate ~ ., College[train, ]))
  1. Display the summary of the step model. Compare the explained variance of this model and the filter model. Which model performs best on the training data?
summary(train_step)
  1. Obtain and save the predictions of the step model for the test set.
pred_step  <- predict(train_step, newdata = College[-train, ])
  1. Compute the test MSE for the step model, and save it as step_mse.
(step_mse <- mean((College[-train, "Grad.Rate"] - pred_step)^2))

2.3 Embedded method

In the embedded method, the feature selection process is embedded in the model training phase. We use the function glmnet() to fit the lasso and ridge models to the training data. This function computes the deviance for a sequence of values of the hyperparameter lambda, which determines the budget for the coefficients. The function cv.glmnet() is then applied to the same training data to determine the optimal value of lambda with cross-validation.
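
For a Gaussian outcome, glmnet() minimizes the elastic-net criterion below, where alpha = 1 gives the pure lasso penalty and alpha = 0 the pure ridge penalty:

\[\min_{\beta_0,\,\beta}\; \frac{1}{2n}\sum_{i=1}^{n}\left(y_i - \beta_0 - x_i^\top\beta\right)^2 + \lambda\left[\frac{1-\alpha}{2}\|\beta\|_2^2 + \alpha\|\beta\|_1\right]\]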

  1. Create the objects train_lasso and train_ridge with glmnet(). Set the argument alpha = 1 for the lasso, and alpha = 0 for the ridge. Do not forget that the argument x has to be of class matrix!
train_ridge <- glmnet(x = as.matrix(College[train, -18]), 
                      y = College[train, 18], 
                      alpha = 0)

train_lasso <- glmnet(x = as.matrix(College[train, -18]), 
                      y = College[train, 18], 
                      alpha = 1)
  1. Plot both glmnet objects. Interpret the plots.
par(mfrow = c(2, 1))
plot(train_ridge, label = TRUE)
plot(train_lasso, label = TRUE)
  1. Cross-validate with cv.glmnet() to find the optimal lambda values for alpha = 0 and alpha = 1. Include the argument type.measure = "mse". Save the objects as cv_ridge and cv_lasso.
cv_ridge <- cv.glmnet(x = as.matrix(College[train, -18]), 
                      y = College[train, 18], 
                      alpha = 0, 
                      type.measure = "mse")

cv_lasso <- cv.glmnet(x = as.matrix(College[train, -18]), 
                      y = College[train, 18], 
                      alpha = 1, 
                      type.measure = "mse")
  1. Plot both cv.glmnet objects, and interpret.
par(mfrow=c(1, 2))
plot(cv_ridge)
plot(cv_lasso)
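The two dotted vertical lines in each plot mark lambda.min (the lambda with the lowest cross-validated MSE) and lambda.1se (the largest lambda within one standard error of that minimum). Both can be extracted directly:

# Optimal lambda values; coef() and predict() on a cv.glmnet object use lambda.1se by default
cv_lasso$lambda.min
cv_lasso$lambda.1se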
  1. Display a matrix with the coefficients of the linear model with all features as predictors, and the coefficients of the objects cv_ridge and cv_lasso in the columns. Round all coefficients to three decimals. Compare the effects of the shrinkage.
round(cbind(coef(lm(Grad.Rate ~ ., College[train, ])), 
            coef(cv_ridge), 
            coef(cv_lasso)), 3)
  1. Obtain the predictions for the test set - check the help page for predict.cv.glmnet() - and compute the test MSEs for the ridge and the lasso.
pred_ridge <- predict(cv_ridge, newx = as.matrix(College[-train, -18]))

pred_lasso <- predict(cv_lasso, newx = as.matrix(College[-train, -18]))


ridge_mse  <- mean((College[-train, "Grad.Rate"] - pred_ridge)^2)

lasso_mse  <- mean((College[-train, "Grad.Rate"] - pred_lasso)^2)
  1. Compare the test MSEs for the filter method, the step function, and the ridge and lasso. Which method performs best on the test data?
data.frame(filter_mse, step_mse, ridge_mse, lasso_mse)

3 Interactions

So far we have not considered any pairwise interactions in our models. It may very well be that the effect of one feature on Grad.Rate is moderated by another feature. The problem with finding relevant interactions is that there are a lot of them (\({17\choose2}=136\)). The lasso, however, is particularly suited for such a task.

  1. Obtain the model matrix of the model Grad.Rate ~ .^2 for the training set (excluding the intercept), and cross-validate the lasso version of this model with cv.glmnet(). Save the result as cv_lasso2.
# model.matrix() expands all main effects and two-way interactions; column 1 (the intercept) is dropped
cv_lasso2 <- cv.glmnet(x = model.matrix(Grad.Rate ~ .^2, College)[train, -1], 
                       y = College$Grad.Rate[train], 
                       alpha = 1, 
                       type.measure = "mse")
  1. Plot the object. How many variables have nonzero coefficients at the optimal value for lambda?
plot(cv_lasso2)
  1. Display the coefficients. How many of these are interaction effects?
round(coef(cv_lasso2) , 3)
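Rather than counting by hand, you can tally the selected terms programmatically; interaction terms are recognizable by the ":" in their names (a sketch):

# Tally the selected terms at lambda.1se
cf <- as.matrix(coef(cv_lasso2))[-1, 1]  # named numeric vector, intercept dropped
sum(cf != 0)                             # number of nonzero coefficients
sum(cf != 0 & grepl(":", names(cf)))     # how many of these are interactions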
  1. Compute the test MSE for cv_lasso2. How does it compare to the models without interactions?
pred_lasso2 <- predict(cv_lasso2, newx = model.matrix(Grad.Rate ~ .^2, College)[-train, -1])
(lasso2_mse   <- mean((College[-train, "Grad.Rate"] - pred_lasso2)^2))

END OF LAB