4 Resampling Methods and Model Selection


4.1 Seminar

You will need to load the core library for the course textbook:

library(ISLR)

4.1.1 Exercise

In the lab session for this topic (Sections 5.3.2 and 5.3.3 in James et al.), we saw that the cv.glm() function can be used to compute the LOOCV test error estimate. Alternatively, one could compute those quantities using just the glm() and predict.glm() functions, and a for loop. You will now take this approach in order to compute the LOOCV error for a simple logistic regression model on the Weekly dataset. Recall that in the context of classification problems, the LOOCV error is given by Equation (5.4) in Section 5.1.5 (page 184) of James et al.
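In the textbook's notation, that quantity is the average of the n leave-one-out error indicators:

\[
\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{Err}_i, \qquad \mathrm{Err}_i = I\left(y_i \neq \hat{y}_i\right),
\]

where \(\hat{y}_i\) is the class predicted for observation \(i\) by the model fitted without observation \(i\), and \(I(\cdot)\) is the indicator function.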

HINT: If you’re not sure how to write a for loop, try writing a simple one that just prints out a sequence of numbers. For example:

for (i in 1:5) {
  print(i)
}
  1. Fit a logistic regression model that predicts Direction using Lag1 and Lag2.

  2. Fit a logistic regression model that predicts Direction using Lag1 and Lag2 using all but the first observation.

  3. Use the model from step 2 to predict the direction of the first observation. You can do this by predicting that the first observation will go up if P(Direction = "Up" | Lag1, Lag2) > 0.5. Was this observation correctly classified?

  4. Write a for loop from i=1 to i=n, where n is the number of observations in the dataset, that performs each of the following steps:

    1. Fit a logistic regression model using all but the i-th observation to predict Direction using Lag1 and Lag2.

    2. Compute the posterior probability of the market moving up for the i-th observation.

    3. Use the posterior probability for the i-th observation in order to predict whether or not the market moves up.

    4. Determine whether or not an error was made in predicting the direction for the i-th observation. If an error was made, then indicate this as a 1, and otherwise indicate it as a 0.

  5. Take the average of the n error indicators obtained in step 4 in order to obtain the LOOCV estimate for the test error. Comment on the results. (One possible skeleton for the loop in step 4 is sketched below.)
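If you are unsure how to put steps 4 and 5 together, the skeleton below shows one possible structure; the object names (errs, fit, prob, pred) are arbitrary, and the formula and threshold follow parts 1 to 3 above:

n <- nrow(Weekly)
errs <- rep(0, n)                      # will hold the n error indicators
for (i in 1:n) {
  # (i) fit the model leaving out the i-th observation
  fit <- glm(Direction ~ Lag1 + Lag2, data = Weekly[-i, ], family = binomial)
  # (ii) posterior probability that the market moves up for the i-th observation
  prob <- predict(fit, newdata = Weekly[i, ], type = "response")
  # (iii) predicted direction based on a 0.5 threshold
  pred <- ifelse(prob > 0.5, "Up", "Down")
  # (iv) record 1 if the prediction was wrong, 0 otherwise
  errs[i] <- as.numeric(pred != Weekly$Direction[i])
}
mean(errs)                             # LOOCV estimate of the test error (step 5)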

4.1.2 Exercise

In this exercise, we will predict the number of applications received (the Apps variable) using the other variables in the College dataset.

  1. Split the dataset into a training set and a test set.
  2. Fit a linear model using least squares on the training set, and report the test error obtained.
  3. Fit a ridge regression model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained.
  4. Fit a lasso model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
  5. Fit a PCR model on the training set, with \(M\) chosen by cross-validation. Report the test error obtained, along with the value of \(M\) selected by cross-validation.
  6. Fit a PLS model on the training set, with \(M\) chosen by cross-validation. Report the test error obtained, along with the value of \(M\) selected by cross-validation.
  7. Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches? (One possible workflow is sketched below.)
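If you have not used these functions before, the outline below shows one possible workflow; it assumes the glmnet and pls packages are installed, and object names such as train, x and pcr.fit are arbitrary:

library(glmnet)   # ridge regression and the lasso
library(pls)      # PCR and PLS

# (1) split College into a training set and a test set
set.seed(1)
train <- sample(nrow(College), nrow(College) / 2)
test  <- setdiff(seq_len(nrow(College)), train)

# (2) least squares on the training set, then the test MSE
lm.fit  <- lm(Apps ~ ., data = College, subset = train)
lm.pred <- predict(lm.fit, College[test, ])
mean((lm.pred - College$Apps[test])^2)

# (3)/(4) ridge (alpha = 0) and lasso (alpha = 1), lambda chosen by cross-validation
x <- model.matrix(Apps ~ ., College)[, -1]
y <- College$Apps
cv.ridge   <- cv.glmnet(x[train, ], y[train], alpha = 0)
ridge.pred <- predict(cv.ridge, s = "lambda.min", newx = x[test, ])
mean((ridge.pred - y[test])^2)
# for the lasso, repeat with alpha = 1 and inspect coef(cv.lasso, s = "lambda.min")

# (5) PCR with M chosen by cross-validation; plsr() works the same way for (6)
pcr.fit <- pcr(Apps ~ ., data = College, subset = train,
               scale = TRUE, validation = "CV")
validationplot(pcr.fit, val.type = "MSEP")                 # read the chosen M off the plot
pcr.pred <- predict(pcr.fit, College[test, ], ncomp = 5)   # replace 5 with your chosen M
mean((pcr.pred - College$Apps[test])^2)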

4.1.3 Exercise

We will now try to predict the per capita crime rate (crim) in the Boston dataset.

  1. Try out some of the regression methods explored this week, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider. (A sketch of possible starting points follows this list.)
  2. Propose a model (or set of models) that seem to perform well on this dataset, and justify your answer. Make sure that you are evaluating model performance using validation set error, cross-validation, or some other reasonable alternative, as opposed to using training error.
  3. Does your chosen model involve all of the features in the dataset? Why or why not?
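As a starting point, the sketch below sets up best subset selection and the lasso for this problem; it assumes the Boston data are loaded from the MASS package and that the leaps and glmnet packages are installed, and it is an outline rather than a complete answer:

library(MASS)     # Boston data
library(leaps)    # best subset selection
library(glmnet)   # the lasso and ridge regression

# validation-set split
set.seed(1)
train <- sample(nrow(Boston), nrow(Boston) / 2)
test  <- setdiff(seq_len(nrow(Boston)), train)

# best subset selection on the training data
best.fit <- regsubsets(crim ~ ., data = Boston[train, ], nvmax = ncol(Boston) - 1)
summary(best.fit)$bic                     # one way to compare subset sizes

# lasso with lambda chosen by cross-validation
x <- model.matrix(crim ~ ., Boston)[, -1]
y <- Boston$crim
cv.lasso   <- cv.glmnet(x[train, ], y[train], alpha = 1)
lasso.pred <- predict(cv.lasso, s = "lambda.min", newx = x[test, ])
mean((lasso.pred - y[test])^2)            # validation-set MSE
coef(cv.lasso, s = "lambda.min")          # which coefficients are non-zero?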