CS 229, Autumn 2014 (Solution)

Midterm Examination
Question                          Points
1  Least Squares                   /16
2  Generative Learning             /14
3  Generalized Linear Models       /18
4  Support Vector Regression       /18
5  Learning Theory                 /20
6  Short Answers                   /24
Total                             /110
Name of Student:
SUNetID: @stanford.edu
Signed:
1. [16 points] Least Squares
As described in class, in least squares regression we have a cost function:
J(θ) = Σ_{i=1}^{m} (hθ(x(i)) − y(i))² = (Xθ − ~y)T (Xθ − ~y)
The goal of least squares regression is to find θ such that we minimize J(θ) given the training data.
Let’s say that we had an original set of n features, so that the training inputs were represented by the design matrix X ∈ Rm×(n+1). However, we now gain access to one additional feature for every example. As a result, we now have an additional vector of features ~v ∈ Rm×1 for our training set that we wish to include in our regression. We can do this by creating a new design matrix: X̃ = [X ~v] ∈ Rm×(n+2).
Therefore the new parameter vector is θnew = [θ; p] ∈ Rn+2, where p ∈ R is the parameter corresponding to the new feature vector ~v.
Note: For mathematical simplicity, throughout this problem you can assume that XT X = I ∈ R(n+1)×(n+1), X̃T X̃ = I ∈ R(n+2)×(n+2), and ~vT~v = 1. This is called an orthonormality assumption – specifically, the columns of X̃ are orthonormal. The conclusions of the problem hold even if we do not make this assumption, but this will make your derivations easier.
(a) [2 points] Let θ̂ = argminθ J(θ) be the minimizer of the original least squares objective (using the original design matrix X). Using the orthonormality assumption, show that J(θ̂) = (XXT~y − ~y)T (XXT~y − ~y). I.e., show that this is the value of minθ J(θ) (the value of the objective at the minimum).
(b) [5 points] Now let θ̂new be the minimizer for J̃(θnew) = (X̃θnew − ~y)T (X̃θnew − ~y). Find the new minimized objective J̃(θ̂new) and write this expression in the form:
J̃(θ̂new) = J(θ̂) + f(X,~v,~y)
where J(θ̂) is as derived in part (a) and f is some function of X, ~v, and ~y.
(c) [6 points] Prove that the optimal objective value does not increase upon adding a feature to the design matrix. That is, show J̃(θ̂new) ≤ J(θ̂).
(d) [3 points] Does the above result show that if we keep increasing the number of features, we can always get a model that generalizes better than a model with fewer features? Explain why or why not.
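The claim in part (c) can be checked numerically. The sketch below uses randomly generated data rather than the orthonormality assumption, since (as the problem notes) the conclusion holds in general:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.normal(size=(m, n + 1))   # original design matrix
y = rng.normal(size=m)
v = rng.normal(size=(m, 1))       # the additional feature vector

def min_sse(A, y):
    """Minimized least-squares objective: min_theta ||A theta - y||^2."""
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = A @ theta - y
    return float(r @ r)

J_old = min_sse(X, y)
J_new = min_sse(np.hstack([X, v]), y)   # new design matrix [X  v]
assert J_new <= J_old + 1e-9            # adding a feature never increases the training objective
```

Note that this says nothing about generalization, which is exactly the point of part (d).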
2. [14 points] Decision Boundaries for Generative Models
(a) [7 points] Consider the multinomial event model of Naive Bayes. Our goal in this problem is to show that this is a linear classifier.
For a given text document x, let c1,…,cV indicate the number of times each word (out of V words) appears in the document. Thus, ci ∈ {0,1,2,…} counts the occurrences of word i. Recall that the Naive Bayes model uses parameters φy = p(y = 1), φi|y=1 = p(word i appears in a specific document position | y = 1) and φi|y=0 = p(word i appears in a specific document position | y = 0).
We say a classifier is linear if it assigns the label y = 1 using a decision rule of the form
Σ_{i=1}^{V} wici + b ≥ 0
I.e., the classifier predicts “y = 1” if Σ_{i=1}^{V} wici + b ≥ 0, and predicts “y = 0” otherwise.
Show that Naive Bayes is a linear classifier, and clearly state the values of wi and b in terms of the Naive Bayes parameters. (Don’t worry about whether the decision rule uses “≥” or “>.”) Hint: consider using log-probabilities.
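Following the log-probability hint, a candidate linear rule can be checked against the direct comparison of unnormalized log-posteriors. The sketch below uses randomly generated parameter values; the specific expressions for w and b are one consistent choice, not necessarily the only valid answer:

```python
import numpy as np

rng = np.random.default_rng(1)
V = 6
phi_y = 0.4
phi1 = rng.dirichlet(np.ones(V))   # phi_{i|y=1}, word distribution for class 1
phi0 = rng.dirichlet(np.ones(V))   # phi_{i|y=0}, word distribution for class 0

w = np.log(phi1) - np.log(phi0)          # w_i = log(phi_{i|1} / phi_{i|0})
b = np.log(phi_y) - np.log(1 - phi_y)    # b = log(phi_y / (1 - phi_y))

for _ in range(100):
    c = rng.integers(0, 5, size=V)       # word counts for a random document
    # direct comparison of unnormalized log-posteriors (the shared
    # multinomial coefficient cancels between the two classes)
    log_p1 = c @ np.log(phi1) + np.log(phi_y)
    log_p0 = c @ np.log(phi0) + np.log(1 - phi_y)
    assert (w @ c + b >= 0) == (log_p1 >= log_p0)
```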

(b) [7 points] In Problem Set 1, you showed that Gaussian Discriminant Analysis (GDA) is a linear classifier. In this problem, we will show that a modified version of GDA has a quadratic decision boundary.
Recall that GDA models p(x|y) using a multivariate normal distribution, where (x|y = 0) ∼ N(µ0,Σ) and (x|y = 1) ∼ N(µ1,Σ), where we used the same Σ for both Gaussians. For this question, we will instead use two covariance matrices Σ0,Σ1 for the two labels. So, (x|y = 0) ∼ N(µ0,Σ0) and (x|y = 1) ∼ N(µ1,Σ1).

The model distributions can now be written as:
p(x|y = 0) = (2π)^{−(n+1)/2} |Σ0|^{−1/2} exp(−(1/2)(x − µ0)T Σ0^{−1}(x − µ0))
p(x|y = 1) = (2π)^{−(n+1)/2} |Σ1|^{−1/2} exp(−(1/2)(x − µ1)T Σ1^{−1}(x − µ1))
Let’s follow a binary decision rule, where we predict y = 1 if p(y = 1|x) ≥ p(y = 0|x), and y = 0 otherwise. Show that if Σ0 ≠ Σ1, then the separating boundary is quadratic in x.
That is, simplify the decision rule “p(y = 1|x) ≥ p(y = 0|x)” to the form “xT Ax + BT x + C ≥ 0” (supposing that x ∈ Rn+1), for some A ∈ R(n+1)×(n+1), B ∈ Rn+1, C ∈ R and A ≠ 0. Please clearly state your values for A, B and C.
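A candidate (A, B, C) can be verified numerically against the Gaussian log-densities. The coefficients below are one consistent choice obtained by expanding the log-density comparison (a sketch with randomly generated model parameters, writing n for the input dimension):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
phi = 0.3                                  # p(y = 1)
mu0, mu1 = rng.normal(size=n), rng.normal(size=n)
M0, M1 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
S0, S1 = M0 @ M0.T + n * np.eye(n), M1 @ M1.T + n * np.eye(n)  # SPD covariances
P0, P1 = np.linalg.inv(S0), np.linalg.inv(S1)                  # precision matrices

# coefficients of the quadratic rule:  x^T A x + B^T x + C >= 0  <=>  predict y = 1
A = 0.5 * (P0 - P1)
B = P1 @ mu1 - P0 @ mu0
C = (0.5 * mu0 @ P0 @ mu0 - 0.5 * mu1 @ P1 @ mu1
     + 0.5 * np.log(np.linalg.det(S0) / np.linalg.det(S1))
     + np.log(phi / (1 - phi)))

def log_gauss(x, mu, S, P):
    d = x - mu
    return -0.5 * (len(x) * np.log(2 * np.pi) + np.log(np.linalg.det(S)) + d @ P @ d)

for _ in range(100):
    x = rng.normal(size=n)
    direct = (log_gauss(x, mu1, S1, P1) + np.log(phi)
              >= log_gauss(x, mu0, S0, P0) + np.log(1 - phi))
    assert (x @ A @ x + B @ x + C >= 0) == direct
```

Note that A = 0 exactly when Σ0 = Σ1, which recovers the linear boundary of ordinary GDA.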
3. [18 points] Generalized Linear Models
In this problem you will build a Generalized Linear Model (GLM) for a response variable y (taking on values {1,2,3,…}), whose distribution (parameterized by φ) is modeled as:
p(y; φ) = (1 − φ)^{y−1} φ
This distribution is known as the geometric distribution, and is used to model network connections and many other problems.
(a) i. [5 points] Show that the geometric distribution is an exponential family distribution. You should explicitly specify b(y), η, T(y), and a(η). Also specify what φ is in terms of η.

ii. [5 points] Suppose that we have an IID training set {(x(i), y(i)); i = 1,…,m} and we wish to model this using a GLM based on a geometric distribution. Find the log-likelihood ℓ(θ) = log Π_{i=1}^{m} p(y(i) | x(i); θ) defined with respect to the entire training set.
(b) [6 points] Derive the Hessian H and the gradient vector of the log likelihood with respect to θ, and state what one step of Newton’s method for maximizing the log likelihood would be.
(c) [2 points] Show that the Hessian is negative semi-definite. This shows that the optimization objective is concave, and hence Newton’s method maximizes the log-likelihood.
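The Newton iteration of parts (b)–(c) can be sketched numerically. The gradient and Hessian below follow from writing ℓ(θ) with η(i) = θT x(i) and φ = 1 − e^η; the data-generation choices (positive features, negative θ) are only there to keep φ inside (0, 1), and the step-halving guard is a practical safeguard, not part of the derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 500
X = rng.uniform(0.1, 1.0, size=(m, 2))     # positive features keep eta = theta^T x < 0 below
theta_true = -rng.uniform(0.5, 1.5, size=2)
phi = 1.0 - np.exp(X @ theta_true)         # phi = 1 - e^eta lies in (0, 1)
y = rng.geometric(phi)                     # y in {1, 2, ...}

def grad_hess(t):
    p = 1.0 - np.exp(X @ t)
    g = X.T @ (y - 1.0 / p)                # gradient: sum_i (y_i - E[y_i]) x_i, with E[y] = 1/phi
    H = -(X.T * ((1.0 - p) / p**2)) @ X    # Hessian: -sum_i (1 - phi_i)/phi_i^2 x_i x_i^T
    return g, H

t = theta_true.copy()                      # start Newton's method near the optimum
for _ in range(25):
    g, H = grad_hess(t)
    step = np.linalg.solve(H, g)           # Newton update: t := t - H^{-1} g
    s = 1.0
    while np.any(X @ (t - s * step) >= 0): # halve the step if it would leave eta < 0
        s *= 0.5
    t -= s * step

g, H = grad_hess(t)
assert np.linalg.norm(g) < 1e-6               # stationary point of the log-likelihood
assert np.all(np.linalg.eigvalsh(H) <= 1e-9)  # part (c): H is negative semi-definite
```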
4. [18 points] Support Vector Regression
In class, we showed how the SVM can be used for classification. In this problem, we will develop a modified algorithm, called the Support Vector Regression algorithm, which can instead be used for regression, with continuous valued labels y ∈ R.
Suppose we are given a training set {(x(1),y(1)),…,(x(m),y(m))}, where x(i) ∈ R(n+1) and y(i) ∈ R. We would like to find a hypothesis of the form hw,b(x) = wT x + b with a small value of w. Our (convex) optimization problem is:
minw,b (1/2)‖w‖²
s.t. y(i) − (wT x(i) + b) ≤ ε,  i = 1,…,m   (1)
     (wT x(i) + b) − y(i) ≤ ε,  i = 1,…,m   (2)
where ε > 0 is a given, fixed value. Notice how the original functional margin constraint has been modified to now represent the distance between the continuous y and our hypothesis’ output.
(a) [4 points] Write down the Lagrangian for the optimization problem above. We suggest you use two sets of Lagrange multipliers αi and αi∗, corresponding to the two inequality constraints (labeled (1) and (2) above), so that the Lagrangian would be written L(w, b, α, α∗).

(b) [10 points] Derive the dual optimization problem. You will have to take derivatives of the Lagrangian with respect to w and b.
(c) [4 points] Show that this algorithm can be kernelized. For this, you have to show that (i) the dual optimization objective can be written in terms of inner-products of training examples; and (ii) at test time, given a new x the hypothesis hw,b(x) can also be computed in terms of inner products.
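Part (c)(ii) rests on the observation that, at the optimum, w is a linear combination of the training examples, so the hypothesis can be evaluated with inner products alone. A minimal sketch (β below is an arbitrary stand-in for a dual solution α − α∗, not an actual solved SVR):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 3
Xtr = rng.normal(size=(m, n))      # training inputs
beta = rng.normal(size=m)          # hypothetical beta_i = alpha_i - alpha_i^*
b = 0.7

K = lambda u, v: u @ v             # linear kernel; any valid kernel fits this form

w = Xtr.T @ beta                   # w = sum_i beta_i x^(i), from setting dL/dw = 0
x = rng.normal(size=n)
h_primal = w @ x + b
h_kernel = sum(beta[i] * K(Xtr[i], x) for i in range(m)) + b   # inner products only
assert np.isclose(h_primal, h_kernel)
```

Replacing K with a nonlinear kernel (e.g. a Gaussian kernel) leaves h_kernel computable even though w would then live in an infinite-dimensional feature space.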
5. [20 points] Learning Theory
Suppose you are given a hypothesis h0 ∈ H, and your goal is to determine whether h0 has generalization error within η > 0 of the best hypothesis, h∗ = argminh∈H ε(h). More specifically, we say that a hypothesis h is η-optimal if ε(h) ≤ ε(h∗) + η. Here, we wish to answer the following question:
Given a hypothesis h0, is h0 η-optimal?
Let δ > 0 be some fixed constant, and consider a finite hypothesis class H of size |H| = k. For each h ∈ H, let ε̂(h) denote the training error of h with respect to some training set of m IID examples, and let ĥ = argminh∈H ε̂(h) denote the hypothesis that minimizes training error.
Now, consider the following algorithm:
1. Set γ = √((1/(2m)) log(2k/δ)).
2. If ε̂(h0) > ε̂(ĥ) + η + 2γ, then return NO.
3. If ε̂(h0) < ε̂(ĥ) + η − 2γ, then return YES.
4. Otherwise, return UNSURE.
Intuitively, the algorithm works by comparing the training error of h0 to the training error of the hypothesis ĥ with the minimum training error, and returns NO or YES only when ε̂(h0) is either significantly larger than or significantly smaller than ε̂(ĥ) + η.
(a) [6 points] First, show that if ε(h0) ≤ ε(h∗) + η (i.e., h0 is η-optimal), then the probability that the algorithm returns NO is at most δ.

(b) [6 points] Second, show that if ε(h0) > ε(h∗)+η (i.e., h0 is not η-optimal), then the probability that the algorithm returns YES is at most δ.
(c) [8 points] Finally, suppose that h0 = h∗, and let η > 0 and δ > 0 be fixed. Show that if m is sufficiently large, then the probability that the algorithm returns YES is at least 1 − δ.
Hint: observe that for fixed η and δ, as m → ∞, we have γ = √((1/(2m)) log(2k/δ)) → 0. This means that there are values of m for which 2γ < η − 2γ.
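The decision procedure in Steps 1–4 can be written out directly. The γ below is the standard radius from Hoeffding's inequality plus a union bound over the k hypotheses (stated here as an assumption, since the exam's Step 1 is what defines γ):

```python
import math

def gamma(m, k, delta):
    # Hoeffding + union bound: with probability >= 1 - delta,
    # |eps(h) - eps_hat(h)| <= gamma simultaneously for all k hypotheses
    return math.sqrt(math.log(2 * k / delta) / (2 * m))

def check_eta_optimal(err_h0, err_best, eta, m, k, delta):
    """The algorithm's decision rule, given the two training errors."""
    g = gamma(m, k, delta)
    if err_h0 > err_best + eta + 2 * g:
        return "NO"
    if err_h0 < err_best + eta - 2 * g:
        return "YES"
    return "UNSURE"

# as m grows, gamma -> 0, so the UNSURE band (width 4*gamma) collapses,
# which is exactly the hint for part (c)
assert gamma(100_000, 100, 0.05) < gamma(100, 100, 0.05)
assert check_eta_optimal(0.30, 0.10, 0.05, m=100_000, k=100, delta=0.05) == "NO"
```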

6. [24 points] Short answers
To discourage random guessing, one point will be deducted for a wrong answer on true/false or multiple choice questions! Also, no credit will be given for answers without a correct explanation.
(a) [3 points] You have an implementation of Newton’s method and gradient descent. Suppose that one iteration of Newton’s method takes twice as long as one iteration of gradient descent. Then, this implies that gradient descent will converge to the optimal objective faster. True/False?
(b) [3 points] A stochastic gradient descent algorithm for training logistic regression with a fixed learning rate will always converge to exactly the optimal setting of the parameters θ∗ = argmaxθ ℓ(θ), assuming a reasonable choice of the learning rate. True/False?
(c) [3 points] Given a valid kernel K(x,y) over a valid kernel?
(d) [3 points] Consider a 2 class classification problem with a dataset of inputs {x(1) = (−1,−1),x(2) = (−1,+1),x(3) = (+1,−1),x(4) = (+1,+1)}. Can a linear SVM (with no kernel trick) shatter this set of 4 points?
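For part (d), a useful identity: since x(1) + x(4) = x(2) + x(3), any affine function takes equal sums over the two diagonals of this point set, which constrains the sign patterns a linear classifier can realize. A numerical illustration (randomly sampled w, b):

```python
import numpy as np

pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)

# For any affine f(x) = w.x + b:  f(x1) + f(x4) = f(x2) + f(x3),
# because x1 + x4 = x2 + x3 = 0.  A labeling that needs the left sum
# positive and the right sum negative is therefore unreachable.
rng = np.random.default_rng(5)
for _ in range(1000):
    w, b = rng.normal(size=2), rng.normal()
    f = pts @ w + b
    assert np.isclose(f[0] + f[3], f[1] + f[2])
    # sign pattern (+, -, -, +) never appears
    assert not (f[0] > 0 and f[3] > 0 and f[1] < 0 and f[2] < 0)
```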
(e) [3 points] For linear hypotheses (i.e. of the form h(x) = wT x + b), the vector of learned weights w is always perpendicular to the separating hyperplane. True/False? Provide a counterexample if False, or a brief explanation if True.
(f) [3 points] Let H be a set of classifiers with a VC dimension of 5. Consider a set of 5 training examples {(x(1),y(1)),…,(x(5),y(5))}. Now we select a classifier h∗ from H by minimizing the classification error on the training set. Which one of the following is true?
i. x(5) will certainly be classified correctly (i.e. h∗(x(5)) = y(5))
ii. x(5) will certainly be classified incorrectly (i.e. h∗(x(5)) ≠ y(5))
iii. We cannot tell
Briefly justify your answer.
(g) [6 points] Suppose you would like to use a linear regression model in order to predict the price of houses. In your model, you use the features x0 = 1, x1 = size in square meters, x2 = height of roof in meters. Now, suppose a friend repeats the same analysis using exactly the same training set, only he represents the data instead using features x̃0 = 1, x̃1 = size in square meters, and x̃2 = height of roof in cm (so x̃2 = 100 x2).
i. [3 points] Suppose both of you run linear regression, solving for the parameters via the Normal equations. (Assume there are no degeneracies, so this gives a unique solution to the parameters.) You get parameters θ0, θ1, θ2; your friend gets θ̃0, θ̃1, θ̃2. Then the two models make identical predictions on any input. True/False?
ii. [3 points] Suppose both of you run linear regression, initializing the parameters to 0, and compare your results after running just one iteration of batch gradient descent. You get parameters θ0, θ1, θ2; your friend gets θ̃0, θ̃1, θ̃2. Then the two models make identical predictions on any input. True/False?
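Both sub-parts can be explored numerically. The sketch below uses made-up house data and a hypothetical price model; the two asserts at the end record what the experiment shows under these assumptions (identical predictions from the Normal equations, differing predictions after one gradient step):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 30
# columns: intercept, size in m^2, roof height in m
X = np.column_stack([np.ones(m), rng.uniform(50, 200, m), rng.uniform(2, 5, m)])
y = 1000 * X[:, 1] + 500 * X[:, 2] + rng.normal(size=m)   # hypothetical prices

Xf = X.copy()
Xf[:, 2] *= 100                      # the friend measures roof height in cm

# (i) Normal equations: parameters rescale, predictions coincide
th = np.linalg.solve(X.T @ X, X.T @ y)
thf = np.linalg.solve(Xf.T @ Xf, Xf.T @ y)
assert np.isclose(thf[2], th[2] / 100)
assert np.allclose(X @ th, Xf @ thf)

# (ii) one batch gradient-descent step from theta = 0: predictions differ
a = 1e-7                             # same tiny fixed learning rate for both
g = lambda A, t: A.T @ (A @ t - y)   # gradient of (1/2)||A t - y||^2
t1 = np.zeros(3) - a * g(X, np.zeros(3))
t1f = np.zeros(3) - a * g(Xf, np.zeros(3))
assert not np.allclose(X @ t1, Xf @ t1f)
```

The contrast comes from the Normal equations being invariant to invertible feature rescalings, while a single gradient step is not.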
