CS 224n Assignment #2: word2vec (49 Points)

1 Written: Understanding word2vec (31 points)
Recall that the key insight behind word2vec is that ‘a word is known by the company it keeps’. Concretely, consider a ‘center’ word c surrounded before and after by a context of a certain length. We term words in this contextual window ‘outside words’ (O). For example, in Figure 1, the context window length is 2, the center word c is ‘banking’, and the outside words are ‘turning’, ‘into’, ‘crises’, and ‘as’:

Figure 1: The word2vec skip-gram prediction model with window size 2
Skip-gram word2vec aims to learn the probability distribution P(O|C). Specifically, given a specific word o and a specific word c, we want to predict P(O = o|C = c): the probability that word o is an ‘outside’ word for c (i.e., that it falls within the contextual window of c). We model this probability by taking the softmax function over a series of vector dot-products:
P(O = o \mid C = c) = \frac{\exp(u_o^\top v_c)}{\sum_{w \in \mathrm{Vocab}} \exp(u_w^\top v_c)}     (1)
For each word, we learn vectors u and v, where uo is the ‘outside’ vector representing outside word o, and vc is the ‘center’ vector representing center word c. We store these parameters in two matrices, U and V . The columns of U are all the ‘outside’ vectors uw; the columns of V are all of the ‘center’ vectors vw. Both U and V contain a vector for every w ∈ Vocabulary.
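For concreteness, here is a minimal NumPy sketch of Equation 1 on a made-up toy vocabulary (the sizes, indices, and variable names below are illustrative only and are not part of the provided starter code):

import numpy as np

np.random.seed(0)
vocab_size, dim = 5, 3                     # toy vocabulary of 5 words, 3-dimensional vectors
U = np.random.randn(dim, vocab_size)       # columns are the 'outside' vectors u_w
V = np.random.randn(dim, vocab_size)       # columns are the 'center' vectors v_w

c, o = 2, 4                                # indices of a center word and a candidate outside word
v_c = V[:, c]

scores = U.T @ v_c                         # u_w^T v_c for every w in the vocabulary
scores -= scores.max()                     # shift for numerical stability (does not change the softmax)
y_hat = np.exp(scores) / np.exp(scores).sum()   # Equation (1), evaluated for every candidate word
print(y_hat[o])                            # P(O = o | C = c)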
Recall from lectures that, for a single pair of words c and o, the loss is given by:
J_{\text{naive-softmax}}(v_c, o, U) = -\log P(O = o \mid C = c).     (2)
We can view this loss as the cross-entropy between the true distribution y and the predicted distribution ŷ, for a particular center word c and a particular outside word o. Here, both y and ŷ are vectors with length equal to the number of words in the vocabulary. Furthermore, the kth entry in these vectors indicates the conditional probability of the kth word being an ‘outside word’ for the given c. The true empirical distribution y is a one-hot vector with a 1 for the true outside word o, and 0 everywhere else, for this particular example of center word c and outside word o. The predicted distribution ŷ is the probability distribution P(O|C = c) given by our model in equation (1).
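As a purely numerical illustration of this cross-entropy view (a toy distribution, not a proof; the actual argument is what part (a) asks for):

import numpy as np

y_hat = np.array([0.1, 0.2, 0.05, 0.15, 0.5])    # a made-up model distribution over a 5-word vocabulary
o = 4                                            # index of the true outside word
y = np.zeros_like(y_hat)
y[o] = 1.0                                       # one-hot true distribution

cross_entropy = -np.sum(y * np.log(y_hat))       # -sum_w y_w log(y_hat_w)
naive_softmax = -np.log(y_hat[o])                # Equation (2)
print(np.isclose(cross_entropy, naive_softmax))  # True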
Note: Throughout this homework, when computing derivatives, please use the method reviewed during the lecture (i.e. no Taylor Series Approximations).
(a) (2 points) Prove that the naive-softmax loss (Equation 2) is the same as the cross-entropy loss between y and ŷ, i.e. (note that y, ŷ are vectors and ŷo is a scalar):
-\sum_{w \in \mathrm{Vocab}} y_w \log(\hat{y}_w) = -\log(\hat{y}_o).     (3)
Your answer should be one line. You may describe your answer in words.
(b) (7 points)
(i) Compute the partial derivative of Jnaive-softmax(vc,o,U) with respect to vc. Please write your answer in terms of y, ŷ, and U, and show your work to receive full credit.
• Note: Your final answers for the partial derivative should follow the shape convention: the partial derivative of any function f(x) with respect to x should have the same shape as x.
• Please provide your answers for the partial derivative in vectorized form. For example, when we ask you to write your answers in terms of y, ŷ, and U, you may not refer to specific elements of these terms in your final answer (such as y1, y2, …).
(ii) When is the gradient you computed equal to zero?
Hint: You may wish to review and use some introductory linear algebra concepts.
(iii) The gradient you found is the difference between two terms. Provide an interpretation of how each of these terms improves the word vector when this gradient is subtracted from the word vector vc.
(iv) In many downstream applications using word embeddings, L2-normalized vectors (e.g. u/||u||2, where ||u||2 = \sqrt{\sum_i u_i^2}) are used instead of their raw forms (e.g. u). Now, suppose you would like to classify phrases as being positive or negative. When would L2 normalization take away useful information for the downstream task? When would it not? Hint: Consider the case where ux = αuy for some words x ≠ y and some scalar α.
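A brief numerical sketch of the hint (toy vectors, hypothetical names): for a positive scalar α, u and αu become identical after L2 normalization, so only the direction survives.

import numpy as np

u_y = np.array([1.0, -2.0, 0.5])
u_x = 3.0 * u_y                                      # u_x = alpha * u_y with alpha = 3

normalize = lambda u: u / np.linalg.norm(u)
print(np.allclose(normalize(u_x), normalize(u_y)))   # True: the scale alpha is lost
print(normalize(-2.0 * u_y))                         # a negative alpha only flips the sign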
(c) (5 points) Compute the partial derivatives of Jnaive-softmax(vc,o,U) with respect to each of the ‘outside’ word vectors, uw’s. There will be two cases: when w = o, the true ‘outside’ word vector, and w ≠ o, for all other words. Please write your answer in terms of y, ŷ, and vc. In this subpart, you may use specific elements within these terms as well (such as y1, y2, …). Note that uw is a vector while y1, y2, … are scalars. Show your work to receive full credit.
(d) (1 point) Write down the partial derivative of Jnaive-softmax(vc,o,U) with respect to U. Please break down your answer in terms of the column vectors ∂J(vc,o,U)/∂u1, ∂J(vc,o,U)/∂u2, ···, ∂J(vc,o,U)/∂u|Vocab|. No derivations are necessary, just an answer in the form of a matrix.
(e) (2 points) The Leaky ReLU (Leaky Rectified Linear Unit) activation function is given by Equation 4 and Figure 2:
f(x) = max(αx,x) (4)
Where x is a scalar and 0 < α < 1, please compute the derivative of f(x) with respect to x. You may ignore the case where the derivative is not defined at 0.
(f) (3 points) The sigmoid function is given by Equation 5:
\sigma(x) = \frac{1}{1 + e^{-x}}     (5)

Figure 2: Leaky ReLU
Please compute the derivative of σ(x) with respect to x, where x is a scalar. Please write your answer in terms of σ(x). Show your work to receive full credit.
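For reference, a direct NumPy transcription of Equations 4 and 5 (a sketch only; the derivative questions above are still yours to work out):

import numpy as np

def leaky_relu(x, alpha=0.01):
    # Equation (4): f(x) = max(alpha * x, x), with 0 < alpha < 1
    return np.maximum(alpha * x, x)

def sigmoid(x):
    # Equation (5): sigma(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))    # [-0.02  0.    3.  ]
print(sigmoid(0.0))                              # 0.5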
(g) (6 points) Now we shall consider the Negative Sampling loss, which is an alternative to the Naive Softmax loss. Assume that K negative samples (words) are drawn from the vocabulary. For simplicity of notation we shall refer to them as w1, w2, …, wK, and their outside vectors as uw1, uw2, …, uwK. For this question, assume that the K negative samples are distinct. In other words, i ≠ j implies wi ≠ wj for i, j ∈ {1, …, K}. Note that o ∉ {w1, …, wK}. For a center word c and an outside word o, the negative sampling loss function is given by:
J_{\text{neg-sample}}(v_c, o, U) = -\log(\sigma(u_o^\top v_c)) - \sum_{s=1}^{K} \log(\sigma(-u_{w_s}^\top v_c))     (6)
for a sample w1,…wK, where σ(·) is the sigmoid function.
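As a sanity check on the notation in Equation 6, here is a self-contained toy evaluation (random vectors, an illustrative value of K; not the starter-code API):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(0)
dim, K = 3, 2
v_c = np.random.randn(dim)                   # center vector
u_o = np.random.randn(dim)                   # outside vector of the true outside word o
U_neg = np.random.randn(dim, K)              # columns: u_{w_1}, ..., u_{w_K}

# Equation (6): -log sigma(u_o^T v_c) - sum_s log sigma(-u_{w_s}^T v_c)
J_neg = -np.log(sigmoid(u_o @ v_c)) - np.log(sigmoid(-(U_neg.T @ v_c))).sum()
print(J_neg)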
(i) Please repeat parts (b) and (c), computing the partial derivatives of Jneg-sample with respect to vc, with respect to uo, and with respect to the sth negative sample uws. Please write your answers in terms of the vectors vc, uo, and uws, where s ∈ [1,K]. Show your work to receive full credit. Note: you should be able to use your solution to part (f) to help compute the necessary gradients here.
(ii) In lecture, we learned that an efficient implementation of backpropagation leverages the re-use of previously-computed partial derivatives. Which quantity could you reuse amongst the three partial derivatives calculated above to minimize duplicate computation? Write your answer in terms of
Uo,{w1,…,wK}, a matrix with the outside vectors stacked as columns, and 1, a (K + 1) × 1 vector of 1’s. Additional terms and functions (other than Uo,{w1,…,wK} and 1) can be used in your solution.
(iii) Describe with one sentence why this loss function is much more efficient to compute than the naive-softmax loss.
Caveat: So far we have looked at re-using quantities and approximating softmax with sampling for faster gradient descent. Do note that some of these optimizations might not be necessary on modern GPUs and are, to some extent, artifacts of the limited compute resources available at the time when these algorithms were developed.
(h) (2 points) Now we will repeat the previous exercise, but without the assumption that the K sampled words are distinct. Assume that K negative samples (words) are drawn from the vocabulary. For simplicity of notation we shall refer to them as w1, w2, …, wK and their outside vectors as uw1, …, uwK. In this question, you may not assume that the words are distinct. In other words, wi = wj may be true when i ≠ j. Note that o ∉ {w1, …, wK}. For a center word c and an outside word o, the negative sampling loss function is given by:
J_{\text{neg-sample}}(v_c, o, U) = -\log(\sigma(u_o^\top v_c)) - \sum_{s=1}^{K} \log(\sigma(-u_{w_s}^\top v_c))     (7)
for a sample w1,…wK, where σ(·) is the sigmoid function.
Compute the partial derivative of Jneg-sample with respect to a negative sample uws. Please write your answers in terms of the vectors vc and uws, where s ∈ [1,K]. Show your work to receive full credit. Hint: break up the sum in the loss function into two sums: a sum over all sampled words equal to ws and a sum over all sampled words not equal to ws. Notation-wise, you may write ‘equal’ and ‘not equal’ conditions below the summation symbols, such as in Equation 8.
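As a notational illustration of the hint (this only rewrites the relevant sum; the differentiation itself is still the exercise), the sum over samples can be split according to whether the sampled word equals ws:

-\sum_{k=1}^{K} \log(\sigma(-u_{w_k}^\top v_c)) = -\sum_{\substack{1 \le k \le K \\ w_k = w_s}} \log(\sigma(-u_{w_s}^\top v_c)) - \sum_{\substack{1 \le k \le K \\ w_k \ne w_s}} \log(\sigma(-u_{w_k}^\top v_c))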
(i) (3 points) Suppose the center word is c = wt and the context window is [wt−m, …, wt−1, wt, wt+1, …, wt+m], where m is the context window size. Recall that for the skip-gram version of word2vec, the total loss for the context window is:
J_{\text{skip-gram}}(v_c, w_{t-m}, \dots, w_{t+m}, U) = \sum_{\substack{-m \le j \le m \\ j \ne 0}} J(v_c, w_{t+j}, U)     (8)
Here, J(vc,wt+j,U) represents an arbitrary loss term for the center word c = wt and outside word wt+j. J(vc,wt+j,U) could be Jnaive-softmax(vc,wt+j,U) or Jneg-sample(vc,wt+j,U), depending on your implementation.
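Schematically, Equation 8 is just a sum of per-pair losses; a minimal Python sketch under that reading (the loss_fn argument and the names below are illustrative, not the starter-code interface):

def window_loss(v_c, outside_indices, U, loss_fn):
    # Equation (8): sum the per-(center, outside) loss over every outside word in the window
    return sum(loss_fn(v_c, o, U) for o in outside_indices)

# loss_fn could be a naive-softmax or a negative-sampling loss taking (v_c, o, U).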
Write down three partial derivatives:
(i) ∂Jskip-gram(vc, wt−m, …, wt+m, U)/∂U
(ii) ∂Jskip-gram(vc, wt−m, …, wt+m, U)/∂vc
(iii) ∂Jskip-gram(vc, wt−m, …, wt+m, U)/∂vw when w ≠ c
Write your answers in terms of ∂J(vc, wt+j, U)/∂U and ∂J(vc, wt+j, U)/∂vc. This is very simple – each solution should be one line.
Once you’re done: Given that you computed the derivatives of J(vc,wt+j,U) with respect to all the model parameters U and V in parts (a) to (c), you have now computed the derivatives of the full loss function Jskip-gram with respect to all parameters. You’re ready to implement word2vec!
2 Coding: Implementing word2vec (18 points)
In this part you will implement the word2vec model and train your own word vectors with stochastic gradient descent (SGD). Before you begin, first run the following commands within the assignment directory in order to create the appropriate conda virtual environment. This guarantees that you have all the necessary packages to complete the assignment. Windows users may wish to install the Windows Subsystem for Linux. Also note that you probably want to finish the previous math section before writing the code, since you will be asked to implement the math functions in Python. You’ll probably want to implement and test each part of this section in order, since the questions are cumulative.
conda env create -f env.yml
conda activate a2
Once you are done with the assignment you can deactivate this environment by running:
conda deactivate
For each of the methods you need to implement, we included approximately how many lines of code our solution has in the code comments. These numbers are included to guide you. You don’t have to stick to them; you can write shorter or longer code as you wish. If you think your implementation is significantly longer than ours, it is a signal that there are some numpy methods you could utilize to make your code both shorter and faster. for loops in Python take a long time to complete when used over large arrays, so we expect you to utilize numpy methods. We will be checking the efficiency of your code. You will be able to see the results of the autograder when you submit your code to Gradescope; we recommend submitting early and often.
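To illustrate the kind of vectorization being asked for (toy shapes, illustrative names): a single matrix–vector product replaces a Python loop over the vocabulary.

import numpy as np

U = np.random.randn(50, 1000)            # 50-dimensional outside vectors for a 1000-word toy vocabulary
v_c = np.random.randn(50)

scores_loop = np.array([U[:, w] @ v_c for w in range(U.shape[1])])   # slow: explicit Python loop
scores_vec = U.T @ v_c                                               # fast: one vectorized product
print(np.allclose(scores_loop, scores_vec))                          # True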
Note: If you are using Windows and have trouble running the .sh scripts used in this part, we recommend trying Gow or manually running commands in the scripts.
(a) (12 points) We will start by implementing methods in word2vec.py. You can test a particular method by running python word2vec.py m where m is the method you would like to test. For example, you can test the sigmoid method by running python word2vec.py sigmoid.
(i) Implement the sigmoid method, which takes in a vector and applies the sigmoid function to it.
(ii) Implement the softmax loss and gradient in the naiveSoftmaxLossAndGradient method.
(iii) Implement the negative sampling loss and gradient in the negSamplingLossAndGradient method.
(iv) Implement the skip-gram model in the skipgram method.
When you are done, test your entire implementation by running python word2vec.py.
(b) (4 points) Complete the implementation for your SGD optimizer in the sgd method of sgd.py. Test your implementation by running python sgd.py.
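The core update you will be implementing is plain gradient descent; a generic one-line sketch (the function name and arguments here are illustrative, not the signature expected in sgd.py):

def sgd_step(x, grad, learning_rate):
    # one vanilla SGD update: move the parameters a small step against the gradient
    return x - learning_rate * grad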
(c) (2 points) Show time! Now we are going to load some real data and train word vectors with everything you just implemented! We are going to use the Stanford Sentiment Treebank (SST) dataset to train word vectors, and later apply them to a simple sentiment analysis task. You will need to fetch the datasets first. To do this, run sh get_datasets.sh. There is no additional code to write for this part; just run python run.py.
Note: The training process may take a long time depending on the efficiency of your implementation and the compute power of your machine (an efficient implementation takes one to two hours).
Plan accordingly!
After 40,000 iterations, the script will finish and a visualization of your word vectors will appear. It will also be saved as word_vectors.png in your project directory. Include the plot in your homework write-up. In at most three sentences, briefly explain what you see in the plot. This may include, but is not limited to, observations on clusters and words that you expect to cluster but do not.
3 Submission Instructions
You shall submit this assignment on Gradescope as two submissions – one for “Assignment 2 [coding]” and another for “Assignment 2 [written]”:
(a) Run the collect_submission.sh script to produce your assignment2.zip file.
(b) Upload your assignment2.zip file to Gradescope to “Assignment 2 [coding]”.
(c) Upload your written solutions to Gradescope to “Assignment 2 [written]”.
