Le Song
• Submit your answers as an electronic copy on Canvas.
• Recommended reading: PRML Section 9.1, 12.1
1 Probability [15 pts]
(a) Stores A, B, and C have 50, 75, and 100 employees, and, respectively, 50, 60, and 70 percent of these are women. Resignations are equally likely among all employees, regardless of store and sex. Suppose an employee resigned, and this employee was a woman. What is the probability that she worked in store C? [5 pts]
(b) A laboratory blood test is 95 percent effective in detecting a certain disease when it is, in fact, present. The test also yields a false positive result for 1 percent of the healthy persons tested. That is, if a healthy person is tested, then with probability 0.01 the test result will imply he has the disease. If 0.5 percent of the population actually has the disease, what is the probability that a person has the disease given that his test result is positive? [5 pts]
Questions (c) and (d) below refer to the following standings near the end of a baseball season:

Team Won Lost
Atlanta Braves 87 72
San Francisco Giants 86 73
Los Angeles Dodgers 86 73
Each team had 3 games remaining to be played. All 3 of the Giants' games were with the Dodgers, and the Braves' 3 remaining games were against the San Diego Padres. Suppose that the outcomes of all remaining games are independent and each game is equally likely to be won by either participant. If two teams tie for first place, they have a playoff game, which each team has an equal chance of winning.
(c) What is the probability that the Atlanta Braves win the division? [2 pts]
(d) What is the probability that an additional playoff game is needed? [3 pts]
2 Maximum Likelihood [15 pts]
Suppose we have n i.i.d. (independent and identically distributed) data samples from the following probability distribution. This problem asks you to write down the log-likelihood function and find the maximum likelihood estimator of the parameter(s).
(a) Poisson distribution [5 pts]
The Poisson distribution is defined as

P(x | λ) = (λ^x e^{−λ}) / x!, x = 0, 1, 2, …
What is the maximum likelihood estimator of λ?
(b) Multinomial distribution [5 pts]
The probability mass function of the multinomial distribution is given by

P(x1, …, xk | n, θ1, …, θk) = (n! / (x1! ⋯ xk!)) θ1^{x1} ⋯ θk^{xk},

where Σ_{j=1}^{k} θj = 1 and Σ_{j=1}^{k} xj = n. What is the maximum likelihood estimator of θj, j = 1, …, k?
(c) Gaussian normal distribution [5 pts]
Suppose we have n i.i.d. data samples from a univariate Gaussian normal distribution N(µ, σ²), whose density is given by

p(x | µ, σ²) = (1 / √(2πσ²)) exp(−(x − µ)² / (2σ²)).
What is the maximum likelihood estimator of µ and σ2?
3 Principal Component Analysis [20 pts]
In class, we learned that Principal Component Analysis (PCA) preserves variance as much as possible. We are going to explore another way of deriving it: minimizing reconstruction error.
Consider data points xn (n = 1, …, N) in D-dimensional space. We are going to represent them in an orthonormal basis {u1, …, uD}. That is,

xn = Σ_{i=1}^{D} αin ui = Σ_{i=1}^{D} (xn^T ui) ui.

Here, αin = xn^T ui is the length of the projection of xn onto ui.
Suppose we want to reduce the dimension from D to M < D. Then the data point xn is approximated by

x̃n = Σ_{i=1}^{M} zin ui + Σ_{i=M+1}^{D} bi ui.

In this representation, the first M directions ui are allowed to have a different coefficient zin for each data point, while the remaining D − M directions share a constant coefficient bi across all data points. As long as bi is the same value for all data points, it does not need to be 0.
Our goal is to set ui, zin, and bi for n = 1, …, N and i = 1, …, D so as to minimize the reconstruction error, i.e., the average squared difference between xn and x̃n:

J = (1/N) Σ_{n=1}^{N} ||xn − x̃n||².
(a) What is the assignment of zjn for j = 1,…,M minimizing J? [5 pts]
(b) What is the assignment of bj for j = M + 1,…,D minimizing J? [5 pts]
(c) Express optimal x˜n and xn − x˜n using your answer for (a) and (b). [2 pts]
(d) What should the ui for i = 1, …, D be to minimize J? [8 pts] Hint: Use S = (1/N) Σ_{n=1}^{N} (xn − x̄)(xn − x̄)^T for the sample covariance matrix.
4 Clustering [20 pts]
[a-b] Given N data points xn (n = 1, …, N), the K-means clustering algorithm groups them into K clusters by minimizing the distortion function over {rnk, µk}:

J = Σ_{n=1}^{N} Σ_{k=1}^{K} rnk ||xn − µk||²,

where rnk = 1 if xn belongs to the k-th cluster and rnk = 0 otherwise.
(a) Prove that using the squared Euclidean distance ||xn − µk||² as the dissimilarity function and minimizing the distortion function, we will have

µk = (Σ_{n} rnk xn) / (Σ_{n} rnk).

That is, µk is the center of the k-th cluster. [5 pts]
(b) Prove that the K-means algorithm converges to a local optimum in a finite number of steps. [5 pts]
[c-d] In class, we discussed bottom-up hierarchical clustering. In each iteration, we need to find the two clusters {x1, x2, …, xm} and {y1, y2, …, yp} with the minimum distance and merge them. Some of the most commonly used distance metrics between two clusters are:
• Single linkage: the minimum distance between any pair of points from the two clusters, i.e.

min_{i=1,…,m; j=1,…,p} ||xi − yj||
• Complete linkage: the maximum distance between any pair of points from the two clusters, i.e.

max_{i=1,…,m; j=1,…,p} ||xi − yj||
• Average linkage: the average distance between all pairs of points from the two clusters, i.e.

(1/(mp)) Σ_{i=1}^{m} Σ_{j=1}^{p} ||xi − yj||
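The three metrics above can be sketched in a few lines of NumPy (a Python illustration only; the function names here are our own, not part of the assignment):

```python
# Illustrative implementations of single, complete, and average linkage
# between two clusters, each given as an array with one point per row.
import numpy as np

def pairwise_dists(Xc, Yc):
    # all Euclidean distances ||x_i - y_j||, shape (m, p)
    return np.linalg.norm(Xc[:, None, :] - Yc[None, :, :], axis=2)

def single_linkage(Xc, Yc):
    return pairwise_dists(Xc, Yc).min()

def complete_linkage(Xc, Yc):
    return pairwise_dists(Xc, Yc).max()

def average_linkage(Xc, Yc):
    return pairwise_dists(Xc, Yc).mean()

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[3.0, 0.0], [5.0, 0.0]])
print(single_linkage(A, B), complete_linkage(A, B), average_linkage(A, B))
# 2.0 5.0 3.5
```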
(c) When we use bottom-up hierarchical clustering to partition the data, which of the three cluster distance metrics described above would most likely result in clusters most similar to those given by K-means? (Suppose K is a power of 2 in this case.) [5 pts]
(d) For the following data (two moons), which of these three distance metrics (if any) would successfully separate the two moons? [5 pts]
5 Programming: Image compression [30 pts]
In this programming assignment, you are going to apply clustering algorithms to image compression. Before starting this assignment, we strongly recommend reading PRML Section 9.1.1, pages 428–430.
To ease your implementation, we provide skeleton code containing the image processing part. homework1.m is designed to read an RGB bitmap image file and then cluster its pixels with the given number of clusters K. It shows the converted image using only K colors, each of them the representative color of a centroid. To see what it looks like, you are encouraged to run, for example, homework1('beach.bmp', 3) or homework1('football.bmp', 2).
The files you need to edit are mykmeans.m and mykmedoids.m, provided with this homework. In the files, you can see that they initially call the Matlab function kmeans. Comment this line out and implement your own version in the files. You should expect to see a similar result with your implementation of K-means as with the kmeans function in Matlab.
K-medoids
Given N data points xn (n = 1, …, N), the K-medoids clustering algorithm groups them into K clusters by minimizing the distortion function

J = Σ_{n=1}^{N} Σ_{k=1}^{K} rnk D(xn, µk),

where D(x, y) is a distance measure between two vectors x and y of the same size (in the case of K-means, D(x, y) = ||x − y||²), µk is the center of the k-th cluster, and rnk = 1 if xn belongs to the k-th cluster and rnk = 0 otherwise. In this exercise, we will use the following iterative procedure:
• Initialize the cluster center µk, k = 1,…,K.
• Iterate until convergence:
– Update the cluster assignments for every data point xn: rnk = 1 if k = argminj D(xn,µj), and rnk = 0 otherwise.
– Update the center of each cluster k, choosing another representative if necessary.
There are many ways to implement this procedure; for example, you can try distance measures other than the Euclidean distance, and you can be creative in deciding a better representative for each cluster. We do not restrict these choices in this assignment; you are encouraged to experiment with different distance measures and with ways of choosing representatives.
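As one concrete reading of the iterative procedure above, here is a minimal Python/NumPy sketch using Euclidean distance, with each cluster's new medoid chosen as the member point minimizing total distance to the rest of its cluster. (The assignment itself must be implemented in Matlab; all names and design choices here are illustrative assumptions, not the required solution.)

```python
# Minimal K-medoids sketch: assignment step + medoid update step,
# stopping when the cluster assignments no longer change.
import numpy as np

def kmedoids(X, K, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialize medoids as K distinct data points
    medoids = X[rng.choice(len(X), size=K, replace=False)].copy()
    assign = np.full(len(X), -1)
    for _ in range(max_iter):
        # assignment step: nearest medoid for every point
        d = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2)
        new_assign = d.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break  # assignments stable -> converged
        assign = new_assign
        # update step: the member minimizing within-cluster total distance
        for k in range(K):
            members = X[assign == k]
            if len(members):
                dk = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
                medoids[k] = members[dk.sum(axis=1).argmin()]
    return assign, medoids
```

Because medoids are always actual data points, the same loop works for any distance measure D: only the two `np.linalg.norm` lines need to change.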
Formatting instruction
Both mykmeans.m and mykmedoids.m take their input and produce their output in the format below. You should not alter this interface; otherwise your submission will produce an error, which leads to zero credit.
Input
• pixels: the input image representation. Each row contains one data point (pixel). For an image dataset, it has 3 columns, corresponding to the Red, Green, and Blue components. Each component is an integer between 0 and 255.
Output
• class: the cluster assignment of each data point in pixels. Assignments should be 1, 2, 3, etc. For K = 5, for example, each entry of class should be 1, 2, 3, 4, or 5. The output should be a column vector with size(pixels, 1) elements.
• centroid: the locations of the K centroids (or representatives) in your result. With images, each centroid corresponds to the representative color of a cluster. The output should be a matrix with K rows and 3 columns. The range of values should be [0, 255], possibly floating point numbers.
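For reference, the interface above can be sketched in Python/NumPy as follows. This is only a hedged illustration of the input/output contract (an (N, 3) pixels array in, 1-based labels and a (K, 3) centroid matrix out); your actual submission must be the Matlab functions described above.

```python
# Python analogue of the required mykmeans interface, for illustration.
import numpy as np

def mykmeans(pixels, K, max_iter=100, seed=0):
    X = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize centroids at K distinct pixels
    centroid = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(max_iter):
        # assign each pixel to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroid[None, :, :], axis=2)
        label = d.argmin(axis=1)
        # recompute each centroid as the mean of its cluster
        new_centroid = np.array([X[label == k].mean(axis=0) if np.any(label == k)
                                 else centroid[k] for k in range(K)])
        if np.allclose(new_centroid, centroid):
            break  # centroids stable -> converged
        centroid = new_centroid
    return label + 1, centroid  # labels are 1, ..., K as the spec requires
```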
Hand-in
Both your code and your report will be evaluated. Upload the mykmeans.m and mykmedoids.m files with your implementation. In your report, answer the following questions:
1. Within the K-medoids framework, you have several choices for the detailed implementation. Explain how you designed and implemented the details of your K-medoids algorithm, including (but not limited to) how you chose the representative of each cluster, what distance measures you tried and which one you chose, and when you stopped iterating.
2. Attach a picture of your own. We recommend a size of 320 × 240 or smaller.
3. Run your K-medoids implementation on the picture you chose above with several different values of K (e.g., small values like 2 or 3, and large values like 16 or 32). What did you observe with different K? How long does it take to converge for each K?
4. Repeat questions 2 and 3 with K-means. Do you see a significant difference between K-medoids and K-means in terms of output quality, robustness, or running time?
Note
• We will grade using test pictures that are not provided. We recommend testing your code with several different pictures so that you can detect problems that might occur occasionally.
• If we detect copying from any other student's code or from the web, you will not be eligible for any credit for the entire homework, not just the programming part. Also, directly calling the Matlab function kmeans or other clustering functions is not allowed.