Q1. Programming question – calculating π [10 marks]
Improve the performance of an existing single-threaded calcpi program by converting it to a multi-threaded implementation. Download the starter code, compile it, and run it:
$ git clone https://gitlab.com/cpsc457f21/pi-calc.git
$ cd pi-calc
$ make
$ ./calcpi
Usage: ./calcpi radius n_threads where 0 <= radius <= 100000 and 1 <= n_threads <= 256
The calcpi program estimates the value of π using an algorithm described in https://en.wikipedia.org/wiki/Approximations_of_%CF%80#Summing_a_circle’s_area, and it is implemented inside the function count_pi() in file calcpi.cpp.
The included driver (main.cpp) parses the command line arguments, calls count_pi() and prints the results. The driver takes 2 command line arguments: an integer radius and the number of threads. For example, to estimate the value of π using a radius of 10 and 2 threads, you would run it like this:
$ ./calcpi 10 2
Calculating PI with r=10 and n_threads=2
count: 317 PI: 3.17
The function uint64_t count_pi(int r, int N) takes two parameters – the radius and the number of threads – and returns the number of pixels inside the circle of radius r centered at (0,0), counted over every pixel (x, y) in the square −r ≤ x, y ≤ r. The current implementation is single-threaded, so it ignores the N argument. Your job is to re-implement the function so that it uses N threads to speed up its execution, such that it runs N times faster with N threads on hardware where N threads can run concurrently. Please note that your assignment will be marked both for correctness and for the speedup it achieves.
You need to find a way to parallelize the algorithm without using any synchronization mechanisms, such as mutexes, semaphores, atomic types, etc. You are only allowed to create and join threads.
Assume 0 ≤ r ≤ 100,000 and 1 ≤ n_threads ≤ 256.
Timing on linuxlab
Please note that not all linuxlab machines are the same. Some have 4-core CPUs, some have 6-core CPUs; some run at 3.6 GHz, others at 3.2 GHz. You can check which CPU you have by running the lscpu command. When you run your timings, please make sure you do all of them on the same machine; otherwise you will get very inconsistent results.
My basic multi-threaded implementation achieves the following timings using r=100000. I expect your solutions to achieve similar results.
CPU        1 thread   2 threads   4 threads   8 threads   16 threads
i7-4770    12.900     6.470       3.410       2.733       2.740
i7-4790    12.571     6.306       3.311       2.653       2.688
i7-7700    11.473     5.857       3.006       1.765       1.801
i7-8700    10.476     5.374       2.793      1.610       1.161
Q2 – Written answer [3 marks]
Time your multi-threaded solution from Q1 with r = 50000 using the time command on linuxlab.cpsc.ucalgary.ca. Record the real time for 1, 2, 3, 4, 6, 8, 12, 16, 24 and 32 threads. Also record the timings of the original single-threaded program.
A. Make a table with these timings, and a bar graph, both formatted similar to the examples. [Example table and graph omitted here.] The numbers in the example table and graph are random; your timings should look different.
B. When you run your implementation with N threads, you should see N-times speed up compared to the original single threaded program. Do you observe this in your timings for all values of N?
C. Why do you stop seeing the speed up after some value of N?
Q3. Programming question – detecting primes [30 marks]
Convert a single-threaded program detectPrimes to a multi-threaded implementation. Start by downloading the single-threaded code, then compile it and run it:
$ git clone https://gitlab.com/cpsc457f21/detect-primes.git
$ cd detect-primes
$ make
$ cat example.txt
0 3 19 25
4012009 165 1033
$ ./detectPrimes 5 < example.txt
Using 5 threads.
Identified 3 primes:
3 19 1033
Finished in 0.0000s
$ seq 100000000000000000 100000000000000300 | ./detectPrimes 2
Using 2 threads.
Identified 9 primes:
100000000000000003 100000000000000013 100000000000000019 100000000000000021
100000000000000049 100000000000000081 100000000000000099 100000000000000141
100000000000000181
Finished in 5.6863s
The detectPrimes program reads integers in the range [2, 2^63 − 2] from standard input, and then prints out the ones that are prime numbers. The first invocation example above detects the prime numbers 3, 19 and 1033 in the file example.txt. The second invocation uses the program to find all primes in the range [10^17, 10^17 + 300]. If duplicate primes appear in the input, they will be duplicated in the output.
detectPrimes accepts a single command line argument – the number of threads. This parameter is ignored in the current implementation because it is single-threaded. Your job is to improve the execution time of detectPrimes by making it multi-threaded, and your implementation should use the number of threads given on the command line. To do this, you will need to re-implement the function:
std::vector<int64_t>
detect_primes(const std::vector<int64_t> & nums, int n_threads);
which is defined in detectPrimes.cpp. The function takes two parameters: the list of numbers to test, and the number of threads to use. The function is called by the driver (main.cpp) after parsing the standard input and command line. Your implementation should use n_threads threads. Ideally, if the original single-threaded program takes time T to complete a test, then your multi-threaded implementation should finish that same test in T/N time when using N threads. For example, if it takes the original single-threaded program 10 seconds to complete a test, then it should take your multi-threaded program only 2.5 seconds to complete that same test with 4 threads. To achieve this goal, you will need to design your program so that:
• you give each thread the same amount of work;
• your multi-threaded implementation does the same amount of work as the single-threaded version; and
• the synchronization mechanisms you use are efficient.
Your TAs will mark your assignment by running the code against multiple different inputs and using different numbers of threads. To get full marks for this assignment, your program needs to output correct results but also achieve near optimal speedup for the given number of threads and available cores. If your code does not achieve optimal speedup on all inputs, you will lose some marks for those tests.
Please note that the purpose of this question is NOT to find a more efficient factorization algorithm. You must implement the exact same factorization algorithm as given in the skeleton code, except you need to make it multi-threaded.
Q4 – Written question (5 marks)
Time the original single-threaded detectPrimes.cpp as well as your multi-threaded version on three files: medium.txt, hard.txt and hard2.txt. For each of these files, you will run your solution 6 times, using 1, 2, 3, 4, 8 and 16 threads. You will record your results in 3 tables, one for each file, formatted like this:
medium.txt
# threads          Observed timing   Observed speedup vs. original   Expected speedup
original program                     1.0                             1.0
1                                                                    1.0
2                                                                    2.0
3                                                                    3.0
4                                                                    4.0
8                                                                    8.0
16                                                                   16.0
The "Observed timing" column will contain the raw timing results of your runs. The "Observed speedup" column will be calculated as the ratio of the original single-threaded program's timing to your raw timing. Once you have created the tables, explain the results you obtained. Are the timings what you expected them to be? If not, explain why they differ.
Submission
Submit the following files to D2L for this assignment.
Do not submit a ZIP file. Submit individual files.
calcpi.cpp solution to Q1
detectPrimes.cpp solution to Q3
report.pdf answers to all written questions
Please note: you need to submit all files every time you make a submission, as the previous submission will be overwritten.
General information about all assignments:
3. After you submit your work to D2L, verify your submission by re-downloading it.
http://www.ucalgary.ca/pubs/calendar/current/k-5.html.
8. Here are some examples of what you are not allowed to do for individual assignments: you are not allowed to copy code or written answers (in part, or in whole) from anyone else; you are not allowed to collaborate with anyone; you are not allowed to share your solutions with anyone else; you are not allowed to sell or purchase a solution. This list is not exhaustive.
Appendix – Hints for Q1
I suggest you parallelize the outer loop. Give each thread a roughly equal number of columns in which to count the pixels, then sum up the counts from each thread. Your overall algorithm could look like this:
• Create a separate memory area for each thread (for input and output), e.g. struct Task { int start_x, end_x; uint64_t partial_count; … }; Task tasks[256]; (note that an int partial_count would overflow for large radii).
• Divide the work evenly between threads, e.g. for (int i = 0; i < n_threads; i++) { tasks[i].start_x = …; tasks[i].end_x = …; }
• Start one thread per task; each thread counts the pixels in its assigned columns and stores the count in its own Task.
• Join the threads.
• Combine the results of each thread into the final result, i.e. return the sum of all partial_counts.
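The outline above could be sketched like this. This is a hypothetical sketch, not the required implementation: it uses std::thread, and it assumes the starter code counts pixels satisfying x² + y² ≤ r² (which matches the example output of 317 for r = 10); your solution must preserve whatever test the starter code actually uses.

```cpp
#include <cstdint>
#include <thread>
#include <vector>

// One private work area per thread: input range and output count.
// Because each thread writes only its own Task, no locks are needed.
struct Task { int start_x, end_x; uint64_t partial_count; };

static void count_columns(int r, Task *t) {
    uint64_t count = 0;
    const int64_t r2 = (int64_t)r * r;
    for (int x = t->start_x; x < t->end_x; x++)        // this thread's columns
        for (int y = -r; y <= r; y++)
            if ((int64_t)x * x + (int64_t)y * y <= r2)
                count++;
    t->partial_count = count;                          // private slot, no races
}

uint64_t count_pi(int r, int N) {
    std::vector<Task> tasks(N);
    std::vector<std::thread> threads;
    const int64_t total = 2LL * r + 1;                 // columns -r .. r
    for (int i = 0; i < N; i++) {
        tasks[i].start_x = -r + (int)(total * i / N);  // even split of columns
        tasks[i].end_x   = -r + (int)(total * (i + 1) / N);
        threads.emplace_back(count_columns, r, &tasks[i]);
    }
    uint64_t sum = 0;
    for (int i = 0; i < N; i++) {                      // join, then combine
        threads[i].join();
        sum += tasks[i].partial_count;
    }
    return sum;
}
```

The division `total * (i + 1) / N` distributes any remainder columns one per thread, so the work split stays balanced for every N.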
Appendix – Hints for Q3
Hint 1 – bad solution (do not implement this)
A bad solution would be to parallelize the outer loop of the algorithm and assign a fixed portion of the numbers to each thread to check. This is a terrible solution because it would not achieve speedups on many inputs, for example where all hard numbers are at the beginning, and all the easy ones at the end. Your program would then likely give all hard numbers to one thread and would end up running just as slowly as the single-threaded version.
Hint 2 – simple solution (start with this)
A much better, yet still simple solution, would be to parallelize the outer loop, but instead of giving each thread a fixed portion of the input to test, it would decide dynamically how many numbers each thread would process. For example, each thread could process the next number in the list, and if it is a prime, it would add it to the result vector. This would repeat until all numbers have been tested. Note that this solution would achieve optimal speedup for many inputs, but not for all. For example, on input with a single large prime number, it will not achieve any speedup at all. Consequently, if you choose this approach, you will not be able to receive full marks for some tests.
I strongly suggest you start by implementing this simple solution first, and only attempt the more difficult approaches after your simple solution already works.
Hint 3 – good solution
An even more efficient approach would be to parallelize the inner loop (the loop inside the is_prime function). In this approach, all threads test the same number for primality. If you choose this approach, you need to give each thread a different portion of the divisors to check. This will allow you to handle more input cases than the simple solution mentioned earlier. For extra efficiency, and better marks, you also need to consider implementing thread re-use, e.g., by using barriers. Here is a possible rough outline of an algorithm that you could implement:
detectPrimes():
    prepare memory for each thread
    initialize empty array result[] - this could be a global variable
    set global_finished = false - make it atomic to be safe
    start N threads, each runs thread_function() on its own memory
    join N threads
    return result[]

thread_function():
    repeat forever:
        serial task - executed by one thread, picked via barrier:
            get the next number from nums[]
            if no more numbers left:
                set global_finished = true to tell all threads to quit
            otherwise:
                divide the work among the threads
        parallel task - executed by all threads, via barrier:
            if global_finished flag is set, exit thread
            otherwise test this thread's portion of the divisors
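Before attempting the barrier-based outline above, the core idea of splitting the divisor range can be sketched without thread re-use (roughly the simpler variant described in the grading scheme). Everything here is hypothetical: is_prime_mt and check_range are made-up names, and the odd-divisor trial division stands in for the skeleton's algorithm.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <thread>
#include <vector>

// Each thread scans its own slice of divisor candidates and writes only
// its own flag, so join() is the only synchronization needed.
static void check_range(int64_t n, int64_t from, int64_t to, char *found) {
    for (int64_t d = from | 1; d < to; d += 2)   // odd candidates only
        if (n % d == 0) { *found = 1; return; }
}

static bool is_prime_mt(int64_t n, int n_threads) {
    if (n < 2) return false;
    if (n < 4) return true;                      // 2 and 3 are prime
    if (n % 2 == 0 || n % 3 == 0) return false;
    const int64_t limit = (int64_t)std::sqrt((double)n) + 1;
    std::vector<char> found(n_threads, 0);       // one private flag per thread
    std::vector<std::thread> threads;
    const int64_t span = (limit - 3) / n_threads + 1;
    for (int t = 0; t < n_threads; t++)
        threads.emplace_back(check_range, n,
                             3 + t * span,
                             std::min<int64_t>(3 + (t + 1) * span, limit + 1),
                             &found[t]);
    bool composite = false;
    for (int t = 0; t < n_threads; t++) {
        threads[t].join();
        if (found[t]) composite = true;
    }
    return !composite;
}
```

Note the weakness that the barrier approach fixes: this version creates and destroys n_threads threads for every single number, which is wasteful on inputs with many small numbers.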
Hint 4 – best solution
This builds on top of the hint 3 above, but it also adds thread cancellation. You need cancellation for cases where one of the threads discovers the number being tested is not a prime, so that it can cancel the work of the other threads. Thread cancellation sounds simple to implement, but it does take non-trivial effort to get it working correctly.
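The cancellation part on its own could be sketched as below, assuming a shared std::atomic&lt;bool&gt; flag. The function name and the polling interval are hypothetical, and the range-splitting and barrier machinery from Hint 3 is assumed to surround this worker.

```cpp
#include <atomic>
#include <cstdint>

// A worker that can be cancelled by its peers: any thread that finds a
// divisor sets the shared flag, and the others notice it on their next
// poll. Polling every few thousand iterations keeps the atomic load off
// the hot path (the interval 4096 is an arbitrary choice).
static void check_range_cancellable(int64_t n, int64_t from, int64_t to,
                                    std::atomic<bool> &composite) {
    int64_t iters = 0;
    for (int64_t d = from | 1; d < to; d += 2) {   // odd candidates only
        if (n % d == 0) {
            composite.store(true);                 // cancel the other threads
            return;
        }
        if (++iters % 4096 == 0 && composite.load())
            return;                                // another thread found one
    }
}
```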
Appendix – Approximate grading scheme for Q3
The test cases that we will use for marking will be designed so that you will get full marks only if you implement the most optimal solution. However, you will receive partial marks even if you implement one of the less optimal solutions. Here is a rough breakdown of what to expect depending on which solution you implement:
• Parallelization of the outer loop with a fixed amount of work per thread will yield ~9/28 marks.
• Parallelization of the outer loop with dynamic work assignment will yield ~15/28 marks.
• Parallelization of the inner loop without work cancellation and without thread re-use will yield ~19/28 marks.
• Parallelization of the inner loop with thread re-use but without cancellation will yield ~24/28 marks.
• Parallelization of the inner loop with thread re-use and with cancellation will yield 28/28 marks.
Any tests on which your program produces wrong results will receive 0 marks.