50.049 Parallel Computing on Multicore Architectures


Course Description

This course aims to equip students with core knowledge of multicore processor architectures and parallel computing. Students will:

  • understand where parallelism comes from, based on advances in superscalar and hyperthreaded hardware architectures (multicore CPUs and GPUs);
  • learn how to architect algorithms, software and solutions that can take full advantage of the latest hardware architectures;
  • understand the principles of how to design correct and efficient parallel computing software and get familiar with the tools to debug and instrument parallel computing;
  • gain hands-on experience through case studies of algorithms/systems and readings from the current literature that provide comparisons and contrasts.


Learning Objectives

  1. Explain the key technologies (e.g., pipeline, out-of-order execution, speculation) used in processor architecture for improving performance.
  2. Learn key concepts in the design of multi-core processors, such as memory, communication, and scheduling.
  3. Learn how to develop software that exploits parallelism and concurrency for efficiency, including the use of software libraries, tools, and formal techniques for design and benchmarking.
  4. Develop parallel computing algorithms or system components on modern multicore hardware architectures.

Measurable Outcomes

  1. Understand the fundamental concepts of multicore architectures [Exam].
  2. Implement working, efficient parallel computing algorithms/system components on modern multicore architectures [Projects].
  3. Implement, optimize and test parallel algorithms and data structures [Projects, Exams].

Topics Covered

The module consists of three parts.

  • The first part (weeks 1–3) will focus on “what is parallel computing” and “how to do parallel computing”.
  • The second part (weeks 4–9) discusses “what are the potential problems in parallel computing and how to address them” and “what are the common strategies to optimize parallel computing”.
  • The third part (weeks 10–12) will focus on advanced topics, including “GPGPU programming”, “shared-nothing parallel computing”, and “energy-efficient computing”.
  • In week 13, we will hold a summary and recap session to help students fully digest the material.

Textbooks & Required Readings

  • Java Concurrency in Practice by Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, and Doug Lea, 2006
  • Parallel Programming: For Multicore and Cluster Systems by Thomas Rauber and Gudula Rünger

Recommended Texts and Readings

  • Concurrency: State Models and Java Programs by Jeff Magee and Jeff Kramer, Wiley, 2nd Edition.

Course Instructor(s)

Prof Zhang Shuhao
