General Information
General
Course title: | Distributed Computing |
Course code: | CS586 |
Type of course: | Elective |
Level of course: | Postgraduate |
Instructor
Name: | Panagiota Fatourou |
Office: | K223 |
Office hours: | Wednesday and Thursday, 12:00 - 13:00 |
Telephone number: | +30 2810 393549 (CSD), +30 2810 391727 (ICS-FORTH) |
TA
Name: | Eleftherios Kosmas |
Office: | 1.12a (ICS-FORTH) |
Office hours: | Friday, 15:00 - 17:00 |
Telephone number: | +30 2810 391880 |
Class Information
Lectures: Wednesday, 17:00-19:00, Room A.121, and Thursday, 17:00-19:00, Room A.121. Tutorial: Monday, 17:00-19:00, Room A.121.
Course Summary
We have recently entered the era of multi-core architectures (in which several processors reside on each chip to achieve better performance), and multiprocessors now sit in every computer, from desktops and laptops to, soon, even cellular phones. Introducing as much parallelism as possible into software applications has therefore become increasingly urgent. A major obstacle in this direction is the difficulty of developing concurrent data structures and algorithms, a difficulty inherent in synchronizing processes that run concurrently.
The proliferation of multicore architectures is obviously a serious motivation for studying concurrent programming, but it is certainly not the only one. Distributed systems are nowadays widely used in all aspects of everyday life (by organizations and companies, universities and schools, the government, at home, etc.). Modern distributed applications should exhibit high performance and be highly fault-tolerant. The design of efficient, fault-tolerant, concurrent distributed algorithms plays a vital role in simplifying the task of programming a distributed system. However, designing such algorithms is not an easy task.
This course focuses on the design and analysis of distributed algorithms for multiprocessors that communicate via a shared memory. It studies fundamental models of distributed computing (to capture the needs of modern distributed applications), as well as major techniques for the design and analysis of distributed data structures and algorithms. The largest part of the course is devoted to synchronization; several different mechanisms for achieving it are studied. Many distributed implementations have been presented in the literature in a very informal way and without any proof of their correctness; as a result, the behavior of many of these algorithms is simply unpredictable. This course aims at providing the required theoretical foundations, without which it is impossible to design correct distributed algorithms, check the correctness of existing distributed implementations, capture all the details of their performance, discover their inherent limitations and establish optimality results, or determine whether certain design tradeoffs are indeed fundamental or simply artifacts of particular environments. The main objective of the course is to equip students with all the skills required for a rigorous and complete theoretical study of shared-memory multiprocessing systems.
This class is geared toward graduate students at all levels as well as advanced undergraduates.
Major objectives
- Understanding the major tools and techniques that allow programmers to effectively program the parts of the code that require substantial communication and synchronization;
- Studying the core ideas behind modern coordination paradigms and concurrent data structures;
- Introducing a variety of methodologies and approaches for reasoning about concurrent programs;
- Realizing not only the basic principles but also the best practice engineering techniques of concurrent computing;
- Identifying techniques to formally prove correctness of multiprocessor programs;
- Presenting techniques to formally study the progress properties of concurrent algorithms;
- Analyzing the performance of multiprocessor algorithms;
- Identifying limitations and impossibility results, which indicate where effort should not be spent when solving a task.
Program
Week | Subject |
---|---|
1st | Introduction - Synchronization: Characteristics of concurrent algorithms, Difficulties encountered in achieving parallelism, The harsh realities of parallelization, Amdahl's law, Modeling issues. |
2nd | Mutual Exclusion: Mutual exclusion using read/write registers, Algorithms for two processes (see the illustrative sketch after this table), Tournament algorithms. |
3rd | Mutual Exclusion: The Bakery algorithm, The Black-White algorithm, A tight space lower bound, The one-bit mutual exclusion algorithm. |
4th | Mutual Exclusion |
5th | Presentations by the students. |
6th | Practice. Spin locks and contention: Test-and-set locks, Exponential backoff, Queue locks. Concurrent objects - Correctness, progress and efficiency: Concurrency and correctness, Linearizability and sequential consistency, Progress conditions. |
7th | Foundations of Shared Memory: Wait-free Simulations of complex shared objects from simpler objects: Implementing a multi-valued SRSW register from binary SRSW registers, Implementing a MRSW register from multi-valued SRSW registers, Implementing a MRMW register from MRSW registers. |
8th | Foundations of Shared Memory: Snapshot objects: An obstruction-free snapshot implementation, A wait-free snapshot implementation that uses unbounded-size registers, Bounding the size of the registers, Single-scanner snapshots. |
9th | The relative power of primitive synchronization operations: The consensus object, Consensus numbers, Wait-free implementations of arbitrary objects, Universality of consensus, A lock-free universal construction, A wait-free universal construction. The wait-free hierarchy. |
10th | Universal Constructions. Concurrent Queues: A bounded partial queue, An unbounded total queue, An unbounded lock-free queue, Dual data structures, Memory reclamation and the ABA problem. Stacks: An unbounded lock-free stack, Elimination. |
11th | Safe memory reclamation and the ABA problem: Hazard pointers. Concurrent Stacks & Elimination: An unbounded lock-free stack (see the illustrative sketch after this table), The elimination-backoff stack. |
12th | Linked Lists: A simple linked list implementation. Software transactional memory (STM): Transactions and atomicity, Properties of STM implementation, An obstruction-free STM implementation. |
13th | Software transactional memory (STM): A lock-free STM implementation. Barriers: Sense-reversing barrier, Combining tree barrier, Static tree barrier, Termination-detecting barriers. |
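
To give a concrete flavour of the style of algorithms studied in the course, the following is a minimal, illustrative sketch of Peterson's two-process mutual exclusion algorithm (listed under week 2), written in Java in the spirit of the Herlihy & Shavit textbook used in the course. The class and method names are illustrative choices, not definitions from the lectures; the fields are declared volatile so that, in Java, their reads and writes behave like atomic read/write registers.

```java
// Peterson's two-thread mutual exclusion algorithm, using only read/write
// variables. Each thread is identified by 0 or 1. flag0/flag1 announce that
// a thread wants the critical section; victim records who yields on a tie.
public class PetersonLock {
    private volatile boolean flag0 = false; // thread 0 wants the critical section
    private volatile boolean flag1 = false; // thread 1 wants the critical section
    private volatile int victim = 0;        // the thread that waits on a tie

    public void lock(int id) {
        int other = 1 - id;
        if (id == 0) flag0 = true; else flag1 = true;   // announce interest
        victim = id;                                     // give priority to the other thread
        // Wait while the other thread is interested and we are the victim.
        while (victim == id && (other == 0 ? flag0 : flag1)) {
            // spin
        }
    }

    public void unlock(int id) {
        if (id == 0) flag0 = false; else flag1 = false;
    }

    // Small demo: two threads increment a shared counter in their critical sections.
    public static void main(String[] args) throws InterruptedException {
        PetersonLock lock = new PetersonLock();
        int[] counter = {0};
        Thread[] threads = new Thread[2];
        for (int id = 0; id < 2; id++) {
            final int myId = id;
            threads[id] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    lock.lock(myId);
                    counter[0]++;          // critical section
                    lock.unlock(myId);
                }
            });
            threads[id].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("Counter = " + counter[0]); // expected 200000
    }
}
```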
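
Similarly, the following is an illustrative Java sketch of the unbounded lock-free (Treiber-style) stack covered in weeks 10 and 11: push and pop retry a compare-and-set on the shared top pointer until it succeeds, so some operation always completes in a finite number of steps. The names used are illustrative; note that, because Java is garbage-collected and nodes are never reused here, this particular sketch sidesteps the ABA problem discussed in class.

```java
import java.util.concurrent.atomic.AtomicReference;

// Unbounded lock-free stack: the top pointer is updated with compare-and-set.
public class LockFreeStack<T> {

    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>(null);

    public void push(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> oldTop = top.get();
            node.next = oldTop;
            if (top.compareAndSet(oldTop, node)) {
                return;
            }
            // CAS failed because another thread changed top; retry.
        }
    }

    /** Returns the popped value, or null if the stack is empty. */
    public T pop() {
        while (true) {
            Node<T> oldTop = top.get();
            if (oldTop == null) {
                return null;                 // empty stack
            }
            if (top.compareAndSet(oldTop, oldTop.next)) {
                return oldTop.value;
            }
            // CAS failed; retry.
        }
    }

    // Small demo: two threads push concurrently, the main thread pops everything.
    public static void main(String[] args) throws InterruptedException {
        LockFreeStack<Integer> stack = new LockFreeStack<>();
        Runnable producer = () -> {
            for (int i = 0; i < 1000; i++) stack.push(i);
        };
        Thread a = new Thread(producer), b = new Thread(producer);
        a.start(); b.start();
        a.join(); b.join();

        int count = 0;
        while (stack.pop() != null) count++;
        System.out.println("Popped " + count + " elements"); // expected 2000
    }
}
```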
Language of instruction
The language of instruction is ordinarily Greek; however, lectures are given in English when foreign students attend the course.
Assignments and Marking
Throughout the semester, a set of theoretical assignments and a project will be handed out. Students will be given the chance to improve their theoretical assignments based on feedback provided by the instructor and the TA. Some of the assignments may ask the students to review one or more papers. A review consists of a two-page report containing a summary of the main points of the paper, a discussion explaining why the results are new/interesting/significant and how they relate to the topic(s) discussed in class, and the student's critique of the paper (more information on how to review a paper can be found in the Recommended Reading section below). One of the assignments may ask the students to study a section of a book chapter and present its contents in class.
The project involves reading one or two research papers from the literature, summarizing and critically reviewing the work, and suggesting some directions for future work relevant to the paper.
The final grade will be based on student performance on the homework assignments, the paper reviews, the project, and the take-home exam. The take-home exam is worked on by the students at home over a 24-hour period; it will be handed out on Saturday, February 1, 2014.
Course Emailing-list & Electronic Account
The course uses an emailing list for efficient communication between the instructor, the TA, and the students. To subscribe to the emailing list, send an email to the following address:

with empty subject and the following line as the email body:

Students must subscribe to the emailing list by October 4, 2008. The address of the emailing list is:

Prerequisites
Successful completion of CS380 (Algorithms and Complexity) is a prerequisite of this course.
Recommended Reading
Books:
- H. Attiya & J. Welch, Distributed Computing: Fundamentals, Simulations and Advanced Topics, Morgan Kaufmann, 1998.
- G. Taubenfeld, Synchronization Algorithms and Concurrent Programming, Pearson / Prentice Hall, 2006.
- M. Herlihy and N. Shavit, The Art of Multiprocessor Programming, Morgan Kaufmann, 2008.
- N. Lynch, Distributed Algorithms, Morgan Kaufmann, 1996.

Papers published in the following conferences:
- ACM Symposium on Principles of Distributed Computing (PODC)
- International Symposium on DIStributed Computing (DISC)
- ACM Symposium on Theory of Computing (STOC)
- IEEE Symposium on Foundations of Computer Science (FOCS)
- IEEE International Parallel & Distributed Processing Symposium (IPDPS)

Papers published in the following journals:
- Journal of the ACM
- SIAM Journal on Computing
- Distributed Computing
- Information and Computation
- IEEE Transactions on Parallel and Distributed Systems
- ACM Transactions on Programming Languages and Systems
- ACM Transactions on Computer Systems
- Journal of Parallel and Distributed Computing

How to review a paper in Theoretical Computer Science:
- Ian Parberry, A Guide for New Referees in Theoretical Computer Science
- A Manual for Authors of Mathematical Papers, by the American Mathematical Society
- Indicative Review Form for Research Papers in Theoretical Computer Science