Four-day course in “Parallel Programming with MPI/OpenMP”

For the second time, the four-day course Parallel Programming with MPI/OpenMP, organised by Thomas Wüst (ITS SIS), met with great interest in August 2017. The 57 participants in this year’s course came from 12 different ETH departments (D-BAUG, D-BSSE, D-CHAB, D-ERDW, D-GESS, D-HEST, D-ITET, D-MATH, D-MATL, D-MAVT, D-PHYS and D-USYS) and one technology platform (NEXUS), from other Swiss and foreign universities and research institutes (including Harvard University and the Paul Scherrer Institute), as well as from private companies (Disney Research and Infineon).

This pleasing number (an increase of more than 50% compared to last year) demonstrates the great interest in, and need for, such an offering: one held centrally at ETH Zurich in an intensive, concentrated form.

The course was taught by Rolf Rabenseifner of the High Performance Computing Center Stuttgart (HLRS). Rolf Rabenseifner is an internationally recognised expert in parallel programming and a member of the MPI Steering Committee. He teaches parallel programming concepts and techniques at German and international universities, at research and high-performance computing centres (such as Forschungszentrum Jülich), and at numerous workshops and conferences (such as the Supercomputing Conference). It is therefore a special honour for ITS to have secured Rolf Rabenseifner for this course at ETH Zurich.

The course was aimed at beginners as well as advanced participants with some programming experience in C/C++ or Fortran, and set out to give a well-balanced insight (in the sense of “breadth and depth”) into the parallel programming models MPI and OpenMP over four intensive days. Optionally, participants could register for only the first two (beginner level) or the last two (advanced level) days of the course; the large majority, however, chose the full four-day programme. Given the participants’ varied backgrounds, expertise and requirements, ranging from a first “exploration” of parallel programming on 10 CPU cores to scaling applications to more than 10,000 CPU cores, meeting these different needs was quite a challenge; Rolf Rabenseifner, however, mastered it confidently with his didactic skill.

MPI and OpenMP

MPI (“Message Passing Interface”) is a standard first published in 1994 and continually revised since, which describes the exchange of messages during parallel computations on distributed computer systems. An MPI application generally consists of several communicating processes, which are all started in parallel at the beginning of the program execution. These processes then work together on a problem and exchange data via messages sent from one process to another. An advantage of this principle is that the message exchange also works across node boundaries (inter-node communication).1
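To illustrate the message-passing idea (a minimal sketch written for this article, not material from the course), the following C program sends a single integer from one MPI process to another:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);                  /* all processes start here in parallel */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* each process learns its own rank */

        if (rank == 0) {
            value = 42;
            /* process 0 sends one integer to process 1 (message tag 0) */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* process 1 receives the integer from process 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper compiler (e.g. mpicc) and launched with at least two processes (e.g. mpirun -np 2 ./a.out), the same code works whether the two processes run on the same node or on different nodes.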

OpenMP (“Open Multi-Processing”) is a programming interface (API) that has been developed jointly by various hardware and compiler manufacturers since 1997 for shared-memory programming in C/C++ and Fortran on multiprocessor computers. OpenMP is used on systems with shared main memory (“shared memory”), whereas MPI is more commonly used to parallelise distributed systems (“distributed memory”). In modern high-performance computers and supercomputers, OpenMP and MPI are often used together.1
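As a comparable sketch (again illustrative only, not course material), OpenMP parallelises a loop across the threads of a shared-memory node with a single compiler directive:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* the loop iterations are distributed across all available threads;
           the reduction clause safely combines the per-thread partial sums */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += 1.0 / (i + 1);
        }

        printf("Partial harmonic sum computed with up to %d threads: %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }

Built with an OpenMP-enabled compiler (e.g. gcc -fopenmp), the number of threads is typically controlled via the OMP_NUM_THREADS environment variable.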

Despite more recent programming developments that can simplify certain parallelisation tasks on modern hardware architectures, MPI and OpenMP still represent the “workhorses” of parallel programming. This is due in part to their generic and versatile applicability, and in part to the large number of important scientific “legacy” codes that use them.

Hands-on at the Euler HPC Cluster

In addition to teaching sometimes complex theoretical concepts, the four-day course also offered important and necessary hands-on exercises, which the participants were able to carry out directly on the ETH high-performance computing cluster Euler. Euler offers various MPI implementations and OpenMP-capable compilers for C/C++ and Fortran, which are kept up to date by the HPC team, and is ideally equipped for parallel applications thanks to its powerful InfiniBand interconnect.
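To give a flavour of what such an exercise can look like (a hypothetical sketch written for this article, not an actual exercise from the course), a hybrid “hello world” combines both models, with OpenMP threads running inside each MPI process:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nranks, provided;

        /* request thread support, since OpenMP threads live inside each MPI process */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        #pragma omp parallel
        {
            printf("Hello from thread %d of %d in MPI rank %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads(), rank, nranks);
        }

        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with an MPI wrapper compiler plus the OpenMP flag (e.g. mpicc -fopenmp) and launched with one MPI process per node or socket and several OpenMP threads per process; the exact module names and batch-system commands used on Euler are not covered here and may differ.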

Scheduling for 2018

ITS SIS plans to offer the course again in 2018. “Unfortunately”, the course is already reaching its capacity limit of 60 participants, on the one hand because of the currently limited room facilities (the computing lab was kindly provided by ISG D-USYS), and on the other hand to ensure high-quality knowledge transfer and to be able to assist the participants with their exercises. We are curious to see how great the interest will be next year.

 

[Photo] Great concentration despite somewhat limited space and summer temperatures during the four-day course in parallel programming in August 2017 (Rolf Rabenseifner standing, in a white-and-grey chequered shirt).

 

[Photo] Blocking vs. non-blocking point-to-point communication “made tangible”…

References

  1. MPI Forum
  2. Message Passing Interface Tutorial
  3. OpenMP API Specification
  4. OpenMP Tutorial

1. Texts adapted in modified and abridged form from Wikipedia.

 
