HiPerCH 14 - Module 3

Starting with MPI


 

Content

Modern clusters are collections of individual (shared-memory) nodes connected via a high-performance network. Without direct access to memory on remote nodes, developers need efficient ways to communicate data between processes on different nodes.

MPI is the de-facto standard for programming such distributed-memory systems in high-performance computing (HPC), and it can even be used within a single cluster node. MPI provides various abstractions that help users create efficient multi-process applications.

This workshop covers the basics of MPI programming, from point-to-point and collective communication to the most important abstractions, such as communicators and datatypes. Lectures are interleaved with hands-on sessions.


 

Agenda

  • 09:00 - 10:30 Introduction: Overview - Basic concepts - Hands-on
  • 10:30 - 10:45 Break
  • 10:45 - 12:15 Point-to-point: Blocking communication - Non-blocking communication - Hands-on
  • 12:15 - 13:30 Lunch Break
  • 13:30 - 15:00 Blocking Collective communication: Hands-on
  • 15:00 - 15:15 Break
  • 15:15 - 16:45 Derived Datatypes Basics: Communicator handling - Hands-on

 

Trainer(s)

  • Marc-André Hermanns (RWTH Aachen)

 

Requirements

  • Good command of C/C++ or Fortran
  • For the hands-on sessions you will need a machine with MPI installed. This could be an HPC machine like Lichtenberg, or your local computer.
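If MPI is not yet installed locally, a standard Open MPI (or MPICH) package is usually sufficient for the exercises. The package names below are typical examples for common systems, not a fixed requirement of the course:

```shell
# Debian/Ubuntu: Open MPI runtime and compiler wrappers
sudo apt install openmpi-bin libopenmpi-dev

# macOS with Homebrew
brew install open-mpi

# Verify the installation: both commands should report a version
mpicc --version
mpirun --version
```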

Participating Universities