MPI & OpenMP Introductory Tutorial


A short (1 hour) presentation given at CITA on May 18th, 2007 can be found here.


Parallel Programming Overview

Why should one use parallel computing?
  • exploit the inherent parallelism in algorithms
    • faster processing of data
  • access to larger amounts of memory
    • capability of handling larger data sets
What are the major distinctions in parallel computer systems?
  • the locality of the system memory with respect to the processors dictates the approach required to implement parallel computing
    • Shared memory: all processors share the same address space
    • Distributed memory: processors and memory are distributed across a network; each node can see only its local address space

  • systems are often a combination of the two (e.g. a cluster of shared-memory nodes)
How is parallelism implemented?
  • shared memory (symmetric multiprocessing)
    • compiler directives, intrinsics, run-time library calls
  • distributed memory (message passing over a network)
    • subroutine/function calls to a message-passing library such as MPI
Are there specifications to parallel programming?
  • two main application programming interfaces (a minimal sketch contrasting them follows this list):
    • OpenMP (Open Multi-Processing)
      • shared-memory parallel programming in C/C++ and Fortran
    • MPI (Message Passing Interface)
      • C/C++ and Fortran library specification for message passing
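The sketch below contrasts the two interfaces in one program: OpenMP parallelism comes from a compiler directive that forks threads inside a process, while MPI parallelism comes from library calls that coordinate separate processes. It is a minimal illustration, not code from the tutorial; it assumes a C compiler with OpenMP support and an installed MPI implementation, and the file name hello_both.c is made up for the example.

/* Minimal sketch contrasting the two interfaces (not code from the tutorial).
 * Compile e.g.: mpicc -fopenmp hello_both.c -o hello_both
 * Run e.g.:     mpirun -np 2 ./hello_both
 */
#include <stdio.h>
#include <omp.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* MPI: explicit library calls set up and query the group of processes. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* OpenMP: a compiler directive forks threads within each MPI process. */
    #pragma omp parallel
    {
        printf("MPI process %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}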

Example: calculating pi

While calculating pi this way is not an important numerical method, it illustrates the use of both MPI and OpenMP, in both C and Fortran, in a concise manner.

Most algorithms require more effort to port to MPI than to OpenMP, and there are pitfalls in mixing the two (notably the thread safety of MPI calls).

The source code is commented and should provide the basic working knowledge necessary to get started with either paradigm. Examples of how to compile and run the code are contained in the logs.
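For orientation before looking at the logs, here is a minimal OpenMP C sketch of the calculation. It assumes the usual midpoint-rule integration of 4/(1+x^2) on [0,1] and is an illustration in the spirit of the tutorial code, not the tutorial's own source; the file name pi_omp.c is made up for the example.

/* pi by midpoint-rule integration of 4/(1+x^2) on [0,1], OpenMP version.
 * Compile e.g.: gcc -fopenmp pi_omp.c -o pi_omp
 * Run e.g.:     OMP_NUM_THREADS=4 ./pi_omp
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000;          /* number of integration intervals */
    const double h = 1.0 / (double)n;  /* interval width */
    double sum = 0.0;
    long i;

    /* Each thread accumulates a private partial sum; the reduction
     * clause combines the partial sums when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++) {
        double x = h * ((double)i + 0.5);   /* midpoint of interval i */
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi is approximately %.15f (using up to %d threads)\n",
           h * sum, omp_get_max_threads());
    return 0;
}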

OpenMP C version -- Compiling and Execution Log

MPI & OpenMP C version -- Compiling and Execution Log

OpenMP Fortran version -- Compiling and Execution Log

MPI & OpenMP Fortran version -- Compiling and Execution Log
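For the hybrid MPI + OpenMP case, the sketch below (same assumptions as above, and again not the tutorial's own source) splits the intervals across MPI ranks and then across OpenMP threads within each rank. All MPI calls are kept outside the OpenMP parallel region, which avoids the thread-safety pitfall mentioned earlier.

/* pi by midpoint rule, hybrid MPI + OpenMP version.
 * Compile e.g.: mpicc -fopenmp pi_hybrid.c -o pi_hybrid
 * Run e.g.:     mpirun -np 2 ./pi_hybrid   (with OMP_NUM_THREADS set on the nodes)
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const long n = 100000000;          /* number of integration intervals */
    const double h = 1.0 / (double)n;  /* interval width */
    double local = 0.0, pi = 0.0;
    int rank, size;
    long i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank takes a strided share of the intervals; inside a rank the
     * loop is split again across OpenMP threads. */
    #pragma omp parallel for reduction(+:local)
    for (i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);   /* midpoint of interval i */
        local += 4.0 / (1.0 + x * x);
    }

    /* Combine the partial sums from all ranks on rank 0 (outside any
     * OpenMP parallel region, so only one thread calls MPI). */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.15f\n", h * pi);

    MPI_Finalize();
    return 0;
}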

Resources

  • If you use CPU-intensive mathematical kernels in your calculations, you will benefit from pre-parallelized libraries such as FFTW, ScaLAPACK, MKL, CXML, and ESSL.

  • The LLNL OpenMP tutorial is recommended as a starting point for learning OpenMP.

  • The LAM website contains a number of good MPI tutorials; in particular, the NCSA Introduction to MPI tutorial is highly recommended.

  • The Wikipedia entry on parallel programming is also a good starting point.

  • For help with parallel programming issues, contact the CITA parallel programmer.