This lesson is in the early stages of development (Alpha version)

HPC Parallelisation For Novices

High-performance computing (HPC) supercomputers have been around longer than many of their users today. The first supercomputer, the Cray-1, was installed at Los Alamos National Laboratory in 1976. Since then, the design of supercomputers has undergone several revolutions. Today, most universities and a growing part of industry across several domains exploit the computational power of clusters of interconnected servers.

These High-Performance Computing (HPC) clusters are used for large-scale data processing and analysis, scalable yet fine-grained parallel calculations, and compute-intensive simulations of ever-increasing fidelity. This course material introduces learners to the core principles of programming applications that can harness the full power of such machinery.

Please note that this lesson uses Python 3 without claiming that Python is the universal language for HPC. Python merely serves as a vehicle to convey concepts which, thanks to the intuitiveness of the language, should be easy to transfer to other languages and domains.

Prerequisites

This material targets current or future users of an HPC infrastructure from any discipline. Learners are expected to have programming skills beyond introductory courses and to know how to submit a batch job on an HPC cluster. Further, knowledge of how to write functions and declare variables in Python is required. Familiarity with basic NumPy array operations is beneficial but not required to follow the course.
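As a rough self-check, the sketch below illustrates the level of Python assumed by this lesson: defining a function, declaring variables, and a few basic NumPy array operations. The function name and values are made up for illustration only.

```python
import numpy as np

def rectangle_area(width, height):
    """Return the area of a rectangle (illustrative helper)."""
    return width * height

# declaring variables and calling a function
width = 3.0
height = 4.0
area = rectangle_area(width, height)
print(area)  # 12.0

# basic NumPy array operations (beneficial, not required)
values = np.arange(5)   # array([0, 1, 2, 3, 4])
doubled = values * 2    # element-wise multiplication
total = doubled.sum()   # 20
print(doubled, total)
```

If every line here looks familiar, you have the Python background this course expects.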

In other words, learners are expected to have skills equivalent to having completed an introductory programming course.

This lesson guides you through the basics of parallelisation on a computer cluster (also called a batch farm or supercomputer). If you are already comfortable using schedulers such as LSF, Slurm, or PBS/Pro and have written parallel applications to run on a cluster, you probably won't learn much from this lesson, but you are welcome to help others as a technical assistant or to contribute to this course material.

Schedule

Setup Download the files used in the lesson.
00:00 Recap: Changing the Environment How do I extend the software installation of an HPC cluster?
00:30 Estimation of Pi for Pedestrians How do I find the portion of a code snippet that consumes the most time?
01:20 Parallel Estimation of Pi for Pedestrians What are data-parallel algorithms?
How can I estimate the yield of parallelisation without writing a single line of code?
How do I use multiple cores on a computer in my program?
02:05 Higher levels of parallelism What were the key changes when using the multiprocessing library?
How could this be implemented with Dask?
How does a conversion using Dask's high-level API compare?
02:50 Searching for Pi How do I analyse a lot of large files efficiently?
03:35 Bonus session: Distributing computations among computers What is the Message Passing Interface (MPI)?
How do I exploit parallelism using the Message Passing Interface (MPI)?
04:30 Finish

The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.