This course equips students with the knowledge and skills to develop fast algorithms and implement them at massive scale on modern supercomputers, using parallel programming techniques such as SIMD, OpenMP, MPI, and CUDA. It covers how to use various linear algebra libraries for parallel execution on both CPUs and GPUs, and includes tutorials on debugging and profiling in a massively parallel environment. Demonstrations of performance primitives and of building container environments on TSUBAME will also be given, along with tips on running deep learning frameworks on large GPU supercomputers.
By the end of this course, students will be able to:
1. Use SIMD vectorization, shared memory parallelization via OpenMP, and distributed memory parallelization via MPI
2. Program GPUs using OpenACC, CUDA, and HIP
3. Understand how high-performance numerical libraries function, and use them appropriately
4. Debug and profile code in a parallel environment by using parallel debuggers and profilers
5. Use containers and deep learning frameworks on massively parallel computers
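To give a flavor of outcome 1 before the course starts, here is a minimal sketch (not course material) of the idea behind vectorization, shown in Python with NumPy as in Class 8: the same computation written as a scalar loop and as a vectorized expression that NumPy dispatches to compiled, SIMD-capable kernels. The function names are illustrative only.

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Scalar loop: one multiply-add per Python-level iteration.
    out = np.empty_like(y)
    for i in range(len(y)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # NumPy evaluates this whole expression in compiled loops,
    # which the compiler can vectorize with SIMD instructions.
    return a * x + y

x = np.arange(1000, dtype=np.float32)
y = np.ones_like(x)
# Both forms produce the same result; the vectorized form is far faster at scale.
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vectorized(2.0, x, y))
```

The course applies the same principle at a lower level with SSE/AVX/AVX512 intrinsics (Class 4).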
Vectorization, Shared memory parallelism, Distributed memory parallelism, GPU programming, Python libraries, Matrix Multiplication, Linear solvers, Parallel debugger, Parallel profilers, Containers, Deep Learning
|Intercultural skills||Communication skills||✔ Specialist skills||✔ Critical thinking skills||✔ Practical and/or problem-solving skills|
The course will be taught online.
Sample codes will be prepared for each lecture, and exercises will be performed on TSUBAME.
|Course schedule||Required learning|
|Class 1||Introduction to parallel programming||Learn the basic concepts of parallel programming|
|Class 2||Shared memory parallelization||Use OpenMP to achieve shared memory parallelization|
|Class 3||Distributed memory parallelization||Use MPI to achieve distributed memory parallelization|
|Class 4||SIMD parallelization||Use SSE, AVX, and AVX512 to achieve SIMD vectorization|
|Class 5||GPU programming||Use OpenACC, CUDA, and HIP to program GPUs|
|Class 6||Parallel programming models||Use advanced parallel programming models such as StarPU, OmpSs, and Legion|
|Class 7||Cache blocking||Use BLISlab and cuBLAS as examples to practice cache blocking|
|Class 8||High performance Python||Understand how NumPy, CuPy, and other libraries can be used to accelerate Python code|
|Class 9||I/O libraries||Use NetCDF, HDF5, and MPI-IO to read from and write to large parallel file systems|
|Class 10||Parallel debugger||Use CUDA-GDB, Valgrind, and TotalView to debug parallel code|
|Class 11||Parallel profiler||Use gprof, VTune, PAPI, TAU, and Vampir to profile parallel code|
|Class 12||Containers||Use Singularity with Docker images to build container environments|
|Class 13||Scientific Computing||Learn how to discretize partial differential equations and parallelize the resulting system of equations|
|Class 14||Deep Learning||Use PyTorch to train a large neural network on a parallel computer|
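As a small preview of the shared-memory parallel pattern covered in Class 2 (taught there with OpenMP), here is a hedged sketch of the same fork-join idea using Python's standard-library multiprocessing module; the function names are illustrative and not part of the course material. Each worker reduces its own slice of the iteration space independently, and the partial results are combined at the end, just as an OpenMP reduction does.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker reduces its own chunk with no shared state.
    return sum(i * i for i in chunk)

def parallel_sum_of_squares(n, workers=4):
    # Strided decomposition of [0, n) into one chunk per worker,
    # followed by a serial reduction of the partial sums.
    chunks = [range(k, n, workers) for k in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # The parallel reduction matches the serial result.
    assert parallel_sum_of_squares(1000) == sum(i * i for i in range(1000))
```

In OpenMP the same decomposition and reduction would be expressed with a single `#pragma omp parallel for reduction(+:total)` directive.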
Evaluation is based on written reports (40%) and a final report (60%).
The Zoom link will be sent to registered students one day before the first lecture.