# Advanced MPI & parallel I/O
This note contains up-to-date information during the course.
## Important links
- [Zoom](https://cscfi.zoom.us/j/65147792676)
- [RocketChat](https://chat.csc.fi/invite/ChYw5h)
- [Course github](https://github.com/csc-training/advanced-mpi)
- [Course home page](https://events.prace-ri.eu/e/AdvancedMPIprogramming-parallelI/O_CSC_SEPT-2021)
- [Lecture slides](https://events.prace-ri.eu/event/1224/attachments/1629/3090/Lecture%20slides_Advanced%20MPI%20Programming%20and%20Parallel%20I_O%20%40%20CSC%20%28PTC%20%7C%20ONLINE%29%2016.9-17.9.2021.pdf)
## General instructions
- During the lectures, you can ask questions via microphone or Zoom chat
- During the hands-on sessions, ask questions in RocketChat (please use multiline formatting for error messages and code snippets).
- Complex questions with screen sharing etc. can be discussed in a private break-out room in Zoom.
## Exercises for current session
Parallel I/O with POSIX (see the sketch below)
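As a warm-up for the exercises, here is a minimal sketch of the common "spokesperson" pattern for POSIX I/O in an MPI program: every rank sends its data to rank 0, which writes a single file with plain stdio calls. The file name `output.dat` and the per-rank buffer size are made up for illustration; see the course GitHub repository for the actual exercises.
```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define LOCALSIZE 8   /* per-rank data size, chosen arbitrarily for this sketch */

int main(int argc, char *argv[])
{
    int rank, ntasks;
    int local[LOCALSIZE];
    int *full = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    for (int i = 0; i < LOCALSIZE; i++)
        local[i] = rank;   /* dummy payload */

    /* Spokesperson pattern: collect all data on rank 0 ... */
    if (rank == 0)
        full = malloc(ntasks * LOCALSIZE * sizeof(int));
    MPI_Gather(local, LOCALSIZE, MPI_INT, full, LOCALSIZE, MPI_INT, 0,
               MPI_COMM_WORLD);

    /* ... which writes a single file with ordinary POSIX/stdio calls */
    if (rank == 0) {
        FILE *fp = fopen("output.dat", "wb");   /* hypothetical file name */
        fwrite(full, sizeof(int), ntasks * LOCALSIZE, fp);
        fclose(fp);
        free(full);
    }

    MPI_Finalize();
    return 0;
}
```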
## Agenda
| Thursday | Advanced MPI|
| -------- | -------- |
|09:00 - 09:45 | Using own communicators|
|09:45 - 10:30 | Exercises|
|11:00 - 11:30 | Advanced communication|
|11:30 - 12:15 | Exercises|
|12:15 - 13:00 | **Lunch break**|
|13:00 - 13:45 | User-defined datatypes, part 1|
|13:45 - 14:30 | Exercises|
|14:30 - 14:45 | Break|
|14:45 - 15:15 | User-defined datatypes, part 2|
|15:15 - 16:15 | Exercises|
|16:15 - 16:30 | Wrap-up|
| Friday | Parallel I/O |
| -------- | -------- |
|09:00 - 10:00 | Introduction to Parallel I/O and Simple POSIX with MPI|
|10:00 - 10:10 | Break |
|10:10 - 11:00 | MPI-IO |
|11:00 - 12:00 | Exercises|
|12:00 - 13:00 | **Lunch break**|
|13:00 - 14:30 | Exercises|
|14:30 - 15:15 | Parallel I/O with HDF5 |
|15:15 - 16:00 | Exercises|
|16:00 - 16:30 | Wrap-up |
## Useful online material
- [MPI reference](https://www.rookiehpc.com/mpi/docs/index.php)
- [Official MPI standards](https://www.mpi-forum.org/docs/)
- [HDF5 C API examples](https://support.hdfgroup.org/HDF5/examples/api-c.html)
## Free discussion
Feel free to add any general remarks, tips, tricks, comments etc. here. For questions during the exercise sessions, however, please use RocketChat, as it will be monitored more frequently.
## Introductory poll
1. My choice of programming language
A. Fortran
B. plain C
C. C++
A. xxxxxx
B. xxxxxxxxxxxx
C. xxxxxxxxxxxxxxxxxxxx
2. My previous experience with MPI
A. I participated in the "Parallel programming with MPI" course
B. I participated in some other MPI course
C. I've learned MPI on my own
A. xxxxxxxxxxxxxxxxxxxxxx
B. xxxxxxxx
C. xxxxxxxxxxxx
3. My motivation for learning MPI
A. I am developing an HPC application
B. I am using an MPI application and want to understand it better
C. General interest in parallel programming
A. xxxxxxxxxxxxxxx
B. xxxxxxxxxx
C. xxxxxxxxxxxxxxxxxx
## Quiz: MPI recap
1. What is MPI?
A. the Message Passing Interface
B. the Miami Police Investigators
C. the Minimal Polynomial Instantiation
D. the Millipede Podiatry Institution
E. a way of doing distributed memory parallel programming
A.xxxxxxxxxxxxxxxxxxxxxxxxxxx
B.
C.
D.
E.xxxxxxxxxxxxxxxxxxx
2. How is a parallel MPI program executed?
A. As a set of identical, independent processes
B. Program starts serially, and then spawns and closes threads
C. My MPI programs just crash :-(
D. Each MPI task runs a different program with different source code
A.xxxxxxxxxxxxxxxxxxxxxxxxxxx
B.x
C.
D.
3. After initiating an MPI program with "mpiexec -n 4 ./my_mpi-program", what does the call to MPI_Init() do?
A. create the 4 parallel processes
B. start program execution
C. enable the 4 independent programs subsequently to communicate with each other
D. create the 4 parallel threads
A.x
B.
C.xxxxxxxxxxxxxxxxxxxxxxxxx
D.
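To make answers 2A and 3C concrete, here is a minimal hello-world style sketch: `mpiexec` starts the identical processes, and `MPI_Init` is what subsequently enables them to communicate, e.g. to learn their rank and the communicator size.
```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, ntasks;

    /* The 4 processes already exist once mpiexec has launched them;
       MPI_Init only enables them to communicate with each other. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* who am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);  /* how many of us? */

    printf("Hello from rank %d of %d\n", rank, ntasks);

    MPI_Finalize();
    return 0;
}
```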
4. If you call MPI_Recv and there is no incoming message, what happens?
A. the Recv fails with an error
B. the Recv reports that there is no incoming message
C. the Recv waits until a message arrives (potentially waiting forever)
D. the Recv times out after some system-specified delay (e.g. a few minutes)
A.
B.
C.xxxxxxxxxxxxxxxxxxxxxxxxxx
D.x
5. If you call MPI_Send and there is no matching receive, which of the following are possible outcomes?
A. the message disappears
B. the send fails with an error
C. the send waits until a receive is posted (potentially waiting forever)
D. the message is stored and delivered later on (if possible)
E. the send times out after some system-specified delay (e.g. a few minutes)
F. the program continues execution regardless of whether the message is received
A.
B.
C.xxxxxxxxxxxxxxxxxxxxxx
D.xxxxxxxxxxxxxx
E.
F.
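A sketch illustrating answers 4C, 5C and 5D: when two ranks both send first and receive second, the program either completes (the implementation buffers the message, outcome D) or deadlocks in `MPI_Send` waiting for the matching receive (outcome C). The message size `N` is arbitrary; larger messages make buffering less likely.
```c
#include <stdio.h>
#include <mpi.h>

#define N 1024   /* arbitrary message size */

int main(int argc, char *argv[])
{
    int rank, other, sbuf[N], rbuf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                /* assumes exactly 2 ranks */
    for (int i = 0; i < N; i++)
        sbuf[i] = rank;

    /* Both ranks send first: this may complete (message buffered) or
       deadlock (MPI_Send blocks until the receive is posted). */
    MPI_Send(sbuf, N, MPI_INT, other, 0, MPI_COMM_WORLD);
    MPI_Recv(rbuf, N, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Rank %d done\n", rank);
    MPI_Finalize();
    return 0;
}
```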
7. Which of the following statements do you agree with regarding this code:
```c
for (i = 0; i < size; i++)
{
    if (rank == i)
    {
        printf("Hello from rank %d\n", rank);
        j = 10 * i;
    }
}
```
A. The for loop ensures the operations are in order: rank 0, then rank 1, ...
B. The for loop ensures the operations are done in parallel across all processes
C. The for loop is entirely redundant
D. The final value of j will be equal to 10*(size-1)
A.
B.
C.xxxxxxxxxxxxxxxxxxxxx
D.xx
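Our reading of why C gets the votes: every rank executes the whole loop, but `rank == i` holds exactly once, and the loop imposes no ordering between processes (each rank runs independently, so the print order is up to the runtime). On each rank the snippet therefore reduces to this fragment (same free variables `rank` and `j` as the quiz code):
```c
/* equivalent to the loop above: the condition only holds when i == rank */
printf("Hello from rank %d\n", rank);
j = 10 * rank;   /* final j is 10*rank, not 10*(size-1) as option D suggests */
```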
10. What is the outcome of the following code snippet when run with 4 processes?
```fortran
a(:) = my_id
call mpi_gather(a, 2, MPI_INTEGER, aloc, 2, MPI_INTEGER, 3, MPI_COMM_WORLD, rc)
if (my_id==3) print *, aloc(:)
```
A. "0 1 2 3"
B. "2 2 2 2 2 2 2 2"
C. "0 0 1 1 2 2 3 3"
D. "0 1 2 3 0 1 2 3"
A.
B.xxxx
C.xxxxxxxxxxxx
D.
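For reference, the same gather written in C: `MPI_Gather` places each rank's 2-element block into the receive buffer on root rank 3 in rank order, giving "0 0 1 1 2 2 3 3" (option C).
```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int my_id;
    int a[2], aloc[8];     /* 4 ranks x 2 elements each */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    a[0] = a[1] = my_id;   /* a(:) = my_id */

    /* Root rank 3 receives the blocks ordered by sender rank */
    MPI_Gather(a, 2, MPI_INT, aloc, 2, MPI_INT, 3, MPI_COMM_WORLD);

    if (my_id == 3) {
        for (int i = 0; i < 8; i++)
            printf("%d ", aloc[i]);   /* prints: 0 0 1 1 2 2 3 3 */
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}
```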
11. What is the outcome of the following code snippet when run with 8 processes, i.e. on ranks 0, 1, 2, 3, 4, 5, 6, 7?
```c
if (rank % 2 == 0) { // Even processes
MPI_Allreduce(&rank, &evensum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
if (0 == rank) printf("evensum = %d\n", evensum);
} else { // odd processes
MPI_Allreduce(&rank, &oddsum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
if (1 == rank) printf("oddsum = %d\n", oddsum);
}
```
A. evensum = 16, oddsum = 12
B. evensum = 28, oddsum = 28
C. evensum = 12, oddsum = 16
D. evensum = 6, oddsum = 2
A.x
B.xxxxxxxxxxx
C.xx
D.
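Why B: collectives match by communicator and call order, not by source line, so all 8 ranks participate in the same `MPI_Allreduce` over `MPI_COMM_WORLD`, and both `evensum` and `oddsum` receive 0+1+...+7 = 28. To get the presumably intended evensum = 12 and oddsum = 16 (option C), the even and odd ranks need separate communicators, e.g. via `MPI_Comm_split`. A minimal sketch:
```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, sum;
    MPI_Comm subcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split MPI_COMM_WORLD by parity: color 0 = even ranks, color 1 = odd ranks */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subcomm);

    /* Now each reduction runs only within its own sub-communicator */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, subcomm);

    if (rank == 0) printf("evensum = %d\n", sum);   /* 0+2+4+6 = 12 */
    if (rank == 1) printf("oddsum = %d\n", sum);    /* 1+3+5+7 = 16 */

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}
```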