MPI
22/6/22

Contents
1  Introduction
2  Running MPI programs
3  Parallelizing a Monte Carlo calculation of π
4  Input and output
   4.1  Standard input
   4.2  File output
   4.3  Binary output (Gather instead of Allreduce)
5  Point-to-point communication
   5.1  Send and Recv
   5.2  Non-blocking communication
6  Summary

1  Introduction

Recent machines have many CPU cores, and a program has to be parallelized to make use of them.

Suppose a job takes time T_1 with one process and T_n when it is divided over n processes. The parallelization efficiency α is defined as

    α = T_1 / (n T_n)        (1)

α = 1 corresponds to 100% efficiency: with four processes the job runs four times faster. (For example, if one process needs T_1 = 120 s and four processes need T_4 = 40 s, then α = 120/(4 × 40) = 0.75.)

There are two common ways to parallelize a program: OpenMP and MPI. OpenMP is thread-based, shared-memory parallelism within a single process, while MPI (Message Passing Interface) runs several processes that exchange data by passing messages. Widely used MPI implementations are MPICH and OpenMPI (note that, despite the similar name, OpenMPI is an implementation of MPI and has nothing to do with OpenMP). This note deals with MPI. For a trivially parallel job (trivial parallelization), in which the processes never need to communicate, the efficiency is essentially 100%. The examples below were run on Mac OS X.

2  Running MPI programs

The shell command echo prints its argument once:

$ echo Hello
Hello

Running the same command through mpirun starts several copies of it:

$ mpirun -np 2 echo Hello
Hello
Hello

The -np option of mpirun sets the number of processes: with -np 2, Hello is printed twice, and with -np 4 it would be printed four times.

An MPI program is an SPMD (Single Program Multiple Data) program: the same executable is started as several processes, and each process decides what to do from its own rank. Save the following as mpitest.cc.

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("My rank = %d\n", rank);
    MPI_Finalize();
}

Compile it with mpic++ and run it with mpirun:

$ mpic++ mpitest.cc
$ mpirun -np 4 ./a.out
My rank = 0
My rank = 1
My rank = 2
My rank = 3

mpi.h declares the MPI functions for C/C++. Every MPI program must call MPI_Init before using any other MPI function and MPI_Finalize at the end; MPI_Init is normally the first thing done in main. All MPI functions and constants begin with MPI_. MPI_Comm_rank stores into rank the rank of the calling process within the communicator MPI_COMM_WORLD; the rank identifies each process, and this is how one and the same program can behave differently on every process. Unlike OpenMP, where threads share memory, MPI processes have separate address spaces and can only interact by exchanging messages. The compiler wrapper is mpic++ on Mac OS X (mpcc on IBM AIX machines); it simply drives an ordinary compiler such as g++, icc or icpc with the right options, so one can also compile and link by hand, for example

$ icpc mpitest.cc -I/usr/local/include -L/usr/local/lib -lmpich -lrt

(the include and library paths depend on the installation).
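Not part of the original text: when checking how mpirun spreads the processes over the machine, it can be handy to let each rank also report the host it runs on. MPI provides MPI_Get_processor_name for this; a minimal sketch:

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);   // name of the node this rank runs on
    printf("My rank = %d on %s\n", rank, name);
    MPI_Finalize();
    return 0;
}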

3  Parallelizing a Monte Carlo calculation of π

As a working example, let us parallelize a Monte Carlo estimate of π with MPI. List 1 is the serial program: it throws trial random points into the unit square and counts how many fall inside the quarter circle.

List 1: Monte Carlo estimate of π (serial)
#include <cstdio>
#include <cstdlib>

double myrand(void){
    return (double)rand()/(double)RAND_MAX;
}

double calc_pi(int seed, int trial){
    srand(seed);
    int n = 0;
    for(int i=0;i<trial;i++){
        double x = myrand();
        double y = myrand();
        if(x*x + y*y < 1.0){
            n++;
        }
    }
    return 4.0*(double)n/(double)trial;
}

int main(int argc, char **argv){
    double pi = calc_pi(1,1000000);
    printf("%f \n",pi);
}

calc_pi takes a random seed seed and a number of trials trial and returns the estimate of π. Running it:

$ ./a.out
3.142096

To turn this into an MPI program we have to

1. include mpi.h;
2. call MPI_Init and MPI_Finalize in main;
3. obtain the rank of each process with MPI_Comm_rank;
4. pass the rank to calc_pi as the seed, so that every process uses a different random sequence.

Carrying out these steps gives List 2.

List 2: parallel version (1)
#include <cstdio>
#include <cstdlib>
#include <mpi.h>

double myrand(void){
    return (double)rand()/(double)RAND_MAX;
}

double calc_pi(int seed, int trial){
    srand(seed);
    int n = 0;
    for(int i=0;i<trial;i++){
        double x = myrand();
        double y = myrand();
        if(x*x + y*y < 1.0){
            n++;
        }
    }
    return 4.0*(double)n/(double)trial;
}

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double pi = calc_pi(rank,1000000);
    printf("rank=%d: pi = %f \n",rank,pi);
    MPI_Finalize();
}

With four processes the output is

rank=0: pi = 3.139268
rank=3: pi = 3.142288
rank=1: pi = 3.142096
rank=2: pi = 3.139256

Every rank uses its own seed, so every rank obtains a slightly different estimate, and the lines may appear in any order. The total number of processes can be obtained with MPI_Comm_size; List 3 changes main so that it is printed as well.

List 3: printing the rank and the number of processes
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    double pi = calc_pi(rank,1000000);
    printf("rank=%d/%d pi = %f \n",rank,size,pi);
    MPI_Finalize();
}

Running it with two and then with four processes:

$ mpirun -np 2 ./a.out
rank=0/2 pi = 3.139268
rank=1/2 pi = 3.142096

$ mpirun -np 4 ./a.out
rank=1/4 pi = 3.142096
rank=0/4 pi = 3.139268
rank=2/4 pi = 3.139256
rank=3/4 pi = 3.142288
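An aside that is not in the original text: the times T_1 and T_n appearing in Eq. (1) can be measured from inside the program with MPI_Wtime(), which returns wall-clock time in seconds. A minimal sketch, assuming myrand() and calc_pi() from List 1 are compiled in the same file:

#include <cstdio>
#include <mpi.h>

double calc_pi(int seed, int trial);   // defined as in List 1

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double t0 = MPI_Wtime();           // wall-clock time before the work
    double pi = calc_pi(rank, 1000000);
    double t1 = MPI_Wtime();           // wall-clock time after the work
    printf("rank=%d pi=%f elapsed=%f s\n", rank, pi, t1 - t0);
    MPI_Finalize();
    return 0;
}

Timing a run with one process and a run with n processes for the same total amount of work gives the T_1 and T_n needed for Eq. (1).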

So far each process only prints its own estimate. The natural next step is to combine the estimates of all ranks, for example by averaging them. MPI provides collective communication for this; here we use MPI_Allreduce (List 4).

List 4: averaging the estimates with MPI_Allreduce
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    double pi = calc_pi(rank,1000000);
    printf("rank=%d/%d pi = %f \n",rank,procs,pi);
    MPI_Barrier(MPI_COMM_WORLD);   /* synchronize all processes before the reduction */
    double sum = 0;
    MPI_Allreduce(&pi, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    sum = sum / (double)procs;
    if (0==rank){
        printf("average = %f\n",sum);
    }
    MPI_Finalize();
}

With four processes:

rank=0/4 pi = 3.139268
rank=1/4 pi = 3.142096
rank=2/4 pi = 3.139256
rank=3/4 pi = 3.142288
average = 3.140727

The average is printed by rank 0 only (otherwise the same line would appear once per process). MPI_Allreduce is declared as

MPI_Allreduce(void* senddata, void* recvdata, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

senddata is the address of the data each process contributes and recvdata the address where the result is stored; count is the number of elements, datatype their type (MPI_DOUBLE for double), op the reduction operation (here MPI_SUM), and comm the communicator, MPI_COMM_WORLD. MPI_Allreduce delivers the result to every process.
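The op argument is not limited to MPI_SUM. As a small sketch that is not in the original text, the same call pattern with MPI_MAX gives every rank the largest of the per-rank values:

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double value = (double)rank;   // stand-in for some per-rank result
    double max = 0.0;
    // Same arguments as in List 4, but with MPI_MAX as the reduction operation.
    MPI_Allreduce(&value, &max, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
    printf("rank = %d: max = %f\n", rank, max);
    MPI_Finalize();
    return 0;
}

Every rank ends up with the largest value; other predefined operations include MPI_MIN and MPI_PROD.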

In List 4 each process passes the address of its own pi as senddata and receives the total in sum; after the call, sum holds the sum of all estimates on every process, so every process could compute the average. If the result is needed on one process only, MPI_Reduce can be used instead: it takes one extra argument, the rank of the root process that is to receive the result, and only that process gets it. List 5 is the MPI_Reduce version.

List 5: averaging the estimates with MPI_Reduce
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    double pi = calc_pi(rank,1000000);
    printf("rank=%d/%d pi = %f \n",rank,procs,pi);
    MPI_Barrier(MPI_COMM_WORLD);
    double sum = 0;
    MPI_Reduce(&pi, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (0==rank){
        sum = sum / (double)procs;
        printf("average = %f\n",sum);
    }
    MPI_Finalize();
}

The sixth argument (0) is the root rank: only rank 0 receives the sum, so only rank 0 divides by the number of processes and prints the average; on the other ranks sum keeps its initial value 0. When every process needs the result, use MPI_Allreduce. Note that in C/C++ the buffers are passed by address (&); in Fortran the variables themselves are passed.

4  Input and output

4.1  Standard input

Reading an integer from standard input in C looks like List 6.

List 6: reading standard input (C)
#include <cstdio>

int main(int argc, char **argv){
    int value = 0;
    scanf("%d",&value);
    printf("value = %d\n",value);
}

The same in C++ (List 7):

List 7: reading standard input (C++)
#include <iostream>

int main(int argc, char **argv){
    int value = 0;
    std::cin >> value;
    std::cout << "value = " << value << std::endl;
}

$ ./a.out
123
value = 123

Typing 123 prints value = 123. In an MPI run, standard input should be read by one process only: rank 0 reads the value and then broadcasts it to all other ranks (List 8).

List 8: broadcasting the value read on rank 0
#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    int value = 0;
    if(0 == rank){
        scanf("%d",&value);
    }
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank = %d: value = %d\n",rank, value);
    MPI_Finalize();
}

The broadcast is done by MPI_Bcast:

MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)

buffer is the address of the data (on the root it contains the value to send; on every other rank it receives the value), count the number of elements, datatype their type, root the rank that owns the data (here 0), and comm the communicator. Running with four processes and typing 123, rank 0 reads the value and afterwards every rank has it:

rank = 0: value = 123
rank = 1: value = 123
rank = 2: value = 123
rank = 3: value = 123

In practice one usually has to read several parameters of different types, for example an integer random seed and a double temperature. A convenient way is to pack them into a structure, here called parameter, and broadcast the whole structure in one call (List 9).

List 9: broadcasting a structure
#include <cstdio>
#include <mpi.h>

struct parameter{
    int seed;
    double temperature;
};

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    parameter param;
    if(0 == rank){
        scanf("%d",&param.seed);
        scanf("%lf",&param.temperature);
    }
    MPI_Bcast(&param, sizeof(param), MPI_BYTE, 0, MPI_COMM_WORLD);
    printf("rank = %d: seed = %d temperature = %f\n",rank, param.seed, param.temperature);
    MPI_Finalize();
}

The structure is broadcast as a block of raw bytes: the length is obtained with sizeof and the datatype is MPI_BYTE. Typing 123 and 0.7 gives

rank = 0: seed = 123 temperature = 0.700000
rank = 1: seed = 123 temperature = 0.700000
rank = 2: seed = 123 temperature = 0.700000
rank = 3: seed = 123 temperature = 0.700000

Instead of typing 123 and 0.7 every time, the input can be put into a file, say input.cfg, and redirected:

$ cat input.cfg
123
0.7

$ mpirun -np 4 ./a.out < input.cfg
rank = 0: seed = 123 temperature = 0.700000
rank = 1: seed = 123 temperature = 0.700000
rank = 2: seed = 123 temperature = 0.700000
rank = 3: seed = 123 temperature = 0.700000

The C++ version with iostreams is List 10.

List 10: broadcasting a structure (C++)
#include <iostream>
#include <mpi.h>

struct parameter{
    int seed;
    double temperature;
};

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    parameter param;
    if(0 == rank){
        std::cin >> param.seed;
        std::cin >> param.temperature;
    }
    MPI_Bcast(&param, sizeof(param), MPI_BYTE, 0, MPI_COMM_WORLD);
    std::cout << "rank = " << rank;
    std::cout << " seed = " << param.seed;
    std::cout << " temperature = " << param.temperature << std::endl;
    MPI_Finalize();
}
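An aside that is not in the original text: broadcasting the structure as MPI_BYTE works because all processes run the same binary, but MPI can also be told the actual layout of the structure with a derived datatype (MPI_Type_create_struct). A minimal sketch, with the parameter values hard-coded instead of read from standard input:

#include <cstddef>   // offsetof
#include <cstdio>
#include <mpi.h>

struct parameter{
    int seed;
    double temperature;
};

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Describe the layout of struct parameter to MPI.
    int blocklen[2] = {1, 1};
    MPI_Aint disp[2];
    disp[0] = offsetof(parameter, seed);
    disp[1] = offsetof(parameter, temperature);
    MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
    MPI_Datatype param_type;
    MPI_Type_create_struct(2, blocklen, disp, types, &param_type);
    MPI_Type_commit(&param_type);

    parameter param = {0, 0.0};
    if(0 == rank){ param.seed = 123; param.temperature = 0.7; }  // or read them as in List 9
    MPI_Bcast(&param, 1, param_type, 0, MPI_COMM_WORLD);
    printf("rank = %d: seed = %d temperature = %f\n", rank, param.seed, param.temperature);

    MPI_Type_free(&param_type);
    MPI_Finalize();
    return 0;
}

A derived datatype keeps working even when MPI has to convert the data representation between heterogeneous machines, which MPI_BYTE does not.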

4.2  File output

Next, output. Each process holds some data that we want to save to disk. One option is one file per process (conf.000.dat, conf.001.dat, ...); the other is a single shared file. This subsection writes everything, in rank order, to one file. List 11 is the serial starting point: it fills an array and writes it to data.dat as text; List 12 is the same in C++.

List 11: writing an array to a file (C)
#include <cstdio>

int main(void){
    const int SIZE = 10;
    int array[SIZE];
    for(int i=0;i<SIZE;i++){
        array[i] = i;
    }
    FILE *fp = fopen("data.dat","w");
    for(int i=0;i<SIZE;i++){
        fprintf(fp,"%d\n",array[i]);
    }
    fclose(fp);
}

List 12: writing an array to a file (C++)
#include <iostream>
#include <fstream>

int main(void){
    const int SIZE = 10;
    int array[SIZE];
    for(int i=0;i<SIZE;i++){
        array[i] = i;
    }
    std::ofstream ofs("data.dat");
    for(int i=0;i<SIZE;i++){
        ofs << array[i] << std::endl;
    }
}

$ ./a.out
$ cat data.dat
0
1
2
3
4
5
6
7
8
9

The contents of array end up in data.dat. Now run in parallel and let every process append its data to the same file, in rank order starting from rank 0; here each process holds SIZE = 2 elements. In List 13, rank 0 first truncates the file, and then the processes take turns: in every round of the loop all processes meet at an MPI_Barrier and only the process whose rank equals the loop counter appends its data.

List 13: appending to a single file in rank order (C)
#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    const int SIZE = 2;
    int array[SIZE];
    for(int i=0;i<SIZE;i++){
        array[i] = rank;
    }
    FILE *fp;
    if(0 == rank){
        fp = fopen("data.dat","w");   /* truncate the file */
        fclose(fp);
    }
    for(int j=0;j<procs;j++){
        MPI_Barrier(MPI_COMM_WORLD);
        if(j != rank)continue;
        fp = fopen("data.dat","a");   /* this rank's turn to append */
        for(int i=0;i<SIZE;i++){
            fprintf(fp,"%d\n",array[i]);
        }
        fclose(fp);
    }
    MPI_Finalize();
}

Running List 13 with four processes:

$ cat data.dat
0
0
1
1
2
2
3
3

The data have been written to data.dat in rank order, starting with rank 0. The same thing in C++ is List 14.

List 14: appending to a single file in rank order (C++)
#include <iostream>
#include <fstream>
#include <mpi.h>

int main(int argc, char** argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    const int SIZE = 2;
    int array[SIZE];
    for(int i=0;i<SIZE;i++){
        array[i] = rank;
    }
    if(0==rank){
        std::ofstream ofs("data.dat");   // truncate the file
        ofs.close();
    }
    for(int j=0;j<procs;j++){
        MPI_Barrier(MPI_COMM_WORLD);
        if(j!=rank)continue;
        std::ofstream ofs("data.dat",std::ios::app);
        for(int i=0;i<SIZE;i++){
            ofs << array[i] << std::endl;
        }
        ofs.close();
    }
    MPI_Finalize();
}
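This subsection used a single shared file. The other option mentioned at its beginning, one file per process (conf.000.dat, conf.001.dat, ...), needs no synchronization at all, because no two processes ever touch the same file. A minimal sketch, not from the original text, with the same SIZE = 2 data as in List 13 (the file-name pattern is only an example):

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int SIZE = 2;
    int array[SIZE];
    for(int i=0;i<SIZE;i++) array[i] = rank;

    // Build a per-rank file name such as conf.000.dat, conf.001.dat, ...
    char filename[64];
    snprintf(filename, sizeof(filename), "conf.%03d.dat", rank);

    FILE *fp = fopen(filename, "w");
    for(int i=0;i<SIZE;i++) fprintf(fp, "%d\n", array[i]);
    fclose(fp);

    MPI_Finalize();
    return 0;
}

The price of this simplicity is that the output is scattered over as many files as there are processes.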

4.3  Binary output (Gather instead of Allreduce)

The data to be saved are now an array of double, written in binary. List 15 is the serial version: it fills the array and writes it to data.dat with fwrite.

List 15: binary output (C)
#include <cstdio>

int main(int argc, char **argv){
    const int SIZE = 10;
    double data[SIZE];
    for(int i=0;i<SIZE;i++){
        data[i] = (double)i;
    }
    FILE *fp = fopen("data.dat","wb");
    fwrite(data,sizeof(double),SIZE,fp);
    fclose(fp);
}

The contents of data.dat can be checked with hexdump (the -e option takes a C printf-style format; -v keeps hexdump from collapsing runs of identical lines into a single *):

$ ./a.out
$ hexdump -v -e '"%f\n"' data.dat
0.000000
1.000000
2.000000
3.000000
4.000000
5.000000
6.000000
7.000000
8.000000
9.000000

In the parallel version each process has its own data array; the arrays are gathered on rank 0, which then writes everything to the file (List 16).

List 16: Gather (C)
#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    const int SIZE = 2;
    double data[SIZE];
    for(int i=0;i<SIZE;i++){
        data[i] = (double)rank;
    }
    double *buf;
    if(0==rank){
        buf = new double[SIZE*procs];   /* receive buffer, needed on the root only */
    }
    MPI_Gather(data, SIZE, MPI_DOUBLE, buf, SIZE, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if(0==rank){
        FILE *fp = fopen("data.dat","wb");
        fwrite(buf,sizeof(double),SIZE*procs,fp);
        fclose(fp);
        delete [] buf;
    }
    MPI_Finalize();
}

Only rank 0 allocates the receive buffer buf; MPI_Gather collects the data array of every process into buf on rank 0, which then writes it out. MPI_Gather is declared as

MPI_Gather(void *sendbuffer, int sendcount, MPI_Datatype sendtype, void *recvbuffer, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

sendcount is the number of elements each process sends and recvcount the number of elements received from each process (not the total); sendtype and recvtype are the corresponding types, and root is the rank on which the data are collected. With four processes the result is

$ hexdump -v -e '"%f\n"' data.dat
0.000000
0.000000
1.000000
1.000000
2.000000
2.000000
3.000000
3.000000

The same in C++:

List 17: binary output (C++)
#include <iostream>
#include <fstream>

int main(int argc, char **argv){
    const int SIZE = 10;
    double data[SIZE];
    for(int i=0;i<SIZE;i++){
        data[i] = (double)i;
    }
    std::ofstream ofs("data.dat", std::ios::binary);
    ofs.write((char*)data,sizeof(double)*SIZE);
}

List 18: Gather (C++)
#include <iostream>
#include <fstream>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    const int SIZE = 2;
    double data[SIZE];
    for(int i=0;i<SIZE;i++){
        data[i] = (double)rank;
    }
    double *buf;
    if(0==rank){
        buf = new double[SIZE*procs];
    }
    MPI_Gather(data, SIZE, MPI_DOUBLE, buf, SIZE, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if(0==rank){
        std::ofstream ofs("data.dat",std::ios::binary);
        ofs.write((char*)buf,sizeof(double)*SIZE*procs);
        delete [] buf;
    }
    MPI_Finalize();
}

5  Point-to-point communication

5.1  Send and Recv

The communication used so far was collective: every process in the communicator takes part. MPI_Send and MPI_Recv exchange data between two particular processes. In List 19, rank 0 sends one integer to rank 1, and every rank prints its recv_value at the end.

List 19: sending an integer from rank 0 to rank 1
#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    int send_value = rank;
    int recv_value = -1;
    const int TAG = 0;
    MPI_Status st;
    if(0==rank){
        MPI_Send(&send_value, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
    } else if(1==rank){
        MPI_Recv(&recv_value, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD,&st);
    }
    printf("rank = %d: recv_value = %d\n",rank, recv_value);
    MPI_Finalize();
}

$ mpirun -np 2 ./a.out
rank = 0: recv_value = -1
rank = 1: recv_value = 0

MPI_Send and MPI_Recv are declared as

MPI_Send(void *sendbuffer, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, MPI_Comm comm)
MPI_Recv(void *recvbuffer, int recvcount, MPI_Datatype recvtype, int src, int recvtag, MPI_Comm comm, MPI_Status *st)

sendbuffer and recvbuffer are the addresses of the data to send and of the place to receive into, sendcount and recvcount the numbers of elements, sendtype and recvtype the types, dest and src the ranks of the destination and the source process, and tag is an integer label attached to the message.

A receive only matches a send with the same tag; here the tag is simply 0. The last argument of MPI_Recv, an MPI_Status, is filled with information about the received message. Both MPI_Send and MPI_Recv are blocking calls: MPI_Recv does not return until a matching message has arrived. With four processes the output of List 19 is

rank = 0: recv_value = -1
rank = 1: recv_value = 0
rank = 2: recv_value = -1
rank = 3: recv_value = -1

Ranks 2 and 3 do not communicate at all, so their recv_value stays -1; only rank 1 has received something (the 0 sent by rank 0). Now suppose ranks 0 and 1 want to exchange their values with each other. The obvious attempt is for both to send first and then receive (List 20).

List 20: exchanging values, deadlock-prone version
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    int send_value = rank;
    int recv_value = -1;
    const int TAG = 0;
    MPI_Status st;
    if(0==rank){
        MPI_Send(&send_value, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
        MPI_Recv(&recv_value, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD,&st);
    } else if(1==rank){
        MPI_Send(&send_value, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD);
        MPI_Recv(&recv_value, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD,&st);
    }
    printf("rank = %d: recv_value = %d\n",rank, recv_value);
    MPI_Finalize();
}

Because MPI_Send blocks, both ranks may end up waiting for the other side to post its receive, and the program can hang (a deadlock); whether it actually hangs depends on the MPI implementation and the message size, so this pattern must not be relied on. The simple fix is to break the symmetry: rank 0 sends and then receives, rank 1 receives and then sends (List 21).

List 21: exchanging values with the calls ordered to avoid deadlock
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    int send_value = rank;
    int recv_value = -1;
    const int TAG = 0;
    MPI_Status st;
    if(0==rank){
        MPI_Send(&send_value, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
        MPI_Recv(&recv_value, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD,&st);
    } else if(1==rank){
        MPI_Recv(&recv_value, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD,&st);
        MPI_Send(&send_value, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD);
    }
    printf("rank = %d: recv_value = %d\n",rank, recv_value);
    MPI_Finalize();
}

$ mpirun -np 2 ./a.out
rank = 0: recv_value = 1
rank = 1: recv_value = 0

For this send-and-receive pattern MPI provides a single call, MPI_Sendrecv:

MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int src, int recvtag, MPI_Comm comm, MPI_Status *status)

Replacing the separate MPI_Send and MPI_Recv calls by MPI_Sendrecv gives List 22.

List 22: exchanging values with Sendrecv
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    int send_value = rank;
    int recv_value = -1;
    const int TAG = 0;
    MPI_Status st;
    if(0==rank){
        MPI_Sendrecv(&send_value, 1, MPI_INT, 1, TAG,
                     &recv_value, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD,&st);
    } else if(1==rank){
        MPI_Sendrecv(&send_value, 1, MPI_INT, 0, TAG,
                     &recv_value, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD,&st);
    }
    printf("rank = %d: recv_value = %d\n",rank, recv_value);
    MPI_Finalize();
}

MPI_Sendrecv also makes it easy to shift data around a ring, each rank sending to its right-hand neighbour and receiving from its left-hand neighbour (List 23).

List 23: shifting values around a ring with Sendrecv
int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    MPI_Comm_size(MPI_COMM_WORLD,&procs);
    int send_value = rank;
    int recv_value = -1;
    int dest_rank = (rank+1)%procs;
    int src_rank = (rank-1+procs)%procs;
    const int TAG = 0;
    MPI_Status st;
    MPI_Sendrecv(&send_value, 1, MPI_INT, dest_rank, TAG,
                 &recv_value, 1, MPI_INT, src_rank, TAG, MPI_COMM_WORLD,&st);
    printf("rank = %d: recv_value = %d\n",rank, recv_value);
    MPI_Finalize();
}

Note that the if branches of List 22 are gone: every rank executes the same MPI_Sendrecv, only the destination and source ranks, computed from its own rank, differ. With four processes:

rank = 0: recv_value = 3
rank = 1: recv_value = 0
rank = 2: recv_value = 1
rank = 3: recv_value = 2

Every rank has received the value of its left neighbour: rank 2 received 1 from rank 1, rank 3 received 2 from rank 2, and rank 0 received 3 from the last rank. Writing such a cyclic exchange with separate MPI_Send and MPI_Recv calls requires care with the ordering to avoid deadlock; with MPI_Sendrecv it is safe as written.

5.2  Non-blocking communication

MPI_Send and MPI_Recv are blocking calls. MPI also provides non-blocking counterparts, MPI_Isend and MPI_Irecv, which start the transfer and return immediately, handing back a request object, so that the program can do other work and complete the communication later. Requests that are no longer needed must be completed or released (for example with MPI_Request_free); otherwise the number of outstanding requests can eventually reach the implementation's limit (MPI_REQUEST_MAX). A sketch of a non-blocking version of the ring exchange in List 23 is given at the end of this note.

6  Summary

This note has walked through the basics of using MPI: starting processes with mpirun, writing SPMD programs with MPI_Init/MPI_Finalize and MPI_Comm_rank/MPI_Comm_size, collective communication (MPI_Bcast, MPI_Allreduce, MPI_Reduce, MPI_Gather), handling input and output, and point-to-point communication with MPI_Send, MPI_Recv and MPI_Sendrecv.
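To make Section 5.2 concrete, here is a sketch, not from the original text, of the ring exchange of List 23 written with non-blocking calls: the receive and the send are posted with MPI_Irecv and MPI_Isend, and MPI_Waitall completes both before the received value is used.

#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);
    int rank, procs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    int send_value = rank;
    int recv_value = -1;
    int dest_rank = (rank+1)%procs;
    int src_rank  = (rank-1+procs)%procs;
    MPI_Request req[2];
    // Post the receive and the send without blocking, then wait for both to finish.
    MPI_Irecv(&recv_value, 1, MPI_INT, src_rank,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(&send_value, 1, MPI_INT, dest_rank, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    printf("rank = %d: recv_value = %d\n", rank, recv_value);
    MPI_Finalize();
    return 0;
}

Waiting on the requests (here with MPI_Waitall) both guarantees that recv_value is valid and releases the request objects.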