
Lecture 7: MPI (2014-05-21)

MPI (overview)
MPI stands for Message Passing Interface, the standard for message-passing parallel programming. This lecture introduces the basics of MPI and contrasts it with OpenMP.

[Figure: the parallel execution model with MPI]

[Figure: the parallel execution model with OpenMP]

[Figure: the hybrid parallel execution model with MPI + OpenMP]

MPI and OpenMP (1)
With OpenMP, a single program is parallelized by adding directives (pragmas, e.g. in front of for loops) to ordinary code. With MPI, the parallelism is explicit: the programmer divides the work among processes and writes the communication between them.

MPI and OpenMP (2)
OpenMP: a declaration such as int n is, by default, a single variable n shared by all threads.
MPI: a declaration int n creates a separate variable n in every process; with n processes there are n independent copies.

MPI (history)
Message-passing libraries date back to the 1980s. The MPI standard followed: MPI-1 in June 1994, MPI-2 in July 1997, and MPI-3 in September 2012. The MPI specifications are available at http://www.mpi-forum.org/docs/docs.html

Compiling and running (Intel MPI)
On the laurel system, to compile an MPI source file code.* into an executable hoge:
  C:       mpiicc  -o hoge code.c
  C++:     mpiicpc -o hoge code.cpp
  Fortran: mpiifort -o hoge code.f   (or code.f90)
To run with 8 processes:
  mpiexec.hydra -n 8 ./hoge
Manual: http://web.kudpc.kyoto-u.ac.jp/manual/ja/library/intelmpi

Compiling and running (OpenMPI)
Load the module matching your compiler first:
  module load openmpi/1.6_intel-12.1
  module load pgi ; module load openmpi/1.6_pgi-12.3
  module load openmpi/1.6_gnu-4.4.6
Compile:
  C:          mpicc  -o hoge code.c
  C++:        mpic++ -o hoge code.cpp
  Fortran 77: mpif77 -o hoge code.f
  Fortran 90: mpif90 -o hoge code.f90
To run with 8 processes:
  mpiexec -n 8 ./hoge
Manual: http://web.kudpc.kyoto-u.ac.jp/manual/ja/library/openmpi

Batch jobs (Intel MPI)
Example job script (job.txt):
  #!/bin/bash
  #QSUB -q eb
  #QSUB -W 0:30
  #QSUB -A p=16:t=1:c=1:m=3840M
  set -x
  mpiexec.hydra ./hoge
This requests queue eb, a walltime of 0:30, and p=16 processes with t=1 thread, c=1 core, and m=3840M of memory each.
Submit the job:   qsub < job.txt
List your jobs:   qjobs
Kill a job:       qkill JOBID   (JOBID as shown by qjobs)
Manual: https://web.kudpc.kyoto-u.ac.jp/manual/ja/run/batchjob/systembc

MPI from C and Fortran
MPI can be used from both C and Fortran; this lecture mainly uses C.
  C programs include mpi.h; Fortran programs include mpif.h.
  In Fortran, every MPI routine takes an extra final argument ierr that receives the error code; in C the error code is the return value of each function (e.g. of MPI_Init).
  (From C++, the C interface can be used.)


The basic MPI functions
MPI_INIT      : initialize MPI
MPI_FINALIZE  : finalize MPI
MPI_COMM_SIZE : get the number of processes
MPI_COMM_RANK : get this process's rank
MPI_SEND      : send a message
MPI_RECV      : receive a message

Communicators and MPI_COMM_WORLD
A communicator identifies a group of processes that communicate with each other. MPI_COMM_WORLD is the predefined communicator containing all processes started by mpiexec.

MPI_INIT
MPI_INIT initializes MPI; it must be called exactly once, before any other MPI function. C programs must include mpi.h, Fortran programs mpif.h.
C:       int MPI_Init(int *argc, char ***argv)
         (pass pointers to main's argc and argv)
Fortran: MPI_INIT(IERROR)
         INTEGER IERROR
MPI_INIT may modify argc and argv.

MPI_FINALIZE
MPI_FINALIZE shuts MPI down. Every process that called MPI_INIT must call MPI_FINALIZE exactly once, and no MPI function may be called after it.
C:       int MPI_Finalize(void)
Fortran: MPI_FINALIZE(IERROR)

ex1.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  puts("Hi");
  return 0;
}

Result: Abort ("Hi" may still appear; the program never calls MPI_Init).

ex2.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  MPI_Init(&argc, &argv);
  puts("Hi");
  MPI_Finalize();
  return 0;
}

Result:
Hi
Hi
Hi
Hi

ex3.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  puts("Hi");
  MPI_Finalize();
  return 0;
}

Result: Abort (MPI_Finalize is called without MPI_Init).

ex4.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  MPI_Init(&argc, &argv);
  puts("Hi");
  return 0;
}

Result: Abort (the program exits without calling MPI_Finalize).

ex5.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  MPI_Init(&argc, &argv);
  puts("Hi");
  MPI_Finalize();
  MPI_Init(&argc, &argv);
  puts("Hi");
  MPI_Finalize();
  return 0;
}

Result: Abort (MPI_Init may be called at most once per run).

MPI_COMM_SIZE
MPI_COMM_SIZE returns the number of processes in a communicator (e.g. MPI_COMM_WORLD).
C:       int MPI_Comm_size(MPI_Comm comm, int *size)
         comm : the communicator; size : receives the number of processes
Fortran: MPI_COMM_SIZE(COMM, SIZE, IERROR)
         (in Fortran, a communicator is an INTEGER)

MPI_COMM_RANK
MPI_COMM_RANK returns this process's rank within the communicator; ranks start from 0.
C:       int MPI_Comm_rank(MPI_Comm comm, int *rank)
         comm : the communicator; rank : receives this process's rank
Fortran: MPI_COMM_RANK(COMM, RANK, IERROR)

ex6.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  int my_rank, num_proc;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &num_proc);
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

  printf("Hi, my rank = %d, and size = %d\n", my_rank, num_proc);

  MPI_Finalize();
  return 0;
}

Result:
Hi, my rank = 1, and size = 4
Hi, my rank = 2, and size = 4
Hi, my rank = 3, and size = 4
Hi, my rank = 0, and size = 4

ex6.f (run with 4 processes)

      include "mpif.h"
      INTEGER ierr, my_rank, num_proc
      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, num_proc, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
      write(*,*) 'Hi, my rank =', my_rank, ', and size =', num_proc
      call MPI_FINALIZE(ierr)
      end

Result:
Hi, my rank = 3, and size = 4
Hi, my rank = 0, and size = 4
Hi, my rank = 1, and size = 4
Hi, my rank = 2, and size = 4

ex6.f90 (run with 4 processes)

program main
  include "mpif.h"
  INTEGER my_rank, num_proc, ierr
  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, num_proc, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
  print *, 'Hi, my rank =', my_rank, ', and size =', num_proc
  call MPI_FINALIZE(ierr)
end program main

Result:
Hi, my rank = 3, and size = 4
Hi, my rank = 1, and size = 4
Hi, my rank = 0, and size = 4
Hi, my rank = 2, and size = 4

(For all ex6.*, the order of the output lines varies from run to run.)

MPI_SEND
MPI_SEND sends a message to one destination process.
C:       int MPI_Send(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm)
Fortran: MPI_SEND(BUF, COUNT, TYPE, DEST, TAG, COMM, IERROR)
         (in Fortran, TYPE is an INTEGER)
Arguments:
  buf   : address of the data to send
  count : number of elements to send
  type  : datatype of each element
  dest  : rank of the destination process
  tag   : message tag; must match the tag given to the matching MPI_RECV
  comm  : communicator

MPI_RECV
MPI_RECV receives a message (RECV is short for Receive).
C:       int MPI_Recv(void *buf, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Status *status)
Fortran: MPI_RECV(BUF, COUNT, TYPE, SOURCE, TAG, COMM, STATUS, IERROR)
         (in Fortran, STATUS is INTEGER STATUS(MPI_STATUS_SIZE))
Arguments:
  buf    : address of the receive buffer
  count  : maximum number of elements to receive
  type   : datatype of each element
  source : rank of the sending process
  tag    : message tag; must match the tag given to MPI_SEND
  comm   : communicator
  status : receives information about the received message

Specifying the buffer argument (*buf) in C
To send a single int a:
  int a;
  MPI_Send(&a, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
To send the 5 elements a[0]..a[4] of int a[10]:
  MPI_Send(a, 5, MPI_INT, dest, tag, MPI_COMM_WORLD);
  MPI_Send(&a[0], 5, MPI_INT, dest, tag, MPI_COMM_WORLD);   (equivalent)
To send the 6 elements a[3]..a[8] of int a[10]:
  MPI_Send(a+3, 6, MPI_INT, dest, tag, MPI_COMM_WORLD);
  MPI_Send(&a[3], 6, MPI_INT, dest, tag, MPI_COMM_WORLD);   (equivalent)

MPI datatypes (C)
The MPI_Datatype type argument of MPI_SEND / MPI_RECV, for C types:
  char           : MPI_CHAR            unsigned char  : MPI_UNSIGNED_CHAR
  short          : MPI_SHORT           unsigned short : MPI_UNSIGNED_SHORT
  int            : MPI_INT             unsigned       : MPI_UNSIGNED
  long           : MPI_LONG            unsigned long  : MPI_UNSIGNED_LONG
  float          : MPI_FLOAT
  double         : MPI_DOUBLE
  long double    : MPI_LONG_DOUBLE

MPI datatypes (Fortran)
The INTEGER TYPE argument of MPI_SEND / MPI_RECV, for Fortran types:
  INTEGER          : MPI_INTEGER
  REAL             : MPI_REAL
  REAL*8           : MPI_REAL8
  DOUBLE PRECISION : MPI_DOUBLE_PRECISION
  COMPLEX          : MPI_COMPLEX
  LOGICAL          : MPI_LOGICAL
  CHARACTER        : MPI_CHARACTER

MPI_Status
The MPI_Status *status argument of MPI_RECV records information about the received message, such as the sender's rank and the tag. (MPI_WAITALL and related functions also fill in MPI_Status values.) If the status is not needed, MPI_STATUS_IGNORE may be passed instead of an MPI_Status *status.
The int count given to MPI_SEND may be smaller than the int count given to MPI_RECV; the number of elements actually received can be obtained from the MPI_Status *status with MPI_GET_COUNT.

MPI_GET_COUNT
MPI_GET_COUNT returns the number of elements actually received.
C:       int MPI_Get_count(MPI_Status *status, MPI_Datatype type, int *count)
Fortran: MPI_GET_COUNT(STATUS, TYPE, COUNT, IERROR)
  status : the status filled in by MPI_RECV
  count  : receives the number of elements received

MPI_GET_COUNT: ex12.c (run with 2 processes; excerpt)

if(my_rank == 0){
  char send[] = "How are you?";
  MPI_Send(send, strlen(send)+1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
} else if(my_rank == 1){
  char recv[100]; int cnt;
  MPI_Recv(recv, 20, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
  printf("I received: %s\n", recv);
  MPI_Get_count(&status, MPI_CHAR, &cnt);
  printf("length = %d\n", cnt);
}

Result:
I received: How are you?
length = 13

MPI_GET_COUNT: ex13.c (run with 2 processes; excerpt)

if(my_rank == 0){
  char send[] = "I send looooooooooooooooooong message. How are you?";
  MPI_Send(send, strlen(send)+1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
} else if(my_rank == 1){
  char recv[100]; int cnt;
  MPI_Recv(recv, 20, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
  printf("I received: %s\n", recv);
  MPI_Get_count(&status, MPI_CHAR, &cnt);
  printf("length = %d\n", cnt);
}

Result: Abort (the message is longer than the receive count of 20).

ex7.c (run with 2 processes; excerpt)

#define N 3
...
int i, send[N], recv[N], source, dest;
MPI_Status status;
...
send[0] = 1;
for(i=1;i<N;i++) send[i] = (send[i-1] * 3 + my_rank) % 10007;

source = dest = 1 - my_rank;
MPI_Send(send, N, MPI_INT, dest, 0, MPI_COMM_WORLD);
MPI_Recv(recv, N, MPI_INT, source, 0, MPI_COMM_WORLD, &status);

ex8.c (run with 2 processes; excerpt)

#define N 3333
...
int i, send[N], recv[N], source, dest;
MPI_Status status;
...
send[0] = 1;
for(i=1;i<N;i++) send[i] = (send[i-1] * 3 + my_rank) % 10007;

source = dest = 1 - my_rank;
MPI_Send(send, N, MPI_INT, dest, 0, MPI_COMM_WORLD);
MPI_Recv(recv, N, MPI_INT, source, 0, MPI_COMM_WORLD, &status);

Result: the program hangs and must be interrupted with Ctrl-C. With N = 3333 both processes block in MPI_Send, each waiting for the other to receive (deadlock).

[Figure: processes 0 and 1 each blocked sending to the other while neither has reached its receive (deadlock)]

[Figure: avoiding the deadlock — processes 0 and 1 exchange data with MPI_SENDRECV]

MPI_SENDRECV
MPI_SENDRECV performs a send and a receive in a single MPI call, which avoids the deadlock above.
C: int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag,
                    void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag,
                    MPI_Comm comm, MPI_Status *status)
Fortran: MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)
A variant, MPI_SENDRECV_REPLACE, sends and receives using a single buffer:
C: int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype type, int dest, int sendtag, int source, int recvtag, MPI_Comm comm, MPI_Status *status)

ex23.c (run with 4 processes; excerpt)

int my_val, recv_val;
...
my_val = my_rank * my_rank;
MPI_Sendrecv(&my_val, 1, MPI_INT, (my_rank+1)%num_proc, 0,
             &recv_val, 1, MPI_INT, (my_rank+num_proc-1)%num_proc, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

printf("my_rank %d, my_val %d, recv_val %d\n", my_rank, my_val, recv_val);

Result:
my_rank 3, my_val 9, recv_val 4
my_rank 2, my_val 4, recv_val 1
my_rank 0, my_val 0, recv_val 9
my_rank 1, my_val 1, recv_val 0

Nonblocking communication: MPI_ISEND and MPI_IRECV
MPI_ISEND : start a send without waiting for it to complete
MPI_IRECV : start a receive without waiting for it to complete
MPI_WAIT  : wait for a started operation to complete

MPI_ISEND
MPI_ISEND starts a nonblocking send (the I in ISEND stands for Immediate).
C:       int MPI_Isend(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm, MPI_Request *request)
Fortran: MPI_ISEND(BUF, COUNT, TYPE, DEST, TAG, COMM, REQUEST, IERROR)
         (in Fortran, REQUEST is an INTEGER)
  request : receives a handle used later to wait for completion

MPI_IRECV
MPI_IRECV starts a nonblocking receive.
C:       int MPI_Irecv(void *buf, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Request *request)
Fortran: MPI_IRECV(BUF, COUNT, TYPE, SOURCE, TAG, COMM, REQUEST, IERROR)

MPI_WAIT
MPI_WAIT blocks until the operation identified by request has completed.
C:       int MPI_Wait(MPI_Request *request, MPI_Status *status)
Fortran: MPI_WAIT(REQUEST, STATUS, IERROR)
  request : the request returned by MPI_ISEND / MPI_IRECV
  status  : receives completion information

ex9.c (run with 2 processes; excerpt)

int i, source=0, dest=1, tag;
double a = 500.0, b = 0.05, c = 0.0;
MPI_Request request;
...
if(my_rank == 0){
  tag = 1;
  MPI_Isend(&a, 1, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request);
  tag = 0;
  MPI_Isend(&b, 1, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request);
} else if(my_rank == 1){
  tag = 0;
  MPI_Irecv(&c, 1, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);
  printf("received %f\n", c);
  tag = 1;
  MPI_Irecv(&c, 1, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);
  printf("received %f\n", c);
}

Result:
received 0.000000
received 0.000000
(c is printed before the nonblocking receives have completed)

ex10.c (run with 2 processes; excerpt)

int i, source=0, dest=1, tag;
double a = 500.0, b = 0.05, c = 0.0;
MPI_Request request;
MPI_Status status;
...
if(my_rank == 0){
  tag = 1;
  MPI_Isend(&a, 1, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request);
  tag = 0;
  MPI_Isend(&b, 1, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request);
} else if(my_rank == 1){
  tag = 0;
  MPI_Irecv(&c, 1, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);
  MPI_Wait(&request, &status);
  printf("received %f\n", c);
  tag = 1;
  MPI_Irecv(&c, 1, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &request);
  MPI_Wait(&request, &status);
  printf("received %f\n", c);
}

Result:
received 0.050000
received 500.000000

Collective communication (1)
Collective operations involve all processes in a communicator at once. (Nonblocking versions prefixed with I, such as MPI_IBCAST, also exist.)
MPI_BCAST   : broadcast the same data from one rank to all processes
MPI_SCATTER : distribute different pieces of data from one rank to all processes

Collective communication (2)
MPI_GATHER    : the inverse of MPI_SCATTER — collect one piece of data from every process onto one rank
MPI_REDUCE    : like MPI_GATHER, but combines the collected values into a single result
MPI_ALLTOALL  : every process sends a distinct piece of data to every process
MPI_ALLGATHER : like MPI_GATHER, but every process receives the result
MPI_ALLREDUCE : like MPI_REDUCE, but every process receives the result

MPI_BCAST
MPI_BCAST (Broadcast) copies data from the root rank to every process in the communicator. All processes must call it with the same root, comm, count and type.
C:       int MPI_Bcast(void *buf, int count, MPI_Datatype type, int root, MPI_Comm comm)
Fortran: MPI_BCAST(BUF, COUNT, TYPE, ROOT, COMM, IERROR)
  buf  : on root, the data to send; on the other ranks, the buffer that receives it
  root : rank of the broadcasting process

MPI_BCAST (example: root = 0, count = 2)
Before:                     After:
rank 0: a0 a1 a2 a3 ...     rank 0: a0 a1 a2 a3 ...
rank 1: b0 b1 b2 b3 ...     rank 1: a0 a1 b2 b3 ...
rank 2: c0 c1 c2 c3 ...     rank 2: a0 a1 c2 c3 ...
rank 3: d0 d1 d2 d3 ...     rank 3: a0 a1 d2 d3 ...

ex15.c (run with 4 processes; excerpt)

int i, arr[10], sum;
...
if(my_rank==0){
  for(i=0;i<10;i++) arr[i] = i;
}

MPI_Bcast(arr, 10, MPI_INT, 0, MPI_COMM_WORLD);

sum = 0;
for(i=0;i<10;i++) sum += arr[i];

printf("my_rank = %d, sum = %d\n", my_rank, sum);

Result:
my_rank = 0, sum = 45
my_rank = 3, sum = 45
my_rank = 1, sum = 45
my_rank = 2, sum = 45

MPI_SCATTER
MPI_SCATTER distributes consecutive blocks of an array on the root rank, one block to each process.
C: int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
Fortran: MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
  sendbuf   : the array to distribute (significant only on rank root)
  sendcount : number of elements sent to each single process
  recvbuf, recvcount : where, and how many elements, each process receives
  root      : rank of the distributing process
On rank root, recvbuf may be given as MPI_IN_PLACE (MPI-2); root's own block then stays in place in sendbuf.

MPI_SCATTER (example: root = 0, sendcount = recvcount = 1)
Before:
rank 0 (send array): z0 z1 z2 z3 ...
rank 0 (recv array): a0 a1 a2 a3 ...
rank 1: b0 b1 b2 b3 ...
rank 2: c0 c1 c2 c3 ...
rank 3: d0 d1 d2 d3 ...
After:
rank 0 (send array): z0 z1 z2 z3 ...   (unchanged)
rank 0 (recv array): z0 a1 a2 a3 ...
rank 1: z1 b1 b2 b3 ...
rank 2: z2 c1 c2 c3 ...
rank 3: z3 d1 d2 d3 ...

ex16.c (run with 4 processes; excerpt)

int i, arr[12], myarr[3], sum;
...
if(my_rank==0){
  for(i=0;i<3*num_proc;i++) arr[i] = i;
}

MPI_Scatter(arr, 3, MPI_INT, myarr, 3, MPI_INT, 0, MPI_COMM_WORLD);

sum = 0;
for(i=0;i<3;i++) sum += myarr[i];

printf("my_rank = %d, sum = %d\n", my_rank, sum);

Result:
my_rank = 3, sum = 30
my_rank = 0, sum = 3
my_rank = 1, sum = 12
my_rank = 2, sum = 21

ex17.c (run with 4 processes; excerpt)

int i, arr[12], sum;
...
if(my_rank==0){
  for(i=0;i<3*num_proc;i++) arr[i] = i;
}

if(my_rank == 0)
  MPI_Scatter(arr, 3, MPI_INT, MPI_IN_PLACE, 3, MPI_INT, 0, MPI_COMM_WORLD);
else
  MPI_Scatter(arr, 3, MPI_INT, arr, 3, MPI_INT, 0, MPI_COMM_WORLD);

sum = 0;
for(i=0;i<3;i++) sum += arr[i];

printf("my_rank = %d, sum = %d\n", my_rank, sum);

Result:
my_rank = 2, sum = 21
my_rank = 3, sum = 30
my_rank = 0, sum = 3
my_rank = 1, sum = 12

MPI_GATHER
MPI_GATHER is the inverse of MPI_SCATTER: it collects a block from every process into consecutive positions of an array on rank root.
C: int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
Fortran: MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
  recvbuf   : the array that receives the blocks (significant only on rank root)
  recvcount : number of elements received from each single process
  sendbuf, sendcount : what each process sends
  root      : rank of the collecting process
On rank root, sendbuf may be given as MPI_IN_PLACE; root's own block is then taken from its place in recvbuf.

MPI_GATHER (example: root = 0, sendcount = recvcount = 1)
Before:
rank 0 (recv array): z0 z1 z2 z3 ...
rank 0 (send array): a0 a1 a2 a3 ...
rank 1: b0 b1 b2 b3 ...
rank 2: c0 c1 c2 c3 ...
rank 3: d0 d1 d2 d3 ...
After:
rank 0 (recv array): a0 b0 c0 d0 ...
(the send arrays on all ranks are unchanged)

MPI_REDUCE
MPI_REDUCE combines a value (or an array of values, element-wise) from every process with a reduction operation op and stores the result on rank root.
C: int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype type, MPI_Op op, int root, MPI_Comm comm)
Fortran: MPI_REDUCE(SENDBUF, RECVBUF, COUNT, TYPE, OP, ROOT, COMM, IERROR)
         (in Fortran, OP is an INTEGER)
  sendbuf : the local values; on rank root, MPI_IN_PLACE may be given to take root's values from recvbuf
  recvbuf : receives the result (significant only on rank root)
  op      : reduction operation (e.g. MPI_SUM)
  root    : rank that receives the result

MPI_REDUCE (example: root = 0, op = MPI_SUM, count = 2)
Before:
rank 0 (recv array): z0 z1 z2 z3 ...
rank 0 (send array): 1 5 a2 a3 ...
rank 1: 2 6 b2 b3 ...
rank 2: 3 7 c2 c3 ...
rank 3: 4 8 d2 d3 ...
After:
rank 0 (recv array): 10 26 z2 z3 ...
(the send arrays on all ranks are unchanged)

Reduction operations
The MPI_Op op argument of MPI_REDUCE / MPI_ALLREDUCE:
  MPI_MAX    : maximum          MPI_MIN    : minimum
  MPI_SUM    : sum              MPI_PROD   : product
  MPI_LAND   : logical AND      MPI_BAND   : bitwise AND
  MPI_LOR    : logical OR       MPI_BOR    : bitwise OR
  MPI_LXOR   : logical XOR      MPI_BXOR   : bitwise XOR
  MPI_MAXLOC : maximum and its location
  MPI_MINLOC : minimum and its location

ex11.c (run with 4 processes)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
  int my_rank, num_proc;
  int i, my[2], sum[2];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &num_proc);
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

  for(i=0;i<2;i++) my[i] = i*4 + my_rank + 1;
  MPI_Reduce(my, sum, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
  if(my_rank == 0) printf("%d %d\n", sum[0], sum[1]);

  MPI_Finalize();
  return 0;
}

Result:
10 26

MPI_ALLTOALL
With MPI_ALLTOALL, every process sends a distinct block to every process (a transpose of the data across processes).
C: int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
Fortran: MPI_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
  sendcount : number of elements sent to each single process
  recvcount : number of elements received from each single process

MPI_ALLTOALL (example: sendcount = recvcount = 1)
Before (send arrays):          After (recv arrays):
rank 0: a0 a1 a2 a3 ...        rank 0: a0 b0 c0 d0 ...
rank 1: b0 b1 b2 b3 ...        rank 1: a1 b1 c1 d1 ...
rank 2: c0 c1 c2 c3 ...        rank 2: a2 b2 c2 d2 ...
rank 3: d0 d1 d2 d3 ...        rank 3: a3 b3 c3 d3 ...

MPI_ALLGATHER
MPI_ALLGATHER works like MPI_GATHER, except that every process (not only root) receives the gathered array; there is no root argument.
C: int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
Fortran: MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)

MPI_ALLGATHER (example: sendcount = recvcount = 1)
Before (send arrays):          After (recv arrays):
rank 0: a0 a1 a2 a3 ...        rank 0: a0 b0 c0 d0 ...
rank 1: b0 b1 b2 b3 ...        rank 1: a0 b0 c0 d0 ...
rank 2: c0 c1 c2 c3 ...        rank 2: a0 b0 c0 d0 ...
rank 3: d0 d1 d2 d3 ...        rank 3: a0 b0 c0 d0 ...

MPI_ALLREDUCE
MPI_ALLREDUCE works like MPI_REDUCE, except that every process (not only root) receives the result; there is no root argument.
C: int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype type, MPI_Op op, MPI_Comm comm)
Fortran: MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, TYPE, OP, COMM, IERROR)

MPI_ALLREDUCE (example: op = MPI_SUM, count = 2)
Before (send arrays):       After (recv arrays):
rank 0: 1 5 a2 a3 ...       rank 0: 10 26 x2 x3 ...
rank 1: 2 6 b2 b3 ...       rank 1: 10 26 y2 y3 ...
rank 2: 3 7 c2 c3 ...       rank 2: 10 26 z2 z3 ...
rank 3: 4 8 d2 d3 ...       rank 3: 10 26 u2 u3 ...

Other useful functions

MPI_BARRIER
MPI_BARRIER blocks until every process in the communicator has called it (a synchronization point).
C:       int MPI_Barrier(MPI_Comm comm)
Fortran: MPI_BARRIER(COMM, IERROR)

MPI_WAITALL
MPI_WAITALL waits until all of the given requests have completed.
C:       int MPI_Waitall(int count, MPI_Request *array_of_requests, MPI_Status *array_of_statuses)
Fortran: MPI_WAITALL(COUNT, ARRAY_OF_REQUESTS, ARRAY_OF_STATUSES, IERROR)
  count             : number of requests
  array_of_requests : array of count requests
  array_of_statuses : array that receives the count statuses; MPI_STATUSES_IGNORE (note: not MPI_STATUS_IGNORE) can be passed to ignore them
Related: MPI_WAITANY waits for any one of the requests; MPI_WAITSOME waits for at least one.

MPI_TEST
MPI_TEST checks whether a request has completed, without blocking as MPI_WAIT does. (MPI_TESTALL, MPI_TESTANY and MPI_TESTSOME also exist.)
C:       int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
Fortran: MPI_TEST(REQUEST, FLAG, STATUS, IERROR)
         (in Fortran, FLAG is a LOGICAL)
  flag : set to true (nonzero in C) if the operation has completed, false (0) otherwise

MPI_WTIME
MPI_WTIME returns the wall-clock time in seconds; the difference between two calls gives an elapsed time.
C:       double MPI_Wtime(void)
Fortran: DOUBLE PRECISION MPI_WTIME()

ex14.c (run with 4 processes; excerpt)

double start_time, end_time;
static double arr[10000000]; /* around 80MB */
...
start_time = MPI_Wtime();

if(my_rank == 0){
  MPI_Send(arr, 10000000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
} else if(my_rank == 1){
  MPI_Recv(arr, 10000000, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
           MPI_STATUS_IGNORE);
} else if(my_rank == 2){
  int i;
  for(i=0;i<10000000;i++) arr[i] = 10000.0 / i;
}

end_time = MPI_Wtime();
printf("rank = %d, elapsed = %f = %f - %f\n",
       my_rank, end_time-start_time, end_time, start_time);

ex14.c result (4 processes):
rank = 3, elapsed = 0.000000 = 1369794662.705628 - 1369794662.705628
rank = 0, elapsed = 0.044439 = 1369794662.750048 - 1369794662.705609
rank = 1, elapsed = 0.044445 = 1369794662.750048 - 1369794662.705603
rank = 2, elapsed = 0.062007 = 1369794662.767653 - 1369794662.705646

MPI_BSEND, MPI_SSEND, MPI_RSEND
MPI_SEND has three variants that differ in when the send completes relative to the matching MPI_RECV:
MPI_BSEND : Buffered send — copies the message into a user-supplied buffer and returns immediately
MPI_SSEND : Synchronous send — completes only when the matching receive has started
MPI_RSEND : Ready send — may be called only if the matching receive has already been posted
(cf. ex7.c and ex8.c: with MPI_BSEND the ex8.c exchange would not deadlock; with MPI_SSEND even the small ex7.c exchange would.)
Nonblocking versions MPI_IBSEND, MPI_ISSEND and MPI_IRSEND also exist, analogous to MPI_ISEND.

MPI_ANY_SOURCE, MPI_ANY_TAG
MPI_RECV normally matches a specific sender rank (source) and tag. To accept a message from any sender, pass MPI_ANY_SOURCE as source; to accept any tag, pass MPI_ANY_TAG as tag. The actual sender and tag can then be read from the status.

MPI_PROC_NULL
If MPI_PROC_NULL is given as dest to MPI_SEND, the send does nothing; if it is given as source to MPI_RECV, the receive does nothing. This is convenient at the ends of a non-cyclic neighbor exchange, where no special-casing is then needed.

Derived datatypes
Besides the basic types (MPI_INT, MPI_DOUBLE, ...), user-defined datatypes can be constructed and then registered with MPI_TYPE_COMMIT:
  MPI_TYPE_CONTIGUOUS      : a contiguous block of elements
  MPI_TYPE_VECTOR          : equally spaced blocks of elements
  MPI_TYPE_INDEXED         : blocks at arbitrary offsets
  MPI_TYPE_CREATE_SUBARRAY : a subarray of an n-dimensional array
  MPI_TYPE_CREATE_STRUCT   : a structure of mixed types

MPI_TYPE_COMMIT
A newly constructed datatype must be registered once with MPI_TYPE_COMMIT before it is used in communication.
C:       int MPI_Type_commit(MPI_Datatype *type)
Fortran: MPI_TYPE_COMMIT(TYPE, IERROR)
         (in Fortran, TYPE is an INTEGER)

MPI_TYPE_VECTOR
MPI_TYPE_VECTOR builds a new type newtype consisting of count blocks, each of blen consecutive oldtype elements, where the starts of consecutive blocks are stride oldtype elements apart.
C:       int MPI_Type_vector(int count, int blen, int stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
Fortran: MPI_TYPE_VECTOR(COUNT, BLEN, STRIDE, OLDTYPE, NEWTYPE, IERROR)
         (in Fortran, NEWTYPE is an INTEGER)

ex18.c (run with 2 processes; excerpt)

int i, arr[30];
MPI_Datatype my_type;
...
if(my_rank==0){
  for(i=0;i<30;i++) arr[i] = i;
} else if(my_rank == 1){
  for(i=0;i<30;i++) arr[i] = -1;
}

MPI_Type_vector(4, 2, 3, MPI_INT, &my_type);
MPI_Type_commit(&my_type);

if(my_rank == 0)
  MPI_Send(arr, 1, my_type, 1, 0, MPI_COMM_WORLD);
else if(my_rank == 1)
  MPI_Recv(arr, 1, my_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

if(my_rank == 1){
  for(i=0;i<30;i++) printf("%2d%c", arr[i], i%10==9 ? '\n' : ' ');
}

ex18.c result (printed by rank 1):
 0  1 -1  3  4 -1  6  7 -1  9
10 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1

ex19.c (run with 2 processes; excerpt)

int i, arr[30];
MPI_Datatype my_type;
...
if(my_rank==0){
  for(i=0;i<30;i++) arr[i] = i;
} else if(my_rank == 1){
  for(i=0;i<30;i++) arr[i] = -1;
}

MPI_Type_vector(4, 2, 3, MPI_INT, &my_type);
MPI_Type_commit(&my_type);

if(my_rank == 0)
  MPI_Send(arr, 2, my_type, 1, 0, MPI_COMM_WORLD);
else if(my_rank == 1)
  MPI_Recv(arr, 2, my_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

if(my_rank == 1){
  for(i=0;i<30;i++) printf("%2d%c", arr[i], i%10==9 ? '\n' : ' ');
}

ex19.c result (printed by rank 1):
 0  1 -1  3  4 -1  6  7 -1  9
10 11 12 -1 14 15 -1 17 18 -1
20 21 -1 -1 -1 -1 -1 -1 -1 -1

ex20.c (run with 2 processes; excerpt)

int i, arr[30];
MPI_Datatype my_type;
...
if(my_rank==0){
  for(i=0;i<30;i++) arr[i] = i;
  MPI_Type_vector(4, 2, 3, MPI_INT, &my_type);
} else if(my_rank == 1){
  for(i=0;i<30;i++) arr[i] = -1;
  MPI_Type_vector(2, 4, 10, MPI_INT, &my_type);
}
MPI_Type_commit(&my_type);

if(my_rank == 0)
  MPI_Send(arr, 1, my_type, 1, 0, MPI_COMM_WORLD);
else if(my_rank == 1)
  MPI_Recv(arr, 1, my_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

if(my_rank == 1){
  for(i=0;i<30;i++) printf("%2d%c", arr[i], i%10==9 ? '\n' : ' ');
}

ex20.c result (printed by rank 1):
 0  1  3  4 -1 -1 -1 -1 -1 -1
 6  7  9 10 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1

Exercises
21.c : ... 0 ... N-1 ...
22.c : ... 0 ... N-1 ...