Parallel Programming with OpenMP (1)

Nanri, Amano ({nanri,amano}@cc.kyushu-u.ac.jp)
Computing Center, Kyushu University

1 Introduction

The Computing Center operates several parallel computers: the UNIX server (FUJITSU GP7000F model 900), the COMPAQ GS320, and the supercomputer FUJITSU VPP5000/64. A parallel computer carries out a computation with many processors at once, and is usually classified by its memory organization, as sketched in Figure 1: in a shared-memory machine (Figure 1(a)) all processors read and write one common memory, while in a distributed-memory machine (Figure 1(b)) each processor has its own local memory and data must be exchanged between processors explicitly.

[Figure 1: (a) a shared-memory parallel computer; (b) a distributed-memory parallel computer]

To make use of such machines, a program must be written so that its work can be divided among the processors. This article is the first of a series introducing OpenMP, a standard way of writing parallel programs for shared-memory machines.
Roughly three approaches to writing parallel programs are in common use:

1. Parallel programming languages, such as VPP Fortran [6] and HPF [3]. These extend Fortran with constructs for distributing data and computation. VPP Fortran is specific to the VPP series, while HPF is a standardized language. (Languages tied to a single machine have also existed; C*, a parallel extension of C from Thinking Machines for the CM-5, could not be used on other machines.)

2. Message-passing libraries, such as MPI [5] and PVM [2]. The program is written as a collection of processes that exchange data by calling library routines, a style suited above all to distributed-memory machines (Figure 1(b)).

3. OpenMP, the subject of this article. OpenMP defines directives and a small runtime library for Fortran (and for C and C++), aimed at shared-memory machines (Figure 1(a)).

Compared with MPI or PVM, OpenMP requires far smaller changes to an existing sequential program: parallelism is expressed by annotating the program with directives rather than by restructuring it around explicit communication. On the other hand, MPI and PVM can also be used on distributed-memory machines, where OpenMP cannot.
Detailed information on OpenMP is available from several sources. The book by Chandra, R., Menon, R., Dagum, L., Kohr, D., Maydan, D. and McDonald, J.: Parallel Programming in OpenMP, Morgan Kaufmann Publishers [1], treats OpenMP programming thoroughly. The official specifications, together with an FAQ (Frequently Asked Questions), can be obtained from the OpenMP web site, and Japanese translations of the specifications have been published by the Real World Computing Partnership (RWCP).

This series introduces OpenMP in three installments: (1) the present installment explains what OpenMP is and shows the most basic way of using it; (2) the next installment treats the parallelization of loops (DO loops in Fortran, for loops in C and C++) in more detail; (3) the final installment covers the remaining features of OpenMP.

The rest of this installment is organized as follows. Section 2 gives an overview of OpenMP. Section 3 describes preparations that should precede parallelization. Section 4 explains the basics of writing OpenMP programs. Section 5 shows how to compile and run OpenMP programs on the Center's machines.
In what follows, example programs are shown in both Fortran and C. The OpenMP constructs themselves are essentially the same in both languages, so readers of either language should have no difficulty following the other version.

2 An overview of OpenMP

2.1 What is OpenMP?

OpenMP is a specification of directives, runtime library routines and environment variables for parallel programming on shared-memory machines. It is defined and maintained by the OpenMP Architecture Review Board (ARB), an organization in which the major hardware and compiler vendors participate, so programs written to the specification are portable across conforming compilers. (OpenMP has also been taken up outside the ARB; SPEC, for example, uses OpenMP in its benchmark suites.)

2.2 The form of an OpenMP program

OpenMP directives are written as specially marked comments in Fortran, and with the pragma mechanism (#pragma) in C and C++. A compiler that does not support OpenMP treats the Fortran directives as ordinary comments and ignores the unknown pragmas, so one and the same source file can be compiled either as a sequential program or, with OpenMP enabled, as a parallel program.
Figure 2 contrasts the overall form of a sequential Fortran program with that of an OpenMP parallel program.

    program sequential
      ...
    end program sequential

    program parallel
      ...
    !$omp parallel
      ...
    !$omp end parallel
      ...
    end program parallel

Figure 2: A sequential program and an OpenMP parallel program (Fortran).

An OpenMP program starts exactly like a sequential one, executed by a single thread. When execution reaches a parallel region, the part of the program enclosed between !$omp parallel and !$omp end parallel, additional threads are started, and the region is executed by all of the threads at once; at the end of the region the extra threads finish and the single original thread continues alone. The number of threads used for a parallel region is controlled by, among other means described later, the environment variable OMP_NUM_THREADS.
For example, when OMP_NUM_THREADS is set to 4, a parallel region is executed by four threads. Every thread executes the same code; when different threads are to do different work, the program can call the runtime function OMP_GET_THREAD_NUM(), which returns a number identifying the calling thread, and branch on the result with an if statement (a sketch follows the list below).

Summarized, OpenMP has the following attractive points:

1. Parallelizing a program requires much smaller changes than rewriting it with a message-passing library such as MPI or PVM.
2. A program can be parallelized incrementally, one loop or one region at a time, while it remains a complete, working program.
3. Since the directives are ignored by compilers without OpenMP support, a single source file serves as both the sequential and the parallel version of the program.
4. OpenMP is a vendor-independent standard maintained by the OpenMP Architecture Review Board (OpenMP ARB), so OpenMP programs are portable across machines.
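As an illustration of branching on the thread number, here is a minimal sketch (an example written for this rewrite, not a figure from the original article); it uses only the standard routine omp_get_thread_num():

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
    #pragma omp parallel
        {
            /* Every thread executes this block; the thread number
               (0, 1, 2, ...) selects the work. */
            if (omp_get_thread_num() == 0)
                printf("thread 0: doing one kind of work\n");
            else
                printf("thread %d: doing another kind of work\n",
                       omp_get_thread_num());
        }
        return 0;
    }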
Against these advantages stands one limitation: unlike MPI and PVM, which work on distributed-memory machines as well, OpenMP is designed for shared-memory machines.

3 Before parallelizing: debugging and tuning the sequential program

Parallelization pays off only when the underlying sequential program is correct and reasonably efficient, since any error or inefficiency is inherited by the parallel version. It is therefore worth debugging and tuning the sequential program first; this section briefly describes the compiler options available for that purpose on the Center's machines.

3.1 Compiler options for debugging and optimization

Table 1 lists the main options of the compilers on the COMPAQ GS320 (kyu-ss.cc.kyushu-u.ac.jp) and on the UNIX server GP7000F model 900 (kyu-cc.cc.kyushu-u.ac.jp). The checking options detect errors such as out-of-bounds array accesses at run time; they slow the program down considerably, so they should be used during debugging and removed for production runs. The optimization options, conversely, should be enabled once the program is believed correct. Note that some aggressive optimizations may change the order in which floating-point operations are evaluated, and hence slightly change the computed results, so results should be checked again after raising the optimization level.
Table 1: Main compiler options for debugging and optimization.

GS320 (kyu-ss.cc.kyushu-u.ac.jp)
  -check bounds      check array subscripts at run time
  -check format      check format/data agreement in I/O at run time
  -check overflow    check for integer overflow at run time
  -check underflow   check for floating-point underflow at run time
  -tune ev6 -arch ev6
                     generate and schedule code for the EV6 architecture
                     (the GS320's CPU, the EV68, runs EV6 code)
  -unroll N          unroll loops up to N times
  -fast              enable a standard set of optimizations at once
  -O                 set the optimization level

UNIX server GP7000F model 900 (kyu-cc.cc.kyushu-u.ac.jp)
  -Haesux            enable run-time error checks (Fortran)
  -Kfast             enable a standard set of optimizations at once
  -Keval             allow optimizations that change the evaluation
                     order of expressions (results may change slightly)
  -Kfast_GP=2,prefetch=4 -O4
                     recommended optimization options (Fortran)
  -Kfast_GP=2,prefetch -Kmfunc -O4
                     recommended optimization options (C/C++)
4 The basics of OpenMP programming

This section explains the basic elements of OpenMP: directives, the runtime library, parallel regions, the parallelization of loops, the sharing attributes of variables, and reductions.

4.1 How OpenMP is written

4.1.1 Directives

Most of OpenMP consists of directives. In Fortran a directive is a comment line beginning with a special prefix (a "sentinel") such as !$omp; in C and C++ a directive is written as #pragma omp ... (Table 2). A directive is made up of the sentinel, a directive name and, optionally, clauses: for example, in the Fortran directive

    !$omp parallel do shared(a)

the directive name is parallel do and shared(a) is a clause. (The corresponding C/C++ directive name is parallel for.) Because the sentinels are comments or pragmas, a compiler without OpenMP support simply skips them, and the program compiles as a sequential program.

Occasionally a program needs statements that should be compiled only in the OpenMP version, for example calls to the OpenMP runtime library. For this, OpenMP provides the conditional-compilation mechanisms shown in Table 3.
Table 2: How OpenMP directives are written.

Fortran (free form)
  Sentinel: !$omp
  A directive is a comment line beginning with !$omp. (A long directive
  can be continued on the following lines.)
  Example: !$omp parallel do shared(a)

Fortran (fixed form)
  Sentinels: !$omp, c$omp, *$omp
  The sentinel is written starting in column 1, and column 6 must be
  blank or 0 on the initial line of a directive.
  Example: c$omp parallel do shared(a)

C/C++
  Sentinel: #pragma omp
  (A long directive can be continued on the following lines.)
  Example: #pragma omp parallel for shared(a)

Table 3: Conditional compilation.

Fortran (free form)
  A line beginning with the sentinel !$ is compiled only when OpenMP is
  enabled; otherwise the line is a comment.
  Example: !$ call parallel_init(a)

Fortran (fixed form)
  Sentinels: !$, c$, *$
  Example: !$ call parallel_init(a)

C/C++
  When OpenMP is enabled, the preprocessor macro _OPENMP is defined, so
  OpenMP-only code can be selected with #ifdef.
  Example:
    #ifdef _OPENMP
      parallel_init(a);
    #else
      serial_init(a);
    #endif
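To make the conditional-compilation mechanism of Table 3 concrete, the following complete C program is a sketch written for this rewrite (parallel_init and serial_init above are likewise just placeholder names). Built with OpenMP it reports its thread count; built without, it falls back to a sequential message:

    #include <stdio.h>
    #ifdef _OPENMP
    #include <omp.h>
    #endif

    int main(void)
    {
    #ifdef _OPENMP
        /* This part exists only when the compiler's OpenMP mode is on. */
    #pragma omp parallel
        {
            if (omp_get_thread_num() == 0)
                printf("OpenMP: running with %d threads\n",
                       omp_get_num_threads());
        }
    #else
        printf("compiled without OpenMP: sequential execution\n");
    #endif
        return 0;
    }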
4.1.2 Runtime library functions

Besides directives, OpenMP provides runtime library functions, such as omp_get_num_threads(), which returns the number of threads currently in use. A C or C++ program that calls these functions must include the OpenMP header:

    #include <omp.h>

Fortran has no such header, so the type of each runtime function must be declared in the program before it is used, for example:

    integer omp_get_num_threads

Figure 3 shows a small but complete OpenMP program using both a directive and a runtime function. In Fortran, the statements to be executed in parallel are enclosed between the directives !$omp parallel and !$omp end parallel; everything between parallel and end parallel forms one parallel region. In C and C++ there is no end directive: #pragma omp parallel applies to the structured block that immediately follows it, and that block (enclosed in braces) is the parallel region.
    Fortran
    program hello
      implicit none
      integer omp_get_thread_num
      print *, "before the parallel region"
    !$omp parallel
      print *, "hello. thread number: ", omp_get_thread_num()
    !$omp end parallel
      print *, "after the parallel region"
    end program hello

    C
    #include <stdio.h>
    #include <omp.h>

    main()
    {
        printf("before the parallel region\n");
    #pragma omp parallel
        {
            printf("hello. thread number: %d\n", omp_get_thread_num());
        }
        printf("after the parallel region\n");
    }

Figure 3: A program with one parallel region.
Figure 4 shows the output when the program of Figure 3 is run with four threads. The first print statement is executed once, by the single initial thread. Inside the parallel region, the statement

    print *, "hello. thread number: ", omp_get_thread_num()

is executed four times, once by each of the four threads, and each thread prints its own number (0 to 3). After the region, the last print statement is again executed once by the single remaining thread.

    before the parallel region
    hello. thread number:  0
    hello. thread number:  2
    hello. thread number:  3
    hello. thread number:  1
    after the parallel region

Figure 4: Output of the program of Figure 3 when executed by four threads.

Because the threads run independently, the order of the four "hello" lines is not determined; which thread prints first can differ from run to run. OpenMP guarantees only that each thread executes the statements of the region; it imposes no order between threads.

There are two ways to choose the number of threads. One is to call the runtime library function omp_set_num_threads() in the program, before the parallel region, as in the sketch below.
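A minimal sketch of this call (an example written for this rewrite, not a figure from the article):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Request four threads for the parallel regions that follow.
           The implementation may still limit the effective number. */
        omp_set_num_threads(4);

    #pragma omp parallel
        {
            printf("hello. thread number: %d\n", omp_get_thread_num());
        }
        return 0;
    }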
The other way is the environment variable OMP_NUM_THREADS: setting it before the program starts selects the number of threads without any change to the program. The function omp_get_thread_num(), used in Figures 3 and 4, returns the number of the calling thread; threads are numbered from 0 (the initial thread) up to the number of threads minus 1.

Printing messages in parallel is of little practical value, of course; the typical use of OpenMP is to spread the work of time-consuming loops over the threads, which the next section describes.

4.2 Parallelizing a loop

The basic construct for parallelizing loops is the parallel do directive (parallel for in C/C++). Figure 5 shows the example program used in the rest of this section: the subroutine (function) daxpy multiplies each of the 100 elements of the array x by the scalar a, adds y, and stores the result in the corresponding element of z. Figure 6 is the OpenMP version; the only change is the directive placed directly before the loop over i.

Figure 7 illustrates what happens when the loop of Figure 6 is executed by four threads: the 100 iterations are divided into four blocks of 25, and i = 1 to 25 is executed by one thread, i = 26 to 50 by another, i = 51 to 75 by a third, and i = 76 to 100 by the fourth, all at the same time. Since the iterations of this loop are independent of one another, the result is the same as in sequential execution, while the loop itself ideally takes only about 1/4 of the time. In general, parallel do (parallel for) divides the iterations of the immediately following do (for) loop among the threads.
    Fortran
    program ex1
      implicit none
      integer i
      double precision z(100), a, x(100), y
      do i = 1, 100
         z(i) = 0.0
         x(i) = 2.0
      end do
      a = 4.0
      y = 1.0
      call daxpy(z, a, x, y)
    end program ex1

    subroutine daxpy(z, a, x, y)
      integer i
      double precision z(100), a, x(100), y
      do i = 1, 100
         z(i) = a * x(i) + y
      end do
      return
    end

    C
    #include <stdio.h>
    #include <omp.h>

    void daxpy();

    main()
    {
        int i;
        double z[100], a, x[100], y;

        for (i = 0; i < 100; i++){
            z[i] = 0.0;
            x[i] = 2.0;
        }
        a = 4.0;
        y = 1.0;
        daxpy(z, a, x, y);
    }

    void daxpy(z, a, x, y)
    double z[], a, x[], y;
    {
        int i;
        for (i = 0; i < 100; i++)
            z[i] = a * x[i] + y;
    }

Figure 5: The example program (sequential version).
    Fortran
    program ex1
      implicit none
      integer i
      double precision z(100), a, x(100), y
      do i = 1, 100
         z(i) = 0.0
         x(i) = 2.0
      end do
      a = 4.0
      y = 1.0
      call daxpy(z, a, x, y)
    end program ex1

    subroutine daxpy(z, a, x, y)
      integer i
      double precision z(100), a, x(100), y
    !$omp parallel do
      do i = 1, 100
         z(i) = a * x(i) + y
      end do
      return
    end

    C
    #include <stdio.h>
    #include <omp.h>

    void daxpy();

    main()
    {
        int i;
        double z[100], a, x[100], y;

        for (i = 0; i < 100; i++){
            z[i] = 0.0;
            x[i] = 2.0;
        }
        a = 4.0;
        y = 1.0;
        daxpy(z, a, x, y);
    }

    void daxpy(z, a, x, y)
    double z[], a, x[], y;
    {
        int i;
    #pragma omp parallel for
        for (i = 0; i < 100; i++)
            z[i] = a * x[i] + y;
    }

Figure 6: The example program parallelized with OpenMP.
    subroutine daxpy(z, a, x, y)
      integer i
      double precision z(100), a, x(100), y
    !$omp parallel do
      do i = 1, 100
         z(i) = a * x(i) + y
      end do
      return
    end

      thread 0:  i = 1 to 25     z(i) = a * x(i) + y
      thread 1:  i = 26 to 50    z(i) = a * x(i) + y
      thread 2:  i = 51 to 75    z(i) = a * x(i) + y
      thread 3:  i = 76 to 100   z(i) = a * x(i) + y

Figure 7: How the parallelized loop of Figure 6 is divided among four threads.

Not every loop can be parallelized this way, however. Consider:

    do i = 2, 100
       z(i) = z(i) + z(i - 1)
    end do

Here the computation of z(i) uses z(i-1), which is produced by the previous iteration, so iteration i cannot begin until iteration i-1 has completed. If parallel do (parallel for) is applied to such a loop anyway, iterations execute out of order and the result is wrong. How to deal with loops of this kind is a topic of installment (2) of this series.

4.3 Shared variables and private variables

The threads of an OpenMP program share memory, and by default they share the program's variables. In the program of Figure 6, the variables x, a, y and z are shared: when the threads refer to x(1), thread 0, thread 1 and thread 2 all read the same x(1) and obtain the same value, and similarly for a and y. Stores are equally visible to all threads: suppose the value of a*x(1)+y computed by thread 0
is 3.0; once thread 0 stores it into z(1), every thread that reads z(1) afterwards obtains 3.0.

The loop variable i is different. With the loop divided as in Figure 7, each thread must step through its own range of i. If i were one shared variable, the threads would interfere: while thread 0 is still executing the iteration for i = 1, thread 1 would set i to 26, and the loop could no longer run correctly. Each thread therefore needs its own copy of i; such a variable is called private. Thread 0's copy of i runs from 1 to 25, thread 1's from 26 to 50, and so on, each advancing independently of the others.

    subroutine daxpy(z, a, x, y)
      integer i
      double precision z(100), a, x(100), y
                                     <- z, a, x, y: shared (one copy,
                                        seen by all threads)
    !$omp parallel do
      do i = 1, 100
         z(i) = a * x(i) + y
      end do
                                     <- i: private (one copy per thread)
      return
    end

Figure 8: Shared and private variables in the example program.

Figure 8 summarizes the situation: the variables accessed in common (z, a, x, y) are shared, while the loop variable i is private. In OpenMP, the treatment of each variable can be specified by clauses on a directive. The shared clause declares variables to be shared:

    !$omp parallel shared(a)
The private clause gives each thread its own copy of the listed variables:

    !$omp parallel private(a)

Both clauses take a list: several variables may be named, separated by commas, and shared and private may appear together on one directive:

    !$omp parallel shared(a, b, c) private(d, e)

The parallel do (parallel for) directive in Figure 6 carries no such clauses, because OpenMP's default rules already give every variable in that program the right attribute. Figure 9 shows the same program with the attributes spelled out explicitly; it behaves identically.

    Fortran
    subroutine daxpy(z, a, x, y)
      integer i
      double precision z(100), a, x(100), y
    !$omp parallel do shared(z, a, x, y) private(i)
      do i = 1, 100
         z(i) = a * x(i) + y
      end do
      return
    end

    C
    void daxpy(z, a, x, y)
    double z[], a, x[], y;
    {
        int i;
    #pragma omp parallel for shared(z, a, x, y) private(i)
        for (i = 0; i < 100; i++)
            z[i] = a * x[i] + y;
    }

Figure 9: The example program with explicit shared and private clauses.

The default rules are as follows: variables are shared unless specified otherwise, while the loop variable of a parallelized loop is automatically private.
In C and C++ the automatic rule is narrower than in Fortran: only the loop variable of the loop named in the directive becomes private by itself. (In Fortran, OpenMP also makes the loop variables of DO loops nested inside a parallel region private automatically; C and C++ differ from Fortran on this point.) In a doubly nested loop, therefore, the inner loop variable should be declared private explicitly; if it is left shared, the threads overwrite each other's inner loop counter and the result is wrong. Figure 10 shows a doubly nested loop, a matrix-vector product, parallelized over the outer loop; the clause private(j) gives each thread its own inner loop variable j.

    Fortran
    subroutine matvec(a, x, y)
      integer i, j
      double precision a(100, 100), x(100), y(100)
    !$omp parallel do private(j)
      do i = 1, 100
         y(i) = 0.0
         do j = 1, 100
            y(i) = y(i) + a(j, i) * x(j)
         end do
      end do
      return
    end

    C
    void matvec(a, x, y)
    double a[][100], x[], y[];
    {
        int i, j;
    #pragma omp parallel for private(j)
        for (i = 0; i < 100; i++){
            y[i] = 0.0;
            for (j = 0; j < 100; j++)
                y[i] += a[i][j] * x[j];
        }
    }

Figure 10: Parallelization of a doubly nested loop (matrix-vector product).

4.4 Reduction

Figure 11 shows a function that computes the sum of the elements of an array. Suppose we parallelize it like Figure 6, by placing parallel do (parallel for) before the loop, so that the loop over i is divided among four threads, 25 iterations each.
    Fortran
    function total(x)
      integer i
      double precision t, total, x(100)
      t = 0.0
      do i = 1, 100
         t = t + x(i)
      end do
      total = t
      end

    C
    double total(x)
    double x[];
    {
        int i;
        double t;

        t = 0.0;
        for (i = 0; i < 100; i++)
            t += x[i];
        return t;
    }

Figure 11: Computing the sum of an array (sequential version).

In the program of Figure 11, every thread reads elements of the shared array x and adds them into the variable t. The loop variable i is private, but t is shared by all threads. If every element of x is 1.0, the result should be 100.0; when the loop is executed in parallel, however, the result usually comes out wrong. The reason is that the single statement t = t + x(i) is actually carried out in four steps:

    1: read the value of t
    2: read the value of x(i)
    3: add the two values
    4: store the result into t

In sequential execution these steps are performed for one iteration after another, as in Figure 12(a). In parallel execution, the steps of different threads are interleaved, as in Figure 12(b).
[Figure 12: Execution of the four steps of t = t + x(i). (a) Sequential execution: the steps for i = 1, 2, 3, ... are performed strictly one after another. (b) Parallel execution by four threads: thread 0 performs the steps for i = 1, 2, 3, ..., thread 1 for i = 26, 27, 28, ..., thread 2 for i = 51, 52, 53, ..., and thread 3 for i = 76, 77, 78, ..., all at the same time.]

In the parallel execution of Figure 12(b), the four threads carry out steps 1 to 4 simultaneously on the same shared variable t. Between the moment one thread reads t (step 1) and the moment it stores the new value back (step 4), other threads may read the same, still unmodified t. Each of them then adds its own x(i) = 1.0 to the old value and stores it back, so all but one of those additions are lost. Repeated over the whole loop, most updates of t disappear, and the final value of t ends up far smaller than the correct 100.0 (13.0 in the illustration of Figure 12(b)). Worse, the result depends on the timing of the threads, so it can change from run to run. A statement such as t = t + x(i), in which several threads update one shared variable, therefore cannot simply be executed in parallel (the sketch below shows this naive, incorrect version in full).
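Written out as code (an example added in this rewrite, not a figure from the article), the broken parallelization looks as follows; run repeatedly, it typically prints a different, too-small sum each time:

    #include <stdio.h>
    #include <omp.h>

    /* BROKEN on purpose: t is shared, so the threads' updates of t
       race with one another and additions are lost. */
    double total_broken(double x[], int n)
    {
        int i;
        double t = 0.0;
    #pragma omp parallel for
        for (i = 0; i < n; i++)
            t += x[i];            /* data race on the shared t */
        return t;
    }

    int main(void)
    {
        int i;
        double x[100];
        for (i = 0; i < 100; i++)
            x[i] = 1.0;
        printf("sum = %f (should be 100.0)\n", total_broken(x, 100));
        return 0;
    }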
For exactly this pattern, a computation that combines many values into one variable, OpenMP provides the reduction clause. As Figure 13 shows, the clause is simply added to the parallel do (parallel for) directive. reduction(+:t) declares that t is a reduction variable combined with the + operator; several variables can be listed, separated by commas, as in reduction(+:t,u,v). The effect is that each thread receives its own private copy of t, initialized to 0 (the identity of +); each thread accumulates its share of the x(i) into its own copy, and when the loop ends the partial sums of all threads are added into the original shared t. Inside the loop each thread touches only its private copy, so the interference described above cannot occur and the result is always correct. (A hand-written equivalent of this mechanism is sketched after Figure 13.)
    Fortran
    function total(x)
      integer i
      double precision t, total, x(100)
      t = 0.0
    !$omp parallel do reduction(+:t)
      do i = 1, 100
         t = t + x(i)
      end do
      total = t
      return
      end

    C
    double total(x)
    double x[];
    {
        int i;
        double t;

        t = 0.0;
    #pragma omp parallel for reduction(+:t)
        for (i = 0; i < 100; i++)
            t += x[i];
        return t;
    }

Figure 13: Computing the sum of an array (OpenMP version with reduction).
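What the reduction clause arranges can be pictured with the following hand-written equivalent, a conceptual sketch added in this rewrite (compilers need not implement reduction exactly this way):

    #include <omp.h>

    double total_manual(double x[], int n)
    {
        double t = 0.0;
    #pragma omp parallel
        {
            int i;
            double t_local = 0.0;   /* the private copy that
                                       reduction(+:t) would create */
    #pragma omp for
            for (i = 0; i < n; i++)
                t_local += x[i];    /* race-free: each thread adds
                                       into its own copy */
    #pragma omp critical            /* combine the partial sums one
                                       thread at a time */
            t += t_local;
        }
        return t;
    }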
5 Using OpenMP on the Center's machines

The rest of this article explains how to compile and run OpenMP programs on the Center's two shared-memory machines.

5.1 The machines on which OpenMP can be used

5.1.1 COMPAQ GS320

The COMPAQ GS320 (kyu-ss.cc.kyushu-u.ac.jp) is a shared-memory parallel computer built from Alpha processors (731MHz); since all CPUs access a common memory, it is well suited to OpenMP. The machine has 64 GBytes of main memory in total, of which a single process can use up to 16 GBytes.

On the GS320, OpenMP is supported by the Fortran and C compilers; the C++ compiler does not support OpenMP. Besides OpenMP, the GS320 also provides MPI, PVM and HPF, so parallel programs written with those systems for the VPP5000 or the UNIX server GP7000F model 900 can be used as well. The operating system is Compaq Tru64 UNIX (the successor of Digital UNIX), the UNIX for the Alpha CPUs on which the GS320 is built.

To use the GS320, first register from the UNIX server kyu-cc.cc.kyushu-u.ac.jp with the touroku command:

    kyu-cc% touroku kyu-ss
    Password:
    ...
    kyu-cc%

After registering, log in to the GS320 at:

    kyu-ss.cc.kyushu-u.ac.jp
5.1.2 UNIX server GP7000F model 900

The UNIX server, a FUJITSU GP7000F model 900 (kyu-cc.cc.kyushu-u.ac.jp), is a shared-memory parallel computer with 64 SPARC64-GP processors (300MHz). The architecture is SPARC and the operating system is Solaris 7, so programs for SPARC machines run on it as well. A single process can use up to 32 GBytes of memory.

On the GP7000F model 900, OpenMP is supported by the Fortran, C and C++ compilers. MPI is also available, as it is on the VPP5000 and the GS320.

The UNIX server is used by logging in directly to:

    kyu-cc.cc.kyushu-u.ac.jp

5.2 Compiling OpenMP programs

By default the compilers treat OpenMP directives as ordinary comments and produce a sequential program; an option must be given to make them interpret the directives.

On the GS320, that option is -omp, for both the Fortran and the C compiler. A source file with suffix .f90 is compiled as free-form Fortran, while .f and .for are compiled as fixed form. The executable file is named with -o, here example:

    kyu-ss% f90 -omp example.f90 -o example
    kyu-ss% cc -omp example.c -o example
Also on the GS320, adding the option -check omp_bindings (for C, -check_omp) makes the compiled program check that OpenMP directives are used in valid combinations, which helps when debugging OpenMP code.

On the GP7000F, the corresponding option is -KOMP, accepted by the Fortran, C and C++ compilers. A source file with suffix .f90 or .f95 is compiled as free-form Fortran, and .f or .for as fixed form. As before, -o names the executable:

    Fortran: kyu-cc% frt -KOMP example.f90 -o example
    C:       kyu-cc% fcc -KOMP example.c -o example
    C++:     kyu-cc% FCC -KOMP example.cc -o example

For OpenMP programs, the Fortran compiler additionally accepts the following options.
-Kspinwait
    Threads that are waiting, for example at the end of a parallel
    loop, keep spinning on their CPUs, so they can resume work
    immediately; the price is that the CPUs stay busy even while
    waiting. Meaningful only together with -KOMP. If neither
    -Kspinwait nor -Knospinwait is specified, -Kspinwait is assumed.

-Knospinwait
    Waiting threads release their CPUs, so other processes can use
    them; resuming the waiting threads takes correspondingly longer.
    Meaningful only together with -KOMP.

-Kthreadstacksize=N
    Sets the stack size of each thread to N KBytes (N >= 1). If the
    stack is too small, for example for large local arrays, the
    program fails at run time. Meaningful only together with -KOMP.
    The same quantity can also be set at run time with the
    environment variable THREAD_STACK_SIZE.

5.3 Running OpenMP programs

An OpenMP program is started like any other program. Unless the program itself calls omp_set_num_threads(), the number of threads is chosen with the environment variable OMP_NUM_THREADS, set before the program is run (see the example below). On the UNIX server GP7000F, the per-thread stack size can likewise be set, in KBytes, with the environment variable THREAD_STACK_SIZE.

Execution time is conveniently measured with the UNIX timex command. As an example, the program of Figure 6 was compiled once without OpenMP (ex1-seri) and once with OpenMP (ex1-para), and both were run under timex on the UNIX server with OMP_NUM_THREADS set to 2.
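The environment variables are set before the run; a sketch assuming a csh-family shell (which the kyu-cc% prompts suggest; under sh the form would be OMP_NUM_THREADS=2; export OMP_NUM_THREADS), with an arbitrary illustrative stack size:

    kyu-cc% setenv OMP_NUM_THREADS 2
    kyu-cc% setenv THREAD_STACK_SIZE 8192
    kyu-cc% ./ex1-para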
    kyu-cc% timex ./ex1-seri

    real        15.93
    user         ...
    sys          4.40

    kyu-cc% timex ./ex1-para

    real        10.54
    user         ...
    sys          4.46

Here real is the elapsed (wall-clock) time of the run, user is the CPU time consumed by the program itself, and sys is the CPU time consumed by the operating system on the program's behalf. The version compiled without OpenMP runs on one CPU; the OpenMP version, with OMP_NUM_THREADS set to 2, runs on two CPUs. Comparing the elapsed times, the speedup is 15.93/10.54 = 1.51. It is less than 2: among the reasons are the overhead of creating and synchronizing threads, and the fact that the CPUs are shared with other users' processes, so two CPUs are not always available to the program.

Long-running programs should not be run interactively in this way. On the UNIX server GP7000F (kyu-cc.cc.kyushu-u.ac.jp), the batch queues sc8 and sc32 are provided for parallel jobs, and jobs are submitted to them with the qsub command. Details of the batch system are available from the Center.

References

[1] Chandra, R., Menon, R., Dagum, L., Kohr, D., Maydan, D. and McDonald, J.: Parallel Programming in OpenMP, Morgan Kaufmann Publishers. A book-length treatment of programming with OpenMP.

[2] Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R. and Sunderam, V.: PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing, The MIT Press. A guide to PVM;
further information on PVM is available on the web.

[3] High Performance Fortran Forum (Japanese translation, NEC and others): High Performance Fortran 2.0. The official HPF 2.0 manual in Japanese translation. The HPF specification is also available on the web: kyushu-u.ac.jp/scp/system/library/fortran/hpf.html

[4] OpenMP Architecture Review Board: OpenMP Fortran Application Program Interface, October 1997. The OpenMP specification for Fortran (version 1.0; a Fortran version 2.0 and a C/C++ specification have also been issued). Japanese translations have been published by the Real World Computing Partnership (RWCP).

[5] Pacheco, P. S.: Parallel Programming with MPI, Morgan Kaufmann Publishers. An introduction to programming with MPI; a Japanese translation has been published. Information on using MPI at the Center is available on the web: system/library/mpl/mpi.html

[6] FUJITSU: UXP/V VPP Fortran (manual). The manual of VPP Fortran for the VPP series. Information on using VPP Fortran at the Center is available on the web: library/fortran/vpp_fortran.html