Improving MapReduce Task Scheduling for CPU-GPU Heterogeneous Environments

Koichi Shirahata,†1 Hitoshi Sato†1 and Satoshi Matsuoka†1,†2,†3

MapReduce is a programming model that enables efficient processing of massive data in large-scale computing environments such as supercomputers and clouds. Meanwhile, such large-scale computers increasingly employ GPUs for their good peak performance and high memory bandwidth. However, scheduling MapReduce tasks onto CPUs and GPUs for efficient execution is difficult, since the best assignment depends on the characteristics of the running application and the underlying computing environment. To address this problem, we propose a hybrid online scheduling technique for GPU-based computing clusters, which minimizes the execution time of a submitted job using dynamic profiles of map tasks running on CPUs and GPUs. Experimental results using a K-Means application show that the proposed technique runs 1.02-1.93 times faster than simple techniques such as CPU-only or GPU-only scheduling.

1. Introduction

[Japanese body text lost in extraction. Surviving fragments indicate that this section introduces Google's MapReduce programming model 1), the spread of GPGPU 2) and CUDA 3) in large-scale systems such as TSUBAME2.0, and the difficulty of deciding whether map tasks should run on CPUs or on GPUs given I/O and data-transfer costs; it then outlines the proposed hybrid scheduling of map tasks across CPUs and GPUs (Fig. 1), evaluated with a K-Means application 4),5).]

© 2010 Information Processing Society of Japan
[Japanese body text lost in extraction. Surviving fragments state the contributions: (1) an extension of Hadoop MapReduce for hybrid CPU-GPU execution and (2) a scheduling technique in the JobTracker; the proposal achieves 1.0-1.25 times speedup over GPU-based scheduling and 1.02-1.93 times over CPU-only and 2-GPU configurations.]

Fig. 1 The overview of CPU-GPU hybrid processing on Hadoop

2. MapReduce and GPGPU

2.1 MapReduce

[Japanese body text lost in extraction. Surviving fragments review Google's MapReduce model: a job proceeds in three phases, Map, Shuffle, and Reduce. Map emits intermediate key-value pairs from the input records, Shuffle groups the intermediate values by key, and Reduce aggregates each key's values into output key-value pairs. Representative implementations include Hadoop 6), Phoenix 7), and Mars 8). Hadoop, a Java implementation modeled on Google's MapReduce and GFS (Google File System), stores data in HDFS; a master JobTracker assigns Map and Reduce tasks to worker TaskTrackers, with input split into blocks (64MB by default).]

2.2 GPGPU

[Japanese body text lost in extraction. Surviving fragments review GPGPU (general-purpose computing on GPUs) 2): GPUs offer high peak performance and memory bandwidth through SIMD-style parallelism, but data must be transferred between CPU (host) and GPU (device) memory, which can offset the gains.]
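The three phases reviewed in Section 2.1 can be illustrated with a minimal in-process sketch. This is not the paper's implementation; word counting and all function names are illustrative choices.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map: emit intermediate (key, value) pairs from each input record.
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: sort the intermediate pairs and group the values by key.
    ordered = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(ordered, key=itemgetter(0)):
        yield (key, [v for _, v in group])

def reduce_phase(grouped):
    # Reduce: aggregate each key's values into an output key-value pair.
    for key, values in grouped:
        yield (key, sum(values))

records = ["gpu cpu gpu", "cpu map reduce"]
result = dict(reduce_phase(shuffle_phase(map_phase(records))))
# result == {"cpu": 2, "gpu": 2, "map": 1, "reduce": 1}
```

In a real framework such as Hadoop, the shuffle is performed by the runtime between distributed map and reduce tasks; here it is a local sort-and-group standing in for that step.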
[Japanese body text lost in extraction. Surviving fragments describe Hadoop's two external-language interfaces (Fig. 2): Hadoop Streaming lets arbitrary executables act as Mapper and Reducer, while Hadoop Pipes lets C++ code implement the Map and Reduce functions, with the TaskTracker communicating with the external C++ process.]

Fig. 2 Hadoop Streaming and Hadoop Pipes

[Japanese body text lost in extraction. Surviving fragments note that NVIDIA provides CUDA, a C extension for GPGPU programming.]

3. Hybrid Scheduling of Map Tasks onto CPUs and GPUs

[Japanese body text lost in extraction. Surviving fragments indicate that the proposal extends Hadoop so that map tasks can execute on both CPUs and GPUs.]

3.1 Invoking CUDA from Hadoop

[Japanese body text lost in extraction. Surviving fragments list candidate mechanisms for invoking CUDA from Hadoop's Java runtime: Hadoop Streaming, Hadoop Pipes, JNI, and jcuda. Hadoop Streaming connects external programs to Hadoop through Unix pipes. JNI (Java Native Interface) allows Java code running in the JVM to call C/C++ code. jcuda (java for CUDA) 9) exposes the CUDA API to Java programs, covering the CUDA 2.1 API together with CUFFT, CUBLAS, and OpenGL interoperability.]
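The Hadoop Streaming interface mentioned in Section 3.1 exchanges tab-separated "key\tvalue" lines over standard input and output. A minimal sketch of that line protocol, simulated in-process rather than over real Unix pipes (the function names and the word-count payload are illustrative, not from the paper):

```python
import io

def streaming_mapper(stdin, stdout):
    # Hadoop Streaming feeds input records on stdin; the mapper writes
    # tab-separated "key\tvalue" lines to stdout.
    for line in stdin:
        for word in line.split():
            stdout.write(f"{word}\t1\n")

def streaming_reducer(stdin, stdout):
    # The framework sorts mapper output by key, so the reducer sees each
    # key's lines contiguously and can aggregate them with one pass.
    current, total = None, 0
    for line in stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                stdout.write(f"{current}\t{total}\n")
            current, total = key, 0
        total += int(value)
    if current is not None:
        stdout.write(f"{current}\t{total}\n")

# Simulated run: mapper -> framework sort -> reducer.
map_out = io.StringIO()
streaming_mapper(io.StringIO("gpu cpu\ngpu\n"), map_out)
sorted_lines = sorted(map_out.getvalue().splitlines(keepends=True))
red_out = io.StringIO()
streaming_reducer(io.StringIO("".join(sorted_lines)), red_out)
# red_out.getvalue() == "cpu\t1\ngpu\t2\n"
```

Because the protocol is just text on stdin/stdout, the mapper and reducer could equally be native binaries, which is why Streaming is one candidate for launching CUDA code from Hadoop.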
[Japanese body text lost in extraction. Surviving fragments indicate that jcuda supported the CUDA 2.1 API but not the then-current CUDA 2.2, so the implementation adopts Hadoop Pipes.]

3.2 Hybrid Execution of Map Tasks on CPUs and GPUs

[Japanese body text lost in extraction. Surviving fragments indicate that the map tasks of a job are executed concurrently on CPU cores and GPU devices, while reduce tasks run on CPUs.]

3.3 Dynamic Profiling of Map Tasks on CPUs and GPUs

[Japanese body text lost in extraction. Surviving fragments indicate that the scheduler monitors the running times of map tasks on CPUs and GPUs 10) and uses these dynamic profiles to decide where to place subsequent map tasks.]

3.4 Minimizing the Job Execution Time

Let N be the total number of map tasks, n the number of CPU cores, m the number of GPUs, and t the mean execution time of a map task on one GPU. The acceleration factor a is defined as

    a = (mean map task time run on CPU) / (mean map task time run on GPU),

so a map task takes time at on one CPU core. With x map tasks assigned to CPUs and y map tasks assigned to GPUs, the job completion time is minimized by solving:

    minimize  f(x, y)
    subject to  f(x, y) = max{ (x/n)·at, (y/m)·t },
                x + y = N,
                x, y ≥ 0.

[Remaining Japanese explanation lost in extraction; surviving fragments indicate that x = 0 corresponds to GPU-only scheduling and y = 0 to CPU-only scheduling.]
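When both resource pools are kept busy, the minimization above has a closed form: equalizing the two terms, (x/n)·at = (y/m)·t with x + y = N, gives x = Nn/(n + am). A minimal sketch of that solution (the function names and the example figures are illustrative assumptions, not values from the paper):

```python
def optimal_split(N, n, m, a):
    # Balance per-pool finish times (x/n)*a*t = (y/m)*t under x + y = N:
    # x = N*n / (n + a*m) map tasks go to CPUs, the remainder to GPUs.
    x = N * n / (n + a * m)
    y = N - x
    return x, y

def finish_time(x, y, n, m, a, t):
    # f(x, y) = max{(x/n)*a*t, (y/m)*t}
    return max(x / n * a * t, y / m * t)

# Example: N=640 map tasks, n=14 CPU cores, m=2 GPUs, and a CPU map task
# 5x slower than a GPU map task (a=5): both pools finish simultaneously.
x, y = optimal_split(640, 14, 2, 5.0)
```

In practice x and y must be rounded to integers and a, t come from the dynamic profiles of Section 3.3, so the scheduler re-solves this split as the profiles are updated.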
[Japanese body text lost in extraction. Surviving fragments describe the implementation structure (Fig. 3): under Hadoop Pipes, each Child JVM launched by a TaskTracker drives a C++ process that executes the Map or Reduce function over key-value pairs, and the extension lets a map task binary target either the CPU or the GPU.]

Fig. 3 The structure of task scheduling on Hadoop

4. Implementation

[Japanese body text lost in extraction. Surviving fragments indicate that the implementation extends Hadoop's JobTracker and TaskTracker so that map tasks are scheduled onto CPUs and GPUs using CUDA (Fig. 3).]

4.1 Executing Map Tasks on GPUs under Hadoop

[Japanese body text lost in extraction. Surviving fragments indicate that GPU map tasks are written in CUDA C/C++ and invoked through Hadoop Pipes. Job execution proceeds as: (1) the user submits a MapReduce job through the JobClient; (2) the JobClient registers the job with the JobTracker; (3) the JobTracker assigns Map and Reduce tasks to TaskTrackers; (4) each TaskTracker launches a Child JVM to run its task. The extension distinguishes CPU slots from GPU slots: the JobTracker tracks which slots on each TaskTracker are CPU or GPU slots, considers data locality with respect to DataNodes, and dispatches each map task to a CPU or GPU slot accordingly.]
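The per-slot dispatch decision described in Section 4.1 can be sketched with a simple rule based on profiled mean task times: an idle CPU slot should receive a map task only if taking it would not delay the job compared with leaving the backlog to the faster GPUs. This heuristic and its names are an illustrative assumption, not the paper's exact algorithm.

```python
def should_assign_to_cpu(remaining, m, cpu_time, gpu_time):
    # remaining: map tasks not yet dispatched; m: number of GPU slots.
    # cpu_time / gpu_time: profiled mean map task times (Section 3.3).
    # gpu_only: time for the m GPUs to drain the whole backlog themselves.
    gpu_only = remaining / m * gpu_time
    # with_cpu: finish time if this CPU slot takes one task right now.
    with_cpu = max(cpu_time, (remaining - 1) / m * gpu_time)
    return with_cpu <= gpu_only

# Early in the job the CPU slot helps; near the end of the job a slow
# CPU task would outlast the GPUs' remaining work and only delay the job.
early = should_assign_to_cpu(100, 2, 5.0, 1.0)  # -> True
late = should_assign_to_cpu(2, 2, 5.0, 1.0)     # -> False
```

The rule captures the qualitative behavior of the hybrid scheduler: CPUs share the load while the backlog is large, and the tail of the job is reserved for the faster device.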
4.2 Scheduling Map Tasks between CPUs and GPUs

[Japanese body text lost in extraction. Surviving fragments indicate that the JobTracker assigns map tasks in response to heartbeat messages from TaskTrackers: on each heartbeat, it uses the dynamic CPU/GPU profiles to decide whether the requesting CPU or GPU slot should receive the next map task, following the model of Section 3.4.]

5. Evaluation

5.1 Experimental Setup

[Japanese body text lost in extraction. Surviving fragments describe a K-Means experiment on the TSUBAME supercomputer using up to 64 nodes, each with AMD Opteron (Dual Core) CPUs at 2.4GHz and Tesla S1070 GPUs at 1.296-1.44GHz (Table 1). K-Means proceeds as: (1) choose k initial centroids; (2) assign each point to its nearest centroid; (3) recompute each centroid from its assigned points; (4) repeat until convergence. The experiment uses k = 128 and about 20GB of input data on a Lustre file system (32MB blocks, write 180MB/s, read 610MB/s). The compared per-node configurations include CPU-only (16 CPU slots), GPU-based, 15 CPU + 1 GPU slots, and 14 CPU + 2 GPU slots.]

5.2 Results

[Japanese body text lost in extraction. Surviving fragments indicate that for the 15CPU+1GPU and 14CPU+2GPU settings the hybrid scheduling achieves 1.0-1.25 times speedup over GPU-based scheduling and 1.02-1.93 times over the CPU-only and 2-GPU configurations (Fig. 4).]
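The four K-Means steps listed in Section 5.1 can be sketched in a few lines. This is a pure, single-process illustration of the algorithm, not the paper's CUDA implementation, and all names are illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # (1) choose k initial centroids from the input points.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # (2) assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # (3) recompute each centroid as the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
        # (4) repeat (a fixed iteration count stands in for a convergence test).
    return centroids

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
cents = kmeans(pts, 2)
```

In the MapReduce formulation used by the paper, step (2) is the map phase (emit nearest-centroid id as key) and step (3) is the reduce phase (average the points per centroid id), which is what makes the assignment step a natural candidate for GPU map tasks.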
[Japanese discussion text lost in extraction. Surviving fragments of the results discussion mention speedups of 1.29 and 1.02 at 64 and 32 nodes respectively, the number of map tasks produced by the 20GB input split into 32MB blocks, and I/O contention as a limiting factor; they also note that with only one GPU per node the profiling-based split between CPU and GPU map tasks is what yields the gain.]

Fig. 4 Total Job Time of K-Means on TSUBAME

6. Related Work

[Japanese body text lost in extraction. Surviving fragments cite compiler and runtime support for generalized reductions on heterogeneous parallel configurations 10), Qilin's adaptive mapping of computations onto CPUs and GPUs 11), scheduling for MapReduce in heterogeneous environments 12), and further work on heterogeneous scheduling 13).]

7. Conclusion

[Japanese body text lost in extraction. Surviving fragments restate the contribution: a hybrid technique for scheduling map tasks onto CPUs and GPUs on Hadoop, using dynamic profiles of map tasks, evaluated with K-Means.]
[Japanese conclusion text lost in extraction. Surviving fragments restate the measured speedups: 1.0-1.25 times over GPU-based scheduling and 1.02-1.93 times over the CPU-only and 2-GPU configurations.]

Acknowledgments [Japanese text lost in extraction; surviving fragments mention Grant-in-Aid 18049028 and JST CREST "ULP-HPC".]

References

1) Dean, J. and Ghemawat, S.: MapReduce: Simplified Data Processing on Large Clusters, OSDI '04: Sixth Symposium on Operating System Design and Implementation, pp.137-150 (2004).
2) Owens, J.D., Houston, M., Luebke, D., Green, S., Stone, J.E. and Phillips, J.C.: GPU Computing, Proc. IEEE, Vol.96, No.5, pp.879-899 (2008).
3) John, N., Ian, B., Michael, G. and Kevin, S.: Scalable Parallel Programming with CUDA, Queue, Vol.6, No.2, pp.40-53 (2008).
4) Jain, A.K. and Dubes, R.C.: Algorithms for Clustering Data, Prentice-Hall, Inc., Upper Saddle River, NJ, USA (1988).
5) Hong-tao, B., Li-li, H., Dan-tong, O., Zhan-shan, L. and He, L.: K-Means on Commodity GPUs with CUDA, Computer Science and Information Engineering, 2009 WRI World Congress, pp.651-655 (2009).
6) Bialecki, A., Cordova, M., Cutting, D. and O'Malley, O.: Hadoop: a framework for running applications on large clusters built of commodity hardware (2005).
7) Ranger, C., Raghuraman, R., Penmetsa, A., Bradski, G. and Kozyrakis, C.: Evaluating MapReduce for Multi-core and Multiprocessor Systems, Proceedings of the 13th Intl. Symposium on High-Performance Computer Architecture (HPCA) (2007).
8) He, B., Fang, W., Luo, Q., Govindaraju, N.K. and Wang, T.: Mars: A MapReduce Framework on Graphics Processors, Parallel Architectures and Compilation Techniques, pp.260-269 (2008).
9) Company for Advanced Supercomputing Solutions Ltd.: jcuda, http://hoopoecloud.com/solutions/jcuda/default.aspx.
10) Vignesh, T.R., Wenjing, M., David, C. and Gagan, A.: Compiler and runtime support for enabling generalized reduction computations on heterogeneous parallel configurations, ICS '10: Proceedings of the 24th ACM International Conference on Supercomputing, New York, NY, USA, ACM, pp.137-146 (2010).
11) Luk, C.-K., Hong, S. and Kim, H.: Qilin: Exploiting Parallelism on Heterogeneous Multiprocessors with Adaptive Mapping, MICRO '09, pp.45-55 (2009).
12) Zaharia, M., Konwinski, A., Joseph, A.D., Katz, R. and Stoica, I.: Improving MapReduce Performance in Heterogeneous Environments, Technical report, EECS Department, University of California, Berkeley (2008).
13) [Japanese entry; authors and title lost in extraction] Vol.47, No.SIG 18 (ACS 16), pp.92-114 (2006).