
Memory bandwidth optimization of SpMV on GPGPUs

  • Research Article
  • Frontiers of Computer Science

Abstract

Improving the performance of sparse matrix-vector multiplication (SpMV) is an important task, and a difficult one because of SpMV's irregular memory access pattern. General-purpose GPUs (GPGPUs) provide high computing ability and substantial memory bandwidth, but SpMV cannot fully exploit them due to this irregularity. In this paper, we propose two novel methods to optimize memory bandwidth for SpMV on GPGPUs. First, we propose a new storage format that exploits the memory bandwidth of the GPU architecture more efficiently by keeping as many non-zeros as possible in a layout suited to the GPU's memory system. Second, we propose a cache blocking method to improve the performance of SpMV on the GPU architecture: the sparse matrix is partitioned into sub-blocks that are stored in CSR format. With the blocking method, the corresponding part of the vector x can be reused in the GPU cache, so the time spent accessing global memory for x is greatly reduced. Experiments are carried out on three GPU platforms: GeForce 9800 GX2, GeForce GTX 480, and Tesla K40. The experimental results show that both methods efficiently improve the utilization of GPU memory bandwidth and the performance of SpMV on the GPU.
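
For context, the sketch below shows the baseline scalar CSR SpMV kernel in CUDA that such optimizations start from. It is a minimal sketch, not the paper's implementation; the kernel and variable names (csr_spmv_scalar, row_ptr, col_idx) are illustrative assumptions. The comments mark the scattered access to x that the cache blocking method targets.

    // A minimal sketch, assuming the standard CSR layout; the kernel and
    // variable names are illustrative, not taken from the paper.
    #include <cuda_runtime.h>

    // Baseline scalar CSR SpMV: one thread per row.
    __global__ void csr_spmv_scalar(int num_rows,
                                    const int   *row_ptr,  // row offsets, size num_rows + 1
                                    const int   *col_idx,  // column index of each non-zero
                                    const float *val,      // non-zero values
                                    const float *x,        // dense input vector
                                    float       *y)        // dense output vector
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= num_rows) return;

        float dot = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j) {
            // col_idx[j] is arbitrary, so this load of x is a scattered,
            // cache-unfriendly global-memory access; it is exactly the
            // access that cache blocking tries to keep resident in cache.
            dot += val[j] * x[col_idx[j]];
        }
        y[row] = dot;
    }

Under the cache blocking scheme described in the abstract, the matrix would instead be partitioned into sub-blocks (column strips) narrow enough that the slice of x each sub-block touches fits in the GPU cache; each sub-block is stored as its own CSR matrix and processed in turn, so repeated references to that slice of x become cache hits instead of global-memory reads.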



Author information

Corresponding author

Correspondence to Weizhi Xu.

Additional information

Chenggang Clarence Yan received his PhD from the Institute of Computing Technology, Chinese Academy of Sciences, China, in 2013. He is a post-doctoral research fellow with the Department of Automation, Tsinghua University, China. His research interests include parallel computing, video coding, computational photography, computer vision, and multimedia communication.

Hui Yu is a PhD candidate in the Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, China. Her research interests include parallel computing, natural language processing, and machine translation.

Weizhi Xu received his PhD in Computer Architecture from the Institute of Computing Technology, Chinese Academy of Sciences, China. He is a postdoctoral researcher at the Institute of Microelectronics, Tsinghua University, China. His research interests include high performance algorithms and architecture.

Yingping Zhang received his PhD in Computer Application Technology from the Institute of Computing Technology, Chinese Academy of Sciences, China. He is an R&D engineer at the State Grid Information & Communication Company of Hunan EPC. His research interests include high performance algorithms and computer graphics and vision.

Bochuan Chen received his MS in Control Engineering from Wuhan University, China. He is the general manager of the State Grid Information & Communication Company of Hunan EPC. His research interests include simulation machines and manufacturing automation.

Zhu Tian received her BS in Digital Media Technology from the School of Mechanical, Electrical, and Information Engineering, Shandong University, China. Her research interests include high performance algorithms and parallel computing.

Yuxuan Wang received her BS in Digital Media Technology from the School of Mechanical, Electrical, and Information Engineering, Shandong University, China. Her research interests include high performance algorithms and programming on multi-core architectures.

Jian Yin is an associate professor at Shandong University, China, and a PhD candidate in computer science there. He received his MS from Harbin Institute of Technology, China. His research interests include computer graphics and digital image processing.


About this article


Cite this article

Yan, C.C., Yu, H., Xu, W. et al. Memory bandwidth optimization of SpMV on GPGPUs. Front. Comput. Sci. 9, 431–441 (2015). https://doi.org/10.1007/s11704-014-4127-1

