
Parallel Multicore CSB Format and Its Sparse Matrix Vector Multiplication

DOI: 10.4236/alamt.2014.41001

ABSTRACT

Sparse Matrix Vector Multiplication (SpMV) is one of the most basic problems in scientific and engineering computations. It is the core operation in many settings, such as solving linear systems or eigenvalue problems. Nowadays, more than 90 percent of the systems on the TOP500 list of the world's fastest parallel computers use multicore architectures, so it is of practical importance to design efficient methods for computing SpMV on multicore parallel computers. Algorithms based on the compressed sparse row (CSR) format are usually constrained by the number of nonzero elements in each row, which makes it hard to exploit the multicore structure efficiently. The Compressed Sparse Block (CSB) format is an effective storage format for computing SpMV efficiently on a multicore computer. This paper presents a parallel multicore CSB format and an SpMV algorithm based on it. We carried out numerical experiments on a parallel multicore computer. The results show that our parallel multicore CSB format and SpMV algorithm achieve high speedup, and they are highly scalable for banded matrices.
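To make the CSR limitation mentioned above concrete, the following is a minimal serial sketch of SpMV in the classic CSR format (not the paper's CSB implementation; the function and parameter names are illustrative). CSR stores the nonzeros row by row, so the natural parallelization is over rows; when rows hold only a few nonzeros each, the per-row work is too small to keep many cores busy, which is the imbalance that CSB's square-block partitioning is designed to avoid.

```c
#include <assert.h>
#include <stddef.h>

/* CSR SpMV sketch: y = A * x.
 * row_ptr[i]..row_ptr[i+1] indexes the nonzeros of row i in val/col.
 * Each row's dot product is independent, so a parallel version would
 * split the loop over i across cores -- but short rows give each core
 * very little work, the problem that motivates the CSB format. */
static void csr_spmv(size_t n_rows,
                     const size_t *row_ptr, const size_t *col,
                     const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col[k]];
        y[i] = sum;
    }
}
```

In contrast, CSB partitions the matrix into square blocks and stores each nonzero with small local coordinates inside its block, so parallel work can be scheduled over blocks (and over halves of a block row) rather than over whole rows.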

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

Yang, B., Gu, S., Gu, T., Zheng, C. and Liu, X. (2014) Parallel Multicore CSB Format and Its Sparse Matrix Vector Multiplication. Advances in Linear Algebra & Matrix Theory, 4, 1-8. doi: 10.4236/alamt.2014.41001.

References

[1] http://www.top500.org/
[2] Im, E.J., Yelick, K. and Vuduc, R. (2004) Sparsity: Optimization Framework for Sparse Matrix Kernels. International Journal of High Performance Computing Applications, 18, 135-158.
http://dx.doi.org/10.1177/1094342004041296
[3] Morton, G.M. (1966) A Computer Oriented Geodetic Data Base and a New Technique in File Sequencing. IBM Ltd., Ottawa.
[4] Jain, A. (2008) pOSKI: An Extensible Autotuning Framework to Perform Optimized SpMVs on Multicore Architectures. Master Thesis, Computer Science Department, University of California at Berkeley, Berkeley.
[5] Saad, Y. (2003) Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, Philadelphia.
http://dx.doi.org/10.1137/1.9780898718003
[6] Buluç, A., Fineman, J.T., Frigo, M., Gilbert, J.R. and Leiserson, C.E. (2009) Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication Using Compressed Sparse Blocks. Symposium on Parallelism in Algorithms and Architectures, 233-244.
[7] Rose, D.J. (1973) A Graph-Theoretic Study of the Numerical Solution of Sparse Positive Definite Systems of Linear Equations. Graph Theory and Computing, 183-217.
[8] Blumofe, R.D., Joerg, C.F., Kuszmaul, B.C., Leiserson, C.E., Randall, K.H. and Zhou, Y. (1995) Cilk: An Efficient Multithreaded Runtime System. Principles and Practice of Parallel Programming, Santa Barbara, 207-216.
[9] Boisvert, R., Pozo, R., Remington, K., Miller, B. and Lipman, R. (2000) The Matrix Market.
http://math.nist.gov/MatrixMarket/
[10] Davis, T. and Hu, Y. (2013) Sparse Matrix Collection. University of Florida, Gainesville.
http://www.cise.ufl.edu/research/sparse/matrices/


Copyright © 2019 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.