NAMD Package Benchmarking on the Base of Armenian Grid Infrastructure

PP. 34-40, DOI: 10.4236/cn.2012.41005


The parallel scaling of the NAMD package (parallel performance up to 48 cores) has been investigated by estimating the sensitivity of speedup and benchmark results to the interconnect, testing the parallel performance of Myrinet, InfiniBand and Gigabit Ethernet networks. The ApoA1 system of 92 K atoms, as well as systems of 16 K, 27 K, 54 K, 110 K, 210 K, 330 K and 1000 K atoms, served as test systems. The Armenian grid infrastructure (ArmGrid) was used as the main platform for the series of benchmarks. According to the results, owing to the high performance of the Myrinet and InfiniBand networks, the ArmCluster system and the cluster located at Yerevan State University show reasonable speedup values, whereas the scaling of clusters with various types of Gigabit Ethernet interconnects breaks down once internode communication is involved. The clusters equipped with Gigabit Ethernet are, however, sensitive to the size of the simulated system: for the 1000 K atom system no breakdown in scaling is observed. InfiniBand, in comparison with Myrinet, makes it possible to obtain nearly ideal results regardless of system size. In addition, a benchmarking formula is suggested that gives the computational throughput as a function of the number of processors. These results are useful, for instance, for choosing the most appropriate number of processors for a given system.
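The abstract mentions a benchmarking formula relating computational throughput to the number of processors, but does not reproduce it on this page. As a minimal sketch of the idea, the snippet below models throughput with a generic Amdahl's-law speedup; the serial fraction and single-core throughput figures are illustrative assumptions, not values from the paper.

```python
def speedup(p, serial_fraction):
    """Amdahl's-law speedup on p processors for a given serial fraction.

    Illustrative model only: the paper's actual benchmarking formula
    is not given in this abstract.
    """
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)


def throughput(p, t1, serial_fraction):
    """Estimated throughput on p processors (e.g. simulated ns/day),
    given single-core throughput t1 in the same units."""
    return t1 * speedup(p, serial_fraction)


if __name__ == "__main__":
    # Hypothetical numbers: 0.5 ns/day on one core, 2% serial fraction.
    for p in (1, 8, 16, 48):
        print(p, round(throughput(p, t1=0.5, serial_fraction=0.02), 3))
```

A model of this shape makes the abstract's practical point concrete: throughput gains flatten as the processor count grows, so there is a most cost-effective processor count for each system size.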

Cite this paper

A. Poghosyan, L. Arsenyan, H. Astsatryan, M. Gyurjyan, H. Keropyan and A. Shahinyan, "NAMD Package Benchmarking on the Base of Armenian Grid Infrastructure," Communications and Network, Vol. 4 No. 1, 2012, pp. 34-40. doi: 10.4236/cn.2012.41005.




Copyright © 2017 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.