A Distributed Virtual Machine for Microsoft .NET


Today, an ever-increasing number of natural scientists use computers for data analysis, modeling, simulation and visualization of complex problems. However, in the last decade computer architecture has changed significantly, making it increasingly difficult to fully utilize the power of the processor unless the scientist is a trained programmer. The reasons for this shift include the change from single-core to multi-core processors, as well as the decreasing price of hardware, which allows researchers to build cluster computers from commodity components. Scientists must therefore handle not only multi-core processors, but also the problems associated with writing distributed-memory programs and managing communication between hundreds of multi-core machines. Fortunately, a number of systems exist to help the scientist, e.g. the Message Passing Interface (MPI) [1] for handling communication, DistNumPy [2] for handling data distribution, and Communicating Sequential Processes (CSP) [3] for handling concurrency-related problems. However, all of these approaches require that scientists learn a new method and then rewrite their programs, which means more work for the scientist. A solution that demands little extra work from the scientist is automatic parallelization. However, despite research dating back three decades, fully automatic parallelization has yet to prove feasible for programs in general; only some classes of programs can be automatically parallelized to an extent. This paper describes an external library that provides a Parallel.For loop construct, allowing the body of a loop to be run in parallel across multiple networked machines, i.e. on distributed-memory architectures. The individual machines themselves may of course be shared-memory nodes. The idea is inspired by Microsoft's Task Parallel Library [4], which supplies multiple parallel constructs. However, unlike Microsoft's library, our library supports distributed-memory architectures. Preliminary tests have shown that simple problems can be distributed easily and achieve good scalability. Unfortunately, the tests also show that scalability is limited by the number of accesses made to shared variables. The applicability of the library is therefore not general, but limited to the subset of applications with only modest communication needs.
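For reference, the shared-memory Parallel.For construct from Microsoft's Task Parallel Library [4] that inspired this work looks as follows. This is a minimal sketch using the standard .NET API only; it is not the paper's distributed library, whose construct keeps the same loop shape but schedules iterations across networked machines:

```csharp
using System;
using System.Threading.Tasks;

class ParallelForExample
{
    static void Main()
    {
        double[] results = new double[1_000_000];

        // Each iteration writes only its own element, so the body can run
        // concurrently without synchronization; frequent accesses to shared
        // variables would limit scalability, as the preliminary tests show.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        Console.WriteLine(results[results.Length - 1]);
    }
}
```

Because the iteration body is an ordinary delegate, a distributed implementation can adopt the identical signature while shipping the body to remote nodes, which is the design point the paper exploits.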

Share and Cite:

M. Larsen and B. Vinter, "A Distributed Virtual Machine for Microsoft .NET," Journal of Software Engineering and Applications, Vol. 5, No. 12, 2012, pp. 1023-1030. doi: 10.4236/jsea.2012.512119.

Conflicts of Interest

The authors declare no conflicts of interest.


[1] A. Geist, et al., “MPI-2: Extending the Message-Passing Interface,” Euro-Par’96 Parallel Processing, Springer, Berlin/Heidelberg, 1996, pp. 128-135.
[2] M. R. B. Kristensen and B. Vinter, “Numerical Python for Scalable Architectures,” Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model (PGAS’10), New York, 12-15 October 2010, pp. 15:1-15:9. doi:10.1145/2020373.2020388
[3] C. A. R. Hoare, “Communicating Sequential Processes,” Communications of the ACM, Vol. 21, No. 8, 1978, pp. 666-677. doi:10.1145/359576.359585
[4] Microsoft, “Parallel Programming in the .NET Framework,” 2012. http://msdn.microsoft.com/en-us/library/dd460693
[5] OpenMP, “OpenMP,” 2012. http://www.openmp.org
[6] J. P. Hoeflinger, “Extending OpenMP to Clusters,” 2012. http://www.hearne.co.uk/attachments/OpenMP.pdf
[7] T. Seidmann, “Distributed Shared Memory Using the .NET Framework,” 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid, Tokyo, 12-15 May 2003, pp. 457-462. doi:10.1109/CCGRID.2003.1199401
[8] K. Asanovic, et al., “The Landscape of Parallel Computing Research: A View from Berkeley,” Technical Report No. UCB/EECS-2006-183, University of California, Berkeley, 2006.
[9] L. Lamport, “How to Make a Multiprocessor That Correctly Executes Multiprocess Programs,” IEEE Transactions on Computers, Vol. C-28, No. 9, 1979, pp. 690-691. doi:10.1109/TC.1979.1675439
[10] S. V. Adve and M. D. Hill, “Weak Ordering—A New Definition,” Proceedings of the 17th Annual International Symposium on Computer Architecture (ISCA’90), Seattle, 28-31 May 1990, pp. 2-14.
[11] M. S. Papamarcos and J. H. Patel, “A Low-Overhead Coherence Solution for Multiprocessors with Private Cache Memories,” Proceedings of the 11th Annual International Symposium on Computer Architecture, Ann Arbor, Michigan, 5-7 June 1984, pp. 348-354.
[12] J. P. Hoeflinger, “Extending OpenMP to Clusters,” 2012. http://www.hearne.co.uk/attachments/OpenMP.pdf

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.