Space Complexity of Algorithm for Modular Multiplicative Inverse
Boris S. Verkhovsky
DOI: 10.4236/ijcns.2011.46041

Abstract

In certain computational systems the amount of space required to execute an algorithm is even more restrictive than the time needed to solve the problem. In this paper an algorithm for computing the modular multiplicative inverse is introduced and its space complexity is analyzed. A tight upper bound on the bit storage required to execute the algorithm is provided. It is demonstrated that, for the range of numbers used in public-key encryption systems, the bit storage does not exceed a 2K-bit threshold in the worst case. This feature of the Enhanced-Euclid algorithm makes it possible to design special-purpose hardware implementing it as a subroutine in communication-secure wireless devices.
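
For illustration only: the paper's Enhanced-Euclid algorithm is not reproduced on this page. The sketch below shows the standard (textbook) extended Euclidean algorithm computing a modular multiplicative inverse in Python; the helper name mod_inverse is hypothetical and the code is not the author's space-optimized variant.

def mod_inverse(a: int, m: int) -> int:
    """Return x such that (a * x) % m == 1, via the extended Euclidean algorithm."""
    # Track the remainders and the Bezout coefficient of a only;
    # the coefficient of m is not needed to recover the inverse.
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("no inverse: gcd(a, m) != 1")
    return old_s % m

# Example usage: 65537 is invertible modulo the prime 1000003.
x = mod_inverse(65537, 1000003)
assert (65537 * x) % 1000003 == 1

The space profile of such a routine depends on how the intermediate quotients and coefficients are stored; the 2K-bit worst-case bound stated in the abstract refers to the author's Enhanced-Euclid algorithm, not to this textbook version.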

Share and Cite:

B. Verkhovsky, "Space Complexity of Algorithm for Modular Multiplicative Inverse," International Journal of Communications, Network and System Sciences, Vol. 4 No. 6, 2011, pp. 357-363. doi: 10.4236/ijcns.2011.46041.

Conflicts of Interest

The author declares no conflicts of interest.


Copyright © 2024 by the author and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.