Data Acquisition System for the Mexican Sign Language

Abstract

Communication is a fundamental human need; however, people with hearing impairments still face barriers that limit their social and educational inclusion. In Mexico, Mexican Sign Language (MSL) is the primary communication medium for the deaf community, yet its lack of recognition among the hearing population continues to create a communication gap. This project proposes the development of a data acquisition system applied to a translator glove capable of interpreting the manual movements of MSL and converting them into text. The methodology integrates flex sensors connected to an Arduino microcontroller board, which processes the signals generated by finger movements and displays them on a digital interface. The resulting prototype translates nine representative phrases of MSL and demonstrates the technical feasibility of the system, providing an accessible and low-cost alternative to promote inclusive communication and technological accessibility.

Laureano-Cruces, A., Mora-Torres, M., Garduño-Bonilla, I., Santiago-Pérez, N. and Hernández-Bautista, R. (2026) Data Acquisition System for the Mexican Sign Language. International Journal of Intelligence Science, 16, 59-69. doi: 10.4236/ijis.2026.161003.

1. Introduction

Communication is a fundamental human need and a universal right. However, people with hearing impairments continue to face significant barriers that limit their full participation in society. In Mexico, Mexican Sign Language (MSL) constitutes the main means of communication for this community, allowing interaction between deaf and hearing people in educational, occupational, and social contexts [1]. Nevertheless, the widespread lack of knowledge of MSL among the hearing population remains one of the primary causes of exclusion and communicative isolation.

In this context, the development of inclusive technologies has emerged as a viable alternative to reduce communication gaps and promote accessibility [2]-[4]. Among these solutions, sensor-based and microcontroller-based systems stand out for their ability to translate the characteristic manual movements of sign languages into text or synthesized speech [5]-[8]. These tools not only facilitate communication but also promote independence and social inclusion among individuals with hearing or speech disabilities [9] [10].

The present project aims to design and develop a technological glove capable of capturing and processing hand movements associated with MSL, using flex sensors connected to an Arduino board. The system converts the analog signals generated by the sensors into digital information that can be translated into visible text through a user interface [11] [12].

Additionally, this project seeks to offer an affordable and reproducible alternative to other high-cost commercial devices or applications—such as SignARTE, Svisual, or Visualfy Home—which, despite their functionality, often require advanced technological infrastructure or proprietary licenses [9] [10]. This approach responds to the need to democratize access to assistive technologies, emphasizing ergonomics, usability, and efficiency in data acquisition.

The methodology includes the selection of commonly used phrases in everyday communication, the calibration and training of flex sensors, their integration into an ergonomic glove, the processing and storage of acquired data, and the experimental validation through translation tests. Accessible materials and open-source tools were employed, ensuring the system’s replicability and potential for future improvement [11] [12].

As a result, a functional prototype was developed, capable of translating nine representative phrases of Mexican Sign Language—three main phrases and two alternative phrases for each—while incorporating a haptic and ergonomic design that supports natural and comfortable gestural communication. This achievement represents a relevant contribution to the field of inclusive technology, offering a practical, low-cost data acquisition system oriented toward equitable social interaction.

2. Methodology

This work proposes a data acquisition and processing system aimed at translating gestures of Mexican Sign Language (MSL) into text displayed in real time. The design focuses on developing an intelligent glove capable of capturing the user’s hand movements through flex sensors, processing them via an Arduino microcontroller board, and displaying the results on an LED matrix.

For this first version of the prototype, three core phrases of Mexican Sign Language—“Hello,” “No,” and “Goodbye”—were selected due to their high frequency in everyday interactions and their relevance to fundamental communicative acts: greeting, negation, and leave-taking. Studies in interpersonal communication indicate that greetings and farewells are universal, high-frequency communicative routines in daily interaction [13] [14].

In the case of MSL, these phrases exhibit clear, discrete, and stable gestural configurations, making them an appropriate starting point for an initial prototype based solely on flex sensors. This selection provides a reliable basis for evaluating system accuracy before moving toward broader vocabularies or dynamic gesture recognition.

2.1. General System Design

The general scheme of the system consists of three main modules: capture, processing, and visualization. In the capture module, the flex sensors detect finger curvature, generating voltage variations proportional to the intensity of movement. In the processing module, the Arduino board interprets the analog signals and converts them into digital data, assigning each movement pattern to a predefined phrase. Finally, in the visualization module, the LED matrix displays the corresponding text on screen, completing the translation process [8] [11].
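As a conceptual illustration of this capture-processing-visualization flow, the following minimal Arduino sketch shows how such a loop can be organized. The pin numbers, the single threshold value, the placeholder gesture pattern, and the use of the serial monitor as a stand-in for the LED matrix are assumptions introduced for illustration; they do not reproduce the prototype's actual code.

```cpp
// Conceptual skeleton of the capture -> processing -> visualization loop.
// Pins, the threshold, and the gesture-to-phrase mapping are illustrative placeholders.

const uint8_t FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // capture: one flex sensor per finger

void setup() {
  Serial.begin(9600);   // serial output stands in for the LED matrix in this sketch
}

void loop() {
  // 1) Capture: read the voltage variation produced by each flex sensor.
  int readings[5];
  for (int i = 0; i < 5; i++) {
    readings[i] = analogRead(FLEX_PINS[i]);        // 0-1023 on the 10-bit ADC
  }

  // 2) Processing: convert readings into a movement pattern using a threshold.
  bool allFingersBent = true;
  for (int i = 0; i < 5; i++) {
    if (readings[i] < 600) allFingersBent = false; // 600 is a placeholder threshold
  }

  // 3) Visualization: emit the phrase assigned to the detected pattern
  //    (placeholder mapping; the real patterns follow the MSL gestures).
  if (allFingersBent) {
    Serial.println("Hello");                       // the prototype shows this on the LED matrix
  }
  delay(100);
}
```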

This design aims to balance accuracy, comfort, and accessibility, ensuring that the glove is functional for users with different levels of mobility and adaptable to diverse communication situations.

2.2. Components and Materials

For the construction of the glove, low-cost, widely available, and easily integrable components were selected to ensure replicability and sustainability.

The main components are:

  • Flex Sensors: Five flexible sensors placed on the upper side of the fingers to detect flexion and extension movements [5]-[8].

  • Arduino Board: Serves as the central control unit, receiving sensor signals and executing the interpretation program [11] [12].

  • LED Matrix: Output device responsible for visualizing the translated text synchronized with the hand movements.

  • Dupont Connectors and Protoboard: Used to establish electrical interconnections between the sensors, the board, and the LED matrix.

  • Base Glove: Made of elastic, resistant, and breathable material to ensure comfort and freedom of movement.

The glove’s physical design prioritized ergonomics and assembly stability, using springs and reinforced seams to keep the sensors properly aligned without compromising comfort. Adjustment tests were conducted to determine the optimal placement of components, avoiding tension on electrical joints or restrictions in finger mobility.

2.3. Development Stages

The development of the translator glove was carried out in four main phases: hardware design, sensor calibration, software development, and prototype validation.

1) Hardware Design and Construction

During this phase, the flex sensors were integrated into the glove, ensuring alignment with the natural axes of finger movement to maximize sensitivity [5] [6]. Each sensor was fixed using stitching and flexible adhesives, avoiding structural rigidity and enabling accurate readings in different hand positions.

The sensors were connected via Dupont wires to a protoboard, which linked the circuit to the Arduino board. Calibration resistors were included to stabilize input signals and prevent electrical interference. Finally, the LED matrix was integrated as the output module, configured to display clear and legible text synchronized with the processed data [9] [10]. The electronic design was optimized for low energy consumption and minimal response delay between gesture execution and visualization.
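Although the exact resistor values are not specified here, flex sensors of this kind are normally read through a resistive voltage divider formed with a fixed calibration resistor. Assuming the fixed resistor occupies the lower leg of the divider, the voltage presented to the Arduino's analog input follows the standard relation V_out = V_cc * R_fixed / (R_flex + R_fixed), so as a finger bends and the sensor's resistance R_flex rises, V_out falls monotonically; this is the voltage variation that the capture module digitizes with the 10-bit ADC (readings from 0 to 1023 over the 0-5 V range). The divider orientation and component values stated in this paragraph are assumptions for illustration.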

2) Sensor Measurement and Calibration

The system requires a short calibration process whenever it is used by a new person. Calibration consists of recording the minimum and maximum flexion values of each finger—fully extended and fully bent positions—to generate personalized thresholds that enable consistent gesture recognition. This procedure is repeated for the five sensors, and the resulting values are incorporated into the Arduino code.
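The following minimal sketch illustrates one way this min/max recording could be performed for a single sensor, assuming the sensor sits on pin A0, a five-second capture window per hand position, and a midpoint threshold rule; the prototype's actual calibration routine may differ in these details.

```cpp
// Minimal calibration sketch for one flex sensor (assumed on A0).
// Records the fully extended and fully bent readings, then derives a threshold.

const uint8_t SENSOR_PIN = A0;

int captureExtreme(const char *prompt, unsigned long windowMs, bool keepMax) {
  Serial.println(prompt);
  unsigned long start = millis();
  int extreme = keepMax ? 0 : 1023;
  while (millis() - start < windowMs) {
    int value = analogRead(SENSOR_PIN);
    if (keepMax ? (value > extreme) : (value < extreme)) extreme = value;
    delay(10);
  }
  return extreme;
}

void setup() {
  Serial.begin(9600);
  // Minimum: finger fully extended; maximum: finger fully bent (assumed polarity).
  int minVal = captureExtreme("Hold the finger fully extended...", 5000, false);
  int maxVal = captureExtreme("Now bend the finger fully...", 5000, true);

  // Personalized threshold: midpoint between the two extremes (one possible rule).
  int threshold = (minVal + maxVal) / 2;

  Serial.print("min=");        Serial.print(minVal);
  Serial.print(" max=");       Serial.print(maxVal);
  Serial.print(" threshold="); Serial.println(threshold);
  // These values would then be incorporated into the main Arduino program, as described above.
}

void loop() {}
```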

It is important to state that the reported 95.2% accuracy was obtained with a single user, after completing this individual calibration. Although the system can be adjusted for different users by recalibrating these thresholds, accuracy may vary if the glove is used without performing this procedure for each new user.

3) Software Development

Programming was implemented in the Arduino IDE environment, using the MD_MAX72xx, MD_Parola, and SPI libraries, which facilitate communication with the LED matrix and text animation [11] [12]. The program was structured to receive analog data from the sensors, process it using logical conditionals, and translate it into predefined text strings.

Each flexion pattern corresponded to a common-use phrase such as “Hello”, “Don’t touch me”, “See you”, or “Goodbye”. The program was designed modularly, allowing new phrases or gestures to be added in future prototype versions without rewriting the entire code.
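A condensed sketch of how such logical conditionals can be combined with the MD_MAX72xx/MD_Parola stack is shown below. The module type, pin assignments, threshold values, and bit patterns are assumptions introduced for illustration, not the prototype's exact code, which maps the full set of nine phrases.

```cpp
#include <MD_Parola.h>
#include <MD_MAX72xx.h>
#include <SPI.h>

// Assumed hardware: four cascaded FC16-type MAX72xx modules on the hardware SPI bus,
// chip-select on pin 10. Thresholds and bit patterns below are placeholders.
#define HARDWARE_TYPE MD_MAX72XX::FC16_HW
#define MAX_DEVICES   4
#define CS_PIN        10

MD_Parola matrix = MD_Parola(HARDWARE_TYPE, CS_PIN, MAX_DEVICES);

const uint8_t FLEX_PINS[5] = {A0, A1, A2, A3, A4};
const int     THRESHOLD[5] = {550, 550, 550, 550, 550};  // per-finger calibration values

void setup() {
  matrix.begin();
  matrix.setIntensity(5);
  matrix.displayClear();
}

void loop() {
  // Encode each finger as bent (1) or extended (0) using the calibrated thresholds.
  uint8_t pattern = 0;
  for (int i = 0; i < 5; i++) {
    if (analogRead(FLEX_PINS[i]) > THRESHOLD[i]) pattern |= (1 << i);
  }

  // Logical conditionals map flexion patterns to predefined text strings.
  const char *phrase = nullptr;
  if      (pattern == 0b00000) phrase = "Hello";
  else if (pattern == 0b11111) phrase = "Goodbye";
  else if (pattern == 0b00011) phrase = "No";

  if (phrase != nullptr) {
    // Show the phrase statically, centred, and hold it briefly before resampling.
    matrix.displayText(phrase, PA_CENTER, 50, 1500, PA_PRINT, PA_NO_EFFECT);
    while (!matrix.displayAnimate()) { /* run the display cycle to completion */ }
  }
}
```

New phrases can be added by extending the conditional block with further pattern-to-string entries, which reflects the modular structure described above.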

4) Assembly and Functional Testing

With hardware and software integrated, initial tests were conducted to verify the responsiveness of each sensor and the correlation between hand movements and the text displayed on the LED matrix. Fine-tuning of calibration and verification of electrical connections followed.

The ergonomic performance of the device was prioritized, assessing freedom of movement, user comfort, and the legibility of the displayed text. After the final tests, the system achieved effective recognition of nine MSL phrases, including expressive variants that convey different shades of intention or emotion [2]-[4].

2.4. Functional Validation of the Prototype

The final system was evaluated through precision, consistency, and response time tests. The results showed that the translator glove could identify gestures with sufficient accuracy to establish basic communication between deaf and hearing individuals.

The prototype demonstrated operational stability and immediate response between gesture execution and text display, validating the effectiveness of the proposed model. Furthermore, its affordable, modular, and reproducible design confirms its potential as a support tool in educational, therapeutic, and social inclusion contexts.

3. Results

The system development concluded with the creation of a functional prototype of a translator glove for Mexican Sign Language (MSL), designed to recognize and translate in real time three basic phrases (Hello, No, and Goodbye), each with two expressive variants. These variants reflect different levels of intention and communicative emphasis, enhancing the user’s naturalness and expressiveness during interaction (Figure 1).

Figure 1. Prototype of the translator glove for Mexican Sign Language (MSL).

The integration of the flex sensors with the Arduino Uno board and the LED matrix enabled an immediate, stable, and highly reliable response system. After calibration, the sensors provided consistent readings that ensured accurate gesture detection under various usage conditions.

During experimental testing, three key performance aspects were evaluated: gesture detection accuracy, system response time, and operational stability. Each parameter was measured repeatedly under controlled sessions with the individually calibrated user, considering variations in finger flexion across trials. The average results are presented in Table 1.

The results demonstrate that the prototype achieves an average detection accuracy above 95%, with minimal variability between sessions. This indicates proper calibration of the flex sensors and stable performance of the Arduino microcontroller. The response time below one second validates the feasibility of real-time communication, while the low error rate for similar gestures highlights the system’s sensitivity and specificity.

Table 1. Performance evaluation of the MSL translator glove.

Evaluated Parameter | Description | Average Result
Gesture detection accuracy | Percentage of correspondence between executed gesture and translated phrase | 95.2%
Response time | Interval between gesture execution and text appearance on the LED matrix | 0.84 s
Sensor reading stability | Signal variation after 50 repeated use cycles | ±3%
Error rate for similar gestures | Misclassification between gestures with close flexion patterns | 4.8%
Estimated energy autonomy | Average operating time using 9 V power supply (continuous use) | 2.5 h

During ergonomic testing, users reported high comfort and freedom of movement, with no interference in natural finger flexion. The flexible materials and strategic placement of the sensors ensured continuous signal acquisition without data loss, even after multiple use cycles.

The average accuracy of 95.2% corresponds to a single, individually calibrated user. The system showed stable gesture detection once the personalized thresholds were set. For additional users, performance depends on running the same calibration routine before use.

The software developed in the Arduino IDE environment proved to be robust and versatile; it allowed for easy modification of detection thresholds and integration of new phrases without altering the base code structure. Additionally, text visualization on the LED matrix was clear, with legible characters at a distance of up to one meter, facilitating direct visual communication.

Two complementary materials were also developed:

1) A User Manual for the translator glove, which includes configuration, calibration, and maintenance instructions.

2) A Coding File for the Arduino system, documenting the program used for automatic phrase translation and enabling future updates.

Overall, the results confirm that the translator glove represents an accessible, reproducible, and functional solution to enhance communicative inclusion for individuals who are deaf or have speech disabilities. Its technical performance, precision, and affordability position it as a viable tool for education, rehabilitation, and everyday communication.

4. Discussion

The prototype achieved an average accuracy of 95.2% and a response time of approximately 0.84 s, sufficient for near real-time interaction. These results are consistent with reports on flex sensor + low-cost microcontroller gloves, which show high performance with limited vocabularies and discrete gestures (letters or short phrases), particularly when user-specific calibration and well-defined thresholds are applied [5]-[8]. Compared to approaches aimed at speech translation or smartphone integration [9] [10], our system prioritizes low latency and immediate on-device visual output, eliminating connectivity bottlenecks and reducing reliance on external processing.

4.1. Arduino and Flex Sensors

The Arduino + flex sensor architecture proved to be stable, reproducible, and cost-effective [11] [12]. In agreement with related academic projects [5]-[8], direct analog reading enables robust threshold definition and simplifies the decision pipeline. Unlike configurations that delegate inference to a smartphone [10], on-board computation reduces integration complexity and latency risks, although it currently limits vocabulary scalability and dynamic gesture learning.

4.2. Output and Communication Channel

While part of the literature focuses on speech synthesis or app-based integration [9] [10], our design opts for an integrated LED matrix as the output interface. This channel offers three practical advantages: 1) Ambient adaptability—readability remains stable in low-light conditions without requiring a speaker; 2) Contextual privacy—users control when to display text; 3) System robustness—fewer failure points.

The trade-off lies in reduced expressiveness of the output—an area where speech-based systems excel in naturalness [9], though at the cost of greater software complexity and power consumption.

4.3. Ergonomics, Haptics, and Usability

The comfort and freedom of movement reported during user testing align with design and user-centered interaction principles, where dorsal sensor placement and local stiffness control enable stable readings without restricting hand motion [2]-[4]. Compared with previous setups using rigid mounts or external wiring [5] [6], our prototype’s use of stitched and flexible adhesive fixation and balanced weight distribution contributed to signal stability (±3% after 50 cycles) and a low error rate (4.8%), even for similar gestures.

It is important to note that, in this work, the term “haptic” does not refer to active tactile feedback—such as vibration, actuators, or force response—but rather to passive tactile properties related to comfort, material flexibility, and the feel of the glove in contact with the hand. The prototype does not generate active haptic stimuli; instead, it incorporates ergonomic and passive tactile considerations that promote a comfortable fit and stable interaction with the flex sensors.

4.4. Vocabulary and Expressiveness

Unlike works focused on manual alphabet recognition (letter-by-letter translation) [4], this project deliberately constrained the vocabulary to frequent communicative phrases (“Hello,” “No,” “Goodbye,” and their variants), prioritizing fluency and contextual utility over granularity. This approach enhances immediate usability in everyday scenarios (greeting, negation, farewell) and aligns with studies recommending the prioritization of high-frequency semantic units in early deployments [5] [8]. However, compared to systems integrating speech translation or mobile platforms [9] [10], future challenges include expanding the vocabulary and capturing dynamic gestures (trajectory, speed), typically addressed through machine learning-based gesture modeling.

4.5. Operational Robustness and Power Efficiency

The system demonstrated operational stability and energy autonomy of approximately 2.5 hours with a 9 V supply—reasonable figures for prototypes employing LED matrices and continuous sensor readings. Some precedents that externalize the output (e.g., to a smartphone) achieve longer glove battery life by offloading consumption to another device [10]. Our “all-in-one” configuration simplifies the ecosystem but requires future optimization of energy management, such as implementing duty-cycle control or low-power modes.
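As one concrete form of the duty-cycle idea, the display driver can be put into its shutdown mode after a period without detected gestures and woken when a new gesture arrives. The sketch below illustrates this with MD_Parola's displayShutdown() call, assuming that method is available in the library version used; the timeout value, wiring constants, and the stubbed gesture-detection input are placeholders for illustration.

```cpp
#include <MD_Parola.h>
#include <MD_MAX72xx.h>
#include <SPI.h>

// Illustrative duty-cycle control: blank and shut down the MAX72xx driver after a
// period of inactivity, and wake it when a gesture is detected again.
MD_Parola matrix = MD_Parola(MD_MAX72XX::FC16_HW, 10, 4);  // same assumed wiring as earlier sketches

const unsigned long IDLE_TIMEOUT_MS = 10000;  // placeholder: 10 s without gestures
unsigned long lastGestureMs = 0;
bool displayAsleep = false;

void updatePowerState(bool gestureDetected) {
  if (gestureDetected) {
    lastGestureMs = millis();
    if (displayAsleep) {
      matrix.displayShutdown(false);          // wake the driver
      displayAsleep = false;
    }
  } else if (!displayAsleep && millis() - lastGestureMs > IDLE_TIMEOUT_MS) {
    matrix.displayClear();
    matrix.displayShutdown(true);             // enter the MAX72xx low-power shutdown mode
    displayAsleep = true;
  }
}

void setup() {
  matrix.begin();
}

void loop() {
  bool gestureDetected = false;               // replace with the real detection result
  updatePowerState(gestureDetected);
  delay(50);
}
```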

4.6. Practical Contributions

The main contributions of this work are as follows:

1) Demonstration of near real-time translation using accessible hardware [11] [12];

2) Validation of nine phrases with expressive variants, introducing emotional nuance seldom reported in basic prototypes [5]-[8];

3) A modular, reproducible design, documented with a User Manual and Arduino source code, facilitating technology transfer to educational and community settings;

4) Explicit incorporation of haptic and ergonomic design principles in the system layout, consistent with literature on tactile perception and interaction [2]-[4].

4.7. Comparison with Existing Commercial Systems

Unlike commercial applications and systems such as SignARTE, SVisual, or Visualfy Home, which rely on mobile devices, internet connectivity, and advanced processing modules to perform translation, remote interpretation, or voice-based assistance, the prototype developed in this work performs all data acquisition, interpretation, and text output directly on the glove. This local processing architecture eliminates the need for smartphones, reduces latency associated with data transmission, and avoids potential failures caused by connectivity issues.

Whereas commercial solutions prioritize features such as app integration, video calls with interpreters, or speech synthesis, our system focuses on providing an autonomous, low-cost device designed for immediate communication through an integrated LED matrix. This distinction positions the prototype as a more accessible alternative for educational, community, or daily-use scenarios, particularly in environments with limited technological resources.

4.8. Limitations and Future Work

The main limitation—shared with other low-cost systems [5]-[8]—is the restricted vocabulary and reliance on user-specific static thresholds. Future iterations should aim to:

1) Integrate machine learning models to handle dynamic gestures and broader variability;

2) Evaluate BLE/Bluetooth connectivity for mobile output without latency penalties;

3) Explore speech synthesis as an alternative communication channel [9];

4) Conduct validation with a larger participant pool and varied scenarios (lighting, fatigue, perspiration) to enhance generalizability.

The flex sensors used in the prototype capture only finger-joint curvature, making them suitable for static gestures based on fixed hand configurations. However, these sensors cannot measure hand orientation, movement acceleration, or spatial trajectory, all of which are essential for recognizing dynamic gestures commonly present in Mexican Sign Language and other signed languages.

Dynamic gestures require tracking:

  • Movement direction (spatial trajectory)

  • Speed and acceleration

  • Changes in wrist and hand orientation

Because flex sensors respond exclusively to bending, they cannot infer these motion parameters. Recognizing dynamic gestures would require inertial sensors, such as:

  • Accelerometers (detect movement and acceleration)

  • Gyroscopes (detect orientation and rotation)

  • IMUs (Inertial Measurement Units) combining both

For this reason, the current system is limited to discrete static gestures, and future versions must incorporate inertial sensing to expand the vocabulary and enable recognition of continuous or sequential movements.
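As a pointer toward that extension, the sketch below shows one common way to read raw acceleration and angular-rate data from an MPU-6050 IMU over I2C using the standard Wire library. The sensor itself, its default address (0x68), and the register layout are assumptions about a possible future design and are not part of the current prototype.

```cpp
#include <Wire.h>

// Hypothetical future extension: raw readout of an MPU-6050 accelerometer/gyroscope.
// The MPU-6050 is NOT part of the current prototype; registers follow its datasheet.

const uint8_t MPU_ADDR = 0x68;     // default I2C address of the MPU-6050

int16_t readWord() {
  int16_t hi = Wire.read();        // read high byte first, then low byte
  int16_t lo = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);                // PWR_MGMT_1 register
  Wire.write(0x00);                // clear the sleep bit to wake the sensor
  Wire.endTransmission(true);
}

void loop() {
  // Request the 14 bytes starting at ACCEL_XOUT_H: accel XYZ, temperature, gyro XYZ.
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, (uint8_t)14, (uint8_t)true);

  int16_t ax = readWord(), ay = readWord(), az = readWord();
  readWord();                      // skip the temperature word
  int16_t gx = readWord(), gy = readWord(), gz = readWord();

  Serial.print("accel: "); Serial.print(ax); Serial.print(' ');
  Serial.print(ay);        Serial.print(' '); Serial.println(az);
  Serial.print("gyro: ");  Serial.print(gx); Serial.print(' ');
  Serial.print(gy);        Serial.print(' '); Serial.println(gz);
  delay(200);
}
```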

5. Conclusions

The development of the Mexican Sign Language (MSL) translator glove demonstrates that it is possible to expand the communicative capabilities of individuals with hearing and speech disabilities through accessible and low-cost technologies. The system enables immediate visual translation of gestures into text, facilitating interaction between deaf users and hearing individuals—even in contexts where the receiver is unfamiliar with MSL.

The prototype integrates an emotional dimension by allowing the representation of expressive variants of the same phrase, resulting in richer and more natural communication. This feature adds a semantic layer to conventional assistive devices, as it conveys not only literal information but also affective and contextual nuances.

From a social perspective, the glove represents a significant contribution to inclusion, providing a tool that reduces communication barriers and fosters the integration of deaf individuals in educational, occupational, and community settings. Furthermore, its ergonomic and modular design makes it reproducible and scalable, enabling its use in sign language education and therapeutic environments.

In future iterations, the project could evolve toward versions featuring larger vocabularies, dynamic gesture recognition, and integrated speech synthesis, leveraging machine learning algorithms and wireless communication modules. These enhancements would broaden its functional scope and position it as a reference system within haptic technologies for accessibility.

Overall, this work lays the foundation for the development of new generations of inclusive electronic prototypes, aimed at improving communication between people with hearing impairments and their surroundings—promoting a more empathetic, accessible, and human-centered technological society.

Acknowledgements

This work represents the research conducted by Nereyda Santiago-Pérez and Rigoberto Hernández-Bautista to obtain a Bachelor’s degree in Computer Engineering from Universidad Autónoma Metropolitana-Azcapotzalco. It is also part of the divisional project, “Design of Intelligent Interfaces for Simulating the Behavior of Living or Animate Organisms,” from the same University.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Gobierno de México (2023) Lengua de Señas Mexicana (LSM). Consejo Nacional para el Desarrollo y la Inclusión de las Personas con Discapacidad (CONADIS).
https://www.gob.mx/conadis/articulos/lengua-de-senas-mexicana-lsm
[2] Plaisier, M.A., van Polanen, V. and Kappers, A.M.L. (2017) The Role of Connectedness in Haptic Object Perception. Scientific Reports, 7, Article No. 43868.
[3] Morasso, P. (2007) The Crucial Role of Haptic Perception. In: Chella, A. and Manzotti, R., Eds., Artificial Consciousness, Imprint Academic, 85-102.
[4] Laureano-Cruces, A.L., Ramírez-Rodríguez, J., Mora-Torres, M. and Sánchez-Guerrero, L. (2016) Artificial Self-Awareness for Emergent Behavior. Frontiers in Psychological and Behavioral Science, 5, 1-15.
[5] Fusió d’Arts, V. (2018) Guante traductor de señas para sordomudos. Proyecto Terminal, División de Ciencias Básicas e Ingeniería, Instituto Politécnico Nacional, México.
[6] Universidad Nacional de Colombia (UNAL) (2018) Guante interpretador de lenguaje de señas.
[7] Morales Iturralde, A.L. (2021) Diseño de un guante con sensores de flexibilidad que traducen letras del abecedario del lenguaje sordo mudo utilizando Micropython. Proyecto Terminal, Universidad Politécnica Salesiana, Sede Guayaquil, Ecuador.
[8] Aburto, C.J.G.A. (2018) Guante traductor de señas para sordomudos. Proyecto Terminal, División de Ciencias Básicas e Ingeniería, Instituto Politécnico Nacional, México.
[9] Lardinois, F. (2012) Guante traductor de lenguaje de señas al habla. TechCrunch.
https://techcrunch.com/2012/07/09/enable-talk-imagine-cup/
[10] Forbes Staff (2020) Guante traductor del lenguaje de señas a través de un smartphone. Forbes México.
https://www.forbes.com.mx/tecnologia-guantes-que-traducen-el-lenguaje-de-signos/
[11] Arduino (2024) Arduino Documentation.
https://docs.arduino.cc/
[12] Fernández, Y. (2022) Qué es Arduino, cómo funciona y qué puedes hacer con uno. Xataka.
https://www.xataka.com/basics/que-arduino-como-funciona-que-puedes-hacer-uno
[13] Al-Harahsheh, A. and Boucif, F. (2019) A Socio-Pragmatic Study of Greeting and Leave-Taking Patterns in Algerian Arabic in Mostaganem.
https://www.researchgate.net/publication/336020591
[14] Leemann, A., Steiner, C., Jeszenszky, P., Culpeper, J. and Josi, L. (2024) Saying Goodbye to and Thanking Bus Drivers in German-Speaking Switzerland. Journal of Pragmatics, 234, 78-98.

Copyright © 2025 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.