AR Technology-Based Game for Finding Treasures in Museums

Abstract

With the vigorous development of the tourism and entertainment industries, the traditional way museums display information has become increasingly unable to meet people's growing entertainment needs. Benefiting from the development of AR technology, AR museum games, which combine traditional museums with emerging information technology, can transform the knowledge display of museums from passive learning into active exploration, making the visit more enjoyable. By combining a museum guide with AR games, the stories behind the museum exhibits are presented in a playful way, and the serious knowledge display of the museum becomes more vivid. The design is based on the Unity 3D platform and uses the Vuforia plug-in, which provides stable and efficient image recognition, together with UGUI interface controls to complete the development of the museum AR game.

1. Introduction

With the continuous development of AR technology and the gradual maturity of the entertainment industry, AR games have become one of the favorite entertainment methods in contemporary society [1]. There are more and more examples of AR technology being combined with all walks of life, and the related research is becoming increasingly in-depth. Because AR technology can present information visually and intuitively, it is feasible to apply it to knowledge popularization and playful game applications in museums.

This study aims to design and implement an AR game that allows visitors to gain a deeper understanding of the history of museums or historical places. In the design of the experiment, on the one hand, knowledge popularization is always the primary purpose of the museum game; on the other hand, the game should also have a clear goal and rules, and the diversity and fun of the interaction methods should be considered throughout the design and implementation process.

Around the theme of “making it easier for visitors to understand the history of the museum”, the mission of the game is to collect key objects: visitors need to overcome challenges, collect clues hidden throughout the museum, and finally obtain a “treasure” of their own.

During the collection process, visitors are guided by the system, complete pre-set challenges, and learn relevant historical knowledge in a relaxed atmosphere.

At present, there are many application examples of related technologies suitable for AR game development, such as location-based mobile augmented reality games [2], children's auxiliary games based on voice interaction and Unity 3D models [3], and augmented reality games based on virtual touch technology [4]. The main purpose of this study is to allow tourists to obtain richer knowledge and information while keeping the knowledge content interactive. Therefore, the museum game is designed on the Unity 3D platform, using the Vuforia plug-in [5], which provides stable and efficient image recognition, to present traditional museum exhibit information in the form of 3D models and stereo sound effects, so as to enhance the visual and auditory experience of visitors. In addition, UGUI interface controls are used for interaction in the game, providing button responses, scene jumps and other interactive functions.

2. Research Methods

2.1. Game Design

The goal of the game is to overcome many challenges, collect clues hidden throughout the museum, and finally obtain a “treasure” belonging to the visitor. Before entering the museum, users first download and install the mobile application package, then launch the game and scan the given picture next to a collection item; the 3D model corresponding to that item is then presented in front of the visitor. Visitors can zoom in, zoom out or rotate the model, or click the button at the bottom of the UI to play the call audio corresponding to the item. This enhances visitors' interest in the museum's animal specimens, fossils, ancient tools and other exhibits, and also lets them feel the novelty of combining AR technology with museum collections.

After that, visitors can move on to the main quest, which is to collect clues to the treasure. The game provides two kinds of clue-gathering challenges for visitors of different ages: a question-and-answer game corresponding to the model and a more difficult jigsaw puzzle game. Visitors who correctly solve the quiz question or complete the 3 × 3 puzzle of the collection item obtain the position of the next clue. When visitors need clearer location guidance, they can click the “Map” button in the interface to view a map of the current location and the location of the next clue point. Notably, thanks to gesture control, visitors can zoom and rotate the map in the map module to better determine the location.

2.2. Key Technology

The core technologies used in the whole development process mainly include AR camera, gesture control, button response, scene jump, reading files, image segmentation, and audio playback.

In the finished game, the project is exported as an APK and installed on an Android device. When users open the game, they see a UI containing six function buttons: play audio, pause audio, collect clues (quiz), collect clues (puzzle game), map, and exit. Clicking different buttons jumps to a new scene implementing the corresponding function. At the same time, scanning a given image with the phone reveals the animal model, and through gesture control users can zoom in, zoom out and rotate the model by tapping and swiping on the phone screen.

3. Technology Implementation

3.1. Environment Construction

Since different versions of Unity 3D require matching SDKs to work properly, and since the game ultimately needs to be exported in the Android APK format, matching JDK and NDK environments must also be configured. For the platform and plug-in versions, Unity 3D together with the matching SDK, NDK and JDK was finally selected to complete the configuration of the development environment.

After the environment is built, an AR Camera and an Image Target are created with the Vuforia Engine in Unity, and images with a high recognition star rating are selected as scanning targets and imported into the database [5]. The required model is then added as a child of the Image Target object, the spatial positions of the model and the Image Target are adjusted, and the angle between the AR camera and the model is checked; clicking the play button then shows the resulting effect. This completes a basic AR recognition program, as shown in Figure 1.

Figure 1. Implementation of the 3D model.

3.2. Implementation of Interactive Functions

One of the core goals of the design was to improve the interaction between the game and the player, and between the player and the museum collection. Gesture control technology achieves this goal well [6]. Specifically, two scripts are added to the Inspector of the model: Rotate.cs, which implements rotation of the model, and gesture.cs, which implements zooming the model in and out. When visitors open the game on their mobile phone and scan the picture to see the 3D model, they can zoom, shrink or rotate the model by dragging and touching the screen. With a more refined model, this function lets visitors inspect details such as an animal's body features more conveniently and quickly.
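As an illustration of how such gesture control can be implemented, the following is a minimal sketch of a one-finger rotation and two-finger pinch-to-zoom script using Unity's touch input; the class and field names are illustrative assumptions and do not reproduce the actual Rotate.cs or gesture.cs.

```csharp
using UnityEngine;

// Minimal sketch: one-finger drag rotates the model, two-finger pinch scales it.
// Attach to the model under the Image Target. Field names are illustrative.
public class TouchGestureControl : MonoBehaviour
{
    public float rotateSpeed = 0.2f;   // degrees per pixel of drag
    public float zoomSpeed = 0.005f;   // scale change per pixel of pinch
    public float minScale = 0.2f;
    public float maxScale = 3.0f;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Moved)
            {
                // Rotate around the world up axis based on horizontal drag.
                transform.Rotate(Vector3.up, -t.deltaPosition.x * rotateSpeed, Space.World);
            }
        }
        else if (Input.touchCount == 2)
        {
            Touch t0 = Input.GetTouch(0);
            Touch t1 = Input.GetTouch(1);

            // Compare the current and previous distances between the two touches.
            float prevDist = ((t0.position - t0.deltaPosition) - (t1.position - t1.deltaPosition)).magnitude;
            float currDist = (t0.position - t1.position).magnitude;
            float delta = currDist - prevDist;

            float scale = Mathf.Clamp(transform.localScale.x + delta * zoomSpeed, minScale, maxScale);
            transform.localScale = Vector3.one * scale;
        }
    }
}
```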

To complete the two clue-gathering mini-games, buttons need to be added to the UI and new scenes created. For this purpose, a transition script ChangeScene.cs is also created so that the interface jumps to, or returns from, other scenes when the user triggers different button events. The UI design is shown in Figure 2; switching between scenes gives visitors a sense of immersion in the game.
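A minimal sketch of such a scene-switching helper, using Unity's SceneManagement API, is shown below; the method names and scene identifiers are illustrative and are not taken from the paper's project.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch of a scene-switching helper hooked to UI Button OnClick() events.
// Scene names/indices are illustrative; scenes must be registered in Build Settings.
public class ChangeScene : MonoBehaviour
{
    // Load a scene by its build index, e.g. 0 for the main SampleScene.
    public void LoadSceneByIndex(int buildIndex)
    {
        SceneManager.LoadScene(buildIndex);
    }

    // Load a scene by name, e.g. "QuestionScene" or "PuzzleScene".
    public void LoadSceneByName(string sceneName)
    {
        SceneManager.LoadScene(sceneName);
    }

    // Return to the main scene; used by the "Quit" button in each sub-scene.
    public void BackToMain()
    {
        SceneManager.LoadScene(0);
    }
}
```

Each button's OnClick() event in the Inspector is then pointed at one of these methods, so that pressing the button loads the corresponding scene.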

3.3. Implementation of Game Functions

3.3.1. Game Sound Effects

Sound was also an important part of the design and implementation of the game. The AudioSource component is the source of sound, and it contains many methods for controlling audio [7]. Because audio playback is easy to control by combining the AudioSource component with UI buttons, the downloaded sound file is assigned to the AudioSource, the “Sound Playback” and “Playback Paused” buttons are wired up through their OnClick() events, and AudioSource.Play() and AudioSource.Stop() are called to complete the function. The location of the AudioSource is shown in Figure 3.
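The following is a minimal sketch of this wiring, assuming the sound clip has been assigned to an AudioSource in the Inspector; the class and method names are illustrative.

```csharp
using UnityEngine;

// Minimal sketch: expose Play/Stop methods that UI buttons can call via OnClick().
// The AudioClip (e.g. an animal call) is assigned to the AudioSource in the Inspector.
[RequireComponent(typeof(AudioSource))]
public class AudioControl : MonoBehaviour
{
    private AudioSource audioSource;

    void Awake()
    {
        audioSource = GetComponent<AudioSource>();
    }

    // Hooked to the "Sound Playback" button.
    public void PlayAudio()
    {
        if (!audioSource.isPlaying)
            audioSource.Play();
    }

    // Hooked to the "Playback Paused" button.
    public void StopAudio()
    {
        audioSource.Stop();
    }
}
```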

3.3.2. Map Display

The map display addresses the difficulty of GPS positioning inside the museum and the situation in which the map cannot be viewed easily and clearly. As shown in Figure 3, clicking the Map button makes the scene jump to NewScene.

Figure 2. Buttons in the UI.

Figure 3. Audio Source.

At this time, the user interface displays the location map of the current clue point. With the help of the gesture control script, visitors can enlarge the map or adjust its orientation by touching the phone screen, so that they can view it more clearly. In addition, each new scene has a Quit button which, when clicked, returns to the main SampleScene. Changing the build index passed to SceneManager.LoadScene() in the ChangeScene.cs script switches between the different interfaces.

3.3.3. Quiz Game

When the game was designed, it was intended that visitors could easily pick up knowledge they are interested in while playing, achieving the goal of learning through entertainment. A quiz about the challenges and the museum collections is therefore an appropriate choice. The implementation is to create a new Question Scene, place the corresponding components on the Canvas, and create a question.cs script that reads the question file from a path, loads a question and checks whether the entered text is correct.

When visitors click to enter the Question Scene (Figure 4), they see a question at the top of the interface about the museum collection at their current location. Visitors enter their answer in the text box below and click submit; if the answer is correct, they move on to the next question. After answering correctly, the interface shows the treasure clue, that is, the location of the next key clue. If tourists find the current question too difficult, they can click “Loading Question” to replace it with another question, or click “Quit” to return directly to the main scene.
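A minimal sketch of such a question loader and answer checker is given below. It assumes the questions are stored as “question|answer” lines in a text file under Resources and uses the legacy UGUI Text and InputField controls; the file format, field names and clue text are illustrative assumptions rather than the paper's question.cs.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: load "question|answer" pairs from Resources/questions.txt,
// show one at a time, and check the player's typed answer.
public class QuestionController : MonoBehaviour
{
    public Text questionText;      // label showing the current question
    public InputField answerInput; // text box for the player's answer
    public Text resultText;        // feedback / treasure clue

    private string[] questions;
    private string[] answers;
    private int index = 0;

    void Start()
    {
        // Each line of the file: "Which dynasty does this bronze vessel date from?|Shang"
        TextAsset file = Resources.Load<TextAsset>("questions");
        string[] lines = file.text.Split('\n');
        questions = new string[lines.Length];
        answers = new string[lines.Length];
        for (int i = 0; i < lines.Length; i++)
        {
            string[] parts = lines[i].Trim().Split('|');
            questions[i] = parts[0];
            answers[i] = parts.Length > 1 ? parts[1] : "";
        }
        questionText.text = questions[index];
    }

    // Hooked to the submit button.
    public void CheckAnswer()
    {
        if (answerInput.text.Trim() == answers[index])
            resultText.text = "Correct! Clue: head to the next exhibition hall."; // clue text is illustrative
        else
            resultText.text = "Try again.";
    }

    // Hooked to the "Loading Question" button: swap in another question.
    public void LoadNextQuestion()
    {
        index = (index + 1) % questions.Length;
        questionText.text = questions[index];
        answerInput.text = "";
        resultText.text = "";
    }
}
```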

3.3.4. Puzzle Game

The last major technical component is the jigsaw puzzle, shown in Figure 5. The jigsaw puzzle takes longer and is more difficult than the quiz game, but in return visitors get double the treasure clues when they correctly assemble the picture of the collection item. The implementation uses the Sprite component to split the imported image and place the pieces into empty Game Objects. A pintu.cs script is written that uses a Sprite[] array for the displayed pieces together with array indices to initialize the images and exchange their positions, completing the puzzle game.
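As a minimal sketch of this approach, assuming a 3 × 3 grid of UGUI Image tiles whose sprites are shuffled and then swapped by tapping two tiles in turn (the names below are illustrative, not the paper's pintu.cs):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch of a 3 x 3 swap puzzle: the 9 slices of the collection image are
// assigned to UI Image tiles, shuffled, and two tapped tiles exchange their sprites.
public class PuzzleController : MonoBehaviour
{
    public Image[] tiles = new Image[9];     // the 9 tile Images on the Canvas
    public Sprite[] pieces = new Sprite[9];  // the 9 slices of the source picture, in order

    private int firstSelected = -1;

    void Start()
    {
        // Shuffle the pieces onto the tiles.
        Sprite[] shuffled = (Sprite[])pieces.Clone();
        for (int i = shuffled.Length - 1; i > 0; i--)
        {
            int j = Random.Range(0, i + 1);
            Sprite tmp = shuffled[i];
            shuffled[i] = shuffled[j];
            shuffled[j] = tmp;
        }
        for (int i = 0; i < tiles.Length; i++)
            tiles[i].sprite = shuffled[i];
    }

    // Hooked to each tile's Button OnClick(), passing the tile index.
    public void OnTileClicked(int index)
    {
        if (firstSelected < 0)
        {
            firstSelected = index;
        }
        else
        {
            // Swap the two selected tiles, then check whether the puzzle is solved.
            Sprite tmp = tiles[firstSelected].sprite;
            tiles[firstSelected].sprite = tiles[index].sprite;
            tiles[index].sprite = tmp;
            firstSelected = -1;
            if (IsSolved())
                Debug.Log("Puzzle complete: double treasure clue awarded."); // reward text is illustrative
        }
    }

    private bool IsSolved()
    {
        for (int i = 0; i < tiles.Length; i++)
            if (tiles[i].sprite != pieces[i])
                return false;
        return true;
    }
}
```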

Figure 4. Answer interface.

Figure 5. Puzzle game.

4. Testing

4.1. Testing Preparation

Before the test, on the software side, the code of the five major parts, namely gesture control, audio playback, map display, jigsaw game and quiz game, was placed in a single project, with switching between scenes handled by ChangeScene.cs. The question texts, pictures, audio, scripts and other resources used by the game were organized into folders to make subsequent adjustment easier.

4.2. Testing Method

The evaluation phase uses a verbal protocol (“think-aloud”) approach to obtain user-based feedback and evaluation: users are asked to complete a series of tasks and speak their thoughts aloud during the process. Compared with other testing methods, it has the advantages of providing a rich data source, revealing users' thinking, allowing problems to be analyzed from multiple perspectives, and being easy for subjects to carry out [8].

Before conducting the test, the participants were required to fill out a consent form for ethical purposes, even though the participants were friends of the developer and the test location was a student dormitory. In addition, the testing process fully considered the safety of the subjects under the Covid-19 prevention and control policy in force at the time: all equipment the subjects needed to touch, such as computers, keyboards, mice and Android phones, was cleaned and disinfected in advance, and the subjects could interrupt the testing process at any time.

The tasks the subjects needed to complete were designed according to the main functions of the game and the task objectives: after opening the installed program, subjects operated the buttons of the UI according to the set tasks. The subjects were asked to complete six tasks, and the game was tested on the computer (Figure 6) and on the phone (Figure 7) respectively. As they completed the tasks, they were also asked to verbalize their thoughts, decisions and opinions.

Figure 6. The test on the computer.

Figure 7. The test on the Android phone.

After all the tasks were done, the verbal protocol had generated a large amount of data, much of which suggests potential improvements to the UX of the game. Some of the most insightful data are recorded in Table 1. This approach provides qualitative data that often gives a better measure of how users intuitively “feel” about the game, as well as their immediate reactions and difficulties in tackling the tasks.

4.3. Test Results

After user testing, several problems with the game's design were exposed:

1) The UI is too simple;

2) The clue-collection part of the game contains only two mini-games, both of which are somewhat difficult;

3) The interactivity of the 3D model could be further improved on the basis of the current design.

The problem of the simple UI design has the least impact. Limited by development time and proficiency with the software, the prototype focused primarily on functionality. This problem can be addressed by adding picture materials for the buttons and adding richer game backgrounds and sound effects.

Regarding the difficulty of the clue-collecting games, additional mini-games of lower difficulty can be added in subsequent development, for example using a virtual joystick to move the model around and “eat” fruits in the scene within a set time to earn points. The development approach is similar to the existing quiz game: clicking a button on the UI jumps to a new scene where the mini-game is played.

The last issue opens up new possibilities for the whole design. How a visitor interacts with a 3D model can greatly influence their interest in a museum collection, so, going further, scripts could be added that move the model when the screen is tapped repeatedly and that allow simple voice interaction between the model and the visitor.

Table 1. Representative think-aloud feedback from user testing.

5. Conclusions

The purpose of this design is to develop an augmented reality game that gives visitors a deeper understanding of the museum. The main task is to collect key items; during this collection process, visitors obtain clues through the game and the system's guidance, and learn relevant historical knowledge in a relaxed and pleasant atmosphere.

The development of the game can be divided into two parts: the AR camera and gesture control part, which attracts visitors' interest, and the clue-gathering game part. As a game designer, one needs to think about how to engage the user [9] and always design with the user in mind [10]. Compared with obtaining information by reading the introduction posters next to a collection, augmented reality provides a contextual experience, improves interest and participation, and gives visitors a more relaxed experience and a deeper understanding of the museum collection they interact with [11].

For visitors to the museum, novelty is one of the main feelings they hope to obtain, which is also a “hidden” need that designers need to identify [12]. Although AR technology has been widely used in medicine, entertainment, the military, machine manufacturing and other fields, it is still a novel topic for most tourists and can bring them an experience they usually cannot get [13]. Based on this, the prototype design first strove to let tourists scan pictures to view 3D models and interact with them. Participants' praise for this part of the functionality in the test phase also confirmed that this design idea was right.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Laato, S., et al. (2021) Why Playing Augmented Reality Games Feels Meaningful to Players? The Roles of Imagination and Social Experience. Computers in Human Behavior, 121, Article ID: 106816.
https://doi.org/10.1016/j.chb.2021.106816
[2] Sydorenko, T., Hellermann, J., Thorne, S.L. and Howe, V. (2019) Mobile Augmented Reality and Language-Related Episodes. TESOL Quarterly, 53, 712-740.
https://doi.org/10.1002/tesq.507
[3] Hu, L., et al. (2022) The Practice and Application of AR Games to Assist Children’s English Pronunciation Teaching. Occupational Therapy International, 2022, Article ID: 3966740.
https://doi.org/10.1155/2022/3966740
[4] Zarraonandia, T., et al. (2019) Magic Flowerpot: An AR Game for Learning about Plants. Computer-Human Interaction in Play Companion Extended Abstracts, 813-819.
[5] Linowes, J. and Babilinski, K. (2017) Augmented Reality for Developers: Build Practical Augmented Reality Applications with Unity. ARCore, ARKit, and Vuforia. Packt.
[6] Pham, T., et al. (2018) Scale Impacts Elicited Gestures for Manipulating Holograms. Proceedings of the 2018 Designing Interactive Systems Conference, June 2018, 227-240.
https://doi.org/10.1145/3196709.3196719
[7] Li, H.X. (2022) Convolutional Neural Network-Based Virtual Reality Real-Time Interactive System Design for Unity3D. Computational Intelligence and Neuroscience, 2022, Article ID: 2530836.
https://doi.org/10.1155/2022/2530836
[8] Aujla, N., et al. (2018) Evaluating a Stroke-Specific Version of the Illness Perception Questionnaire-Revised, Using the Think-Aloud Method. Journal of Health Psychology, 25, No. 12.
https://doi.org/10.1177/1359105318781942
[9] Doherty, K. and Doherty, G. (2019) Engagement in HCI. ACM Computing Surveys, 51, 1-39.
https://doi.org/10.1145/3234149
[10] Güzin, Ş. and Bahar, Ş. (2022) Experience Prototyping through Virtual Reality Head-Mounted Displays: Design Appraisals of Automotive User Interfaces. The Design Journal, 25, 807-827.
[11] Lu, Y., et al. (2022) ChordAR: An Educational AR Game Design for Children’s Music Theory Learning. Wireless Communications and Mobile Computing, 2022, Article ID: 5268586.
https://doi.org/10.1155/2022/5268586
[12] Lin, F.-H., et al. (2017) Empirical Research on Kano’s Model and Customer Satisfaction. PLOS ONE, 12, e0183888.
https://doi.org/10.1371/journal.pone.0183888
[13] Azuma, R.T. (1997) A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments, 6, 355-385.
https://doi.org/10.1162/pres.1997.6.4.355
