Immersive Robot Control in Virtual Reality to Command Robots in Space Missions

Abstract

We present an approach to remotely control a semi-autonomous robot team with a single operator under low-bandwidth conditions. Our approach uses virtual reality and autonomous robots to create an immersive user interface for multi-robot control. This saves a substantial amount of bandwidth because there is no need to transfer a constant stream of camera images: the virtual environment used for control has to be transferred to the control station only once and is updated only when the map becomes outdated. In addition, the camera position can easily be changed in virtual reality to give a better overview of the robots' situation. The components of this approach can readily be transferred to applications on Earth, e.g. semi-autonomous robots in hazardous areas or underwater applications.

Share and Cite:

Planthaber, S., Mallwitz, M. and Kirchner, E. (2018) Immersive Robot Control in Virtual Reality to Command Robots in Space Missions. Journal of Software Engineering and Applications, 11, 341-347. doi: 10.4236/jsea.2018.117021.

1. Introduction

In future space missions, robots will assist astronauts in collaborative teams. However, exploration of far-away extraterrestrial sites such as Mars will not be possible with humans on site in the near future. Therefore, approaches such as robotic exploration are needed. Fully autonomous behavior and exploration are not yet technically feasible, so control and guidance by human operators are required to solve challenging exploration tasks. Not only the environment is challenging; communication delay and bandwidth limitations also complicate robot control.

User interfaces are therefore needed that enable intuitive control and allow the human to perceive the environment by means of sensor data from the robotic system. Hence, when robots are deployed in remote places and have to be controlled remotely, several challenges have to be solved, including an appropriate user interface and robust communication.

Developments for space applications do not only apply to extraterrestrial missions. They also apply to robotic tasks on Earth, such as surveillance of large structures on land or under water. These tasks are also very challenging, since the available communication bandwidth is often limited. For intuitive interaction, the degree of immersion plays an important role for the efficiency of the operator.

We present an approach that enables a high level of immersion while saving bandwidth by creating a virtual, three-dimensional environment from robot telemetry and maps, without continuously transmitting camera images or video.

Moreover, our approach is independent of visibility in the visual spectrum: the virtual environment can also be created from other sensors that may be better suited to the robot's environment.

2. Virtual Reality for Robot Control

Our approach is to control a team of semi-autonomous robots in a three-dimensional virtual environment. This strongly relies on the robots' ability to localize themselves within their map and on their capability to perform autonomous movements and actions.

To create the virtual environment, the robots' shared map is downloaded to the control station once; afterwards, only the robot positions have to be updated to display the virtual model of each robot within the map at its current position. The interface is separated into 3D and 2D elements: the 2D elements are used to select the desired action for the next 3D command, while clicking into the 3D map defines the location for that action. One novelty of this approach is that the robot is controlled by interacting with the robot and its environment in virtual reality. This also allows free camera positioning: the operator may choose free-floating camera positions for an optimal overview. For direct remote control of the robot's movements, a view from above the robot can be beneficial. Traditional control approaches using hardware cameras cannot provide this option.
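As an illustration of this command flow, the sketch below pairs an action chosen via a 2D element with a position picked by clicking into the 3D map. All names (Action, Command, on_map_click) are hypothetical and not taken from the actual system.

```python
# Hypothetical sketch of the 2D/3D command flow described above.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Actions offered by the 2D overlay elements."""
    DRIVE_TO = auto()
    SAMPLE = auto()
    DOCK = auto()


@dataclass
class Command:
    robot_id: str   # robot selected in the virtual environment
    action: Action  # chosen via a 2D overlay element
    target: tuple   # (x, y, z) defined by clicking into the 3D map


def on_map_click(selected_robot: str, selected_action: Action,
                 click_position: tuple) -> Command:
    """Combine the 2D action selection with the 3D click position."""
    return Command(selected_robot, selected_action, click_position)


# Example: send a rover to a point picked in the shared map.
cmd = on_map_click("sherpa_tt", Action.DRIVE_TO, (12.4, -3.1, 0.0))
```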

Apart from generic computer screens, this user interface can be used with more immersive hardware, such as Multi-Projection Environments or Virtual Reality (VR) headsets. We mainly used the Multi-Projection Environment for our experiments: a 3D test environment in which several large screens, each 1.52 m wide and 2.03 m high, are arranged in a 180˚ semi-circle. Each screen can display 3D images using polarized glasses.

3. Exoskeleton

The use of VR offers the ability to utilize a variety of 3D input devices in addition to traditional 2D input devices such as a standard mouse.

Nevertheless, one mode of operation for the 3D devices is moving a cursor over a two-dimensional overlay, which is used similarly to a mouse. It can be used to select a robot and to define the position for a selected action in the underlying 3D representation. Action selection is also performed with this cursor. This way, the ability to control the system is independent of the device and can also be achieved using standard PC hardware.
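Such device independence can be captured by a small common interface that every input device implements, so the exoskeleton, other 3D devices, and a standard mouse can all drive the same overlay cursor. The sketch below is hypothetical and not the abstraction of the actual system.

```python
# Hypothetical sketch of a device-independent cursor interface.
from abc import ABC, abstractmethod
from typing import Tuple


class CursorDevice(ABC):
    """Common interface so the exoskeleton, 3D controllers, and a standard
    mouse can all move the same cursor over the 2D overlay."""

    @abstractmethod
    def cursor_position(self) -> Tuple[float, float]:
        """Current cursor position (u, v) on the 2D overlay."""

    @abstractmethod
    def select_pressed(self) -> bool:
        """True when the user confirms a selection (robot, action, or 3D position)."""
```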

In case the human operator has to take over direct control of a robot, the CAPIO exoskeleton can be used as an input device (Figure 1). The active upper-body exoskeleton has 18 active degrees of freedom (DOF), is designed for teleoperation, and is described in [1]. It can be used to control the VR pointer but also for direct interaction with the robot. The overall weight of the system is 19.8 kg, which is mostly carried by the hip belt. All joints are designed as series elastic actuators (SEA) and are optimized to achieve highly dynamic and safe movement.

The exoskeleton follows a distributed control approach in terms of local torque control of each joint. An external computer calculates the desired torques for gravity compensation with the help of the RBDL library [2]. This allows the user to move freely within a mechanically transparent robot. The end effector of the exoskeleton is an active hand interface providing a servo-driven, force-controlled button, a digital joystick, and two additional push buttons. With this device, the user can choose between several teleoperation modes.
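As a rough illustration, gravity-compensation torques can be obtained from inverse dynamics evaluated with zero joint velocities and accelerations. The sketch below uses RBDL's Python bindings and assumes a URDF model of the exoskeleton is available; the file name is a placeholder and the actual controller setup is not shown here.

```python
# Sketch: gravity compensation via inverse dynamics (RBDL Python bindings).
# "exoskeleton.urdf" is a placeholder model file.
import numpy as np
import rbdl

model = rbdl.loadModel("exoskeleton.urdf")


def gravity_compensation_torques(q: np.ndarray) -> np.ndarray:
    """With zero joint velocities and accelerations, inverse dynamics
    returns exactly the torques needed to cancel gravity at configuration q."""
    qdot = np.zeros(model.qdot_size)
    qddot = np.zeros(model.qdot_size)
    tau = np.zeros(model.qdot_size)
    rbdl.InverseDynamics(model, q, qdot, qddot, tau)
    return tau
```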

The system is able to generate force feedback to the operator's arm. A preliminary force-feedback experiment was carried out by manually applying torques around all axes to the end effector of the target system. The loads were transferred via satellite to the exoskeleton control room. The torques applied to the operator are shown in Figure 1.

4. Optimization of Man-Machine Interaction

For intuitive interaction and control, it is crucial that the operator of a multi-robot team is not overwhelmed by the presented information. Hence, the interface must be optimized with respect to different criteria. In our setting, we optimized 1) sensor data visualization, 2) information selection, and 3) information presentation style.

Optimizing the sensor data visualization (1) was most relevant, especially since the robotic systems were equipped with laser sensors whose raw output is not easily understandable for humans.

The resulting visualization from this optimization is visible in Figure 2.
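For instance, a raw 2D laser scan only becomes readable once its range readings are converted to points that can be rendered in the virtual environment. The following minimal sketch shows such a conversion; its parameters are illustrative, not the actual processing pipeline.

```python
# Sketch: convert raw 2D laser ranges into points renderable in the VR scene.
import numpy as np


def scan_to_points(ranges, angle_min, angle_increment, max_range=30.0):
    """Turn raw range readings into x/y coordinates in the sensor frame,
    dropping invalid measurements, so they can be drawn as a point cloud."""
    ranges = np.asarray(ranges)
    angles = angle_min + angle_increment * np.arange(len(ranges))
    valid = (ranges > 0.0) & (ranges < max_range)
    return np.column_stack((ranges[valid] * np.cos(angles[valid]),
                            ranges[valid] * np.sin(angles[valid])))
```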

Figure 1. The CAPIO active upper-body exoskeleton. Left: the exoskeleton on its stand; Right: torques applied to the CAPIO exoskeleton during the force-feedback experiment.

Figure 2. VR-based remote control of robots. Left: the Multi-Projection Environment GUI with the exoskeleton as control device; Right: the robots Sherpa TT and Coyote III in a Mars-like environment in Utah (USA).

When controlling several robots, more information might be generated than a single operator can analyze [3]. For information selection (2) we developed an approach that presents relevant information in 2D elements from one robot at a time, when that robot is selected. To cover all critical information from all robots, additional 2D elements pop up whenever there is a critical situation that the operator must react to (independent of which robot is currently selected). With this dual approach we ensured that the operator is, on the one hand, informed about all critical issues and, on the other hand, does not have to process so much information that the workload becomes too high.
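The selection logic can be summarized as: detailed 2D elements only for the currently selected robot, plus pop-ups for critical alerts from any robot. The sketch below illustrates this; all class and field names are hypothetical.

```python
# Hypothetical sketch of the dual information-selection scheme.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class RobotStatus:
    name: str
    telemetry: Dict[str, float] = field(default_factory=dict)
    critical_alerts: List[str] = field(default_factory=list)


def panels_to_display(robots: Dict[str, RobotStatus], selected: str) -> List[str]:
    """Detailed 2D elements only for the selected robot, plus pop-ups for
    critical situations on any robot, regardless of selection."""
    panels = [f"{selected}: {key}={value}"
              for key, value in robots[selected].telemetry.items()]
    for robot in robots.values():
        panels += [f"ALERT {robot.name}: {msg}" for msg in robot.critical_alerts]
    return panels
```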

Since the workload must be kept low during control, we performed several experiments to investigate the workload of the operator depending on the interface design (3). We performed offline and online analysis of electroencephalographic data from humans to evaluate the general design as well as the task load within our setting. As a measure, the event-related potential P300 [4] was used. P300 is not only evoked by infrequent or unexpected events that are distinct from more frequent events [4] [5] but also by task-relevant events [6] [7]. In case of very high workload, relevant information might be missed or not recognized as task relevant due to the high load on cognitive processing. This leads to a reduced amplitude of the P300 or a missing P300. Hence, one can infer that the workload is too high if the P300 is reduced in amplitude or not evoked at all [6] [7] [8].

We used offline average analysis to optimize the style of information presentation. As a result, important information is presented in a symbolic fashion instead of text. In case more than two robots were involved, we presented the robots in different colors that were then mapped to the given information [8]. This helped the operator to connect relevant information to the correct robot. Since only two very different robots were used in the field test, this approach was not applied there.

Online P300 analysis was used in previous experiments to adapt the task load, which was determined by the number of tasks given to the operator per fixed time interval. Using pySPACE [9], we analyzed in single trial whether a P300 was evoked after each relevant information presentation, to infer whether the operator had recognized this information [8]. In case the operator failed, the interval between two relevant pieces of information was lengthened so that the operator could cope with the reduced control load. Further, this interval was re-adapted throughout the operation session, i.e., shortened when the P300 was repeatedly detected and lengthened when it was not detected. A detailed overview of the theoretical analysis of the optimization of man-machine interaction can be found in [6].
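The adaptation rule itself is simple: lengthen the interval between relevant information items when no P300 is detected, and shorten it when the P300 is detected reliably. The step size and bounds below are illustrative, not the values used in the experiments.

```python
# Sketch of the interval adaptation driven by single-trial P300 detection.
def adapt_interval(interval_s: float, p300_detected: bool,
                   step_s: float = 0.5,
                   min_s: float = 2.0, max_s: float = 10.0) -> float:
    """Shorten the time between relevant information items when the operator
    reliably recognizes them (P300 detected); lengthen it otherwise."""
    if p300_detected:
        return max(min_s, interval_s - step_s)
    return min(max_s, interval_s + step_s)
```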

5. Communication

The requirements for communication also had an impact on the control interface setup. The communication was expected to be unstable, with high latency and low bandwidth. To meet these requirements we decided to use the UDT protocol [10], which provides delivery of all messages sent, even on high-latency connections. Still, the low-bandwidth requirement remains. Apart from saving the bandwidth for camera images by using VR control, on-line selection of telemetry contents also helps to save bandwidth. Map updates are sent only on request; joint telemetry can be disabled when these values are not needed. Also, the send frequencies of the telemetry can be configured. Before sending, the telemetry is multiplexed into a single data package in order to save additional protocol bandwidth, and the data of this package is compressed. Only afterwards is the data package sent using UDT.
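The packaging step can be sketched as follows: all enabled telemetry channels are multiplexed into one package, compressed, and only then handed to the transport. The encoding (JSON plus zlib) is illustrative, and the UDT send call is abstracted as a callable, since the concrete binding is not part of this paper.

```python
# Sketch: multiplex telemetry channels into one compressed package before
# handing it to the transport (UDT send abstracted as a callable).
import json
import zlib
from typing import Callable, Dict


def pack_telemetry(channels: Dict[str, object]) -> bytes:
    """Multiplex all enabled telemetry channels into a single compressed
    data package to avoid per-message protocol overhead."""
    return zlib.compress(json.dumps(channels).encode("utf-8"))


def send_telemetry(channels: Dict[str, object],
                   udt_send: Callable[[bytes], None]) -> None:
    udt_send(pack_telemetry(channels))
```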

6. Field Test

The approach described above was evaluated in a field trial in which the robots Sherpa TT and Coyote III (see Figure 2) were deployed in a Mars-like environment (Utah, USA) [11] [12].

They were controlled from the Multi-Projection Environment in Bremen, Germany. The communication was sent via a satellite modem, which showed up to 22 seconds of latency and varying bandwidth (up to 464 kbps, mostly less). Our approach showed high usability during the experiments. Most of the time, when the systems were moving autonomously, the only telemetry exchanged consisted of the robots' position messages at an interval of five seconds.

7. Conclusions and Suggestions

Although challenges remain, our approach is well suited to enabling astronauts in hybrid teams to command semi-autonomous robots. Our goal is to transfer our results to earthbound technologies. The exoskeleton is already used in rehabilitation research. The VR user interface will be further developed to support generic robots and also underwater applications. A performance evaluation of the proposed approach with test subjects is also on our agenda.

Acknowledgements

The authors would like to thank Leif Christensen, Thomas Röhr, Roland Sonsalla, and Tobias Stark for their work in Utah, and also Michael Maurus for his work on the VR user interface within the TransTerrA and FT_Utah projects.

The contributing projects were funded by the German Space Agency (DLR Agentur) with federal funds of the Federal Ministry of Economics and Technology in accordance with the parliamentary resolution of the German Parliament, grant nos. 50RA1406, 50RA1407, 50RA1301, 50RA1011, 50RA1012, 50RA1701, 50RA1702, 50RA1703, 50RA1621, and 50RA1622.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Mallwitz, M., Will, N., Teiwes, J. and Kirchner, E.A. (2015) The CAPIO Active Upper Body Exoskeleton and Its Application for Teleoperation. Proceedings of the 13th ESA/ESTEC Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA).
[2] Felis, M.L. (2016) RBDL: An Efficient Rigid-Body Dynamics Library Using Recursive Algorithms. Autonomous Robots, 1-17.
[3] Endsley, M.R. (1995) Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37, 32-64.
[4] Polich, J. (2007) Updating P300: An Integrative Theory of P3a and P3b. Clin Neurophysiol, 118, 2128-2148.
https://doi.org/10.1016/j.clinph.2007.04.019
[5] Patel, S.H. and Azzam, P.N. (2005) Characterization of N200 and P300: Selected Studies of the Event-Related Potential. International Journal of Medical Sciences, 2, 147-154.
https://doi.org/10.7150/ijms.2.147
[6] Kirchner, E.A., Kim, S.K., Straube, S., Seeland, A., Wöhrle, H., Krell, M.M., Tabie, M. and Fahle, M. (2013) On the Applicability of Brain Reading for Predictive Human-Machineinterfaces in Robotics. PLoS ONE, 8, e81732.
https://doi.org/10.1371/journal.pone.0081732
[7] Wöhrle, H. and Kirchner, E.A. (2014) Online Classifier Adaptation for the Detection of P300 Target Recognition Processes in a Complex Teleoperation Scenario. Physiological Computing Systems, Lecture Notes in Computer Science, 105-118.
https://doi.org/10.1007/978-3-662-45686-6_7
[8] Kirchner, E.A., Kim, S.K., Tabie, M., Wöhrle, H., Maurus, M. and Kirchner, F. (2016) An Intelligent Man-Machine Interface—Multi-Robot Control Adapted for Task Engagement Based on Single-Trial Detectability of P300. Frontiers in Human Neuroscience, 10, 291.
https://doi.org/10.3389/fnhum.2016.00291
[9] Krell, M.M., Straube, S., Seeland, A., Wöhrle, H., Teiwes, J., Metzen, J.H., Kirchner, E.A. and Kirchner, F. (2013) pySPACE—A Signal Processing and Classification Environment in Python. Frontiers in Neuroinformatics, 7.
https://github.com/pyspace
https://doi.org/10.3389/fninf.2013.00040
[10] Gu, Y. and Grossman, R.L. (2007) UDT: UDP-Based Data Transfer for High-Speed Wide Area Networks. Computer Networks, 51, 1777-1799.
https://doi.org/10.1016/j.comnet.2006.11.009
[11] Sonsalla, R., Cordes, F., Christensen, L., Roehr, T.M., Stark, T., Planthaber, S., Maurus, M., Mallwitz, M. and Kirchner, E.A. (2017) Field Testing of a Cooperative Multi-Robot Sample Return Mission in Mars Analogue Environment. Proceedings of the 14th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA).
[12] Planthaber, S., Maurus, M., Bongardt, B., Mallwitz, M., Vaca Benitez, L.M., Christensen, L., Cordes, F., Sonsalla, R., Stark, T. and Roehr, T.M. (2017) Controlling a Semi-Autonomous Robot Team from a Virtual Environment. Proceedings of the HRI Conference. ACM/IEEE International Conference on Human-Robot Interaction (HRI).

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.