We present an approach for a single operator to remotely control a semi-autonomous robot team under low-bandwidth conditions. Our approach uses virtual reality and autonomous robots to create an immersive user interface for multi-robot control. This saves considerable bandwidth because no constant stream of camera images has to be transferred. The virtual environment for control has to be transferred to the control station only once and is updated only when the map becomes outdated. In addition, the camera position can easily be changed in virtual reality to give the operator a better overview of the robots' situation. The components of this approach can readily be transferred to applications on Earth, e.g., semi-autonomous robots in hazardous areas or underwater applications.
In future space missions, robots will help astronauts in collaborative teams. However, the exploration of distant extraterrestrial sites such as Mars will not be possible with humans on site in the near future. Therefore, approaches such as robotic exploration are needed. Fully autonomous behavior and exploration are not yet technically feasible, so control and guidance by human operators are required to solve challenging exploration tasks. Not only the environment is challenging; communication delay and bandwidth limitations also complicate robot control.
User interfaces are needed that enable intuitive control and allow the human to perceive the environment by means of sensor data from the robotic system.
Hence, when robots are deployed in remote places to be remotely controlled, several challenges have to be solved. These include an appropriate user interface and robust communication.
Developments for space applications do not apply only to extraterrestrial missions. They also apply to robotic tasks on Earth, such as the surveillance of large structures on land or under water. This is also very challenging, since the available communication bandwidth is often limited. For intuitive interaction, the degree of immersion plays an important role in the operator's efficiency.
We present an approach that enables a high level of immersion while saving bandwidth by creating a virtual, three-dimensional environment from robot telemetry and maps, without continuously transmitting camera images or videos.
Moreover, our approach is independent of visibility in the visual spectrum: the virtual environment can also be created from other sensors that might be better suited to the robot's environment.
Our approach is to control a team of semi-autonomous robots in a three-dimensional virtual environment. This strongly relies on the robots' ability to localize themselves within their map and to perform autonomous movements and actions.
To create the virtual environment, the robots' shared map is downloaded to the control station once. Afterwards, only the robot position has to be updated to display the virtual model of the robot at its current position within the map. The interface is separated into 3D and 2D elements: the 2D elements are used to select the desired action for the next 3D command, while clicking into the 3D map defines the location for that action. One novelty of this approach is that the robot is controlled by interacting with the robot and its environment in virtual reality. This also allows free camera positioning: the operator may choose free-floating camera positions for an optimal overview. For direct remote control of the robot's movements, a view from above the robot can be beneficial. Traditional control approaches using hardware cameras cannot provide this option.
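The map-once, pose-updates scheme described above can be sketched as follows. The message types and fields are illustrative assumptions, not the actual telemetry format used in the project:

```python
import json

class ControlStation:
    """Minimal sketch of the map-once / pose-updates scheme (illustrative)."""

    def __init__(self):
        self.shared_map = None   # transferred once (or when outdated)
        self.robot_poses = {}    # updated continuously with compact messages

    def receive(self, message: str):
        msg = json.loads(message)
        if msg["type"] == "map":
            # The robots' shared map is downloaded only once, or again
            # when it becomes outdated.
            self.shared_map = msg["grid"]
        elif msg["type"] == "pose":
            # Afterwards only small pose updates are needed to move the
            # virtual robot model within the already-known map.
            self.robot_poses[msg["robot"]] = (msg["x"], msg["y"], msg["theta"])

station = ControlStation()
station.receive('{"type": "map", "grid": [[0, 0], [0, 1]]}')
station.receive('{"type": "pose", "robot": "sherpa_tt", "x": 1.5, "y": 2.0, "theta": 0.3}')
```

After the initial map transfer, every subsequent message only moves a robot model, which is what keeps the link usage low.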
Apart from generic computer screens, this user interface can be used with more immersive hardware, like multi-projection environments or virtual reality (VR) headsets. We mainly used the Multi-Projection Environment for our experiments, a 3D test environment in which several large screens, each 1.52 m wide and 2.03 m high, are arranged in a 180° semicircle. Each screen is capable of displaying 3D images using polarized glasses.
The use of VR offers the ability to utilize a variety of 3D input devices apart from traditional 2D inputs like a standard mouse. Nevertheless, one mode of operation for the 3D devices is moving a cursor over a two-dimensional overlay, which is used similarly to a mouse. It can be used to select the robot and to define the position for a selected action in the underlying 3D representation. Action selection is also performed using this cursor. This way, the ability to control the system is independent of the device and can also be exercised using standard PC hardware.
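The two-step command flow described above, selecting an action via the 2D overlay and then clicking into the 3D scene to place it, could look like this. The action names and robot identifiers are invented for illustration:

```python
class CommandBuilder:
    """Sketch of the two-step command flow: 2D action selection, 3D placement."""

    ACTIONS = {"drive_to", "sample", "inspect"}  # hypothetical action set

    def __init__(self):
        self.selected_robot = None
        self.selected_action = None

    def select_robot(self, robot_id):
        # Cursor click on a robot model in the scene.
        self.selected_robot = robot_id

    def select_action(self, action):
        # Cursor click on the 2D overlay choosing the next action.
        if action not in self.ACTIONS:
            raise ValueError(f"unknown action: {action}")
        self.selected_action = action

    def click_3d(self, x, y, z):
        # Cursor click into the 3D map defines the location for the action.
        if self.selected_robot is None or self.selected_action is None:
            return None
        return {"robot": self.selected_robot,
                "action": self.selected_action,
                "target": (x, y, z)}

builder = CommandBuilder()
builder.select_robot("coyote_iii")
builder.select_action("drive_to")
command = builder.click_3d(4.0, -1.0, 0.0)
```

Because only the cursor position and click events matter, the same flow works with a standard mouse or with a 3D input device.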
In case the human operator has to take over direct control of the robot, the CAPIO exoskeleton can be used as an input device (
The exoskeleton follows a distributed control approach in terms of local torque control of each joint. An external computer calculates the desired torques for gravity compensation with the help of the RBDL library [
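The gravity-compensation principle amounts to evaluating inverse dynamics at the current joint positions with zero velocities and accelerations; the resulting torques are exactly those needed to hold the arm still against gravity. The following sketch computes this analytically for a hypothetical two-link planar arm with point masses at the link ends. All masses and lengths are invented for illustration; the actual system uses RBDL on the full exoskeleton model:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def gravity_torques(q1, q2, m1=2.0, m2=1.5, l1=0.4, l2=0.3):
    """Joint torques that cancel gravity for a two-link planar arm.

    Mirrors what an inverse-dynamics call (e.g. RBDL's InverseDynamics
    with qdot = qddot = 0) returns for the full model. Point masses at
    the link ends are assumed for simplicity.
    """
    # Joint 2 carries only link 2: weight times horizontal lever arm.
    tau2 = m2 * G * l2 * math.cos(q1 + q2)
    # Joint 1 carries both links, each acting at its own lever arm.
    tau1 = (m1 * G * l1 * math.cos(q1)
            + m2 * G * (l1 * math.cos(q1) + l2 * math.cos(q1 + q2)))
    return tau1, tau2

# Arm stretched out horizontally: maximal gravity load on both joints.
tau1, tau2 = gravity_torques(0.0, 0.0)
```

Commanding these torques to the local joint controllers makes the arm feel weightless to the operator, which is the basis for the force-feedback interaction described next.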
The system can generate force feedback to the operator's arm. A preliminary force-feedback experiment was executed by manually applying torques around all axes to the end effector of the target system. The loads were transferred via satellite to the exoskeleton control room. The torques applied to the operator are shown in
For intuitive interaction and control, it is crucial that the operator of a multi-robot team is not overwhelmed by the presented information. Hence, the interface must be optimized with respect to different criteria. In our setting, we optimized 1) sensor data visualization, 2) information selection, and 3) information presentation style.
Regarding sensor data visualization (1), optimization was most relevant, especially since the robotic systems were equipped with laser sensors whose raw output is not easily understandable for humans.
The resulting visualization from this optimization is visible in
When controlling several robots, more information might be generated than a single operator can analyze [
critical situations that the operator must react to (independent of which robot is currently selected). With this dual approach, we ensured that the operator is, on the one hand, informed about all critical issues and, on the other hand, does not have to process so much information that the workload becomes too high.
Since workload must be kept low during control, we performed several experiments to investigate the operator's workload depending on the interface design (3). We performed offline and online analyses of electroencephalographic data from humans to evaluate the general design as well as the task load within our setting. As a measure, the event-related potential P300 [
We used offline average analysis to optimize the style of information processing.
As a result, important information is presented in a symbolic fashion instead of as text. In case more than two robots were involved, we presented the robots in different colors that were then mapped to the given information [
This helped the operator to connect relevant information to the correct robot.
Since we used only two clearly distinguishable robots in the field test, this approach was not applied there.
Online P300 analysis was used in previous experiments to adapt the task load, which was determined by the number of tasks given to the operator per fixed time interval. We performed single-trial analysis using pySPACE [
The requirements for communication also had an impact on the control interface setup. The communication was expected to be unstable, with high latency and low bandwidth. To meet these requirements, we decided to use the UDT protocol [
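UDT itself is a C++ library that layers reliability on top of UDP datagrams, avoiding TCP's handshake and congestion behavior on high-latency links. As a hedged illustration of the datagram side, the following sketch packs a pose update into a fixed 21-byte message; the field layout is an assumption for illustration, not the project's actual wire format:

```python
import struct

# Hypothetical compact pose message: robot id (1 byte), timestamp
# (float64), and x, y, theta (float32 each) -> 21 bytes per update.
# Such a datagram fits in a single packet regardless of link latency.
POSE_FORMAT = "<Bdfff"

def pack_pose(robot_id, timestamp, x, y, theta):
    """Serialize one pose update into a compact binary datagram."""
    return struct.pack(POSE_FORMAT, robot_id, timestamp, x, y, theta)

def unpack_pose(datagram):
    """Recover (robot_id, timestamp, x, y, theta) from a datagram."""
    return struct.unpack(POSE_FORMAT, datagram)

msg = pack_pose(1, 1234.5, 2.0, -3.5, 0.25)
```

A transport like UDT would then handle retransmission of lost datagrams without blocking subsequent updates behind them.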
The approach described above was evaluated in a field trial where the robots (Sherpa TT and Coyote III, see
They were controlled from the Multi-Projection Environment (Bremen, Germany). The communication was sent via a satellite modem, which showed up to 22 seconds of latency and varying bandwidth (up to 464 kbps, mostly less). Our approach proved highly usable during the experiments. Most of the time, while the systems were moving autonomously, the only telemetry exchanged was the robots' position messages at an interval of five seconds.
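A rough back-of-the-envelope calculation shows why this telemetry regime suits the observed link. The message size below is an assumption for illustration, not a measured value:

```python
# Assumed size for illustration; the actual message format was not measured.
POSE_MESSAGE_BYTES = 64          # generous estimate incl. protocol overhead
UPDATE_INTERVAL_S = 5            # one position message every five seconds
LINK_CAPACITY_BPS = 464_000      # best observed satellite bandwidth

# Sustained telemetry rate while the robots move autonomously.
telemetry_bps = POSE_MESSAGE_BYTES * 8 / UPDATE_INTERVAL_S
# Fraction of the best-case link this consumes.
fraction_used = telemetry_bps / LINK_CAPACITY_BPS
```

Even under these generous assumptions the telemetry stays around a hundred bits per second, several orders of magnitude below what a continuous camera stream would require, which leaves the scarce bandwidth free for occasional map updates.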
Although challenges remain, our approach is well suited to enable astronauts in hybrid teams to control semi-autonomous robots. Our goal is to transfer our results to earthbound technologies. The exoskeleton is already used for rehabilitation research. The VR user interface will be further developed to support generic robots as well as underwater applications. A test-subject-based performance evaluation of the proposed approach is also on our agenda.
The authors would like to thank Leif Christensen, Thomas Röhr, Roland Sonsalla, and Tobias Stark for their work in Utah, and Michael Maurus for his work on the VR user interface within the TransTerrA and FT_Utah projects.
The contributing projects were funded by the German Space Agency (DLR Agentur) with federal funds of the Federal Ministry of Economics and Technology in accordance with the parliamentary resolution of the German Parliament, grant nos. 50RA1406, 50RA1407, 50RA1301, 50RA1011, 50RA1012, 50RA1701, 50RA1702, 50RA1703, 50RA1621, and 50RA1622.
Planthaber, S., Mallwitz, M. and Kirchner, E.A. (2018) Immersive Robot Control in Virtual Reality to Command Robots in Space Missions. Journal of Software Engineering and Applications, 11, 341-347. https://doi.org/10.4236/jsea.2018.117021