W. D. HUANG ET AL.
Copyright © 2013 SciRes. CN
camera or clarified verbally to be correctly perceived on
the helper side. Furthermore, many participants found
the lack of depth perception in the two-dimensional im-
age to be the most limiting factor. From the helper's
perspective, participants were observed to have difficulty
discerning the exact sizes and positions of bricks in the
workspace, as well as communicating complex represen-
tational hand gestures.
4. Conclusions
HandsInAir is a new real-time wearable system for re-
mote collaboration. It employs novel approaches that
support the mobility of remote collaborators and capture
remote gestures. The system enables the helper to per-
form hand gestures in the air without the need to interact
with tangible objects. The system is lightweight, easy to
set up, intuitive to use and requires little environmental
or technical support. HandsInAir has demonstrated great
capability of mediating remote collaboration and has
significant potential for implementation in a wide range
of real-world applications such as telemedicine, remote
maintenance and repair.
The greatest strength of the system is its capability of
facilitating remote collaboration tasks. This can be seen
from the fact that all participants in our study were able
to collaborate effectively to successfully complete a se-
ries of remote tasks. A majority of them expressed com-
fort and ease using the system, and found it valuable for
remote collaboration.
Findings in the usability study also further corrobo-
rated concepts underlying remote collaboration and find-
ings in previous studies (e.g. [18]). These included the
prevalence of pointing gestures over complex representa-
tional gestures, and the value of a shared workspace in
providing common ground to facilitate communication.
Insufficient image quality and the lack of depth infor-
mation were cited as the system's main deficiencies. In
the next iteration of the system, we would rectify the
shortcoming in image quality by choosing a more
suitable image compression method and hardware with
greater graphics processing capabilities. Further work has
also been planned to reorganize the configuration of
the camera and near-eye display on the helmet, making
them independently adjustable and more comfortable and
accessible to users.
Although a two-dimensional workspace was satisfac-
tory for communicating pointing gestures, it was inade-
quate for clearly communicating more complex assembly
instructions through the use of representational gestures.
Recent advancements in depth-sensing technology have
made it feasible to explore the development of a three-
dimensional shared workspace that would afford partici-
pants greater freedom and range of expression (e.g.,
[19,20]). The use of depth-sensing technology to imple-
ment more robust hand gesture recognition based on
depth filtration instead of color hue filtration will also be
explored. The advanced detection mechanism would al-
low the helper to incorporate instructional apparatus into
the shared workspace.
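The idea behind depth filtration can be sketched as follows. This is an illustrative example only: it uses a synthetic depth map and assumed near/far thresholds rather than the system's actual sensor pipeline, and segments the hands by keeping only pixels that fall within an expected depth band.

```python
import numpy as np

# Assumed depth band (millimetres) in which the helper's hands are expected.
NEAR_MM, FAR_MM = 300, 700

# Synthetic depth map standing in for one depth-sensor frame:
# background at ~1.5 m, with a hand-sized region at 0.5 m.
depth = np.full((480, 640), 1500, dtype=np.uint16)
depth[200:280, 300:400] = 500

# Depth-band filtration: keep only pixels inside [NEAR_MM, FAR_MM].
# Unlike colour-hue filtration, this is unaffected by lighting and skin tone.
hand_mask = (depth >= NEAR_MM) & (depth <= FAR_MM)
print("hand pixels:", int(hand_mask.sum()))  # 80 * 100 = 8000
```

Because the band is defined in depth rather than colour, any object the helper holds within it (e.g. a pointing tool) would also survive the filter, which is what would allow instructional apparatus to enter the shared workspace.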
REFERENCES
[1] H. H. Clark and S. E. Brennan, “Grounding in Commu-
nication,” Perspectives on Socially Shared Cognition,
American Psychological Association, Washington DC,
1991. http://dx.doi.org/10.1037/10096-006
[2] S. R. Fussell, R. E. Kraut and J. Siegel, “Coordination of
Communication: Effects of Shared Visual Context on
Collaborative Work,” ACM Conference on Computer
Supported Cooperative Work, 2000, pp. 21-30.
[3] D. S. Kirk, T. Rodden and D. S. Fraser, “Turn It This
Way: Grounding Collaborative Action with Remote Ges-
tures,” ACM Human Factors in Computing Systems, 2007,
pp. 1039-1048.
[4] L. Alem, F. Tecchia and W. Huang, “HandsOnVideo:
Towards a Gesture Based Mobile AR System for Remote
Collaboration,” Recent Trends of Mobile Collaborative
Augmented Reality, Springer, New York, 2011, pp. 127-
138.
[5] S. R. Fussell, L. D. Setlock, J. Yang, J. Ou, E. Mauer and
A. D. I. Kramer, “Gestures over Video Streams to Sup-
port Remote Collaboration on Physical Tasks,” Human-
Computer Interaction, Vol. 19, 2004, pp. 273-309.
http://dx.doi.org/10.1207/s15327051hci1903_3
[6] S. Gauglitz, C. Lee, M. Turk and T. Höllerer, “Integrating
the Physical Environment into Mobile Remote Collabora-
tion,” Proceedings of the 14th International Conference
on Human-Computer Interaction with Mobile Devices
and Services, 2012, pp. 241-250.
[7] H. Kuzuoka, “Spatial Workspace Collaboration: A Shared
View Video Support System for Remote Collaboration
Capability,” ACM Human Factors in Computing Systems,
1992, pp. 533-540.
[8] W. Huang and L. Alem, “HandsinAir: A Wearable
System for Remote Collaboration on Physical Tasks,”
Proceedings of the 2013 Conference on Computer Sup-
ported Cooperative Work Companion, 2013, pp. 153-156.
[9] W. Huang and L. Alem, “Gesturing in the Air: Sup-
porting Full Mobility in Remote Collaboration on Phy-
sical Tasks,” Journal of Universal Computer Science,
2013.
[10] OpenCV. http://opencv.willowgarage.com/wiki/
[11] Libjpeg-Turbo. http://libjpeg-turbo.virtualgl.org/
[12] Independent JPEG Group. http://www.ijg.org/
[13] Windows Sockets 2.
http://msdn.microsoft.com/en-us/library/ms740673(v=vs.
85).aspx
[14] Microsoft Foundation Classes.
http://msdn.microsoft.com/en-us/library/d06h2x6e(v=VS.
100).aspx
[15] Multithreaded Programming with the Event-based Asyn-