[1]
M. N. Nicolescu and M. J. Mataric, “Natural Methods for Robot Task Learning: Instructive Demonstrations, Generalization and Practice,” In Proceedings of the Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, Melbourne, Australia, July 14-18, 2003, pp. 241-248.
[2]
M. N. Nicolescu and M. J. Mataric, “Learning and Interacting in Human-Robot Domains,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 31, No. 5, 2001, pp. 419-430.
[3]
G. Hailu, “Symbolic Structures in Numeric Reinforcement for Learning Optimum Robot Trajectory,” Robotics and Autonomous Systems, Vol. 37, No. 1, 2001, pp. 53-68.
[4]
D. C. Bentivegna, C. G. Atkeson and G. Cheng, “Learning Tasks from Observation and Practice,” Robotics and Autonomous Systems, Vol. 47, No. 2-3, 2004, pp. 163-169.
[5]
G. Hailu and G. Sommer, “Learning by Biasing,” In Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium, 1998, pp. 16-21.
[6]
J. del R. Millán, “Reinforcement Learning of Goal Directed Obstacle Avoiding Reaction Strategies in an Autonomous Mobile Robot,” Robotics and Autonomous Systems, Vol. 15, No. 4, 1995, pp. 275-299.
[7]
A. Johannet and I. Sarda, “Goal-Directed Behaviours by Reinforcement Learning,” Neurocomputing, Vol. 28, No. 1-3, 1999, pp. 107-125.
[8]
P. M. Bartier and C. P. Keller, “Multivariate Interpolation to Incorporate Thematic Surface Data Using Inverse Distance Weighting (IDW),” Computers & Geosciences, Vol. 22, No. 7, 1996, pp. 795-799.
[9]
H. Friedrich, M. Kaiser and R. Dillmann, “What Can Robots Learn from Humans?” Annual Reviews in Control, Vol. 20, 1996, pp. 167-172.
[10]
S. Thrun, “An Approach to Learning Mobile Robot Navigation,” Robotics and Autonomous Systems, Vol. 15, No. 4, 1995, pp. 301-319.
[11]
M. Kasper, G. Fricke, K. Steuernagel and E. von Puttkamer, “A Behaviour Based Learning Mobile Robot Architecture for Learning from Demonstration,” Robotics and Autonomous Systems, Vol. 34, No. 2-3, 2001, pp. 153-164.
[12]
W. Ilg and K. Berns, “A Learning Architecture Based on Reinforcement Learning for Adaptive Control of the Walking Machine LAURON,” Robotics and Autonomous Systems, Vol. 15, No. 4, 1995, pp. 321-334.
[13]
H. Friedrich, M. Kaiser and R. Dillmann, “PbD: The Key to Service Robot Programming,” AAAI Technical Report SS-96-02, American Association for Artificial Intelligence, Menlo Park, California, 1996.
[14]
H. Friedrich, M. Kaiser and R. Dillmann, “Obtaining Good Performance from a Bad Teacher,” In Programming by Demonstration vs. Learning from Examples Workshop at ML’95, Tahoe City, California, 1995.
[15]
R. Dillmann, M. Kaiser and A. Ude, “Acquisition of Elementary Robot Skills from Human Demonstration,” In International Symposium on Intelligent Robotics Systems, Pisa, Italy, 1995, pp. 185-192.
[16]
R. S. Sutton and A. G. Barto, “Reinforcement Learning: An Introduction,” The MIT Press, Cambridge, Massachusetts, 1998.
[17]
B. Bakker, V. Zhumatiy, G. Gruener and J. Schmidhuber, “A Robot that Reinforcement-Learns to Identify and Memorize Important Previous Observations,” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, USA, October 27-31, 2003, pp. 430-435.
[18]
A. Billard, Y. Epars, S. Calinon, S. Schaal and G. Cheng, “Discovering Optimal Imitation Strategies,” Robotics and Autonomous Systems, Vol. 47, No. 2-3, 2004, pp. 69-77.
[19]
S. Russell and P. Norvig, “Artificial Intelligence: A Modern Approach,” Second Edition, Prentice Hall, Upper Saddle River, New Jersey, 2003.