M. Swaroopa, Ch. Sreedhar


Using sensors at both a physical and a semantic level offers the opportunity to exploit temporal constraints. We extend this idea by having the robot learn about the user's activities and subsequently exploit this knowledge in later teaching episodes. Co-learning in this context refers to a situation in which an individual user and a robot interact to achieve a particular goal. In this paper, we provide the house resident with an interface for teaching robot behaviors based on previously learnt activities, using Quinlan's C4.5 rule induction system. The resulting robot behavior rules are likewise based on a production-rule approach. The sensor system provides a standardized way of encoding information and offers options for linking semantic sensors to other, typically external, events. Consider, for example, that the person has indicated to the robot that he or she is "preparing food" and at some point has also indicated that he or she is "using the toaster." If the robot learns the set of physical sensor activations associated with these tasks, it should be able to recognize them when they occur in the future. In these studies the robot operated mainly as a cognitive prosthetic. The participants did not, however, agree as strongly on whether the robot should be set up entirely by another person, with this question producing a wider range of responses. Possible enhancements to such facilities include using both inductive and predictive mechanisms to improve the reliability with which the robot recognizes user activities.
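To illustrate the kind of rule induction the abstract describes, the following is a minimal ID3-style sketch (C4.5's simpler predecessor, using plain information gain rather than C4.5's gain ratio, and with no pruning or continuous-attribute handling). The sensor names and activity labels are hypothetical placeholders, not taken from the paper's robot house setup:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    """Pick the attribute with the highest information gain."""
    base = entropy(labels)
    def gain(a):
        remainder = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            remainder += len(sub) / len(labels) * entropy(sub)
        return base - remainder
    return max(attrs, key=gain)

def induce(rows, labels, attrs):
    """Recursively build a decision tree: either a leaf label or (attr, branches)."""
    if len(set(labels)) == 1:          # pure node -> leaf
        return labels[0]
    if not attrs:                      # no attributes left -> majority label
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, labels, attrs)
    branches = {}
    for v in set(r[a] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[a] == v]
        branches[v] = induce([rows[i] for i in idx],
                             [labels[i] for i in idx],
                             [x for x in attrs if x != a])
    return (a, branches)

def classify(tree, row, default="unknown"):
    """Walk the tree for one sensor snapshot; each IF-test mirrors a production rule."""
    while isinstance(tree, tuple):
        a, branches = tree
        if row[a] not in branches:
            return default
        tree = branches[row[a]]
    return tree

# Hypothetical binary sensor snapshots labelled with the user's stated activity.
rows = [
    {"kettle_on": 1, "toaster_on": 0, "tv_on": 0},
    {"kettle_on": 1, "toaster_on": 1, "tv_on": 0},
    {"kettle_on": 0, "toaster_on": 1, "tv_on": 0},
    {"kettle_on": 0, "toaster_on": 0, "tv_on": 1},
    {"kettle_on": 0, "toaster_on": 0, "tv_on": 1},
]
labels = ["preparing food", "preparing food", "preparing food",
          "watching tv", "watching tv"]

tree = induce(rows, labels, ["kettle_on", "toaster_on", "tv_on"])
print(classify(tree, {"kettle_on": 0, "toaster_on": 1, "tv_on": 0}))  # → preparing food
```

Each root-to-leaf path in such a tree reads directly as an IF-THEN production rule (e.g. IF tv_on = 0 THEN "preparing food" on this toy data), which is how induced activity models can feed the production-rule behavior system the abstract mentions.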


Sensors; Teaching; Activity Recognition; Robot Learning; Robot Personalization



Copyright © 2012 - 2023, All rights reserved.

International Journal of Innovative Technology and Research is licensed under a Creative Commons Attribution 3.0 Unported License, based on a work at IJITR.