EVALUATION OF FINGER DIRECTION RECOGNITION METHOD FOR BEHAVIOR CONTROL OF ROBOT


International Journal on Smart Sensing and Intelligent Systems

Subject: Computational Science & Engineering, Engineering, Electrical & Electronic


eISSN: 1178-5608


VOLUME 6, ISSUE 5 (December 2013)


T. Ikai*, M. Ohka*, S. Kamiya, H. Yussof, S. C. Abdullah

Keywords: Robot vision, Human indication, Finger direction recognition, Two hand forms, Handing over

Citation Information: International Journal on Smart Sensing and Intelligent Systems, Volume 6, Issue 5, Pages 2308-2333, DOI: https://doi.org/10.21307/ijssis-2017-640

License: CC BY-NC-ND 4.0

Received: 20-November-2013 / Accepted: 15-December-2013 / Published Online: 31-December-2013

ARTICLE

ABSTRACT

When a human gives an order to a robot, the robot must often use its vision to ascertain the human’s indication. In our previous paper, to develop a system in which robots precisely receive and obey human orders in daily work spaces, we proposed an experimental system for finger direction recognition (FDR) in 3D space using stereo matching, with two cameras mounted on the robot. In this paper, we evaluate this FDR system by performing a series of evaluation experiments using a turntable capable of fixing a hand in a specific finger direction. We estimated various finger directions and distances for two major pointing hand forms (Forms 1 and 2) and evaluated the precision of the direction angles θ and Φ. We conclude that the θ and Φ estimations are valid because the estimation error is almost within 10° when the distance between the camera and the object is less than 110 cm and 80 cm for θ and Φ, respectively, for both Forms 1 and 2. Finally, we applied our FDR system to communication between a robot and a person through visual and tactile sensations: in the application test, the robot recognized the object at which the person pointed and placed it in the person’s palm.
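
To make the geometry concrete, the sketch below shows one way the two direction angles could be computed once the fingertip and the finger base have been located in both camera images: each point is triangulated from its stereo disparity, and θ and Φ are then taken as the azimuth and elevation of the base-to-tip vector. This is a minimal Python illustration under assumed conventions, not the authors' implementation; the camera parameters (FOCAL_PX, CX, CY, BASELINE_CM), the angle definitions, and the example pixel coordinates are all hypothetical values chosen for the demonstration.

import numpy as np

# All parameters below are illustrative assumptions, not values from the paper.
FOCAL_PX = 700.0          # focal length in pixels (rectified stereo pair)
CX, CY = 320.0, 240.0     # principal point of the left camera
BASELINE_CM = 10.0        # distance between the two cameras

def triangulate(u_left, v_left, u_right):
    """Recover a 3D point (camera frame, cm) from a rectified stereo
    correspondence using the disparity relation Z = f * B / d."""
    disparity = u_left - u_right
    if disparity <= 0.0:
        raise ValueError("non-positive disparity: bad correspondence")
    z = FOCAL_PX * BASELINE_CM / disparity
    x = (u_left - CX) * z / FOCAL_PX
    y = (v_left - CY) * z / FOCAL_PX
    return np.array([x, y, z])

def finger_angles(base, tip):
    """Spherical angles of the base-to-tip vector: theta is the azimuth in
    the horizontal x-z plane, phi the elevation out of that plane (image y
    points down, hence the sign flip)."""
    d = tip - base
    theta = np.degrees(np.arctan2(d[0], d[2]))
    phi = np.degrees(np.arctan2(-d[1], np.hypot(d[0], d[2])))
    return theta, phi

# Hypothetical fingertip and finger-base correspondences (pixel coordinates).
tip_3d = triangulate(400.0, 230.0, 330.0)
base_3d = triangulate(380.0, 260.0, 312.0)
theta, phi = finger_angles(base_3d, tip_3d)
print(f"theta = {theta:.1f} deg, phi = {phi:.1f} deg")

Because depth is inversely proportional to disparity in this model, a fixed one-pixel matching error produces a depth (and hence angle) error that grows with distance, which is consistent with the estimation error exceeding 10° beyond the distances reported above.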


REFERENCES

[1] Morioka K, Hashikawa F, and Takigawa T, (2013), "Human Identification Based on Walking Detection with Acceleration Sensor and Networked Laser Range Sensors in Intelligent Space," International Journal on Smart Sensing and Intelligent Systems, Vol. 6, No. 5, 2040-2054.
[2] Ahmed HS, Faouzi BM, and Caelen J, (2013), "Detection and Classification of the Behavior of People in an Intelligent Building by Camera," International Journal on Smart Sensing and Intelligent Systems, Vol. 6, No. 4, 1318-1342.
[3] Jain R, Kasturi R, and Schunck BG, (1995), "Machine Vision," McGraw-Hill, Inc., ISBN 0-07-032018-7.
[4] Kurata T, Kato T, Kourogi M, Keechul J, and Endo K, (2002), "A Functionally-distributed Hand Tracking Method for Wearable Visual Interfaces and its Applications," IAPR Workshop on Machine Vision Applications, 84-89.
[5] Freeman W and Roth M, (1995), "Orientation Histograms for Hand Gesture Recognition," IEEE Int. Workshop on Automatic Face and Gesture Recognition.
[6] Sawada H, Hashimoto S, and Matsushima T, (1998), "A Study of Gesture Recognition Based on Motion and Hand Figure Primitives and Its Application to Sign Language Recognition," Vol. 39, No. 5, 1325-1333.
[7] Starner T and Pentland A, (1995), "Real-time American Sign Language Recognition from Video Using Hidden Markov Models," Proc. of International Symposium on Computer Vision, 265-270.
[8] Yoshino L et al., (1996), "Recognition of Japanese Sign Language from Image Sequence Using Color Combination," Proc. 3rd Int. Conf. on Image Processing, 16-19.
[9] Ong SCW et al., (2005), "Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning," IEEE Trans. PAMI, Vol. 27, No. 6, 873-891.
[10] Yamada Y, Matsuo T, Shimada N, and Shirai Y, (2009), "Hand Detection and Hand Shape Classification Based on Appearance Learning for Sign Language Recognition," MIRU2009, IS1-37. (in Japanese)
[11] Tanaka S and Umeda K, (2001), "Operating a Mobile Robot by Gesture Recognition," The Transactions of the Institute of Electrical Engineers of Japan C, Vol. 121, No. 9, 1457-1463. (in Japanese)
[12] Fujimoto K, Matsuo T, Shimada N, and Shirai Y, (2010), "High Speed 3-D Hand Shape Measurement by Tree-based Learning Contour Features," Proc. of Meeting on Image Recognition and Understanding (MIRU2010), IS3-64. (in Japanese)
[13] Lee SU and Cohen I, (2004), "3D Hand Reconstruction from a Monocular View," Proc. of the 17th International Conference on Pattern Recognition (ICPR'04), Vol. 3, 310-313.
[14] Takeda Y, Terabayashi K, Asano H, and Umeda K, (2011), "Finger Direction Recognition in 3D Utilizing a Range and Image Sensor," Proc. of Summer Seminar of Japan Soc. of Precision Engineering Technical Committee on Industrial Application of Image Processing, 75-76. (in Japanese)
[15] Ohkubo Y, Okada K, Inamura T, and Inaba M, (2005), "Finger Recognition for Target Indication to Robots in Daily Life (Humanoid 3, Mega-Integration in Robotics and Mechatronics to Assist Our Daily Lives)," Proc. of 2004 JSME Conference on Robotics and Mechatronics (ROBOMEC'04), 2A1-S-049.
[16] Rehg JM and Kanade T, (1994), "Visual Tracking of High DOF Articulated Structures: an Application to Human Hand Tracking," Third European Conference on Computer Vision, 35-46.
[17] Nickel K and Stiefelhagen R, (2007), "Visual Recognition of Pointing Gestures for Human-robot Interaction," Image and Vision Computing, Vol. 25, No. 12, 1875-1884.
[18] Shotton J, Fitzgibbon A, Cook M, Sharp T, Finocchio M, Moore R, and Blake A, (2011), "Real-time Human Pose Recognition in Parts from Single Depth Images," 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1297-1304.
[19] Ikai T, Ohka M, and Yussof H, (2012), "Behavior Control of Robot by Human Finger Direction," Procedia Engineering, Vol. 41, 784-791.
[20] Bradski G, (2011), "Open Source Computer Vision Library," http://www.intel.com/research/mrl/research/opencv.
[21] Abdullah SC, Ikai T, Dosho Y, Yussof H, and Ohka M, (2011), "Edge Extraction Using Image and Three-axis Tactile Data," International Journal on Smart Sensing and Intelligent Systems, Vol. 4, No. 3, 508-526.
[22] Hashem HF, (2009), "Adaptive Technique for Human Face Detection Using HSV Color Space and Neural Networks," National Radio Science Conference (NRSC 2009), 1-7.
[23] Brooks AG and Breazeal C, (2006), "Working with Robots and Objects: Revisiting Deictic Reference for Achieving Spatial Common Ground," Proc. of the 1st ACM SIGCHI/SIGART Conference on Human-robot Interaction, 297-304.
[24] Manders C, Farbiz F, Chong JH, Tang KY, Chua GG, Loke MH, and Yuan ML, (2008), "Robust Hand Tracking Using a Skin Tone and Depth Joint Probability Model," 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2008), 1-6.
[25] Kim S, Sekiyama K, and Fukuda T, (2008), "Pattern Adaptive and Finger Image-guided Keypad Interface for In-vehicle Information Systems," International Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 3, 572-591.
