Abstract

Neural Network Based Static Sign Gesture Recognition System

Parul Chaudhary, Hardeep Singh Ryait

Sign language is the natural medium of communication for the hearing and speech impaired all over the world. This paper presents a vision-based static sign gesture recognition system using a neural network. The system enables deaf people to interact easily and efficiently with normal people. The system first converts images of static gestures of American Sign Language into the Lab color space, where L represents lightness and (a, b) the color-opponent dimensions, from which the skin region, i.e. the hand, is segmented using a thresholding technique. The region of interest (the hand) is cropped and converted into a binary image for feature extraction. The height, area, centroid, and distance of the centroid from the origin (top-left corner) of the image are then used as features. Finally, each feature vector is used to train a feed-forward back-propagation network. Experimental results show successful recognition of static sign gestures, with an average recognition accuracy of 85% on a typical set of test images.
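A minimal sketch of the segmentation and feature-extraction steps described above, assuming OpenCV and NumPy; the threshold bounds, file path, and feature ordering are illustrative assumptions, not the authors' actual parameters.

```python
import cv2
import numpy as np

def extract_features(image_path):
    """Segment the hand via Lab-space thresholding and compute the
    height, area, centroid, and centroid distance from the origin."""
    bgr = cv2.imread(image_path)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)

    # Threshold the a/b (color-opponent) channels to isolate skin pixels;
    # these bounds are placeholder values, not the paper's.
    lower = np.array([0, 135, 130], dtype=np.uint8)
    upper = np.array([255, 175, 175], dtype=np.uint8)
    mask = cv2.inRange(lab, lower, upper)

    # Crop the region of interest (hand) using the bounding box of the mask
    # and convert it to a binary image.
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    binary = (mask[y0:y1 + 1, x0:x1 + 1] > 0).astype(np.uint8)

    # Features: height, area, centroid, and distance of the centroid
    # from the origin (top-left corner) of the cropped binary image.
    height = binary.shape[0]
    area = int(binary.sum())
    cy, cx = np.argwhere(binary).mean(axis=0)
    dist = float(np.hypot(cx, cy))
    return np.array([height, area, cx, cy, dist], dtype=np.float32)
```

The resulting feature vectors could then be stacked and fed, with their gesture labels, to a feed-forward back-propagation classifier (for example scikit-learn's MLPClassifier), standing in for the network used in the paper.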


Indexed in

Index Copernicus
Academic Keys
CiteFactor
Cosmos IF
RefSeek
Hamdard University
World Catalogue of Scientific Journals
International Innovative Journal Impact Factor (IIJIF)
International Institute of Organised Research (I2OR)
Cosmos
