Publication

User-independent recognition of Arabic sign language for facilitating communication with the deaf community

Shanableh, Tamer
Assaleh, Khaled
Date
2011
Type
Article
Postprint
Peer-Reviewed
Abstract
This paper presents a solution for user-independent recognition of isolated Arabic Sign Language gestures. The video-based gestures are preprocessed to segment out the signer's hands based on color segmentation of the colored gloves. The prediction errors of consecutive segmented images are then accumulated into two images according to the directionality of the motion. Different accumulation weights are employed to further preserve the directionality of the projected motion. Normally, a gesture is represented by hand movements; however, additional user-dependent head and body movements may be present. In the user-independent mode we seek to filter out such user-dependent information. This is realized by encapsulating the movements of the segmented hands in a bounding box. The encapsulated images of the projected motion are then transformed into the frequency domain using the discrete cosine transform (DCT). Feature vectors are formed by applying zonal coding to the DCT coefficients with varying cutoff values. Classification techniques such as K-nearest neighbors (KNN) and polynomial classifiers are used to assess the validity of the proposed user-independent feature extraction schemes. An average classification rate of 87% is reported.
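The core of the feature-extraction pipeline described above (bounding-box encapsulation of the hand masks, 2-D DCT, and zonal coding with a cutoff) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the mask-stack input format, and the choice of a triangular zonal mask (keeping coefficients with u + v below the cutoff) are assumptions for the sketch.

```python
import numpy as np
from scipy.fftpack import dct


def bounding_box_crop(mask_stack):
    """Crop a stack of binary hand masks (frames, H, W) to the tight box
    enclosing all hand motion, discarding user-dependent body position."""
    union = mask_stack.any(axis=0)            # pixels touched in any frame
    rows = np.flatnonzero(union.any(axis=1))
    cols = np.flatnonzero(union.any(axis=0))
    return mask_stack[:, rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]


def zonal_dct_features(image, cutoff):
    """2-D DCT of `image`, keeping the low-frequency coefficients (u, v)
    with u + v < cutoff (zonal coding), flattened into a feature vector."""
    # 2-D DCT-II (orthonormal), applied along rows then columns
    coeffs = dct(dct(image, axis=0, norm='ortho'), axis=1, norm='ortho')
    u, v = np.indices(coeffs.shape)
    return coeffs[u + v < cutoff]             # row-major order, DC term first
```

For example, a constant 8x8 image with `cutoff=3` yields a 6-element vector (1 + 2 + 3 coefficients in the triangular zone), in which only the DC term is non-zero. Varying the cutoff trades feature-vector length against how much high-frequency motion detail is retained, which matches the paper's evaluation over varying cutoff values.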