# Two-Hand Sign Language Recognition Using Keypoints and Shape Descriptors with Attention-Driven Feature Fusion

## Introduction

This project focuses on Arabic Sign Language recognition using a two-hand static and dynamic gesture recognition system. We employ skin segmentation for hand detection and extract both keypoint-based (ORB, AKAZE, SIFT, BRISK) and shape-based features to enhance recognition. An attention-enabled feature fusion strategy integrates these features to improve classification accuracy.

## Prerequisites

Ensure you have the following libraries installed:

- **Matplotlib**
- **OpenCV**
- **Pandas**
- **NumPy**
- **Scikit-Learn**
- **Pillow**
- **SciPy**

## Usage Instructions

The system processes images sequentially through the following steps:

1. **Preprocessing** – Run the `preprocessing` script to perform initial image processing for gesture recognition.
2. **Silhouette Extraction** – Execute `silhouetteextraction` to extract the silhouette of the hand gestures.
3. **Keypoint and Feature Extraction** – Run `Keypoints and feature extraction` to extract keypoints and descriptors using BRISK, AKAZE, ORB, and SIFT.
4. **Distance-Based Keypoint Selection** – Select keypoints based on their spatial distance.
5. **Shape Feature Extraction** – Run `shape features` to extract shape-based descriptors.
6. **Feature Fusion** – Combine the extracted features for recognition.
7. **Final Processing**

### Input Format

- JPEG images.

## Execution

Ensure you are in the project's root directory before running the scripts. Then, execute the following commands in order:

```bash
python path/to/scripts/preprocessing.py
python path/to/scripts/silhouetteextraction.py
python "path/to/scripts/Keypoints and feature extraction.py"
python "path/to/scripts/shape features.py"
```

### Customizing the Path

- Replace `path/to/scripts/` with the actual folder path where your scripts are stored.
- If the scripts are in a subfolder such as `gesture_recognition/scripts/`, update the commands accordingly:

```bash
python gesture_recognition/scripts/preprocessing.py
```

Ensure that each stage completes before proceeding to the next.

## Acknowledgments

This work is based on research in Arabic Sign Language recognition and aims to advance gesture-based communication systems.

## Funding

This work was supported by the IITP (Institute of Information & Communications Technology Planning & Evaluation) ICAN (ICT Challenge and Advanced Network of HRD) grant (IITP-2025-RS-2022-00156326) funded by the Korea government (Ministry of Science and ICT). The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number RGP.2/568/45.