This project helps mute individuals communicate using hand gestures and facial expressions, translating them into spoken words in real time.
## Features

- Real-time hand gesture recognition (100+ patterns)
- Facial expression detection (Happy, Sad, Angry, etc.)
- Text-to-speech output in Hindi and English
- Voice command control ("pause", "resume", "exit")
- User detection using OpenCV (face presence); see the minimal sketch after this list
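The core idea is a webcam loop that only attempts gesture recognition while a user's face is in frame. Below is a minimal sketch of that pattern using MediaPipe hand landmarks and an OpenCV Haar-cascade face check; the camera index, cascade file, and confidence threshold are illustrative assumptions, not the project's exact pipeline.

```python
# Sketch: face-presence gate + MediaPipe hand landmarks (not the project's exact code).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # assumed default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # User detection: skip gesture work when no face is visible.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        user_present = len(face_cascade.detectMultiScale(gray, 1.1, 5)) > 0

        if user_present:
            # MediaPipe expects RGB input; OpenCV frames are BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                # 21 landmarks per hand; gesture classification would go here.
                landmarks = results.multi_hand_landmarks[0].landmark
                print(f"hand detected with {len(landmarks)} landmarks")

        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```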
## Installation & Usage

```bash
pip install -r requirements.txt
python AI-Gesture-OpenCV-Identity-Tool.py
```
## Project Structure

```
AI-Gesture-Tool/
├── AI-Gesture-OpenCV-Identity-Tool.py
├── requirements.txt
├── README.md
├── gesture_log.csv   # generated after first run
└── images/           # optional: add screenshots or demos here
```
## Technologies Used

- Python
- OpenCV
- MediaPipe
- gTTS (text-to-speech)
- Pygame
- SpeechRecognition
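To illustrate how the speech side of these libraries fits together, the sketch below synthesizes a phrase with gTTS and plays it back with Pygame's mixer. The `speak` helper, output file name, and example phrases are hypothetical; the actual tool may structure this differently.

```python
# Sketch: gTTS synthesis + Pygame playback, assuming "en" (English) and "hi" (Hindi).
import time
from gtts import gTTS
import pygame

def speak(text: str, lang: str = "en", out_path: str = "speech.mp3") -> None:
    """Synthesize `text` with gTTS, save it as MP3, and play it with pygame."""
    gTTS(text=text, lang=lang).save(out_path)
    pygame.mixer.init()
    pygame.mixer.music.load(out_path)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():  # block until playback finishes
        time.sleep(0.1)

# Example phrases (illustrative only).
speak("Hello, I need water", lang="en")
speak("नमस्ते, मुझे पानी चाहिए", lang="hi")
```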
## Demo

Add a demo GIF or screenshots to the `images/` folder.
## Use Cases

- Helping mute individuals express their needs
- Smart gesture-based AI interfaces
- Educational tools for sign language
## Contribution
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.