This script lets you control the system volume with hand gestures recognized by a computer vision model. It uses the MediaPipe library for hand tracking and a pre-trained model for gesture recognition.
- Thumbs Up: increases the system volume.
- Thumbs Down: decreases the system volume.
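The gesture-to-volume mapping above can be sketched in plain Python. This is an illustrative, simplified classifier, not the repository's actual model: it assumes a hypothetical `classify_thumb` helper that looks only at two of the 21 MediaPipe hand landmarks (index 0 is the wrist, index 4 is the thumb tip, and image y grows downward):

```python
# Hypothetical gesture classifier built on MediaPipe landmark indices:
# 0 = wrist, 4 = thumb tip; normalized image y increases downward.
def classify_thumb(landmarks):
    """Return 'up', 'down', or None from a list of (x, y) landmark tuples."""
    thumb_tip_y = landmarks[4][1]
    wrist_y = landmarks[0][1]
    if thumb_tip_y < wrist_y - 0.1:   # tip clearly above the wrist
        return "up"
    if thumb_tip_y > wrist_y + 0.1:   # tip clearly below the wrist
        return "down"
    return None

# Map each recognized gesture to a volume step (positive = louder).
VOLUME_STEP = {"up": +5, "down": -5, None: 0}
```

The 0.1 margin is an arbitrary dead zone so small hand jitters do not trigger volume changes.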
- Hardware Access: The script captures video from the webcam and displays the output frames.
- System Requirements: Make sure your system has audio capabilities and the necessary audio drivers are installed for volume control to work properly.
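However the script talks to the audio driver, the volume level itself has to stay inside a valid range. The helper below is a minimal sketch of that bookkeeping; `apply_step` is a hypothetical name, and the platform-specific call that actually sets the OS volume (e.g. pycaw on Windows or `amixer` on Linux) is deliberately left out:

```python
# Illustrative volume bookkeeping; the platform-specific call that pushes
# the value to the OS mixer is omitted here.
def apply_step(current, step, lo=0, hi=100):
    """Apply a volume step and clamp the result to the valid range."""
    return max(lo, min(hi, current + step))
```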
A fun way to learn Machine Learning and AI in Python.
- Clone the repo:
git clone https://github.com/dark-king-001/Hand_gesture.git
- Install the required packages:
pip install opencv-python numpy mediapipe tensorflow keras
- Enter the folder:
cd Hand_gesture
- Run the script:
python main.py
- This project is licensed under the MIT License.
- If the script fails to capture frames from the webcam, make sure the webcam is connected and functional.
- If the hand landmarks are not detected reliably, try lowering the `min_detection_confidence` parameter in the script so weaker detections are kept.
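To see why lowering the threshold helps, here is what the parameter controls, mirrored in plain Python: MediaPipe discards candidate hand detections whose score falls below `min_detection_confidence`, so a lower value keeps weaker detections at the cost of more false positives. The `keep_detections` function below is purely illustrative, not a MediaPipe API:

```python
# Illustration only: mimics how a detection-confidence threshold filters
# candidate detections by score. Lower threshold -> more detections kept.
def keep_detections(scores, min_detection_confidence=0.5):
    return [s for s in scores if s >= min_detection_confidence]
```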