The project aims to be a one-stop solution for specially-abled individuals: it teaches sign language using deep learning and computer vision.
Live link of the web app: https://sakshamml.onrender.com
- Backend and ML stack: Python, TensorFlow, TensorFlow.js, RESTful API architecture, pandas
- Database: MongoDB
- Frontend: Next.js, React, Tailwind CSS, JavaScript
- Tools: Azure Cloud, Git, Docker, Jupyter notebooks
Follow these steps to set up the Saksham Web project locally. Make sure you have Node.js and npm installed on your machine.

- Clone the repository:

      git clone https://github.com/Chaitanyarai899/Saksham-Web.git

- Navigate to the project directory:

      cd Saksham-Web

- Install dependencies:

      npm install

- Start the development server:

      npm run dev

Visit http://localhost:3000 in your web browser to see the Saksham Web app.
The MLmicroservices folder contains all the modules for the sign language detection model. The code is organized in multiple Jupyter notebooks. To install all required Python packages:

    pip install -r requirements.txt
All models are built using TensorFlow in a Python environment and then converted to the TensorFlow.js format so they can run directly in the browser.
Feel free to contribute to the project. Fork the repository, make your changes, and submit a pull request.