Bridging the Gap Between Silence and Speech
A real-time, IoT-powered sign language translator using machine learning and flex sensors to convert gestures into text and speech.
FlexLingo is an end-to-end IoT + Machine Learning system that translates American Sign Language (ASL) gestures into spoken words in real time. A custom-built sensor glove captures hand movements and finger bends; a Python backend processes the readings and delivers accurate translations through a React-powered web interface.
This was my final-year engineering capstone project, where I led a team of four through ideation, prototyping, model training, and deployment.
| Feature | Description |
|---|---|
| 🔁 Real-Time Translation | Converts hand gestures into text and speech with under 1 s of latency. |
| 🤖 Dual ML Models | Choose between Random Forest (fast) and BiLSTM (accurate) for gesture prediction; a model sketch follows this table. |
| 🧤 Smart Sensor Glove | Built with flex sensors and MPU6050 (accelerometer + gyroscope) for precise motion capture. |
| 💻 Web Dashboard | Interactive React.js UI to monitor device status, select models, view predictions, and hear the spoken output (a TTS sketch follows this table). |
| 📊 Prediction History | Logs all translations for review and analysis. |
| 🔌 Serial Communication | Arduino ↔ Python backend via PySerial for continuous data streaming; a serial sketch follows this table. |
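
To make the dual-model row concrete, here is a minimal sketch of how the backend's model selection might look. The class names, the 11-value feature layout (5 flex sensors + 6 MPU6050 axes), the window length, and the 26-letter label set are all assumptions for illustration, not the project's actual code.

```python
# Hypothetical sketch of the dual-model setup: a fast Random Forest and a
# more accurate Keras BiLSTM, selectable at prediction time. Both models
# would be trained on recorded glove data before use.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

N_FEATURES = 11   # 5 flex sensors + 3-axis accel + 3-axis gyro (assumed)
WINDOW = 20       # sensor readings per gesture window (assumed)
N_CLASSES = 26    # ASL alphabet (assumed label set)

def build_bilstm() -> Sequential:
    """BiLSTM over a window of readings -> softmax over gesture classes."""
    model = Sequential([
        Bidirectional(LSTM(64), input_shape=(WINDOW, N_FEATURES)),
        Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

rf = RandomForestClassifier(n_estimators=100)  # fast option
bilstm = build_bilstm()                        # accurate option

def predict(window: np.ndarray, use_bilstm: bool = False) -> int:
    """Predict a gesture class from a (WINDOW, N_FEATURES) array."""
    if use_bilstm:
        return int(np.argmax(bilstm.predict(window[None, ...], verbose=0)))
    # The Random Forest sees the flattened window as one feature vector.
    return int(rf.predict(window.reshape(1, -1))[0])
```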
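
On the streaming side, the Arduino → Python link might look like the sketch below. It assumes the glove firmware prints one comma-separated reading per line over USB serial; the port name, baud rate, and line format are assumptions, not documented values.

```python
# Hypothetical PySerial reader for the glove. Assumes the Arduino prints one
# comma-separated reading per line, e.g.
# "512,498,530,501,515,0.01,-0.02,0.98,1.2,-0.4,0.1"  (5 flex + 6 IMU values)
import serial

PORT = "/dev/ttyUSB0"  # assumed; e.g. "COM3" on Windows
BAUD = 115200          # assumed baud rate

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        raw = ser.readline().decode("utf-8", errors="ignore").strip()
        if not raw:
            continue  # read timed out or blank line
        try:
            reading = [float(v) for v in raw.split(",")]
        except ValueError:
            continue  # skip malformed packets
        if len(reading) == 11:  # 5 flex + 3 accel + 3 gyro (assumed layout)
            print(reading)      # real backend: buffer readings into a window
```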
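
Finally, the speech half of "text and speech" can be a thin wrapper over an offline TTS engine. pyttsx3 is used here purely as an example; the README does not name the library the project actually uses.

```python
# Hypothetical text-to-speech step; pyttsx3 is an assumed stand-in for
# whatever TTS engine the project uses.
import pyttsx3

engine = pyttsx3.init()

def speak(word: str) -> None:
    """Read a predicted word aloud through the system TTS voice."""
    engine.say(word)
    engine.runAndWait()

speak("hello")
```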
For any inquiries or collaborations, feel free to reach out: