Machine Learning Experiment with OpenCode using MediaPipe and TensorFlow.js
A sign language recognition system built with MediaPipe and TensorFlow.js, using real-time hand tracking to recognize ASL alphabet letters. Pure vibe-coding results.
Using OpenCode with a local LLM served via Ollama, I trained a machine learning model for ASL sign language recognition, covering just the alphabet for now.
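As a rough sketch of what "recognition for just the alphabet" means at the output end: the classifier produces a probability per letter, and the app picks the most likely one. The letter set, class ordering, and confidence threshold below are assumptions for illustration, not taken from the project (J and Z are commonly excluded from static ASL alphabet models because signing them requires motion):

```javascript
// Assumed class list: the 24 static ASL letters in alphabetical order (no J, no Z).
const ASL_LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY".split("");

// Pick the highest-probability letter; return null when the model isn't confident.
function decodePrediction(probabilities, threshold = 0.8) {
  let best = 0;
  for (let i = 1; i < probabilities.length; i++) {
    if (probabilities[i] > probabilities[best]) best = i;
  }
  return probabilities[best] >= threshold ? ASL_LETTERS[best] : null;
}
```

In the browser, `probabilities` would come from something like `model.predict(input).dataSync()` on the TensorFlow.js side.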
The Stack
- Next.js 15 with the App Router and React 19 for the frontend
- Tailwind CSS v4 for styling (dark theme, gradient text, the whole modern look)
- MediaPipe HandLandmarker for real-time hand detection through the webcam
- TensorFlow.js running a custom CNN model entirely in the browser
- Firebase Hosting for deployment as a static site (my choice to host)
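To connect the two middle pieces of the stack: MediaPipe's HandLandmarker returns 21 hand landmarks as `{x, y, z}` points in normalized image coordinates, which then need to be turned into a fixed-size feature vector before a TensorFlow.js model can classify them. The preprocessing below is a common approach I'm assuming here, not necessarily the project's exact method: translate all landmarks relative to the wrist (landmark 0) and scale by the largest coordinate magnitude, so features are roughly position- and scale-invariant:

```javascript
// Convert 21 MediaPipe hand landmarks into a 63-value feature vector.
// Assumed preprocessing: wrist-relative translation + max-abs scaling.
function landmarksToFeatures(landmarks) {
  const wrist = landmarks[0];
  // Translate every point so the wrist sits at the origin.
  const rel = landmarks.map(p => [p.x - wrist.x, p.y - wrist.y, p.z - wrist.z]);
  const flat = rel.flat();
  // Scale so the largest magnitude is 1 (guard against an all-zero hand).
  const maxAbs = Math.max(...flat.map(Math.abs)) || 1;
  return flat.map(v => v / maxAbs); // 63 values, each in [-1, 1]
}
```

The resulting vector can be wrapped with `tf.tensor2d([features])` and fed to the in-browser model on every webcam frame.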
Tools Used
- Ollama, for hosting the LLM locally
- OpenCode, for the vibe coding
- qwen3-coder-next, the LLM used