Unblink is a camera monitoring application that runs AI vision models on your camera streams in real-time. Key features:
- 🤓 Contextual understanding
- 👀 Object detection
- 🔎 Intelligent search across your video feeds
- ⚡ Sub-second video streaming
Live demo: https://app.zapdoslabs.com
The following instructions are for running the source code using Bun. If you are looking for alternative methods, check out the Docker or Binary Executable doc.
- Bun runtime installed on your system
```sh
# Clone the repository
git clone https://github.com/tri2820/unblink
cd unblink

# Install dependencies
bun install

# Start the application
bun dev
```
The application will start and be accessible at http://localhost:3000
This can be further configured via the `PORT` and `HOSTNAME` environment variables.
For example:

```sh
PORT=4000 HOSTNAME=127.0.0.1 bun dev
```

or

```sh
HOSTNAME=0.0.0.0 bun dev
```

Add and configure multiple camera sources with support for RTSP, MJPEG, and other protocols.
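Camera stream URLs commonly look like the following (addresses, ports, and paths here are illustrative — the exact path depends on your camera's make and model, so check its documentation):

```sh
# Typical RTSP source (port 554 is the RTSP default)
rtsp://username:password@192.168.1.50:554/stream1

# Typical MJPEG-over-HTTP source
http://192.168.1.51:8080/video.mjpg
```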
Monitor all your cameras simultaneously with real-time feeds and status indicators.
Ask natural language questions about what's happening in your camera feeds.
Search through captured frames using natural language queries. Find specific events, objects, or scenes across your camera history.
Real-time object detection and tracking powered by D-FINE model.
Send detections & descriptions via webhooks and other communication channels.
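As a sketch of what consuming these webhooks might look like, here is a minimal TypeScript handler. The `DetectionEvent` payload shape below is an assumption for illustration, not the actual schema — check your instance's webhook settings for the real field names:

```typescript
// Hypothetical webhook payload — the real schema may differ.
interface DetectionEvent {
  camera: string;                                      // camera source name
  timestamp: string;                                   // ISO 8601 event time
  detections: { label: string; confidence: number }[]; // detected objects
  description?: string;                                // optional scene description
}

// Keep only detections above a confidence threshold before acting on them
// (e.g. before forwarding a notification).
function highConfidence(event: DetectionEvent, threshold = 0.8) {
  return event.detections.filter((d) => d.confidence >= threshold);
}

const example: DetectionEvent = {
  camera: "front-door",
  timestamp: "2024-01-01T12:00:00Z",
  detections: [
    { label: "person", confidence: 0.93 },
    { label: "cat", confidence: 0.41 },
  ],
  description: "A person approaches the front door.",
};

console.log(highConfidence(example).map((d) => d.label)); // [ "person" ]
```

Filtering on the receiving side like this keeps noisy low-confidence detections out of downstream alerts.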

Securely gate your instance with role-based access

- D-FINE: State-of-the-art object detection for identifying and tracking objects in real-time
- SmolVLM2 and Moondream 3: Vision-language models for understanding context and answering questions about camera feeds
Why is my CPU usage so high?
D-FINE object detection is resource-intensive. If you experience performance issues, consider disabling object detection from the Settings page. I plan to add optimizations for this soon.
Where is the code to run the models?
The model inference code is in a separate repository at https://github.com/tri2820/unblink-engine. This separation allows the AI models to run with GPU acceleration in Python, while keeping the app lightweight.
Currently I have the engine hosted on my GPU server, which you can use (the client app automatically connects to it), so hosting the engine yourself is optional. If you need to, you can modify the `ENGINE_URL` env var and the client app will connect there instead.
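For example, assuming `ENGINE_URL` is read like the other env vars above (the address here is illustrative — use wherever your own unblink-engine instance is listening):

```sh
# Point the client app at a self-hosted engine instance
ENGINE_URL=http://localhost:8001 bun dev
```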
For administration, please refer to ADMIN.md
| Feature | Status | Notes |
|---|---|---|
| Multi-camera Dashboard | ✅ Stable | Tested with several camera protocols |
| D-FINE Object Detection | ✅ Stable | |
| SmolVLM2 Integration | ✅ Stable | |
| Semantic Search | 🤔 WIP | Need to rework UI |
| Video Recording & Playback | 🚧 Coming Soon | |
| Motion Detection | 🚧 Coming Soon | |
| ONVIF Support | 🚧 Coming Soon | |
| Webhook | ✅ Stable | |
| Automation | 🚧 Coming Soon | |
Legend: ✅ Stable | 🤔 WIP | 🚧 Coming Soon
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
The heavy lifting of stream ingestion is handled by the amazing node-av library by seydx.
Built with ❤️ and ramen. Star Unblink to save it for later. 🌟




