This project creates a virtual camera filter that detects your face in real-time and overlays the rotating "The Laughing Man" logo from Ghost in the Shell: Stand Alone Complex. The output is sent to a virtual camera compatible with Google Meet, Zoom, and other video conferencing applications.
- 🎯 Face detection with MediaPipe (BlazeFace, high precision and performance)
- 🎨 Alpha blending for perfect transparency of the logo
- 📹 Virtual camera compatible with Google Meet, Zoom and other video calling apps
- ⚡ Optimized for a consistent 30 FPS (detection at reduced resolution, optional `--detect-every`)
- 🖼️ Virtual background (optional): selfie segmentation replaces the background with images (`wall*.jpg` in `assets/`)
- 🎭 Toggle logo style (white / transparent) and show/hide the overlay at runtime via keyboard
- 📥 Automatic download of MediaPipe models (face detector and selfie segmenter) on first run
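The first-run model download can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function name `ensure_model` and the cache layout are assumptions, and the real URLs come from MediaPipe's model hosting.

```python
from pathlib import Path
from urllib.request import urlretrieve

def ensure_model(path: Path, url: str) -> Path:
    """Download a MediaPipe model on first run; later runs reuse the cached file."""
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        urlretrieve(url, path)  # one-time download
    return path
```

The same helper would be called once for the face detector model and once for the selfie segmenter model.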
- OS: Linux (Ubuntu/Debian recommended)
- Python: 3.10 or higher
- Hardware: V4L2-compatible webcam
sudo apt update
sudo apt install -y v4l2loopback-dkms v4l2loopback-utils libcairo2-dev

For detailed step-by-step instructions, see INSTALL.md.
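If you want the v4l2loopback module to load automatically at boot instead of running `modprobe` after every restart, you can drop the module name and its options into the standard config directories (the paths below are the usual Ubuntu/Debian locations; adjust if your distribution differs):

```shell
# Optional: load v4l2loopback automatically at boot
echo "v4l2loopback" | sudo tee /etc/modules-load.d/v4l2loopback.conf
echo 'options v4l2loopback devices=1 video_nr=10 card_label="Laughing-Man-Cam" exclusive_caps=1' \
  | sudo tee /etc/modprobe.d/v4l2loopback.conf
```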
# 1. Clone the repository
git clone https://github.com/your-user/the-laughing-man.git
cd the-laughing-man
# 2. Load the v4l2loopback module
sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="Laughing-Man-Cam" exclusive_caps=1
# 3. Install with uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv
source .venv/bin/activate
uv pip install -e .
# 4. Run
python main.py
Start the virtual camera:
./start.sh
Or manually:
source .venv/bin/activate
python main.py
Configure in Google Meet:
- Open Google Meet in your browser
- Go to Settings → Video
- Select "Laughing-Man-Cam" as your camera
- Done! The filter will be applied automatically
Keyboard shortcuts (focus on the "Laughing Man Control" window):
- `t` or `Space`: Toggle logo style (e.g. white / transparent)
- `f`: Show or hide the overlay (camera only, or camera + logo on face)
- `q` or `Ctrl+C`: Quit the application
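Internally, the control window's key handling can be imagined along these lines. This is a hedged sketch: `handle_key` and the state keys are illustrative names, not the project's actual implementation.

```python
def handle_key(key: int, state: dict) -> dict:
    """Dispatch a keycode (e.g. cv2.waitKey(1) & 0xFF) to the runtime toggles."""
    if key in (ord("t"), ord(" ")):   # toggle logo style (white / transparent)
        state["white_style"] = not state["white_style"]
    elif key == ord("f"):             # show or hide the overlay
        state["show_overlay"] = not state["show_overlay"]
    elif key == ord("q"):             # quit the application
        state["running"] = False
    return state
```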
Stop the application:
- Press `q` in the control window, or `Ctrl+C` in the terminal
The system is optimized to maintain a constant 30 FPS:
- ✅ Face detection at reduced resolution (320px width) with MediaPipe BlazeFace
- ✅ Optional detection every N frames (`--detect-every`) to reduce CPU load
- ✅ Resized-logo caching
- ✅ Optimized alpha blending with NumPy
- ✅ Segmentation at half resolution when virtual background is enabled
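The alpha blending step from the list above can be sketched in NumPy. This is a simplified version: the project's `FaceOverlay` also handles logo resizing, caching, and frame-edge clipping, which are omitted here.

```python
import numpy as np

def blend_rgba(frame: np.ndarray, logo: np.ndarray, x: int, y: int) -> np.ndarray:
    """Alpha-blend an RGBA logo onto a 3-channel frame at (x, y), in place."""
    h, w = logo.shape[:2]
    roi = frame[y:y + h, x:x + w]                       # view into the frame
    alpha = logo[:, :, 3:4].astype(np.float32) / 255.0  # per-pixel opacity, shape (h, w, 1)
    rgb = logo[:, :, :3].astype(np.float32)
    roi[:] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
    return frame
```

Because `roi` is a view, the blended pixels are written directly into the frame with no per-pixel Python loop, which is what keeps this step cheap at 30 FPS.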
- `--no-background`: Disable virtual background and segmentation (camera only + logo overlay).
- `--no-preview`: Do not show a preview window (lower CPU; keyboard shortcuts unavailable).
- `--detect-every N`: Run face detection every N frames (e.g. `2` for less CPU usage).
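The flags above could be parsed with `argparse` along these lines; this is a sketch of the assumed CLI, not necessarily how main.py actually defines it:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="Laughing Man virtual camera")
    p.add_argument("--no-background", action="store_true",
                   help="disable virtual background and segmentation")
    p.add_argument("--no-preview", action="store_true",
                   help="do not show the preview window")
    p.add_argument("--detect-every", type=int, default=1, metavar="N",
                   help="run face detection every N frames")
    return p
```

Flags combine freely, e.g. `python main.py --no-background --detect-every 2` for the lowest CPU usage.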
Edit main.py and modify:

VIRTUAL_DEVICE = "/dev/video10"  # Change the number if necessary

In main.py, modify:
self.face_overlay = FaceOverlay(
logo_path=str(LOGO_PNG_PATH),
    min_detection_confidence=0.5  # Range: 0.0 - 1.0
)

- Verify that your webcam is connected:
ls /dev/video*
- Try another device:
CAMERA_DEVICE = "/dev/video1"
- Verify that v4l2loopback is loaded:
lsmod | grep v4l2loopback
- Reload the module:
sudo modprobe -r v4l2loopback && sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="Laughing-Man-Cam" exclusive_caps=1
- Reduce your webcam resolution
- Increase `min_detection_confidence` to 0.6 or 0.7
- Verify that the device exists:
v4l2-ctl --list-devices
- Restart the browser after starting the script
- Grant camera permissions to the browser
MIT License - See LICENSE file for more details
- Logo: Ghost in the Shell: Stand Alone Complex
- Face Detection: MediaPipe
- Virtual Camera: pyvirtualcam
- Inspiration: The iconic "Laughing Man" scene 🎭
Contributions are welcome! Please:
- Fork the project
- Create a branch for your feature (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request