# real-time-local-video-chat

A simple example of a real-time local video chat with llama.cpp.

## How to

1. Install llama.cpp: https://github.com/ggml-org/llama.cpp
2. Run a model (tested on a Mac):
   - Real-time model (quickest): `llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF`
   - Gemma 3 4B (best ratio of answer quality to response time): `llama-server -hf ggml-org/gemma-3-4b-it-GGUF`
3. Download visual-local-chat.html and open it in your browser (a sketch of the kind of request the page makes follows this list).
4. Start chatting :)
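For reference, here is a minimal sketch of the kind of request such a page can make: it grabs the current webcam frame and sends it to llama-server's OpenAI-compatible `/v1/chat/completions` endpoint. The port (8080, llama-server's default), the prompt, and the `describeFrame` helper are illustrative assumptions, not code taken from visual-local-chat.html.

```js
// Sketch (assumed request shape): send one webcam frame to a local llama-server.
async function describeFrame(videoEl) {
  // Draw the current video frame onto an off-screen canvas and encode it
  // as a base64 JPEG data URL.
  const canvas = document.createElement("canvas");
  canvas.width = videoEl.videoWidth;
  canvas.height = videoEl.videoHeight;
  canvas.getContext("2d").drawImage(videoEl, 0, 0);
  const frame = canvas.toDataURL("image/jpeg", 0.7);

  // llama-server exposes an OpenAI-compatible chat endpoint; multimodal
  // models accept image_url content parts carrying data URLs.
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      max_tokens: 100,
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "What do you see in this image?" },
          { type: "image_url", image_url: { url: frame } },
        ],
      }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Usage: attach the webcam to a <video> element, then call describeFrame
// on an interval to get a "real-time" loop.
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  const video = document.querySelector("video");
  video.srcObject = stream;
  video.play();
});
```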

