A fully functional chatbot that runs entirely on your local machine using Ollama and Streamlit.
- 🔒 Complete Privacy: All processing happens on your local machine
- 💰 Zero Cost: No API subscriptions or usage fees
- ⚡ Fast Experimentation: Quick responses with lightweight models
- 🎯 Simple Setup: Get started in minutes
- 🔧 Extensible: Easy to customize and extend
*(Demo: the chatbot running fully locally in Streamlit.)*
- Python 3.10 or higher
- Ollama installed on your system
Download and install Ollama from the official website:

https://ollama.com/download

Verify the installation:

```bash
ollama --version
```

Pull the Llama 3.2 3B model:

```bash
ollama pull llama3.2:3b
```

Clone this repository and install the Python dependencies:

```bash
git clone https://github.com/atulmkamble/zero_to_chatbot
cd zero_to_chatbot
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```
OLLAMA_URL="http://localhost:11434/api/generate"
```
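The scripts can then read this value at startup. Here is a minimal sketch, assuming the project loads `.env` with the `python-dotenv` package (an assumption, not confirmed by this README):

```python
# Sketch: load OLLAMA_URL from .env (assumes python-dotenv is installed)
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434/api/generate")
```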
The project is laid out as follows:

```
.
├── main.py            # Simple CLI chatbot
├── app.py             # Streamlit web interface
├── .env               # Environment variables
├── requirements.txt   # Python dependencies
└── README.md          # This file
```
Run the CLI chatbot:

```bash
python main.py
```

This will send a test prompt to the local model and display the response in your terminal.
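If you want to see the moving parts, the CLI flow boils down to a single HTTP POST to Ollama's generate endpoint. The sketch below is an illustration of that flow, not the repository's exact `main.py`; the prompt text is an assumption:

```python
# Sketch of a minimal CLI chatbot against Ollama's /api/generate endpoint.
# The actual main.py may differ; the prompt here is illustrative.
import os

import requests
from dotenv import load_dotenv

load_dotenv()
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434/api/generate")

payload = {
    "model": "llama3.2:3b",
    "prompt": "Hello! Introduce yourself in one sentence.",
    "stream": False,  # return one JSON object instead of a token stream
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])  # Ollama puts the generated text here
```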
Launch the web interface:

```bash
streamlit run app.py
```

Open your browser and navigate to the URL Streamlit prints in the terminal (http://localhost:8501 by default).
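For reference, a chat UI like this fits in a few lines of Streamlit. The sketch below is an assumption about the general shape of `app.py`, not a copy of it:

```python
# Sketch of a minimal Streamlit chat UI over Ollama (the real app.py may differ).
import os

import requests
import streamlit as st
from dotenv import load_dotenv

load_dotenv()
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434/api/generate")

st.title("Local Chatbot")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    reply = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
        timeout=120,
    ).json()["response"]

    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```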
- Ollama runs the Llama 3.2 3B model locally on your machine
- The Python scripts communicate with Ollama's API via HTTP requests (see the streaming sketch after this list)
- Streamlit provides an interactive web interface for the chatbot
- All conversations stay on your local machine - no data leaves your system
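By default, Ollama streams its reply as one JSON object per line, which is how you get token-by-token output. A minimal sketch of consuming that stream (field names follow Ollama's generate API; the prompt is illustrative):

```python
# Sketch: stream tokens from Ollama instead of waiting for the full reply.
import json

import requests

with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2:3b", "prompt": "Why is the sky blue?"},  # streaming is the default
    stream=True,
    timeout=120,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)  # partial token(s)
        if chunk.get("done"):  # final chunk signals the end of the stream
            print()
            break
```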
To use a different model, first pull it:

```bash
ollama pull <model-name>
```

Then update the model name in your code:

```python
"model": "<model-name>"
```

Read the full tutorial on my blog: Zero to Chatbot: Build a Local AI Assistant
