Sleek dark-mode AI chat with file/image/audio/PDF attach, chat history, streaming replies, token usage, and robust error handling, powered by Streamlit & Google Gemini (Files API)
Built with the tools and technologies:
Python | Streamlit | Google Gemini (google-genai) | Files API | python-dotenv
Aurora is a professional, dark-mode AI chat interface for Google Gemini with a polished gradient UI, sticky composer, and a modular backend. It supports image/audio/PDF uploads via the Files API, shows previews before send, persists uploaded files across turns, streams model output token-by-token, tracks usage, and handles errors gracefully (e.g., rate limits, temporary outages). The UI is tuned for productivity: model picker, usage dialog, suggestion chips, and a centered greeting on first run.
- Modern UI/UX: fixed bottom composer, gradient send button, attach modal with staged previews, centered hero + 2×2 suggestions.
- Files API only: images, audio, PDFs uploaded once; references reused across the session for follow-up questions.
- Streaming replies: incremental rendering with a visible "Thinking…" placeholder.
- Usage metrics: input/output/reasoning tokens per turn; running totals in a modal.
- Resilient backend: modular `backend/genai_backend.py` with graceful fallbacks and robust retries for transient server errors.
- Model switcher: choose between `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-preview-09-2025`, and the fallback `gemini-2.0-flash`.
- Dark, indigo-magenta-gold gradient theme; compact header with model select (≈1/8 width) and Usage button.
- Sticky composer with ＋ button → staging modal → Attach confirmation; tiny thumbnail chips above composer.
- Chat history with user/assistant bubbles; user messages display attached files inline.
- Image persistence: ask follow-up questions about previously attached files (without re-uploading).
- "Thinking…" indicator placed right after the user's latest message.
- Streaming output with smooth autoscroll behavior during the stream.
- Friendly error surfaces (429 suggests switching models; 503 explains the temporary outage; 400 offers guidance to simplify the request).
- Token usage (prompt/response/reasoning) aggregated across session.
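Per-turn usage aggregation amounts to summing the counts each response reports. A minimal sketch, using a plain dict in place of Streamlit's `st.session_state` (the `*_token_count` keys mirror Gemini's `usage_metadata` fields, but treat the exact names and helper functions here as assumptions, not the app's actual code):

```python
# Running token totals across a chat session.
# Key names mirror Gemini's usage_metadata (prompt/candidates/thoughts)
# but are illustrative; a real app would read them off the response object.

def new_totals():
    return {"input": 0, "output": 0, "reasoning": 0}

def record_turn(totals, usage):
    """Add one model call's token counts into the session totals."""
    totals["input"] += usage.get("prompt_token_count", 0)
    totals["output"] += usage.get("candidates_token_count", 0)
    totals["reasoning"] += usage.get("thoughts_token_count", 0)
    return totals

totals = new_totals()
record_turn(totals, {"prompt_token_count": 120,
                     "candidates_token_count": 340,
                     "thoughts_token_count": 55})
record_turn(totals, {"prompt_token_count": 80,
                     "candidates_token_count": 210})
```

Keeping the totals in session state is what lets the Usage dialog show running numbers without re-querying the API.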
Follow these steps to run the project locally.
```
aurora-chat-streamlit/
├── app.py                # Streamlit UI & chat orchestration
├── backend/
│   └── genai_backend.py  # google-genai client, Files API upload, generate/stream helpers
├── frontend/
│   └── scroll.py         # (optional helper) one-shot scroll utilities for UX polish
├── .env                  # contains GEMINI_API_KEY (not committed)
├── requirements.txt
├── LICENSE
└── README.md
```
- Python 3.9+
- A Google AI Studio API key with access to Gemini models
- Internet connectivity to call the API
- Create and activate a virtual environment (recommended).

  ```
  python -m venv .venv
  # Windows: .venv\Scripts\activate
  # macOS/Linux: source .venv/bin/activate
  ```

- Install dependencies.

  ```
  pip install -r requirements.txt
  ```
- Create a `.env` file at the project root:

  ```
  GEMINI_API_KEY=your_api_key_here
  ```

  `app.py` loads this via `python-dotenv`; setting the variable directly in the environment also works.
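For reference, `load_dotenv()` essentially reads `KEY=value` lines into the process environment without overwriting variables that are already set. A stdlib-only sketch of that behavior (the app should keep using `python-dotenv`; `parse_env` and `load_env` are hypothetical helpers for illustration):

```python
import os

def parse_env(text):
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

def load_env(text):
    """Apply parsed values, never overwriting existing environment vars."""
    for key, value in parse_env(text).items():
        os.environ.setdefault(key, value)

load_env("# local secrets\nGEMINI_API_KEY=your_api_key_here\n")
```

The setdefault-style merge is why an exported `GEMINI_API_KEY` in your shell takes precedence over the `.env` file.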
- Run the app:

  ```
  streamlit run app.py
  ```
Workflow inside the app:
- Pick a model from the header dropdown.
- (Optional) Click Usage to see token totals (updates after model calls).
- Start typing in the composer or click a suggestion chip.
- Click the ＋ button to open the attach modal → upload files → click Attach.
- Send your message. You'll see your message bubble (with files) followed by a "Thinking…" placeholder and streamed output.
- Ask follow-ups without re-uploading; the Files API references persist for the session.
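The retry-around-streaming pattern behind the resilient backend can be sketched independently of the SDK. Here `client` is any object exposing a `stream(...)` iterator of text chunks, standing in for `google-genai`'s streaming call; the function name, retry policy, and `ApiError` type are assumptions for illustration, not the actual `genai_backend.py` code:

```python
import time

TRANSIENT = {503}  # server-side errors worth retrying

class ApiError(Exception):
    def __init__(self, code):
        super().__init__(f"HTTP {code}")
        self.code = code

def stream_reply(client, contents, retries=3, backoff=0.0):
    """Yield text chunks, retrying the whole stream on transient errors."""
    for attempt in range(retries):
        try:
            yield from client.stream(contents)
            return
        except ApiError as err:
            if err.code not in TRANSIENT or attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# A fake client that fails once with a 503, then streams successfully.
class FlakyClient:
    def __init__(self):
        self.calls = 0
    def stream(self, contents):
        self.calls += 1
        if self.calls == 1:
            raise ApiError(503)
        yield from ["Hello", ", ", "world"]

fake = FlakyClient()
reply = "".join(stream_reply(fake, "hi"))
```

Note that a retry restarts the whole stream, so a real UI should clear any partially rendered chunks before retrying.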
- Dark gradient UI with sticky composer and compact header
- 2×2 suggestion chips and centered greeting on first load
- Files API integration; staged previews and inline message attachments
- Image/audio/PDF support; image persistence across turns
- Streaming responses with "Thinking…" indicator
- Error handling with user-friendly guidance (429/503/400)
- Token usage: prompt/response/reasoning + session totals
- Modular backend (`backend/genai_backend.py`) and frontend utility (`frontend/scroll.py`)
- Model picker: 2.5 Pro, 2.5 Flash, 2.5 Flash Preview, 2.0 Flash (fallback)
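The friendly error surfaces boil down to a code-to-message map with a generic fallback. A minimal sketch (the messages are illustrative, not the exact strings the app shows):

```python
# User-facing guidance per HTTP status; wording is illustrative.
FRIENDLY_ERRORS = {
    429: "Rate limit hit; try switching to a lighter model or wait a moment.",
    503: "The model is temporarily unavailable. This usually clears quickly; please retry.",
    400: "The request was rejected. Try simplifying the prompt or removing large attachments.",
}

def friendly_message(status_code):
    """Map an HTTP status to user-facing guidance, with a generic fallback."""
    return FRIENDLY_ERRORS.get(
        status_code, f"Unexpected error (HTTP {status_code}). Please retry."
    )
```

Centralizing the map keeps the UI layer free of status-code logic: the chat loop just catches the error and renders `friendly_message(code)` in place of the reply.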
- In-app model capability hints (vision/audio limits, file caps)
- Chat export (Markdown/HTML) and "Share link" (optional)
- Theming controls (font size/compact mode/high-contrast)
- Advanced file library view (rename/remove/inspect metadata)
- Settings drawer (system prompt, temperature, safety toggles)
- Unit tests and linting (pytest/ruff)
- Example deployments (Streamlit Community Cloud / Docker)
- Keyboard shortcuts cheat-sheet and accessibility polish (ARIA)
- Basic analytics (per-turn latency, success/error rates)
MIT license; see LICENSE for details.
Questions, feedback, or feature requests? Open an issue or reach out on LinkedIn.
- Maintainer: Brejesh Balakrishnan
- LinkedIn: https://www.linkedin.com/in/brejesh-balakrishnan-7855051b9/
- Project: https://github.com/brej-29/aurora-chat-streamlit
Contributions are very welcome! If you'd like to improve the UX, add tests, wire up deployments, or extend model features, please:
- Fork the repo and create a branch,
- Keep changes focused and documented,
- Open a PR with a clear description and screenshots where relevant.
If you use Aurora in your own project, I'd love to hear about it; please share a link!