A powerful semantic document search engine built with Next.js, FastAPI, Ollama, and ChromaDB. Generate AI-powered documents and search through them using natural language queries with vector embeddings.
## Features

- 🤖 **AI Document Generation**: Generate documents using Ollama's GPT-OSS model
- 🔍 **Semantic Search**: Search documents using natural language with vector embeddings
- 💾 **Vector Storage**: ChromaDB for efficient vector storage and retrieval
- 🎨 **Premium UI**: Beautiful dark-mode interface with glassmorphism effects
- ⚡ **Real-time**: Instant search results with semantic similarity scoring
- 📱 **Responsive**: Works seamlessly on desktop and mobile devices
## Tech Stack

### Frontend

- **Next.js 15** - React framework with the App Router
- **TypeScript** - Type-safe development
- **Tailwind CSS** - Utility-first styling
- **shadcn/ui** - Premium component library
- **Framer Motion** - Smooth animations
- **Bun** - Fast package manager and runtime

### Backend

- **FastAPI** - High-performance Python API framework
- **Ollama** - Local LLM inference
  - `gpt-oss:20b` - Text generation
  - `nomic-embed-text` - Vector embeddings
- **ChromaDB** - Vector database for semantic search
- **Uvicorn** - ASGI server
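For orientation, here is a minimal sketch of how a backend like this one can call both models through Ollama's local REST API. The `/api/generate` and `/api/embeddings` routes and the default port 11434 come from Ollama's documentation; the helper names are illustrative and not taken from this repository.

```python
# Minimal sketch (not the repository's actual code): calling the two Ollama
# models over Ollama's documented local REST API on its default port.
import requests

OLLAMA_URL = "http://localhost:11434"

def generate_text(prompt: str) -> str:
    """Non-streaming completion from gpt-oss:20b."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": "gpt-oss:20b", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def embed_text(text: str) -> list[float]:
    """Embedding vector from nomic-embed-text."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]
```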
## Project Structure

```
light/
├── apps/
│   ├── web/                  # Next.js frontend
│   │   ├── src/
│   │   │   ├── app/          # App router pages
│   │   │   ├── components/
│   │   │   │   └── ui/       # UI components
│   │   │   └── lib/          # Utilities
│   │   └── package.json
│   └── backend/              # FastAPI backend
│       ├── main.py           # API endpoints
│       ├── requirements.txt
│       └── chroma_db/        # Vector database storage
└── packages/
    └── ...                   # Shared packages
```
## Prerequisites

- Bun (v1.0+) - [Install Bun](https://bun.sh)
- Python (v3.10+)
- Ollama - [Install Ollama](https://ollama.com)
## Installation

1. **Clone the repository**

   ```bash
   git clone <repository-url>
   cd light
   ```

2. **Install Ollama models**

   ```bash
   ollama pull gpt-oss:20b
   ollama pull nomic-embed-text
   ```

3. **Install frontend dependencies**

   ```bash
   bun install
   ```

4. **Set up the backend**

   ```bash
   cd apps/backend
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   pip install -r requirements.txt
   ```
## Running the App

1. **Start Ollama** (if not running)

   ```bash
   ollama serve
   ```

2. **Start the backend** (in `apps/backend`)

   ```bash
   source .venv/bin/activate
   uvicorn main:app --reload --port 8000
   ```

3. **Start the frontend** (in the project root)

   ```bash
   bun run dev
   ```

4. **Access the application**
   - Frontend: http://localhost:3000
   - Backend API: http://localhost:8000
   - API Docs: http://localhost:8000/docs
## Usage

### Generating Documents

- Type a prompt in the main input box
- Click the send button (arrow icon)
- View the generated document below

### Searching

- Press `Cmd+K` (or `Ctrl+K`), or click the Search button in the navbar
- Type your search query
- Click on any result to view full details

### Keyboard Shortcuts

- `Cmd/Ctrl + K` - Open search dialog
- `Enter` - Submit prompt or search
- `Esc` - Close dialogs
## API

### Generate Document

Generate a new document and store it in the vector database.

**Request:**

```json
{
  "prompt": "Write a haiku about coding"
}
```

**Response:**

```json
{
  "id": "uuid-string",
  "prompt": "Write a haiku about coding",
  "content": "Silent keys tap code,\nLogic blooms in glowing lines,\nDawn breaks, bugs vanish."
}
```
### Search Documents

Search for documents using semantic similarity.
**Request:**

```json
{
  "query": "poems about programming"
}
```

**Response:**

```json
{
  "results": [
    {
      "id": "uuid-string",
      "content": "document content...",
      "metadata": { "prompt": "original prompt" },
      "distance": 0.123
    }
  ]
}
```
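Lower `distance` values indicate closer semantic matches. As with the generation endpoint, here is a hedged client-side sketch; the `/search` route is an assumption, not a path confirmed by this README.

```python
# Hypothetical client call -- assumes the route is POST /search on the
# backend at port 8000; adjust the path to match main.py if it differs.
import requests

resp = requests.post(
    "http://localhost:8000/search",
    json={"query": "poems about programming"},
    timeout=60,
)
resp.raise_for_status()
for hit in resp.json()["results"]:
    # Results are assumed to arrive sorted by ascending distance.
    print(f"{hit['distance']:.3f}  {hit['metadata']['prompt']}")
```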
## Scripts

- `bun run build`
- `bun run lint`
- `bun run check-types`

## Configuration

Edit `apps/backend/main.py` to change models:
```python
GENERATION_MODEL = "gpt-oss:20b"
EMBEDDING_MODEL = "nomic-embed-text"
```

The vector database is stored in `apps/backend/chroma_db/`.
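For context, here is a sketch of how a persistent ChromaDB store is typically wired up with the official Python client; the collection name and the `embed_text` helper (sketched under Tech Stack) are illustrative assumptions, not the actual contents of `main.py`.

```python
# Illustrative only -- main.py may differ. PersistentClient keeps vectors on
# disk, which is why apps/backend/chroma_db/ survives restarts.
import chromadb

client = chromadb.PersistentClient(path="chroma_db")
collection = client.get_or_create_collection(name="documents")  # assumed name

# Store a generated document alongside its embedding and original prompt.
collection.add(
    ids=["uuid-string"],
    embeddings=[embed_text("document content...")],  # helper sketched above
    documents=["document content..."],
    metadatas=[{"prompt": "original prompt"}],
)

# Query with an embedded search string; hits come back with distances.
hits = collection.query(
    query_embeddings=[embed_text("poems about programming")],
    n_results=5,
)
```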
## Troubleshooting

Ensure Ollama is running and the models are pulled:

```bash
ollama serve
ollama pull gpt-oss:20b
ollama pull nomic-embed-text
```

Check if the backend is running on port 8000:
```bash
curl http://localhost:8000
```

Clear cache and reinstall:
```bash
rm -rf node_modules bun.lock
bun install
```

## License

MIT
## Contributing

Contributions are welcome! Please open an issue or submit a pull request.