Light - Semantic Document Search Engine

A powerful semantic document search engine built with Next.js, FastAPI, Ollama, and ChromaDB. Generate AI-powered documents and search through them using natural language queries with vector embeddings.

Features

  • 🤖 AI Document Generation: Generate documents with the gpt-oss:20b model served through Ollama
  • πŸ” Semantic Search: Search documents using natural language with vector embeddings
  • πŸ’Ύ Vector Storage: ChromaDB for efficient vector storage and retrieval
  • 🎨 Premium UI: Beautiful dark mode interface with glassmorphism effects
  • ⚑ Real-time: Instant search results with semantic similarity scoring
  • πŸ“± Responsive: Works seamlessly on desktop and mobile devices

Tech Stack

Frontend

  • Next.js 15 - React framework with App Router
  • TypeScript - Type-safe development
  • Tailwind CSS - Utility-first styling
  • Shadcn UI - Premium component library
  • Framer Motion - Smooth animations
  • Bun - Fast package manager and runtime

Backend

  • FastAPI - High-performance Python API framework
  • Ollama - Local LLM inference
    • gpt-oss:20b - Text generation
    • nomic-embed-text - Vector embeddings
  • ChromaDB - Vector database for semantic search
  • Uvicorn - ASGI server
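
At its core, the semantic search in this stack reduces to comparing embedding vectors by distance. The sketch below illustrates the idea with tiny hand-made vectors in plain Python; in the real app the vectors come from nomic-embed-text, and storage plus nearest-neighbor lookup are delegated to ChromaDB.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity; smaller means more semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy "embeddings" -- in the real app, nomic-embed-text produces these.
docs = {
    "haiku about coding": [0.9, 0.1, 0.0],
    "recipe for soup":    [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend this embeds "poems about programming"

# The closest document by cosine distance is the best semantic match.
best = min(docs, key=lambda name: cosine_distance(query, docs[name]))
```

ChromaDB performs the same kind of comparison, just over real model embeddings and at scale, which is why results carry a `distance` field.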

Project Structure

light/
β”œβ”€β”€ apps/
β”‚   β”œβ”€β”€ web/              # Next.js frontend
β”‚   β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”‚   β”œβ”€β”€ app/      # App router pages
β”‚   β”‚   β”‚   β”œβ”€β”€ components/
β”‚   β”‚   β”‚   β”‚   └── ui/   # UI components
β”‚   β”‚   β”‚   └── lib/      # Utilities
β”‚   β”‚   └── package.json
β”‚   └── backend/          # FastAPI backend
β”‚       β”œβ”€β”€ main.py       # API endpoints
β”‚       β”œβ”€β”€ requirements.txt
β”‚       └── chroma_db/    # Vector database storage
└── packages/
    └── ...               # Shared packages

Getting Started

Prerequisites

  • Ollama installed and running locally
  • Bun (frontend package manager and runtime)
  • Python 3 (backend)

Installation

  1. Clone the repository

    git clone <repository-url>
    cd light
  2. Install Ollama models

    ollama pull gpt-oss:20b
    ollama pull nomic-embed-text
  3. Install frontend dependencies

    bun install
  4. Set up backend

    cd apps/backend
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    pip install -r requirements.txt

Running the Application

  1. Start Ollama (if not running)

    ollama serve
  2. Start the backend (in apps/backend)

    source .venv/bin/activate
    uvicorn main:app --reload --port 8000
  3. Start the frontend (in project root)

    bun run dev
  4. Access the application at http://localhost:3000 (the Next.js dev server's default port)

Usage

Generate Documents

  1. Type a prompt in the main input box
  2. Click the send button (arrow icon)
  3. View the generated document below

Search Documents

  1. Press Cmd+K (or Ctrl+K) or click the Search button in the navbar
  2. Type your search query
  3. Click on any result to view full details

Keyboard Shortcuts

  • Cmd/Ctrl + K - Open search dialog
  • Enter - Submit prompt or search
  • Esc - Close dialogs

API Endpoints

POST /generate

Generate a new document and store it in the vector database.

Request:

{
  "prompt": "Write a haiku about coding"
}

Response:

{
  "id": "uuid-string",
  "prompt": "Write a haiku about coding",
  "content": "Silent keys tap code,\nLogic blooms in glowing lines,\nDawn breaks, bugs vanish."
}
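
As a quick way to exercise this endpoint from a script, here is a small client using only the Python standard library. The endpoint path and JSON shapes come from the docs above; the helper names and the port are assumptions based on the run instructions, not code from this repo.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # uvicorn port from "Running the Application"

def build_generate_request(prompt: str) -> urllib.request.Request:
    """Build the POST /generate request; the body matches the schema above."""
    return urllib.request.Request(
        f"{API_BASE}/generate",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(prompt: str) -> dict:
    """Send the request and parse the JSON response (id, prompt, content)."""
    with urllib.request.urlopen(build_generate_request(prompt)) as resp:
        return json.loads(resp.read())
```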

POST /search

Search for documents using semantic similarity.

Request:

{
  "query": "poems about programming"
}

Response:

{
  "results": [
    {
      "id": "uuid-string",
      "content": "document content...",
      "metadata": { "prompt": "original prompt" },
      "distance": 0.123
    }
  ]
}
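
A matching client for this endpoint, again standard library only. It also sorts results so the closest match comes first, since ChromaDB's `distance` is lower for more similar documents; the helper names and port are illustrative assumptions.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # uvicorn port from "Running the Application"

def rank(results: list[dict]) -> list[dict]:
    """Lower distance = more semantically similar, so sort ascending."""
    return sorted(results, key=lambda r: r["distance"])

def search(query: str) -> list[dict]:
    """POST /search and return results, closest match first."""
    req = urllib.request.Request(
        f"{API_BASE}/search",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.loads(resp.read())
    return rank(payload["results"])
```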

Development

Build

bun run build

Lint

bun run lint

Type Check

bun run check-types

Configuration

Ollama Models

Edit apps/backend/main.py to change models:

GENERATION_MODEL = "gpt-oss:20b"
EMBEDDING_MODEL = "nomic-embed-text"
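
For context, both models are reached through Ollama's HTTP API (listening on port 11434 by default): `/api/generate` for text and `/api/embeddings` for vectors. The helpers below sketch those calls; they are illustrative, not code from apps/backend/main.py.

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default listen address

GENERATION_MODEL = "gpt-oss:20b"
EMBEDDING_MODEL = "nomic-embed-text"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request against the Ollama HTTP API."""
    return urllib.request.Request(
        f"{OLLAMA_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def complete(prompt: str) -> str:
    """Non-streaming text generation via POST /api/generate."""
    req = build_request(
        "/api/generate",
        {"model": GENERATION_MODEL, "prompt": prompt, "stream": False},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def embed(text: str) -> list[float]:
    """One embedding vector via POST /api/embeddings."""
    req = build_request("/api/embeddings", {"model": EMBEDDING_MODEL, "prompt": text})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]
```

Swapping models is then just a matter of changing the two constants, provided the replacement models have been pulled.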

ChromaDB Storage

The vector database is stored in apps/backend/chroma_db/

Troubleshooting

Ollama 404 Error

Ensure Ollama is running and models are pulled:

ollama serve
ollama pull gpt-oss:20b
ollama pull nomic-embed-text

Backend Connection Error

Check if the backend is running on port 8000:

curl http://localhost:8000

Frontend Build Issues

Clear cache and reinstall:

rm -rf node_modules bun.lock
bun install

License

MIT

Contributing

Contributions are welcome! Please open an issue or submit a pull request.
