Moderation App is a full-stack reference project that combines OpenAI's multimodal Moderation API with an optional PyTorch pre-filter service. It demonstrates how to layer custom rule logic on top of vendor moderation signals, expose the results through a FastAPI backend, and surface interactive tooling in a React UI. Docker, Kubernetes manifests, and dedicated docs round out the repo to help you move from local prototyping to production-style deployments.
- FastAPI backend that fans out to OpenAI Moderation and a local TorchScript classifier, merges the signals, and applies YAML-driven policy rules (`backend/`).
- PyTorch microservice that scores text and images before they hit OpenAI, letting you block/allow requests early (`torch_filter/`).
- Vite + React frontend for quick manual testing of text/image moderation pipelines (`frontend/`).
- Container-first workflow: Dockerfiles for each service, a `docker-compose.yml` for local orchestration, and Kubernetes manifests under `infra/`.
- Lightweight test suites (pytest) and rich reference docs under `docs/` explaining design decisions, pipelines, and rollout guidance.
- Clients hit the React/Vite UI (`frontend/`), which runs in a browser or behind a CDN; it sends moderation requests to a single API endpoint (`/api/...`) and never talks directly to third-party services.
- The Moderation Backend (`backend/`, FastAPI) orchestrates the moderation cascade: it first calls the internal Torch pre-filter, applies the policy from `policy.yaml`, and if needed forwards to OpenAI's Moderation API. It reads secrets (e.g., `OPENAI_API_KEY`) and thresholds from environment variables/config maps.
- The Torch Filter Service (`torch_filter/`, FastAPI + PyTorch) exposes `/moderate/text` and `/moderate/image` endpoints; it loads TorchScript artifacts (text/image) and returns risk scores used by the backend's cascade logic.
- External providers: the OpenAI Moderation API for the final decision path, plus optional storage for TorchScript artifacts (local volume, ConfigMap, PVC, or cloud storage).
- Infra: Kubernetes manifests (`infra/k8s-*.yaml`) deploy three Deployments + Services; the frontend Service typically sits behind an Ingress / load balancer, while the backend and filter stay internal ClusterIP endpoints.
- Data flow: Browser → Frontend → Backend → (Torch Filter → OpenAI if needed) → Backend response → Frontend renders indicators. Use ConfigMaps/Secrets/PVCs for policy, keys, and model assets; scale each deployment independently. A minimal sketch of the cascade follows this list.
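As a rough illustration of the cascade (a sketch, not the code in `backend/main.py`; the pre-filter payload, the `risk` response key, and the thresholds are all assumptions), a pre-filter-first flow might look like:

```python
# Hypothetical cascade sketch -- the /moderate/text payload, the "risk"
# response key, and the thresholds are assumptions, not backend/main.py.
import os

import httpx
from openai import OpenAI

TORCH_FILTER_URL = os.getenv("TORCH_FILTER_URL", "http://localhost:9000")
BLOCK_THRESHOLD = 0.90  # assumed values; the real ones live in backend/policy.yaml
ALLOW_THRESHOLD = 0.10

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def moderate_text(text: str) -> dict:
    # Stage 1: the cheap local TorchScript pre-filter returns a risk score in [0, 1].
    resp = httpx.post(f"{TORCH_FILTER_URL}/moderate/text", json={"text": text})
    risk = resp.json()["risk"]
    if risk >= BLOCK_THRESHOLD:
        return {"action": "block", "source": "torch_filter", "risk": risk}
    if risk <= ALLOW_THRESHOLD:
        return {"action": "allow", "source": "torch_filter", "risk": risk}

    # Stage 2: ambiguous scores fall through to OpenAI Moderation.
    result = client.moderations.create(
        model="omni-moderation-latest", input=text
    ).results[0]
    return {"action": "block" if result.flagged else "allow", "source": "openai"}
```

The point of the cascade is that the pre-filter only short-circuits when it is confident in either direction; everything in between pays for the more accurate vendor check.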
| Path | Description |
|---|---|
| `backend/` | FastAPI service, moderation cascade (`main.py`), OpenAI/PyTorch integration, config & tests. |
| `torch_filter/` | TorchScript-powered FastAPI microservice plus export scripts and test suite. |
| `frontend/` | React + Vite single-page app for interacting with the moderation API. |
| `infra/` | Kubernetes manifests for torch filter, backend, and frontend deployments/services. |
| `docs/` | Planning notes, architecture write-ups, and detailed setup guides. |
| `docker-compose.yml` | Spins up all services locally with sane defaults. |
- Python 3.11+ for backend and torch filter development environments.
- Node 18+/npm for the Vite frontend.
- Docker 24+ (optional but recommended for running the 3 services together).
- An OpenAI API key with access to the Moderation endpoint.
- TorchScript models for the prefilter service (place them in `torch_filter/filters/` or use the provided export scripts).
- Copy `.env` (or `.env.example` if you create one) and set `OPENAI_API_KEY`, `TORCH_FILTER_URL`, and any policy overrides.
- Build and start everything: `docker compose up --build`
- Visit the React app at http://localhost:5173. It proxies API calls to the backend on port `8000`, which in turn speaks to the torch filter on `9000` and OpenAI (an example request follows this list).
- Stop the stack with `docker compose down` when you're done.
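Once the stack is up you can also exercise the backend directly, bypassing the UI. The route and field names below are assumptions; check `backend/main.py` for the actual contract:

```python
# Hypothetical smoke test against the local stack; the /api/moderate/text
# route and the payload/response fields are assumptions.
import httpx

resp = httpx.post(
    "http://localhost:8000/api/moderate/text",
    json={"text": "some user-generated content"},
    timeout=30.0,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"action": "allow", ...}
```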
```
python -m venv .venv
.venv\Scripts\activate
pip install -r torch_filter\requirements.txt
uvicorn torch_filter.service:app --reload --port 9000
```

Drop TorchScript artifacts in `torch_filter/filters/` (`model.ts`, `text_model.ts`) or export them via the `export_*.py` scripts.
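If you export artifacts yourself, the standard TorchScript recipe is to script (or trace) an `nn.Module` and save it. Below is a minimal sketch; the tiny classifier is a stand-in, since the real architectures live in the `export_*.py` scripts:

```python
# Minimal TorchScript export sketch. The real export_*.py scripts may differ;
# this tiny classifier is a stand-in for the project's actual architecture.
import torch
import torch.nn as nn


class TinyTextClassifier(nn.Module):
    def __init__(self, vocab_size: int = 30_000, dim: int = 64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len) token ids -> one risk logit per batch entry.
        return self.head(self.embed(token_ids))


model = TinyTextClassifier().eval()
scripted = torch.jit.script(model)  # torch.jit.trace also works, given example input
scripted.save("torch_filter/filters/text_model.ts")
```

The service can then `torch.jit.load()` the artifact without importing the Python class definition, which is what makes TorchScript convenient for a standalone filter service.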
```
python -m venv .venv
.venv\Scripts\activate
pip install -r backend\requirements.txt
rem Or put these two variables in backend/.env instead
set OPENAI_API_KEY=sk-...
set TORCH_FILTER_URL=http://localhost:9000
uvicorn backend.main:app --reload --port 8000
```

Policy changes live in `backend/policy.yaml`; edit thresholds/actions to tune decisions (`allow`, `warn`, `support`, `block`, `ban`). A sketch of how such rules might be evaluated follows.
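This README does not spell out the schema of `backend/policy.yaml`, so the following is purely illustrative: a first-match-wins threshold table mapping risk scores to the actions listed above, assuming a made-up `rules`/`min_score` layout.

```python
# Illustrative only -- the real schema of backend/policy.yaml may differ.
import yaml

POLICY = yaml.safe_load("""
rules:                                 # evaluated top-down, first match wins
  - {min_score: 0.95, action: ban}
  - {min_score: 0.85, action: block}
  - {min_score: 0.60, action: support}
  - {min_score: 0.40, action: warn}
default_action: allow
""")


def decide(score: float) -> str:
    for rule in POLICY["rules"]:
        if score >= rule["min_score"]:
            return rule["action"]
    return POLICY["default_action"]


assert decide(0.97) == "ban"
assert decide(0.50) == "warn"
assert decide(0.05) == "allow"
```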
```
cd frontend
npm install
npm run dev -- --host
```

Expose the backend URL to the UI through `VITE_API_BASE` (an env var or a `.env` file in `frontend/`).
- Backend tests: `pytest backend/tests` (a hypothetical example follows this list).
- Torch filter tests: `pytest torch_filter/tests`
- Frontend linting/tests: add your preferred tooling (e.g., `npm run test`); Vite scaffolding is in place.
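For orientation, a backend test might take the shape below. The route name and validation behavior are assumptions rather than a copy of anything in `backend/tests`:

```python
# Hypothetical test shape -- the route and expected status are assumptions.
from fastapi.testclient import TestClient

from backend.main import app  # the same app the uvicorn command serves


def test_moderate_text_rejects_empty_payload():
    client = TestClient(app)
    response = client.post("/api/moderate/text", json={})
    # With a pydantic request model, a missing required field yields HTTP 422.
    assert response.status_code == 422
```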
- Docker images are multi-stage and ready for registries; adjust environment variables through your orchestrator.
- `infra/` contains Kubernetes manifests for each service (Deployments + Services). Update container image names and any secrets before applying.
- For production, replace the example `.env` values with secrets, provision persistent storage for TorchScript models, and consider enabling HTTPS/CDN for the frontend.
Additional planning docs, architecture diagrams, and playbooks are available under `docs/`. Start with `Startup Guide - Moderation App (open Ai Moderation + Py Torch Prefilter).docx` for a narrative walkthrough.