# AI Video Clipper

AI Video Clipper is a full‑stack app that analyzes long videos, finds the most engaging moments, and generates ready‑to‑share short clips with scores, thumbnails, previews, and download options.

Upload a video (or provide a URL), set your clip options, let the backend process it with FFmpeg + AI, and manage all generated clips in a modern React UI.
## Features

- **🔼 Video upload & URL input**
  - Upload local video files (mp4, avi, mov, wmv, flv).
  - (Optional) Download and process videos from remote URLs using `yt-dlp-exec`.
- **🧠 AI‑powered analysis**
  - Extracts audio and transcribes it via a pluggable AI provider.
  - Suggests clips automatically based on content and options.
  - Per‑clip scores for engagement, clarity, and hook.
- **🎯 Flexible clip options**
  - Time range selection (start / end seconds).
  - Preset lengths: `short`, `medium`, `long`, `custom`.
  - Aspect ratios: `9:16`, `16:9`, `1:1`, `auto`.
  - Templates: `clean`, `creator`, `meme`.
  - Extra options: meme hooks, captions, background music, hooks, CTAs, etc.
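The option set above can be sketched as a TypeScript type. This is a hypothetical shape for illustration only; the project's real definitions live in the shared `types.ts` files and may differ.

```typescript
// Hypothetical sketch of the clip options shape; the real
// definitions live in the project's shared types.ts files.
type LengthPreset = "short" | "medium" | "long" | "custom";
type AspectRatio = "9:16" | "16:9" | "1:1" | "auto";
type Template = "clean" | "creator" | "meme";

interface ClipOptions {
  startSec?: number;        // optional time range start
  endSec?: number;          // optional time range end
  lengthPreset: LengthPreset;
  aspectRatio: AspectRatio;
  template: Template;
  memeHook: boolean;
  captions: boolean;
  backgroundMusic: boolean;
  cta: boolean;
}

// Example: options for a vertical, captioned short clip.
const defaultOptions: ClipOptions = {
  lengthPreset: "short",
  aspectRatio: "9:16",
  template: "clean",
  memeHook: false,
  captions: true,
  backgroundMusic: false,
  cta: false,
};
```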
- **🖼️ Modern React UI**
  - Source panel, options panel, and results grid.
  - Glassmorphism + gradient‑based cards for clips.
  - Detailed preview modal with a custom progress bar and overlay controls.
  - Tag chips for aspect ratio, caption status, template, and meme hooks.
  - Engagement score visualization with colored labels and progress bars.
  - Bulk download of selected clips and per‑clip download buttons.
- **🎧 Backend processing**
  - Uses FFmpeg / ffprobe via `fluent-ffmpeg` for:
    - Probing metadata (duration, resolution, bitrate).
    - Extracting audio for transcription.
    - Cutting clips on demand.[^1]
  - Job system: analyze once, then poll for job and clips.
  - On‑demand clip generation and streaming via Express.
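Under the hood, cutting a clip boils down to a single FFmpeg invocation. As a rough sketch of what that invocation looks like (the project actually drives FFmpeg through `fluent-ffmpeg`; `buildClipArgs` below is a hypothetical helper, not part of the codebase):

```typescript
// Hypothetical helper illustrating the ffmpeg arguments a clip
// cut boils down to. Using -t (duration) rather than -to avoids
// the timestamp-reset pitfall of input seeking with -ss.
function buildClipArgs(
  input: string,
  output: string,
  startSec: number,
  endSec: number
): string[] {
  return [
    "-ss", String(startSec),             // seek to clip start
    "-i", input,                         // source video
    "-t", String(endSec - startSec),     // clip duration
    "-c", "copy",                        // stream copy: fast, no re-encode
    output,
  ];
}

const args = buildClipArgs("temp/uploads/in.mp4", "temp/clips/clip1.mp4", 12, 30);
// e.g. spawn("ffmpeg", args)
```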
## Tech Stack

### Frontend

- React (with hooks, function components)
- TypeScript
- Vite as dev server / bundler
- Tailwind CSS (utility classes for styling)
- lucide-react for icons
- Custom services for:
  - HTTP API calls (`axios`)
  - Blob download helper for video files
### Backend

- Node.js + TypeScript
- Express HTTP API
- fluent-ffmpeg for video/audio processing[^2]
- FFmpeg / ffprobe (external binaries, installed on the host)[^1]
- multer for file uploads
- yt-dlp-exec (optional) for downloading videos from URLs[^3]
- zod for runtime request validation
- Simple in‑process “DB” (utility functions in `db.ts`) for jobs and clips
- Pluggable AI provider (`aiProvider.ts`) to handle:
  - Transcription (`transcribe`)
  - Clip suggestion (`suggestClips`)
## Project Structure

```text
ai-video-clipper/
├── backend/
│   ├── src/
│   │   ├── routes/
│   │   │   └── video.ts           # /api/video/* routes
│   │   ├── services/
│   │   │   ├── videoProcessor.ts  # FFmpeg, ffprobe, yt-dlp logic
│   │   │   └── aiProvider.ts      # AI provider abstraction
│   │   ├── db.ts                  # in-memory job/clip storage
│   │   ├── types.ts               # shared backend types
│   │   └── server.ts              # Express app bootstrap
│   ├── package.json
│   └── tsconfig.json
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   │   ├── SourcePanel.tsx
│   │   │   ├── OptionsPanel.tsx
│   │   │   └── ResultsGrid.tsx    # clips grid + preview modal
│   │   ├── services/
│   │   │   └── api.ts             # axios API client + helpers
│   │   ├── types.ts               # shared frontend types
│   │   └── App.tsx
│   ├── index.html
│   ├── package.json
│   └── tsconfig.json
├── .gitignore
├── LICENSE
└── README.md
```
## Prerequisites

Before running the project locally, install:

- **Node.js** (LTS recommended, e.g. 18+).
- **npm** (comes with Node) or your preferred package manager.
- **FFmpeg** (must include the `ffmpeg` and `ffprobe` binaries). You can download Windows builds from gyan.dev or BtbN’s GitHub releases.[^1]
  - On Windows, an easy way is to use a package manager:

    ```shell
    choco install ffmpeg
    # or
    winget install "FFmpeg (Essentials Build)"
    ```

  - Make sure `ffmpeg` and `ffprobe` are available in your `PATH`.
- **(Optional, for URL sources)** yt-dlp via `yt-dlp-exec`:
  - The Node package wraps `yt-dlp`. You may still need the underlying tool depending on your environment.[^3]
- **AI provider credentials** (if required by your chosen provider, e.g. OpenAI, Gemini, etc.), configured via environment variables in the backend.
## Getting Started

Clone the repository:

```shell
git clone https://github.com/<your-username>/ai-video-clipper.git
cd ai-video-clipper
```

**Backend**

```shell
cd backend
npm install
```

Create a `.env` file in `backend/` (or use `.env.example` if present) with values like:

```text
PORT=3001
AI_PROVIDER=gemini   # or 'openai', etc.
AI_API_KEY=your_api_key_here
```

(Adjust the names/keys to match your actual `aiProvider` implementation.)

```shell
npm run dev
```

This starts the Express server (default port 3001).

**Frontend**

In another terminal:

```shell
cd frontend
npm install
```

Configure the API base URL in `frontend/src/services/api.ts` if needed (e.g. `http://localhost:3001/api`), then start the dev server:

```shell
npm run dev
```

Vite will show a local URL (usually `http://localhost:3000/`).

Open the frontend URL in your browser; the app will talk to the backend on port 3001.
## Usage

1. **Start both servers**
   - Backend: `npm run dev` inside `backend/`.
   - Frontend: `npm run dev` inside `frontend/`.
2. **Upload or specify a video**
   - In the Source panel:
     - Choose **Upload** and pick a file from your machine, or
     - Choose **URL** (if enabled) and paste a video link.
   - Wait for the upload/URL validation and metadata (duration) to appear.
3. **Configure clip options**
   - Choose a time range (start / end).
   - Pick a length preset (short / medium / long / custom).
   - Select an aspect ratio and template.
   - Toggle options: meme hook, captions, hook title, call to action, background music, etc.
4. **Analyze & generate**
   - Click **Analyze & Generate Clips**.
   - The backend:
     - Probes the video with ffprobe.
     - Extracts audio with FFmpeg.
     - Sends the audio to the AI provider for transcription.
     - Requests clip suggestions from the AI.
     - Saves clips in the in‑memory DB.
   - The frontend polls the job endpoint until clips are ready.
5. **Review generated clips**
   - Clips appear in the Results grid:
     - A card for each clip with title, time range, aspect ratio, template, and tags.
     - Engagement score (High / Medium / Low) and percent.
   - Click **Preview** for in‑modal playback with a custom overlay player and progress bar.
   - Click **Download** to download a single clip.
   - Select multiple clips and click **Download All** for bulk download.
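The High / Medium / Low label on each clip card is derived from the percent score. A minimal sketch of one plausible mapping (the actual thresholds live in the frontend components and may differ):

```typescript
// Hypothetical mapping from a 0-100 engagement score to the
// High / Medium / Low label shown on each clip card; thresholds
// here are illustrative, not the project's actual values.
function scoreLabel(score: number): "High" | "Medium" | "Low" {
  if (score >= 70) return "High";
  if (score >= 40) return "Medium";
  return "Low";
}

scoreLabel(85); // "High"
```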
## API Overview

Base URL (backend): `/api/video`

Key routes:

- **`POST /api/video/upload`**
  - Multipart upload with a `file` field.
  - Returns `fileId` and basic metadata.
- **`POST /api/video/analyze`**
  - Body:

    ```jsonc
    {
      "source": { "type": "file" | "url", "fileId": "...", "url": "..." },
      "options": { /* clip options */ }
    }
    ```

  - Validated with `zod`; returns `jobId` and initial `videoMeta`.
- **`GET /api/video/job/:jobId`**
  - Returns job status and an array of clip definitions (with scores, captions, thumbnails).
- **`GET /api/video/clip/:jobId/:clipId/download`**
  - Streams the generated clip.
  - Generates the clip on demand using FFmpeg if not already present.
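The frontend's "poll the job endpoint until clips are ready" step can be sketched as a small loop. This is a minimal sketch, not the project's actual `api.ts` code; the fetch function is injected so the loop can run without a backend, and the `JobResult` field names are assumptions.

```typescript
// Minimal polling loop for GET /api/video/job/:jobId. The fetchJob
// function is injected so the loop itself needs no running server;
// the JobResult shape below is illustrative.
interface JobResult {
  status: "pending" | "processing" | "done" | "error";
  clips: { id: string; title: string }[];
}

async function pollJob(
  fetchJob: () => Promise<JobResult>,
  intervalMs = 1000,
  maxAttempts = 60
): Promise<JobResult> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchJob();
    if (job.status === "done" || job.status === "error") return job;
    // wait before the next poll
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("job did not finish in time");
}
```

In the real client, `fetchJob` would wrap something like `axios.get(`/api/video/job/${jobId}`)` from `frontend/src/services/api.ts`.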
## Key Dependencies

### Backend

- `express` – HTTP server / routing
- `multer` – multipart form‑data parsing for file uploads
- `fluent-ffmpeg` – Node wrapper for FFmpeg / ffprobe[^2]
- `yt-dlp-exec` – wrapper for `yt-dlp` for URL downloads[^3]
- `zod` – runtime validation of request payloads
- `dotenv` – environment variable loading
- `uuid` – generation of job and file IDs
- `typescript`, `ts-node`/`tsx` – TypeScript tooling

### Frontend

- `react`, `react-dom`
- `typescript`
- `vite`
- `axios` – HTTP client
- `lucide-react` – icon set
- `tailwindcss`, `postcss`, `autoprefixer` – styling pipeline
## Notes

- **FFmpeg / ffprobe**
  - Must be installed and either:
    - Available on `PATH`, or
    - Explicitly configured in `videoProcessor.ts` via `ffmpeg.setFfmpegPath` / `setFfprobePath`.[^2]
- **Temporary folders**
  - The backend uses subfolders under `temp/`:
    - `temp/uploads` – raw uploaded videos
    - `temp/downloads` – URL‑downloaded videos
    - `temp/audio` – extracted audio for transcription
    - `temp/clips` – generated clips
  - These paths are relative to `process.cwd()`; ensure the directories are writable.
- **Database**
  - The current implementation uses an in‑memory store for jobs and clips (via helper functions in `db.ts`).
  - For production: replace with a proper database (PostgreSQL, MongoDB, etc.).
- **AI Provider**
  - `aiProvider.ts` acts as a thin abstraction.
  - Plug in your favorite provider by implementing:
    - `transcribe({ filePath, startSec, endSec })`
    - `suggestClips({ transcript, options })`
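The provider abstraction above can be sketched as a TypeScript interface plus a stub implementation. The argument and result shapes here are assumptions for illustration, not the project's exact types; a stub like this is handy for local development without API keys.

```typescript
// Sketch of the provider abstraction; argument and result shapes
// are assumptions, not the project's exact types.
interface TranscribeInput {
  filePath: string;
  startSec?: number;
  endSec?: number;
}

interface ClipSuggestion {
  title: string;
  startSec: number;
  endSec: number;
  score: number; // 0-100 engagement estimate
}

interface AiProvider {
  transcribe(input: TranscribeInput): Promise<string>;
  suggestClips(input: { transcript: string; options: unknown }): Promise<ClipSuggestion[]>;
}

// Stub provider: returns canned data, useful for development
// without real AI credentials.
const stubProvider: AiProvider = {
  async transcribe() {
    return "stub transcript";
  },
  async suggestClips({ transcript }) {
    return [{ title: transcript.slice(0, 40), startSec: 0, endSec: 15, score: 50 }];
  },
};
```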
## Scripts

### Backend

- `npm run dev` – start backend in watch mode (TypeScript).
- `npm run build` – build backend TypeScript to JavaScript.
- `npm start` – run built backend.

### Frontend

- `npm run dev` – start Vite dev server.
- `npm run build` – production build.
- `npm run preview` – preview production build locally.
## Roadmap

- Persist jobs and clips in a real database.
- Add authentication and user accounts.
- Support more source platforms via URL (Twitch, Facebook, etc.).
- Advanced clip editing in the browser (trim, reorder, overlay text).
- Export directly to platforms (YouTube Shorts, Reels, TikTok).
## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.[^4]
## Footnotes

[^1]: FFmpeg & ffprobe documentation – https://ffmpeg.org/ffprobe.html and Windows builds from gyan.dev/BtbN.
[^2]: `fluent-ffmpeg` – Node wrapper for FFmpeg: https://www.npmjs.com/package/fluent-ffmpeg.
[^3]: `yt-dlp-exec` – Node wrapper for yt-dlp: https://www.npmjs.com/package/@borodutch-labs/yt-dlp-exec.
[^4]: MIT License overview – https://choosealicense.com/licenses/mit/.