
πŸš€ n8n RAG Automation using Ollama & Pinecone

A fully automated Retrieval-Augmented Generation (RAG) pipeline built with n8n, Ollama (local LLMs), and Pinecone Vector Database.

This project demonstrates how to ingest documents, generate embeddings, store them in a vector database, and query them using an AI Agent with real context.


✨ Features

  • πŸ“ Automated document ingestion from Google Drive
  • βœ‚οΈ Intelligent document chunking
  • 🧠 Embedding generation using local Ollama models
  • πŸ“¦ Scalable vector storage with Pinecone
  • πŸ’¬ Context-aware chat using n8n AI Agent
  • πŸ”’ Runs locally with no external LLM dependency

πŸ“‚ Folder Structure

```
n8n-rag-automation-ollama-pinecone/
β”‚
β”œβ”€β”€ workflows/
β”‚   └── file-ingestion-pipeline_rag-chat-automation.json
β”‚
β”œβ”€β”€ screenshots/
β”‚   β”œβ”€β”€ file-ingestion-workflow.png
β”‚   └── rag-chat-workflow.png
β”‚
β”œβ”€β”€ .env.example
β”œβ”€β”€ .gitignore
└── README.md
```
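The repository ships a `.env.example` for credentials. Its exact contents aren't reproduced here, but for this stack it would typically look something like the following — the variable names below are illustrative assumptions, not the file's actual keys:

```
# Illustrative example only β€” check .env.example for the real variable names
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_INDEX_NAME=your-index-name
OLLAMA_BASE_URL=http://localhost:11434
```

Copy it to `.env` and fill in real values; the included `.gitignore` is there to keep the populated `.env` out of version control.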

πŸ—οΈ Architecture Overview

File Ingestion Pipeline

  • Google Drive Trigger (file added/updated)
  • File download
  • Recursive Character Text Splitter
  • Embeddings via nomic-embed-text
  • Store vectors in Pinecone
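The chunking step above can be sketched in plain Python. This is a hedged, minimal reimplementation of what a recursive character text splitter does — split on the coarsest separator first, recursing into finer separators only for pieces that still exceed the size limit. The separator list and chunk size are illustrative, not the workflow's actual node settings:

```python
# Minimal sketch of a recursive character text splitter.
# Split on the largest separator first ("\n\n" for paragraphs), and only
# recurse into finer separators for pieces still over the size limit.
def recursive_split(text, separators=("\n\n", "\n", " "), chunk_size=200):
    """Split text into chunks of at most chunk_size characters."""
    if len(text) <= chunk_size:
        return [text] if text else []
    if not separators:
        # No separator left: hard-split at the size limit.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    sep, rest = separators[0], separators[1:]
    chunks, current = [], ""
    for part in text.split(sep):
        candidate = current + sep + part if current else part
        if len(candidate) <= chunk_size:
            current = candidate
        elif len(part) > chunk_size:
            # The part alone is too large; recurse with finer separators.
            if current:
                chunks.append(current)
            chunks.extend(recursive_split(part, rest, chunk_size))
            current = ""
        else:
            if current:
                chunks.append(current)
            current = part
    if current:
        chunks.append(current)
    return chunks
```

Keeping whole paragraphs or sentences together like this matters because each chunk is embedded independently — a chunk cut mid-sentence embeds poorly and retrieves poorly.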

RAG Chat Pipeline

  • Chat trigger
  • AI Agent (tool-enabled)
  • Semantic search from Pinecone
  • Context-aware responses using Llama 3.2
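The retrieval step in this pipeline can be sketched as follows. In the actual workflow Pinecone performs the similarity search server-side; this hedged, dependency-free version just shows the underlying idea — embed the question, rank stored vectors by cosine similarity, and hand the top-k chunks to the LLM as context. Function and variable names are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (the metric this index uses)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, index, k=3):
    """index: list of (chunk_text, vector) pairs, as stored in the vector DB.
    Returns the k chunk texts whose vectors are most similar to the query."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in index]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]
```

The retrieved chunks are then injected into the agent's prompt, which is what makes the Llama 3.2 responses "context-aware" rather than purely parametric.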

πŸ–ΌοΈ Automation Workflow

Screenshots of both pipelines are included in the `screenshots/` folder: `file-ingestion-workflow.png` and `rag-chat-workflow.png`.

🧠 Models Used

| Setting | Value |
| --- | --- |
| Chat / Agent model | llama3.2:latest |
| Embedding model | nomic-embed-text |
| Embedding dimension | 768 |
| Similarity metric | cosine |
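One practical consequence of the table above: the Pinecone index must be created with the same dimension the embedding model emits (768 for nomic-embed-text), or upserts will be rejected. A minimal guard you could run before upserting — the function name is illustrative, not part of any library:

```python
EXPECTED_DIM = 768  # dimension of nomic-embed-text vectors, per the table above

def check_dimension(embedding, expected=EXPECTED_DIM):
    """Raise if an embedding's length doesn't match the index dimension."""
    if len(embedding) != expected:
        raise ValueError(
            f"embedding has {len(embedding)} dims, index expects {expected}"
        )
    return embedding
```

Dimension mismatches are a common failure mode when swapping embedding models, since each model has its own fixed output size.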

βš™οΈ Prerequisites

  • n8n (local or Docker)
  • Ollama installed
  • Pinecone account
  • Google Drive credentials (for ingestion)

πŸš€ Setup Instructions

1️⃣ Install Ollama Models

```bash
ollama pull llama3.2
ollama pull nomic-embed-text
```

🌟 Final Notes

This project was built to explore how automation, local LLMs, and vector databases come together to form real-world AI systems.
Everything here is designed to be practical, transparent, and extensible.

If this repository helps you learn, build, or experiment with RAG pipelines, feel free to fork it, adapt it, or improve it.

Contributions, suggestions, and discussions are always welcome.

⭐ If you found this useful, consider starring the repo β€” it really helps!

Happy building πŸš€
