An interactive, animated, AI-augmented portfolio built with Next.js, featuring a floating conversational assistant that understands the site structure, can guide navigation, trigger UI interactions, and answer questions about professional background, projects, and skills.
Interactive floating chat (see `src/components/chat/`) powered by an LLM (`meta/llama-3.1-70b-instruct`) with:
- Context-aware responses constrained to professional info (configured in `src/config/ai.ts`)
- Conversation history & graceful fallback messaging (`ai-chat-responses.ts`)
- Tool/action architecture: the AI can request UI actions (navigate / scroll / theme switch / modal / download) → normalized & displayed with rich status indicators (`navigation-indicator.tsx`)
- Execution feedback UI: expandable success/error panels with action summaries
- Accessibility & reduced-motion awareness
Structured types in `src/types/tools.ts` define:
- Strongly typed `ToolContext`, `ToolAction`, `ToolResult`, and execution metadata
- Extensible pattern for adding future tools (e.g., focus/highlight)
- Normalization of action variants for resilient AI function calling
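As an illustration, the normalization step over a typed action union might look like this sketch (the type and function names here are simplified stand-ins, not the actual exports of `src/types/tools.ts`):

```typescript
// Illustrative sketch only — the real contracts live in src/types/tools.ts.
type ToolAction =
  | { type: "navigate"; target: string }
  | { type: "scroll"; target: string }
  | { type: "theme"; value: "light" | "dark" };

// The model may emit loose synonyms; map them onto the canonical union.
const ALIASES: Record<string, ToolAction["type"]> = {
  goto: "navigate",
  go_to: "navigate",
  scroll_to: "scroll",
  set_theme: "theme",
};

function normalizeAction(
  raw: { type: string; target?: string; value?: string }
): ToolAction | null {
  const type = ALIASES[raw.type] ?? raw.type;
  switch (type) {
    case "navigate":
      return raw.target ? { type: "navigate", target: raw.target } : null;
    case "scroll":
      return raw.target ? { type: "scroll", target: raw.target } : null;
    case "theme":
      return raw.value === "light" || raw.value === "dark"
        ? { type: "theme", value: raw.value }
        : null;
    default:
      return null; // unknown action: show an error indicator instead of crashing
  }
}
```

Returning `null` for unrecognized actions keeps a hallucinated tool name from throwing at execution time.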
`resume-display.tsx` renders a theme-adaptive, animated resume within the site-wide glassmorphism design language:
- Staggered Framer Motion reveals
- Smooth in-page hash scrolling
- Responsive layout & iconography (Lucide)
- Shares global frosted surfaces & layered blur gradients used across navbar, overlays, chat, transitions
Custom animation utilities in src/animation/ (fade, flip words, page transitions) plus Framer Motion orchestration for:
- Section transitions
- Floating chat open/close morph transitions
- Scroll & action notifications
- Dark/light theme via `next-themes`
- TailwindCSS utility-first styling with accent-driven gradients
- Reusable `classNames` helper utility
- Site-wide glassmorphism: layered translucent panels, subtle borders, backdrop blurs
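A `classNames`-style helper is typically a one-liner; this sketch shows the general shape (the project's own version in `src/utility/` may differ):

```typescript
// Minimal sketch of a classNames-style helper: drop falsy parts, join the rest.
function classNames(...parts: Array<string | false | null | undefined>): string {
  return parts.filter(Boolean).join(" ");
}
```

Typical usage: `classNames("btn", isActive && "btn-active")` keeps conditional Tailwind classes readable.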
- Configurable email transport via environment variables
- Rate limiting to prevent abuse (see below)
`rate-limiter.ts` (LRU-cache based) provides per-user/IP + UA throttling with graceful headers & cookie fallback.
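The real limiter is LRU-cache based with header and cookie handling; as a rough sketch of the throttling idea, a fixed-window in-memory variant could look like:

```typescript
// Sketch of a fixed-window, in-memory limiter. The actual rate-limiter.ts
// uses an LRU cache and richer key derivation (IP + user agent).
type Bucket = { count: number; resetAt: number };

class SimpleRateLimiter {
  private buckets = new Map<string, Bucket>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if throttled.
  check(key: string, now = Date.now()): boolean {
    const bucket = this.buckets.get(key);
    if (!bucket || now >= bucket.resetAt) {
      // New key or expired window: start a fresh bucket.
      this.buckets.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    bucket.count += 1;
    return bucket.count <= this.limit;
  }
}
```

Because state lives in process memory, this only works on a single instance — hence the note below about swapping in Redis/Upstash for horizontal scaling.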
- Automated generation of `sitemap.xml` & `robots.txt` via `src/scripts/generateSitemap.mjs`
- Dynamic route exclusion by convention
- Google Site Verification configurable in `src/data/siteMetaData.mjs`
- Jest + React Testing Library test suite (see `src/components/chat/__tests__/`) covering interactive chat components & dialogs
- Optimized for Vercel (edge friendly, Next.js defaults)
- Static assets & icons organized under `public/`
- Reduced motion support via `useReducedMotion`
- Keyboard-accessible chat trigger & ARIA labels
- Scroll anchoring & auto-sizing textarea (`useAutoSizeTextarea`)
- Central AI system prompt governs boundaries & formatting
- Pluggable tool execution pattern allows future actions (focus/highlight/show/hide) with minimal changes
- Modular section components: hero, about, experience, projects, skills, resume
- Shared layout primitives (`main-layout`, `navbar`, `footer`) for consistency
- Reusable animation variants to avoid duplication
- Data-driven sections sourced from TypeScript files in `src/data/` (no hard-coded JSX blobs)
- Lean image usage; static assets optimized under `public/`
- Conditional rendering for heavy components (chat window mounts only when opened)
- Auto-scroll management to prevent layout thrash
- Potential to integrate Next.js `<Image />` for further optimization (future enhancement)
- Strong typing across utilities & tool system
- Separation of concerns: UI vs. domain logic vs. configuration
- Centralized constants & metadata (`siteMetaData.mjs`, `ai.ts`)
- Clear naming + comments for complex behaviors (tool execution, scrolling)
- Semantic headings & icon labels with `aria-label`
- Color contrasts tuned for dark/light themes
- Motion reduced when user prefers reduced motion
- Focusable floating action button with clear state indication
- Subtle motion for first-impression polish (resume, hero, chat open)
- Avoids perpetual CPU-heavy animations; most transitions are spring-based and finite
- Encapsulated variants enabling consistent timing curves
- Rate limiting on AI endpoint
- Graceful fallback messaging when AI or tools fail
- Environment variable isolation via `.env.local` (example provided)
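The graceful-fallback pattern around the AI call can be sketched as follows (`getAssistantReply` and the fallback copy are illustrative, not the actual exports of `ai-chat-responses.ts`):

```typescript
// Hedged sketch: wrap the AI call so a provider outage degrades to friendly
// copy instead of surfacing a raw error to the visitor.
async function getAssistantReply(
  fetchAIReply: () => Promise<string>
): Promise<{ text: string; degraded: boolean }> {
  try {
    return { text: await fetchAIReply(), degraded: false };
  } catch {
    return {
      text: "I'm having trouble reaching the AI service right now — please try again shortly.",
      degraded: true,
    };
  }
}
```

The `degraded` flag lets the UI render the fallback with a distinct status indicator rather than pretending it is a normal answer.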
- Smooth in-page hash scrolling
- AI assisted scroll-to-section actions (planned highlight/focus tools)
- Clear active route boundaries in the navbar (implementation detail in layout components)
`welcome-screen.tsx` provides a first-visit immersive intro:
- Scroll/touch to dismiss interaction
- Rotating inspirational taglines with timed transitions
- Scroll lock handling + graceful teardown
- Dynamic floating particles & gradient orbs with reduced-motion awareness
`page-transition-animation.tsx` adds cinematic route transitions:
- Layered radial clipPath reveals & gradient overlays
- Animated sparkle core with orbiting particles & radiating rings
- Automatically skipped when user prefers reduced motion
- Elevates perceived performance & brand identity
- Consistent translucent surfaces (welcome overlay, chat window, resume container, transition panels)
- Light/dark adaptive accent hues with subtle inner/outer shadow blending
- Avoids excessive blur radius for performance while retaining depth
| Route | Purpose |
|---|---|
| `/` (Home) | Landing hero, skills snapshot, featured sections, AI chat entry |
| `/about` | Detailed background, narrative profile |
| `/projects` | Project gallery sourced from `src/data/projects.ts` |
| `/resume` | Interactive animated resume (`resume-display.tsx`) |
| `/404` | Custom not-found with consistent styling |
| API routes (`/api/*`) | Chat endpoint, email sending (rate limited) |
- Next.js – Hybrid rendering & routing
- TypeScript – Type safety & maintainability
- Tailwind CSS – Rapid UI styling
- Framer Motion – Declarative animations
- Lucide Icons – Consistent icon set
- Nodemailer – Contact form delivery
- LLM (Llama 3.1) – Conversational AI layer
- Jest / RTL – Component testing
| File | Purpose |
|---|---|
| `src/config/ai.ts` | Model + guarded system prompt |
| `src/utility/ai-chat-responses.ts` | Fetch + fallback strategy & conversation windowing |
| `src/types/tools.ts` | Typed contracts for tool calls & results |
| `src/components/chat/` | UI (window, floating button, indicators, dialogs) |
Core flow:
- User asks a question → message appended locally
- API route (`/api/chat`) processes content (LLM call – configure as needed)
- AI returns text + optional structured tool action intents
- Actions normalized → queued → visualized → executed (navigate, scroll, theme, etc.)
- Result blocks rendered with success/error details & action logs
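The conversation windowing handled by `ai-chat-responses.ts` could, in spirit, look like this sketch (the names and the turn-count heuristic are assumptions, not the real implementation):

```typescript
// Sketch: keep the system prompt plus only the most recent turns so the
// request stays within the model's context budget.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function windowConversation(messages: ChatMessage[], maxTurns: number): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  // System prompt always survives; older turns are dropped first.
  return [...system, ...rest.slice(-maxTurns)];
}
```

A token-based budget would be more precise than counting turns, but turn counting is simpler and predictable for short portfolio chats.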
Add a new tool:
- Extend `ToolAction['type']`
- Implement execution & UI mapping
- Update normalization and indicator icon selection
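Putting those three steps together, a hypothetical `highlight` tool might be wired up like this (all names are illustrative; the real union lives in `src/types/tools.ts`):

```typescript
// Hypothetical example of adding a "highlight" tool end to end.
type ToolAction =
  | { type: "navigate"; target: string }
  | { type: "highlight"; selector: string }; // 1. extend the union

type ToolResult = { ok: boolean; message: string };

// 2. implement execution (in the browser this would toggle a CSS class
// on the matched element; here it just reports the outcome).
function executeHighlight(
  action: Extract<ToolAction, { type: "highlight" }>
): ToolResult {
  return { ok: true, message: `Highlighted ${action.selector}` };
}

// 3. update normalization so loose model output maps onto the new type.
function normalize(raw: { type: string; selector?: string }): ToolAction | null {
  if ((raw.type === "highlight" || raw.type === "focus") && raw.selector) {
    return { type: "highlight", selector: raw.selector };
  }
  return null;
}
```

The indicator icon mapping would gain one more `case "highlight"` branch in the same pass.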
- Run generation manually: `npm run sitemap` (or via build hook)
- Dynamic `[slug]/` bracket paths automatically excluded
- Update Google site verification token in `src/data/siteMetaData.mjs`
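The bracket-path exclusion convention can be expressed as a small filter (illustrative only; see `src/scripts/generateSitemap.mjs` for the real logic):

```typescript
// A route is static (sitemap-worthy) only if no path segment is a dynamic
// bracket segment like [slug] or [id].
function isStaticRoute(route: string): boolean {
  return !route
    .split("/")
    .some((seg) => seg.startsWith("[") && seg.endsWith("]"));
}

function filterRoutes(routes: string[]): string[] {
  return routes.filter(isStaticRoute);
}
```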
Create a `.env.local` (not committed). See `.env.example` for the full annotated list.
```bash
# Example environment variables for the portfolio
# Copy this file to .env.local (never commit real secrets)

# -----------------------------
# Contact / Nodemailer (Gmail example)
# If using Gmail with 2FA, use an App Password instead of your real password.
NODEMAILER_USER=your-email@example.com
NODEMAILER_PASS=your-app-password

# -----------------------------
# LLM / AI Provider
# Base URL optional – set if you proxy or self-host (e.g. an OpenAI-compatible gateway)
LLM_API_KEY=your-llm-api-key
LLM_BASE_URL=https://api.openai.com/v1

# -----------------------------
# Build / Analysis Flags (optional)
# Set to "true" to enable bundle analyzer as configured in next.config.js
ANALYZE=false
# Output standalone server build (useful for Docker / minimal deploy)
BUILD_STANDALONE=false

# -----------------------------
# Runtime Environment (auto-set by Vercel / Node usually; here for clarity)
NODE_ENV=development

# Add additional variables here as you extend tool features.
```

Never commit real credentials. For production, configure them in Vercel Project Settings.
```bash
npm install
npm run sitemap
npm run dev
```

Visit: http://localhost:3000
```bash
npm test
```

- Focus on critical interactive chat behaviors (open/close, message send, indicators)
- Prefer state-driven assertions over timing hacks
- Keep tests colocated with components (`__tests__` folders)
```
src/
  components/
    chat/       # AI assistant UI & logic
  config/       # AI model + system prompt
  data/         # SEO + site metadata
  animation/    # Reusable motion variants
  hooks/        # Custom UI hooks
  utility/      # Helpers (AI fetch, rate limiter, classNames)
  types/        # Shared TS types (tools, etc.)
public/         # Static assets & icons
```
| Area | How to Change |
|---|---|
| Accent Theme | Tailwind config (tailwind.config.js) |
| AI Knowledge | Edit knowledge base in src/config/ai.ts |
| Resume Content | Update resume-display.tsx or source structured data |
| Tool Actions | Extend ToolAction in types/tools.ts |
| SEO Metadata | src/data/siteMetaData.mjs |
| Animations | Add variants under src/animation/ |
| Email Transport | Update .env.local + sendMail.ts config (e.g., switch from Gmail to custom SMTP) |
| LLM Provider | Point LLM_BASE_URL & LLM_API_KEY to alternative (OpenAI-compatible) endpoint |
| Rate Limits | Tweak limiter settings in src/utility/rate-limiter.ts |
`rate-limiter.ts` uses an in-memory LRU strategy (inspired by the official Next.js example). For production horizontal scaling, replace it with Redis / Upstash / a KV store.
Deploy on Vercel:
- Push repository to GitHub
- Import into Vercel
- Add environment variables
- Trigger build (sitemap script runs automatically if configured)
- Add focus/highlight tool execution
- Streaming AI responses
- Persist conversation in session storage
- Internationalization (i18n)
See LICENSE
- Next.js team & examples
- Framer Motion contributors
- Lucide Icons
If you build on this template, a star ⭐ on the repo is always appreciated!
