Fal.ai MCP Server

A Model Context Protocol (MCP) server for discovering and documenting Fal.ai models. This server provides tools for coding agents to learn about available models, their schemas, capabilities, and usage patterns.

Features

  • 🔍 Model Discovery: List and search Fal.ai models with comprehensive filtering and sorting
  • 📋 Model Information: Get detailed metadata about specific models, including pricing (with API key)
  • 📐 Schema Retrieval: Fetch OpenAPI schemas for model parameters
  • 📚 Documentation: Generate comprehensive documentation with usage examples
  • ⚡ Caching: Cloudflare KV-based caching for fast model lookups
  • 🔑 Optional Authentication: Works without API keys; provide one for higher rate limits and pricing access
  • 🎯 Advanced Filtering: Filter by category, status, tags, owner, highlighted, pinned, kind, and more
  • 📊 Flexible Sorting: Sort by update date, name, category, or owner

Available Tools

list_models

List all available Fal.ai models with comprehensive filtering and sorting options.

Filtering Parameters:

  • category? (optional): Filter by API category from metadata.category (e.g., "text-to-image", "image-to-video", "training"). Accepts a single category or an array; with an array, models matching ANY listed category are returned (e.g., ["text-to-image", "image-to-video"])
  • status? (optional): Filter by status - "active" or "deprecated"
  • tags? (optional): Array of tags - models must have ALL specified tags (e.g., ["new", "beta"])
  • owner? (optional): Filter by model owner/organization (e.g., "fal-ai", "clarityai")
  • highlighted? (optional): Filter to only highlighted models (boolean)
  • pinned? (optional): Filter to only pinned models (boolean)
  • kind? (optional): Filter by model kind - "inference" or "training"
  • search? (optional): Free-text search across name, description, ID, and tags
  • limit? (optional): Maximum number of results (1-100, default: 20)
  • sortBy? (optional): Sort field - "updated_at" (most recent first), "name" (alphabetical), "category" (grouped), or "owner" (grouped)

Common API Categories:

  • "text-to-image", "image-to-image", "text-to-video", "image-to-video", "video-to-video", "image-to-3d", "text-to-audio", "audio", "audio-to-audio", "speech-to-text", "text-to-speech", "training"

Examples:

// Basic search
{
  "category": "text-to-image",
  "search": "flux",
  "limit": 10
}

// Advanced filtering
{
  "category": ["text-to-image", "image-to-image"],
  "status": "active",
  "tags": ["new"],
  "sortBy": "updated_at",
  "limit": 20
}

// Find models from specific owner
{
  "owner": "fal-ai",
  "highlighted": true,
  "sortBy": "name"
}
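
Internally, tool inputs are validated with Zod schemas (see src/types/fal-model.ts). Below is a minimal sketch of how the list_models input above could be expressed; the field names mirror this section, but the schema actually shipped in the repo may differ.

import { z } from "zod";

// Sketch of a list_models input schema based on the parameters documented above.
// Names and defaults follow this README; the real schema lives in src/types/fal-model.ts.
const listModelsInput = z.object({
  category: z.union([z.string(), z.array(z.string())]).optional(), // single category, or ANY-of list
  status: z.enum(["active", "deprecated"]).optional(),
  tags: z.array(z.string()).optional(),                            // models must carry ALL listed tags
  owner: z.string().optional(),
  highlighted: z.boolean().optional(),
  pinned: z.boolean().optional(),
  kind: z.enum(["inference", "training"]).optional(),
  search: z.string().optional(),
  limit: z.number().int().min(1).max(100).default(20),
  sortBy: z.enum(["updated_at", "name", "category", "owner"]).optional(),
});

type ListModelsInput = z.infer<typeof listModelsInput>;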

get_model_info

Get detailed information about a specific model, including metadata, capabilities, description, and pricing (when an API key is provided).

Parameters:

  • model_id (required): Model endpoint ID (e.g., "fal-ai/flux-pro") or alias (e.g., "flux_pro")

Returns:

  • Model metadata (name, description, category, owner, status, tags)
  • Capabilities information
  • Model URL and last updated timestamp
  • Pricing information (when API key is provided via Authorization header)

Example:

{
  "model_id": "fal-ai/flux-pro"
}

Note: Pricing information is only included when an API key is provided in the Authorization header. Without an API key, all other model information is still available.

get_model_schema

Get the OpenAPI/JSON schema for a model's input and output parameters.

Parameters:

  • model_id (required): Model endpoint ID or alias

Example:

{
  "model_id": "fal-ai/flux/dev"
}

get_model_documentation

Get comprehensive documentation including usage examples and best practices.

Parameters:

  • model_id (required): Model endpoint ID or alias

Example:

{
  "model_id": "fal-ai/flux-pro"
}

Setup

Prerequisites

  • Node.js 18+ and npm
  • Cloudflare account
  • Wrangler CLI installed globally: npm install -g wrangler

Installation

# Clone the repository
git clone <repository-url>
cd fal-mcp

# Install dependencies
npm install

Configuration

  1. Create KV Namespace

    Create a KV namespace for model caching:

    npx wrangler kv namespace create MODEL_CACHE

    This will output a namespace ID. Update wrangler.jsonc with the actual namespace ID:

    {
      "kv_namespaces": [
        {
          "binding": "MODEL_CACHE",
          "id": "your_actual_namespace_id_here"
        }
      ]
    }
  2. Login to Cloudflare

    npx wrangler login

Deployment

Deploy to Cloudflare Workers

npm run deploy

This will deploy your MCP server to: https://fal-mcp.<your-account>.workers.dev/mcp

Local Development

npm run dev

The server will be available at http://localhost:8787/mcp

API Key Configuration (Optional)

API keys are optional: the server works without authentication, but authenticated requests get higher rate limits and unlock pricing information.

How It Works

The server extracts API keys from the Authorization header in MCP client requests:

Authorization: Key <your-fal-api-key>
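
On the Worker side, this amounts to reading the header and stripping the Key prefix. A rough sketch (the helper name and exact parsing are illustrative, not the repo's actual code):

// Illustrative extraction of the Fal.ai API key from an incoming MCP request.
// Returns undefined for anonymous requests, which the server also supports.
function extractFalKey(request: Request): string | undefined {
  const auth = request.headers.get("Authorization");
  if (!auth) return undefined;
  const match = auth.match(/^Key\s+(.+)$/i); // expected format: "Key <your-fal-api-key>"
  return match?.[1];
}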

Configuring MCP Clients

Claude Desktop

Add the server to your Claude Desktop configuration:

{
  "mcpServers": {
    "fal-ai": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://fal-mcp.your-account.workers.dev/mcp"
      ],
      "env": {
        "FAL_API_KEY": "your-fal-api-key-here"
      }
    }
  }
}

Note: The exact method for passing headers depends on your MCP client. Some clients may require using a proxy or wrapper to inject headers.

Other MCP Clients

Check your MCP client's documentation for how to pass custom headers. The server expects:

  • Header name: Authorization
  • Header value: Key <your-api-key>

Getting a Fal.ai API Key

  1. Sign up at fal.ai
  2. Navigate to your account settings
  3. Generate an API key
  4. Use it in the Authorization header format: Key <your-api-key>

Connecting to MCP Clients

Using mcp-remote

For local MCP clients like Claude Desktop, use the mcp-remote package:

npm install -g mcp-remote

Then configure your MCP client to use:

npx mcp-remote https://fal-mcp.your-account.workers.dev/mcp

Direct Connection

If your MCP client supports streamable HTTP transport, connect directly to:

https://fal-mcp.your-account.workers.dev/mcp
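
For reference, here is a sketch of a direct connection from a TypeScript client, assuming the official @modelcontextprotocol/sdk streamable HTTP transport; adapt the URL, client name, and header to your setup:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to the deployed Worker over streamable HTTP. The Authorization header
// is optional; include it for higher rate limits and pricing data.
const transport = new StreamableHTTPClientTransport(
  new URL("https://fal-mcp.your-account.workers.dev/mcp"),
  { requestInit: { headers: { Authorization: "Key <your-fal-api-key>" } } },
);

const client = new Client({ name: "fal-mcp-example", version: "1.0.0" });
await client.connect(transport);

// Call the list_models tool described above.
const result = await client.callTool({
  name: "list_models",
  arguments: { category: "text-to-image", search: "flux", limit: 5 },
});
console.log(result.content);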

Architecture

Components

  • MCP Server (src/index.ts): Main entry point, handles HTTP requests and routes to MCP agent
  • Model Registry (src/registry/model-registry.ts): Manages model cache with KV storage and alias resolution
  • API Client (src/utils/fal-api.ts): Handles all Fal.ai Platform API calls
  • Tools (src/tools/fal-ai.tools.ts): MCP tool implementations
  • Types (src/types/fal-model.ts): TypeScript interfaces and Zod schemas

Caching Strategy

  • Models are cached in Cloudflare KV with a 1-hour TTL
  • Cache is automatically refreshed when expired
  • Falls back to stale cache if API calls fail
  • Supports graceful degradation without API keys
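
A rough sketch of that read-through pattern over the MODEL_CACHE KV binding; the key names and the separate stale copy are illustrative assumptions, while the real logic lives in src/registry/model-registry.ts:

// Illustrative read-through cache with a 1-hour TTL and stale fallback.
interface Env { MODEL_CACHE: KVNamespace }

const ONE_HOUR_SECONDS = 60 * 60;

async function getModels(env: Env, fetchFromApi: () => Promise<unknown>): Promise<unknown> {
  const cached = await env.MODEL_CACHE.get("models", "json");
  if (cached !== null) return cached; // entry still within its TTL

  try {
    const models = await fetchFromApi();
    // expirationTtl lets KV evict the entry after an hour, forcing a refresh.
    await env.MODEL_CACHE.put("models", JSON.stringify(models), { expirationTtl: ONE_HOUR_SECONDS });
    // Keep a longer-lived copy to serve when the API is unavailable.
    await env.MODEL_CACHE.put("models:stale", JSON.stringify(models));
    return models;
  } catch (err) {
    const stale = await env.MODEL_CACHE.get("models:stale", "json");
    if (stale !== null) return stale; // graceful degradation
    throw err;
  }
}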

API Integration

The server uses the Fal.ai Platform API for:

  • Model discovery with pagination (GET /v1/models)
  • Model metadata retrieval
  • OpenAPI schema expansion (expand=openapi-3.0)
  • Pricing information (GET /v1/models/pricing) - requires authentication

All API calls are made server-side, so MCP clients don't need direct API access.
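
As a sketch of what those calls look like: only the paths, the expand parameter, and the Key header format come from this README; the base URL and helper below are assumptions, so check src/utils/fal-api.ts for the real client.

// Illustrative Fal.ai Platform API helper. FAL_API_BASE is assumed, not confirmed.
const FAL_API_BASE = "https://api.fal.ai";

async function falGet(path: string, apiKey?: string): Promise<unknown> {
  const headers: Record<string, string> = {};
  if (apiKey) headers["Authorization"] = `Key ${apiKey}`; // auth is optional for discovery
  const res = await fetch(`${FAL_API_BASE}${path}`, { headers });
  if (!res.ok) throw new Error(`Fal.ai API error: ${res.status}`);
  return res.json();
}

// Model discovery with schema expansion, and pricing (requires an API key).
const models = await falGet("/v1/models?expand=openapi-3.0");
const pricing = await falGet("/v1/models/pricing", "<your-fal-api-key>");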

Development

Type Checking

npm run typecheck

Linting

npm run lint:fix

Formatting

npm run format

Generate Cloudflare Types

npm run cf-typegen

Project Structure

fal-mcp/
├── src/
│   ├── index.ts              # Main entry point, MCP agent
│   ├── registry/
│   │   └── model-registry.ts # Model cache and registry
│   ├── tools/
│   │   └── fal-ai.tools.ts   # MCP tool implementations
│   ├── types/
│   │   └── fal-model.ts      # TypeScript types and Zod schemas
│   └── utils/
│       └── fal-api.ts        # Fal.ai API client
├── wrangler.jsonc            # Cloudflare Workers configuration
├── package.json
└── README.md

Error Handling

The server handles errors gracefully:

  • Rate Limit (429): Uses stale cache if available and returns a helpful message about providing an API key
  • Authentication Error (401): Clear message about checking API key format
  • Not Found (404): Suggests using list_models to find available models
  • Network Errors: Falls back to stale cache when possible
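
An illustrative mapping of those cases to the messages a tool might return; the actual handling is split between src/utils/fal-api.ts and the tool layer:

// Sketch only: maps HTTP status codes from the Fal.ai API to user-facing hints.
function describeFalError(status: number): string {
  switch (status) {
    case 429:
      return "Rate limited by Fal.ai. Provide an API key (Authorization: Key <your-api-key>) for higher limits; serving stale cache if available.";
    case 401:
      return "Authentication failed. Check that the Authorization header uses the format: Key <your-api-key>.";
    case 404:
      return "Model not found. Use the list_models tool to discover available model IDs.";
    default:
      return `Unexpected Fal.ai API error (HTTP ${status}).`;
  }
}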

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please read our Contributing Guidelines for details on our code of conduct and the process for submitting pull requests.
