Merged
2 changes: 2 additions & 0 deletions .changeset/cold-lizards-sniff.md
@@ -0,0 +1,2 @@
---
---
2 changes: 2 additions & 0 deletions .changeset/legal-shrimps-smash.md
@@ -0,0 +1,2 @@
---
---
5 changes: 4 additions & 1 deletion .changeset/pre.json
@@ -6,14 +6,17 @@
"docs": "0.0.0",
"auth-example": "0.1.0",
"openai-apps-minimal-nextjs": "0.1.0",
"mcpay": "0.1.7-beta.12"
"mcpay": "0.1.7-beta.14"
},
"changesets": [
"cold-lizards-sniff",
"fair-dingos-report",
"fifty-weeks-wash",
"free-tigers-switch",
"kind-owls-ring",
"legal-shrimps-smash",
"neat-symbols-accept",
"quick-tips-study",
"ready-cobras-attend",
"ripe-chicken-attack",
"rotten-toes-own",
2 changes: 2 additions & 0 deletions .changeset/quick-tips-study.md
@@ -0,0 +1,2 @@
---
---
89 changes: 89 additions & 0 deletions .cursor/plans/expand-400f8f42.plan.md
@@ -0,0 +1,89 @@
<!-- 400f8f42-9695-431c-9246-5dca65231626 66b16702-9ab2-4d2e-b56b-b8f531fb3ad6 -->
# Expand MCP Proxy Hooks to All Requests

## Scope

- Add typed support and hook stages for: initialize, tools/list, prompts/list, resources/list, resources/templates/list, resources/read, notifications, and generic/target request/response/error.
- Introduce optional requestContext to all request shapes.
- Support `continueAsync` in request hooks (early-response path); the synchronous path remains the default.

## Key Changes

### 1) Enhance `packages/js-sdk/src/handler/proxy/hooks.ts`

- Re-export MCP types needed by hooks.
- Add `RequestContext` zod schema and `*WithContext` request types.
- Define hook result schemas/types for each method (request/response/error), plus notification and target-side variants.
- Extend `Hook` interface with method signatures for all supported MCP methods and errors, including `processTarget*` and notification methods.
- Add generic helper types: `GenericRequestHookResult`, `GenericResponseHookResult`, and method-finder utility types.
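A minimal sketch of what the widened `Hook` surface could look like. All names and result shapes here are assumptions inferred from the list above, not the shipped `packages/js-sdk` API:

```typescript
// Hypothetical hook result and interface shapes (assumptions, not the real API).
type RequestResult =
  | { action: "continue"; request: unknown }   // pass the (possibly mutated) request on
  | { action: "respond"; response: unknown };  // short-circuit with a ready response

interface Hook {
  processInitializeRequest?(req: unknown): RequestResult;
  processListToolsRequest?(req: unknown): RequestResult;
  processReadResourceRequest?(req: unknown): RequestResult;
  processNotification?(note: unknown): void;
  processTargetNotification?(note: unknown): void;
  prepareUpstreamHeaders?(headers: Record<string, string>, req: unknown): void;
}

// A trivial passthrough hook illustrating the shape.
const passthrough: Hook = {
  processListToolsRequest: (req) => ({ action: "continue", request: req }),
};
```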

### 2) Generalize proxy routing in `packages/js-sdk/src/handler/proxy/index.ts`

- Parse JSON-RPC envelope and route by `body.method`:
- "initialize" → initialize hook chain
- "tools/list" → listTools hook chain
- "prompts/list" → listPrompts hook chain
- "resources/list" → listResources hook chain
- "resources/templates/list" → listResourceTemplates hook chain
- "resources/read" → readResource hook chain
- "tools/call" → existing tool call chain (kept)
- else → other request chain
- For each chain:
- Run `process*Request` across hooks; honor results:
- `continue`: accumulate mutated request
- `respond`: short-circuit with given response (wrap in JSON-RPC result envelope with same id)
- `continueAsync` (when defined for that method): short-circuit immediately with provided response; do not forward upstream
- Build upstream headers: run `prepareUpstreamHeaders` with the active request (now for all methods, not only tools/call)
- Forward to target; parse response (JSON or SSE, same logic) and run `process*Result` across hooks; honor `continue`
- On upstream or parsing errors, run `process*Error` chain; if any hook returns `respond`, use it; else synthesize JSON-RPC error
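The method-to-chain dispatch above can be sketched as a simple switch (stage names are assumptions, not the shipped code):

```typescript
// Sketch: route a JSON-RPC method to its hook chain, per the table above.
type Stage =
  | "initialize" | "listTools" | "listPrompts" | "listResources"
  | "listResourceTemplates" | "readResource" | "callTool" | "other";

function stageFor(method: string | undefined): Stage {
  switch (method) {
    case "initialize": return "initialize";
    case "tools/list": return "listTools";
    case "prompts/list": return "listPrompts";
    case "resources/list": return "listResources";
    case "resources/templates/list": return "listResourceTemplates";
    case "resources/read": return "readResource";
    case "tools/call": return "callTool";    // existing chain, kept
    default: return "other";                 // generic request chain
  }
}
```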

### 3) Header stage generalization

- Keep `prepareUpstreamHeaders` as a generic stage; call it for every request kind.

### 4) Notifications support

- If JSON-RPC message has no `id`, treat as notification:
- Route through the `processNotification` chain (client→target); if a hook blocks the notification, return 204, otherwise forward it (possibly modified by hooks) to the target.
- Forward to target; if target pushes notifications back (SSE or webhook), route through `processTargetNotification` where applicable.
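The detection itself follows JSON-RPC 2.0: a message without an `id` is a notification. A sketch of the check (field names per the JSON-RPC 2.0 spec):

```typescript
// A JSON-RPC message with no `id` is a notification and takes the
// processNotification path instead of a request chain.
interface JsonRpcMessage {
  jsonrpc: "2.0";
  id?: string | number;
  method?: string;
  params?: unknown;
}

function isNotification(msg: JsonRpcMessage): boolean {
  return msg.id === undefined;
}
```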

### 5) Target-side scaffolding

- Define `processTargetRequest`, `processTargetResult`, `processTargetError`, `processTargetNotification`, `processTargetNotificationError` in the Hook interface and call sites where reverse direction is handled (primarily for SSE and future bidirectional transport). For v1, call result/error handlers after parsing SSE frames.

### 6) ContinueAsync handling

- Accept `continueAsync` in request-stage results for all supported methods that define it; short-circuit by immediately returning the provided response in a JSON-RPC envelope. The callback remains hook-owned; the proxy does not forward upstream nor schedule extra work.
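A sketch of the request-stage loop honoring all three outcomes. The result shapes and the short-circuit semantics are assumptions drawn from the plan text, not the shipped implementation:

```typescript
// Hypothetical request-stage loop: `continue` accumulates mutations;
// `respond` and `continueAsync` both short-circuit without forwarding upstream.
type RequestHookResult =
  | { action: "continue"; request: unknown }
  | { action: "respond"; response: unknown }
  | { action: "continueAsync"; response: unknown };

function runRequestStage(
  initial: unknown,
  hooks: Array<(req: unknown) => RequestHookResult>,
): { forward: boolean; request?: unknown; earlyResponse?: unknown } {
  let request = initial;
  for (const hook of hooks) {
    const result = hook(request);
    if (result.action === "continue") {
      request = result.request;          // accumulate the mutated request
      continue;
    }
    // respond / continueAsync: return the hook's response immediately;
    // the proxy neither forwards upstream nor schedules extra work.
    return { forward: false, earlyResponse: result.response };
  }
  return { forward: true, request };
}
```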

### 7) Maintain backward compatibility

- Existing hooks (`analytics`, `auth-headers`, `logging`, `x402*`) continue to compile; their tool-call methods remain supported.
- `prepareUpstreamHeaders` signature unchanged.

### 8) Minimal docs

- Document the new Hook methods and result types in package README and JSDoc in `hooks.ts`.

## Acceptance Criteria

- tools/call behavior unchanged.
- initialize, tools/list, prompts/list, resources/list, resources/templates/list, resources/read all route through corresponding hook stages and forward successfully to target servers.
- Notifications (no `id`) are forwarded; hook stages can observe them.
- Error hooks can replace errors with valid results for each method.
- `continueAsync` on supported methods returns early and does not forward upstream.

## Notes

- No persistence or background scheduling is added for `continueAsync` in v1. Hooks that use it must manage their own callback lifecycle.

### To-dos

- [ ] Extend hooks.ts with schemas, requestContext, all hook result types, method helpers.
- [ ] Generalize proxy index.ts routing for all MCP methods and notifications.
- [ ] Call prepareUpstreamHeaders for every request kind.
- [ ] Invoke process*Error chains and honor respond results.
- [ ] Add processNotification and processTargetNotification call sites.
- [ ] Wire processTarget* handlers for SSE/streamed responses.
- [ ] Accept continueAsync and short-circuit with provided response.
- [ ] Ensure existing hooks compile under new types; adapt imports.
- [ ] Update README and JSDoc with new hook API and examples.
99 changes: 99 additions & 0 deletions .cursor/plans/m-d73ca5be.plan.md
@@ -0,0 +1,99 @@
<!-- d73ca5be-e589-4210-a90a-138bb82fbd5f 4bda7b4c-c97b-4a42-8caf-6316df5a7b33 -->
# MCP Data App — Postgres (Neon‑ready) Registry + Generic Events, No Ownership

## Overview
- New app: `apps/mcp-data` for indexing, analytics ingestion, and queries.
- Registry: Postgres is the source of truth with flexible JSONB (no ownership model). All MCP resources (tools, resources, pricing, capabilities) live inside the server doc.
- Events: Generic event stream table (not tool‑call specific). Start on Neon with vanilla Postgres; TimescaleDB can be enabled later without changing producers.
- Optional cache: keep Upstash Redis for proxy hot‑path only; authoritative data is Postgres.

## Storage Design (Drizzle + raw SQL where needed)
- `mcp_servers` (document‑like with JSONB)
- id uuid pk DEFAULT gen_random_uuid() // via `pgcrypto`
- origin text UNIQUE
- title text, description text, require_auth boolean
- tags jsonb DEFAULT '[]'::jsonb
- recipients jsonb, receiver_by_network jsonb
- tools jsonb DEFAULT '[]'::jsonb // array of objects: { name, description?, pricing?, metadata? }
- resources jsonb DEFAULT '[]'::jsonb // array of objects: { type, key, data?, description?, metadata? }
- capabilities jsonb DEFAULT '{}'::jsonb, metadata jsonb DEFAULT '{}'::jsonb
- status text, last_seen_at timestamptz, indexed_at timestamptz, created_at/updated_at
- Indexes: btree(origin), GIN(tags), GIN(capabilities), GIN(metadata), GIN(tools), GIN(resources)

- `events` (generic append‑only time‑series)
- id uuid pk DEFAULT gen_random_uuid(), ts timestamptz DEFAULT now(), request_id text UNIQUE
- server_id uuid REFERENCES mcp_servers(id), origin text
- kind text // 'mcp.request' | 'mcp.response' | 'tool.call' | 'verify' | 'settle' | 'discovery' | 'health' | 'proxy' | ...
- method text, status_code int, latency_ms int, error_code text
- payment jsonb DEFAULT '{}'::jsonb // x402 fields
- meta jsonb DEFAULT '{}'::jsonb // raw payload/headers/envelope fragments
- Indexes: (origin, ts), (server_id, ts), (kind, ts), UNIQUE(request_id), GIN(meta), GIN(payment)

Notes:
- Use `jsonb` columns that store arrays, not `jsonb[]` (Drizzle/PG best‑practice).
- Enable `pgcrypto` and use `gen_random_uuid()` (Neon‑friendly) instead of `uuid-ossp`.

## Time‑series Scale Options
- Vanilla (Neon default):
- Declarative monthly partitions via raw SQL migration:
- `CREATE TABLE events (...) PARTITION BY RANGE (ts);`
- `CREATE TABLE events_YYYY_MM PARTITION OF events FOR VALUES FROM ('YYYY-MM-01') TO ('YYYY-MM+1-01');`
- Add a tiny scheduler (cron/node-cron) to auto‑create next month’s partition.
- Timescale (later): `CREATE EXTENSION IF NOT EXISTS timescaledb;` then `create_hypertable('events','ts', ...)`.
- Citus (later): `CREATE EXTENSION IF NOT EXISTS citus;` then `create_distributed_table('events','server_id')`.
- Gate with `ANALYTICS_BACKEND=vanilla|timescale|citus`; run init SQL idempotently.
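The partition-rotation scheduler mentioned above mostly needs to generate next month's DDL. A sketch of that generator, following the `events_YYYY_MM` naming convention from the plan:

```typescript
// Sketch: build the monthly-partition DDL for the vanilla-Postgres path.
// The cron job would run this for the upcoming month and execute it idempotently.
function monthlyPartitionDdl(year: number, month: number): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const from = `${year}-${pad(month)}-01`;
  const next = month === 12 ? { y: year + 1, m: 1 } : { y: year, m: month + 1 };
  const to = `${next.y}-${pad(next.m)}-01`;
  return (
    `CREATE TABLE IF NOT EXISTS events_${year}_${pad(month)} PARTITION OF events ` +
    `FOR VALUES FROM ('${from}') TO ('${to}');`
  );
}
```

`IF NOT EXISTS` keeps the job safe to re-run, so a missed cron tick only delays, never corrupts, the rotation.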

## New App: `apps/mcp-data`
- `src/server.ts`
- `POST /ingest/event` → insert into `events` with `ON CONFLICT (request_id) DO NOTHING`; batch up to ~500 rows per Neon guidance.
- `POST /index/run` → probe origin, `tools/list` + `resources/list`, update `mcp_servers` JSONB.
- `GET /events/summary?origin=...` → derived payments + counts over `events`.
- `GET /servers?query=...` → JSONB filters over `mcp_servers`.
- `src/indexer/*` → health/discovery, enrich pricing/capabilities, upsert doc.
- `src/db/*` → Drizzle PG; optional Redis helper to refresh proxy cache.
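The ingestion semantics (dedupe on `request_id`, batches capped at ~500 rows) can be illustrated with an in-memory stand-in for the `ON CONFLICT (request_id) DO NOTHING` insert. This is a behavioral sketch only, not the Drizzle-backed implementation:

```typescript
// In-memory analogue of idempotent batched ingestion into `events`.
interface EventRow {
  requestId: string;
  kind: string;
  ts?: string;
}

class EventSink {
  private seen = new Set<string>();
  readonly rows: EventRow[] = [];

  // Returns the number of rows actually inserted.
  ingest(batch: EventRow[]): number {
    let inserted = 0;
    for (const row of batch.slice(0, 500)) {      // cap batch size per Neon guidance
      if (this.seen.has(row.requestId)) continue; // ON CONFLICT DO NOTHING analogue
      this.seen.add(row.requestId);
      this.rows.push(row);
      inserted++;
    }
    return inserted;
  }
}
```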

## Proxy Integration
- `packages/js-sdk`: add `AnalyticsHook` that emits generic events (request/response, tool calls, payments if present).
- Update proxy glue so analytics runs for all JSON requests (not only `tools/call`): add a lightweight request/response tap when bypassing the tool‑specific path.
- `apps/mcp2/src/index.ts`: include `AnalyticsHook` next to `LoggingHook` and `X402MonetizationHook`.

## Payments (derived from events)
- No separate ledger; compute from `events` where `payment` fields exist or `kind IN ('verify','settle','tool.call')`.
- Example queries by origin:
- Paid calls: `WHERE origin=$1 AND (payment->>'payer') IS NOT NULL AND COALESCE((payment->>'success')::boolean, false)`.
- Revenue estimate: sum network‑specific amounts stored in `payment`; or count × fixed price when applicable.
- Error rates: `WHERE origin=$1 AND error_code IS NOT NULL`.
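The paid-calls predicate above translates to a straightforward filter. A TypeScript sketch mirroring the SQL's `COALESCE(... , false)` semantics (row shape is an assumption):

```typescript
// Count paid calls for an origin, mirroring:
//   WHERE origin=$1 AND (payment->>'payer') IS NOT NULL
//     AND COALESCE((payment->>'success')::boolean, false)
interface Ev {
  origin: string;
  payment: { payer?: string; success?: boolean };
  errorCode?: string;
}

function paidCalls(events: Ev[], origin: string): number {
  return events.filter(
    (e) =>
      e.origin === origin &&
      e.payment.payer !== undefined &&
      e.payment.success === true, // missing success coalesces to false
  ).length;
}
```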

## Provider Compatibility
- Neon: supported (JSONB, GIN, partitions, `pgcrypto`). TimescaleDB can be added later on a provider that ships the extension.


## Performance & Ops
- Use targeted GIN indexes (e.g., with the `jsonb_path_ops` operator class) where useful; consider materialized views for heavy summaries.
- Batch ingestion ≤500 rows; exponential backoff on failures; drop non‑critical fields if needed.
- Idempotency via `request_id` dedupe.
- Partition rotation cron; alert if next partition is missing.
- Add `pgvector` for capability embeddings on `mcp_servers`.

## To‑dos
- [ ] Scaffold `apps/mcp-data` service and env
- [ ] Implement Drizzle schema (mcp_servers JSONB + events) using `jsonb` arrays and `pgcrypto`
- [ ] Add raw SQL migrations: base tables, indexes, vanilla monthly partitions, partition rotation job
- [ ] Gate Timescale init SQL behind `ANALYTICS_BACKEND`
- [ ] Implement `AnalyticsHook` (all JSON requests) and wire into `apps/mcp2`
- [ ] Implement `/ingest/event` with batching, `ON CONFLICT DO NOTHING`, retries
- [ ] Build indexer (probe + tools/resources refresh) writing to JSONB doc
- [ ] Implement summary and search endpoints
- [ ] pgvector for search; optional Redis cache refresh after index

### To-dos

- [ ] Scaffold apps/mcp-data service and env
- [ ] Create Drizzle schema for servers JSONB + events with pgcrypto
- [ ] Write raw SQL migrations: tables, indexes, monthly partitions, rotation cron
- [ ] Gate Timescale/Citus init SQL by ANALYTICS_BACKEND
- [ ] Implement AnalyticsHook for all JSON requests and wire into mcp2
- [ ] Implement /ingest/event with batching and dedupe
- [ ] Build indexer to probe, list tools/resources, update servers doc
- [ ] Add summary and search endpoints
- [ ] Optional pgvector for capabilities search
30 changes: 30 additions & 0 deletions apps/mcp-data/.gitignore
@@ -0,0 +1,30 @@
# dev
.yarn/
!.yarn/releases
.vscode/*
!.vscode/launch.json
!.vscode/*.code-snippets
.idea/workspace.xml
.idea/usage.statistics.xml
.idea/shelf

# deps
node_modules/

# env
.env
.env.production

# logs
logs/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

# misc
.DS_Store
.vercel
.env*.local
12 changes: 12 additions & 0 deletions apps/mcp-data/README.md
@@ -0,0 +1,12 @@
MCP Data service

Setup
- Copy .env.example to .env and set `DATABASE_URL` and `INGESTION_SECRET`
- Run migrations (to be added) and start the server

Endpoints
- POST /ingest/event
- POST /index/run
- GET /events/summary?origin=


13 changes: 13 additions & 0 deletions apps/mcp-data/drizzle.config.ts
@@ -0,0 +1,13 @@
import 'dotenv/config';
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
out: './drizzle',
schema: './src/db/schema.ts',
dialect: 'postgresql',
dbCredentials: {
url: process.env.DATABASE_URL!,
},
});


48 changes: 48 additions & 0 deletions apps/mcp-data/drizzle/0000_colossal_blacklash.sql
@@ -0,0 +1,48 @@
CREATE TABLE "events" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"ts" timestamp with time zone DEFAULT now(),
"request_id" text,
"server_id" uuid,
"origin" text,
"kind" text,
"method" text,
"status_code" integer,
"latency_ms" integer,
"error_code" text,
"payment" jsonb DEFAULT '{}'::jsonb,
"meta" jsonb DEFAULT '{}'::jsonb,
CONSTRAINT "events_request_id_unique" UNIQUE("request_id")
);
--> statement-breakpoint
CREATE TABLE "mcp_servers" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"origin" text NOT NULL,
"title" text,
"description" text,
"require_auth" text,
"tags" jsonb DEFAULT '[]'::jsonb,
"recipients" jsonb DEFAULT '{}'::jsonb,
"receiver_by_network" jsonb DEFAULT '{}'::jsonb,
"tools" jsonb DEFAULT '[]'::jsonb,
"resources" jsonb DEFAULT '[]'::jsonb,
"capabilities" jsonb DEFAULT '{}'::jsonb,
"metadata" jsonb DEFAULT '{}'::jsonb,
"status" text,
"last_seen_at" timestamp with time zone,
"indexed_at" timestamp with time zone,
"created_at" timestamp with time zone DEFAULT now(),
"updated_at" timestamp with time zone DEFAULT now(),
CONSTRAINT "mcp_servers_origin_unique" UNIQUE("origin")
);
--> statement-breakpoint
ALTER TABLE "events" ADD CONSTRAINT "events_server_id_mcp_servers_id_fk" FOREIGN KEY ("server_id") REFERENCES "public"."mcp_servers"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "idx_events_origin_ts" ON "events" USING btree ("origin","ts");--> statement-breakpoint
CREATE INDEX "idx_events_server_ts" ON "events" USING btree ("server_id","ts");--> statement-breakpoint
CREATE INDEX "idx_events_kind_ts" ON "events" USING btree ("kind","ts");--> statement-breakpoint
CREATE INDEX "idx_events_meta" ON "events" USING gin ("meta");--> statement-breakpoint
CREATE INDEX "idx_events_payment" ON "events" USING gin ("payment");--> statement-breakpoint
CREATE INDEX "idx_mcp_servers_tags" ON "mcp_servers" USING gin ("tags");--> statement-breakpoint
CREATE INDEX "idx_mcp_servers_capabilities" ON "mcp_servers" USING gin ("capabilities");--> statement-breakpoint
CREATE INDEX "idx_mcp_servers_metadata" ON "mcp_servers" USING gin ("metadata");--> statement-breakpoint
CREATE INDEX "idx_mcp_servers_tools" ON "mcp_servers" USING gin ("tools");--> statement-breakpoint
CREATE INDEX "idx_mcp_servers_resources" ON "mcp_servers" USING gin ("resources");
44 changes: 44 additions & 0 deletions apps/mcp-data/drizzle/0001_flaky_mac_gargan.sql
@@ -0,0 +1,44 @@
CREATE TABLE "rpc_logs" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"ts" timestamp with time zone DEFAULT now(),
"server_id" uuid,
"origin_raw" text,
"origin" text,
"jsonrpc_id" text,
"method" text,
"duration_ms" integer,
"error_code" text,
"http_status" integer,
"request" jsonb DEFAULT '{}'::jsonb,
"response" jsonb DEFAULT '{}'::jsonb,
"meta" jsonb DEFAULT '{}'::jsonb
);
--> statement-breakpoint
ALTER TABLE "events" DISABLE ROW LEVEL SECURITY;--> statement-breakpoint
DROP TABLE "events" CASCADE;--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP CONSTRAINT "mcp_servers_origin_unique";--> statement-breakpoint
DROP INDEX "idx_mcp_servers_tags";--> statement-breakpoint
DROP INDEX "idx_mcp_servers_capabilities";--> statement-breakpoint
DROP INDEX "idx_mcp_servers_metadata";--> statement-breakpoint
DROP INDEX "idx_mcp_servers_tools";--> statement-breakpoint
DROP INDEX "idx_mcp_servers_resources";--> statement-breakpoint
ALTER TABLE "mcp_servers" ADD COLUMN "origin_raw" text NOT NULL;--> statement-breakpoint
ALTER TABLE "mcp_servers" ADD COLUMN "data" jsonb DEFAULT '{}'::jsonb;--> statement-breakpoint
ALTER TABLE "rpc_logs" ADD CONSTRAINT "rpc_logs_server_id_mcp_servers_id_fk" FOREIGN KEY ("server_id") REFERENCES "public"."mcp_servers"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "idx_rpc_logs_origin_ts" ON "rpc_logs" USING btree ("origin","ts");--> statement-breakpoint
CREATE INDEX "idx_rpc_logs_server_ts" ON "rpc_logs" USING btree ("server_id","ts");--> statement-breakpoint
CREATE INDEX "idx_rpc_logs_method_ts" ON "rpc_logs" USING btree ("method","ts");--> statement-breakpoint
CREATE INDEX "idx_rpc_logs_request" ON "rpc_logs" USING gin ("request");--> statement-breakpoint
CREATE INDEX "idx_rpc_logs_response" ON "rpc_logs" USING gin ("response");--> statement-breakpoint
CREATE INDEX "idx_mcp_servers_data" ON "mcp_servers" USING gin ("data");--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "title";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "description";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "require_auth";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "tags";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "recipients";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "receiver_by_network";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "tools";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "resources";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "capabilities";--> statement-breakpoint
ALTER TABLE "mcp_servers" DROP COLUMN "metadata";--> statement-breakpoint
ALTER TABLE "mcp_servers" ADD CONSTRAINT "mcp_servers_origin_raw_unique" UNIQUE("origin_raw");
4 changes: 4 additions & 0 deletions apps/mcp-data/drizzle/0002_married_leo.sql
@@ -0,0 +1,4 @@
ALTER TABLE "rpc_logs" DROP COLUMN "jsonrpc_id";--> statement-breakpoint
ALTER TABLE "rpc_logs" DROP COLUMN "duration_ms";--> statement-breakpoint
ALTER TABLE "rpc_logs" DROP COLUMN "error_code";--> statement-breakpoint
ALTER TABLE "rpc_logs" DROP COLUMN "http_status";