[Security] Fix CRITICAL vulnerability: V-001#3083
[Security] Fix CRITICAL vulnerability: V-001#3083orbisai0security wants to merge 1 commit intoonlook-dev:mainfrom
Conversation
Automatically generated security fix
|
@orbisai0security is attempting to deploy a commit to the Onlook Team on Vercel. A member of the Team first needs to authorize it. |
📝 WalkthroughWalkthroughThree distinct security and validation enhancements: a database migration introducing audit logging with RLS policies, enhanced email-capture API with rate limiting and input sanitization, and file upload validation constraints in the freestyle adapter. Changes
Sequence Diagram(s)sequenceDiagram
participant Client
participant RateLimiter
participant Validator
participant Sanitizer
participant Webhook as External Webhook
Client->>RateLimiter: POST /email-capture (client IP)
alt Rate Limit Exceeded
RateLimiter-->>Client: 429 Too Many Requests
else Within Limit
RateLimiter->>Validator: Forward request data
alt Validation Fails
Validator-->>Client: 400 Bad Request
else Validation Passes
Validator->>Sanitizer: Sanitize name & UTMs
Sanitizer->>Webhook: Send sanitized payload
Webhook-->>Sanitizer: Success
Sanitizer-->>Client: 200 OK
end
end
Estimated code review effort🎯 3 (Moderate) | ⏱️ ~28 minutes Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
| } | ||
|
|
||
| // Validate file type | ||
| if (file.type && !ALLOWED_FILE_TYPES.includes(file.type)) { |
There was a problem hiding this comment.
In validateFileUploads, files with missing or empty MIME types bypass type checks. Consider explicitly handling empty file.type if that’s undesired.
| if (file.type && !ALLOWED_FILE_TYPES.includes(file.type)) { | |
| if (!file.type || !ALLOWED_FILE_TYPES.includes(file.type)) { |
There was a problem hiding this comment.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@apps/backend/supabase/migrations/0006_rls.sql`:
- Around line 114-120: The policy "canvas_deny_select_policy" currently targets
TO public which is broader than the intent; update the CREATE POLICY for
"canvas_deny_select_policy" to target TO anon instead of TO public so the rule
clearly applies only to unauthenticated users and matches the stated intent of
denying SELECT to anon users while preserving backend (postgres) access.
- Around line 1-33: The SECURITY DEFINER functions log_security_event,
user_has_project_access, and user_has_canvas_access lack a pinned search_path
and must be hardened: add "SET search_path = pg_catalog, public;" to each
function definition, schema-qualify internal table references (e.g., change
INSERT INTO audit_logs to INSERT INTO public.audit_logs in log_security_event),
and ensure the log_security_event function populates ip_address (add an
ip_address INET parameter or capture inet_client_addr() and insert it into the
audit record). Lock down the audit_logs table by revoking public rights,
enabling row level security, creating a deny-all policy (FOR ALL USING (false)
WITH CHECK (false)), and then explicitly grant minimal required privileges
(e.g., GRANT SELECT to the service_role or appropriate role).
In `@apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts`:
- Around line 27-57: validateFileUploads currently types files as Record<string,
Blob> but DeploymentRequest.files supplies Record<string, DeploymentFile>, so
change validateFileUploads to accept Record<string, DeploymentFile> (or add an
overload) and perform validation on DeploymentFile fields: compute byte size
from file.content (if file.encoding === 'base64' compute decoded byte length via
base64 length formula or decode to bytes; otherwise use UTF-8 byte length),
enforce MAX_FILE_SIZE and MAX_TOTAL_SIZE using those computed sizes, and
validate file type/allowed list by either using a mime field on DeploymentFile
or by mapping known encodings to allowed types; alternatively, if you prefer
keeping Blob logic, convert each DeploymentFile to a Blob (new
Blob([decodedBytes], { type: inferredMime })) before running the existing
checks. Ensure you update the function signature and references to
validateFileUploads accordingly.
🧹 Nitpick comments (2)
apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts (1)
9-24: Validation constants look reasonable for deployment use case.The limits are sensible defaults. However, consider whether the
ALLOWED_FILE_TYPESlist is complete for typical web deployments. Notably missing are common types liketext/plain,application/xml,font/*types, andimage/webpwhich are frequently used in modern web projects.Consider extending allowed file types
const ALLOWED_FILE_TYPES = [ 'text/html', 'text/css', + 'text/plain', + 'text/xml', 'application/javascript', 'text/javascript', 'application/json', + 'application/xml', 'image/jpeg', 'image/png', 'image/gif', 'image/svg+xml', + 'image/webp', 'application/pdf', + 'font/woff', + 'font/woff2', + 'font/ttf', + 'font/otf', ];apps/web/client/src/app/api/email-capture/route.ts (1)
4-13: HTML sanitization may be unnecessary for this use case.The
sanitizeHtmlfunction performs HTML entity encoding, but the sanitized values are used in URL query parameters (viaURL.searchParams.append), which already performs URL encoding. HTML entity encoding is primarily needed when rendering untrusted data in HTML contexts.If the downstream N8N webhook renders these values in HTML, the sanitization should happen there. If you're sanitizing defensively for storage/logging, consider that this transforms the data (e.g., "O'Brien" becomes "O'Brien").
Consider whether this sanitization is needed given the data flow, or if simple character filtering (removing control characters) would be more appropriate.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
apps/backend/supabase/migrations/0006_rls.sqlapps/web/client/src/app/api/email-capture/route.tsapps/web/client/src/server/api/routers/domain/adapters/freestyle.ts
🧰 Additional context used
📓 Path-based instructions (5)
apps/web/client/src/server/api/routers/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/server/api/routers/**/*.ts: Place tRPC routers under apps/web/client/src/server/api/routers/**
Use publicProcedure/protectedProcedure from apps/web/client/src/server/api/trpc.ts and validate inputs with Zod
Return plain objects/arrays; rely on SuperJSON for serialization in tRPC procedures
apps/web/client/src/server/api/routers/**/*.ts: Place tRPC routers under src/server/api/routers/**
Use publicProcedure/protectedProcedure from src/server/api/trpc.ts and validate inputs with Zod
Return plain objects/arrays; rely on SuperJSON for serialization
Files:
apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts
apps/web/client/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
apps/web/client/src/**/*.{ts,tsx}: Use path aliases@/* and ~/* for imports that map to apps/web/client/src/*
Avoid hardcoded user-facing text; use next-intl messages/hooks insteadUse path aliases
@/* and ~/* for imports mapping to src/*
Files:
apps/web/client/src/server/api/routers/domain/adapters/freestyle.tsapps/web/client/src/app/api/email-capture/route.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Do not use the any type unless necessary
Files:
apps/web/client/src/server/api/routers/domain/adapters/freestyle.tsapps/web/client/src/app/api/email-capture/route.ts
{apps,packages}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Avoid using the any type unless absolutely necessary
Files:
apps/web/client/src/server/api/routers/domain/adapters/freestyle.tsapps/web/client/src/app/api/email-capture/route.ts
apps/web/client/src/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Default to Server Components; add 'use client' only when using events, state/effects, browser APIs, or client-only libs
Files:
apps/web/client/src/app/api/email-capture/route.ts
🧠 Learnings (1)
📚 Learning: 2025-09-16T19:22:52.461Z
Learnt from: CR
Repo: onlook-dev/onlook PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-16T19:22:52.461Z
Learning: Applies to apps/web/client/src/server/api/routers/**/*.ts : Use publicProcedure/protectedProcedure from src/server/api/trpc.ts and validate inputs with Zod
Applied to files:
apps/web/client/src/app/api/email-capture/route.ts
🧬 Code graph analysis (1)
apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts (1)
packages/models/src/hosting/index.ts (3)
HostingProviderAdapter(75-77)DeploymentRequest(64-67)DeploymentResponse(69-73)
🔇 Additional comments (5)
apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts (1)
59-61: Good placement of validation before processing.Calling validation early in
deploy()ensures malicious or oversized uploads are rejected before any SDK initialization or network calls occur. This is the correct defensive pattern.apps/web/client/src/app/api/email-capture/route.ts (4)
43-54: IP extraction and rate limiting implementation looks good.The fallback chain for IP extraction (
x-forwarded-for→x-real-ip→'unknown') handles common reverse proxy scenarios appropriately. Using'unknown'as fallback means clients without identifiable IPs share a rate limit bucket, which is a reasonable security-conscious default.
90-99: Sanitization correctly applied to user-controlled fields.The approach of sanitizing all string fields except the already-validated email is sound. The comment on line 93 accurately notes that the email format validation provides sufficient safety for that field.
105-111: Good practice: Early environment validation with appropriate error response.Validating the required environment variable early and returning a 500 with a generic error message (not leaking config details) is the correct pattern.
62-66: Zod schema chain order is unconventional but valid.While
.string().max().optional()is the preferred style for clarity,.string().optional().max()is not invalid. For simple string validators like.max(), the chaining order with.optional()doesn't affect runtime behavior or TypeScript type inference; both producestring | undefinedand validate correctly. The suggested reordering is a stylistic improvement, not a critical fix.Likely an incorrect or invalid review comment.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
| -- Audit logging function for security events | ||
| CREATE TABLE IF NOT EXISTS audit_logs ( | ||
| id UUID PRIMARY KEY DEFAULT gen_random_uuid(), | ||
| event_type TEXT NOT NULL, | ||
| user_id UUID, | ||
| resource_type TEXT, | ||
| resource_id UUID, | ||
| action TEXT NOT NULL, | ||
| details JSONB, | ||
| ip_address INET, | ||
| created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP, | ||
| created_by UUID DEFAULT auth.uid() | ||
| ); | ||
|
|
||
| -- Create index on audit_logs for efficient querying | ||
| CREATE INDEX IF NOT EXISTS idx_audit_logs_user_id ON audit_logs(user_id); | ||
| CREATE INDEX IF NOT EXISTS idx_audit_logs_created_at ON audit_logs(created_at); | ||
| CREATE INDEX IF NOT EXISTS idx_audit_logs_event_type ON audit_logs(event_type); | ||
|
|
||
| -- Helper function to log security events | ||
| CREATE OR REPLACE FUNCTION log_security_event( | ||
| event_type TEXT, | ||
| action TEXT, | ||
| resource_type TEXT DEFAULT NULL, | ||
| resource_id UUID DEFAULT NULL, | ||
| details JSONB DEFAULT NULL | ||
| ) RETURNS VOID AS $$ | ||
| BEGIN | ||
| INSERT INTO audit_logs (event_type, user_id, resource_type, resource_id, action, details, created_by) | ||
| VALUES (event_type, auth.uid(), resource_type, resource_id, action, details, auth.uid()); | ||
| END; | ||
| $$ LANGUAGE plpgsql SECURITY DEFINER; | ||
|
|
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, find and read the migration file to verify its actual content
find . -type f -name "*0006_rls.sql" -o -name "*migrations*" | head -20Repository: onlook-dev/onlook
Length of output: 174
🏁 Script executed:
# Read the migration file
cat -n apps/backend/supabase/migrations/0006_rls.sql 2>/dev/null || find . -type f -name "*0006*.sql" | head -5Repository: onlook-dev/onlook
Length of output: 14845
🏁 Script executed:
# Search for all migration files to understand the structure
fd -e sql . apps/backend/supabase/migrations/ 2>/dev/null | head -20Repository: onlook-dev/onlook
Length of output: 1202
🏁 Script executed:
# Look for canvas table policies mentioned in the scratchpad
rg "canvas" apps/backend/supabase/migrations/ -A 5 -B 2 --type sql 2>/dev/nullRepository: onlook-dev/onlook
Length of output: 18337
🏁 Script executed:
# Check for any other audit_logs references or RLS patterns
rg "audit_logs|SECURITY DEFINER" apps/backend/supabase/ -A 3 -B 2 --type sql 2>/dev/null | head -50Repository: onlook-dev/onlook
Length of output: 3381
🌐 Web query:
PostgreSQL SECURITY DEFINER search_path security best practice
💡 Result:
Short best-practice checklist for SECURITY DEFINER + search_path (concise):
- Avoid SECURITY DEFINER unless needed; prefer SECURITY INVOKER or least-privilege roles. [1][6]
- Always set a safe search_path in the function definition (e.g. SET search_path = pg_catalog, pg_temp or trusted_schema, pg_temp) so writable/untrusted schemas are excluded/pg_temp is last. Prefer schema-qualified names for critical objects. [3][2][7]
- Do not rely on global/session search_path settings — enforce it per-function (CREATE FUNCTION ... SECURITY DEFINER SET search_path = ...). [3][2]
- Minimize the function owner's privileges; avoid making the owner a superuser. Use a dedicated low‑privilege role that owns only what the function needs. [6][5]
- Avoid or carefully sanitize dynamic SQL (EXECUTE, format()) inside SECURITY DEFINER functions. Validate inputs strictly. [1][6]
- Restrict EXECUTE: revoke PUBLIC then grant EXECUTE only to required roles (do create+privilege change in one transaction to avoid exposure window). [3]
- Revoke CREATE on public (or remove public from default search_path) and restrict who can create schemas/roles to reduce attack surface. Review database/user-level search_path settings. [4][7]
- Keep PostgreSQL and any vendor distributions patched for known search_path-related CVEs (e.g., CVE-2018-1058 family, CVE-2020-14349/14350, CVE-2023-2454/CVE-2023-41117) and audit existing SECURITY DEFINER functions after upgrades. [0][4][5][8]
- Periodically list and review SECURITY DEFINER functions and their owners/privileges (pg_proc / pg_roles). Monitor for unexpected writable schemas. [2][7]
Key references:
- PostgreSQL docs: Writing SECURITY DEFINER functions safely. [3]
- PostgreSQL search_path/security advisories & CVEs. [0][4][5][8]
- Practical guides and analyses (abuse examples, mitigation patterns). [2][6]
If you want, I can:
- scan your database (query list of SECURITY DEFINER functions + their SET search_path and owners) and show risky ones; or
- produce a one‑line CREATE FUNCTION template that implements the recommended settings.
🌐 Web query:
Supabase audit logging table RLS security recommendations
💡 Result:
Short checklist — RLS for Supabase audit-log tables
- Always enable RLS on any audit table. Create explicit policies before allowing anon/anon-like access. [1]. (supabase.com)
- Make audit tables read-restricted: allow SELECT only to admin/service principals (check role via profiles/roles table or auth.uid() existence). Deny UPDATE/DELETE from non-admins (or use policies that only allow INSERT from app roles). [2][3]. (supadmin.dev)
- Treat audit logs as write-once/append-only: disallow UPDATE/DELETE via RLS + DB-level constraints/triggers; store old/new JSON in the row on INSERT. [3]. (supabase.com)
- Use Postgres roles with BYPASSRLS only for trusted backend/service accounts; never expose service keys to clients. Rotate & scope service keys. [4]. (supabase.com)
- If policies need cross-table checks, use security-definer functions in a restricted schema (so policies don’t leak data) and test performance. [3]. (supabase.com)
- Don’t rely on RLS alone for client-side filtering — add explicit filters in queries to avoid large scans and reduce policy evaluation cost. Index common columns (user_id, created_at, table_name). [3]. (supabase.com)
- Auto-enforce RLS for tables created outside the dashboard (use an event trigger to enable RLS on CREATE TABLE) and alert on tables with RLS disabled. (Useful for CI/migrations.) [5]. (supaexplorer.com)
- Keep audit/log storage and retention policies separate from DB when needed (Supabase platform/auth audit options); restrict who can export/drain logs. Review retention and plan-level limitations. [6]. (supabase.com)
If you want, I can:
- generate example RLS policies (SELECT/INSERT/deny UPDATE/DELETE) for a concrete audit_log schema, or
- produce event-trigger SQL to auto-enable RLS on new tables.
Citations:
- 1: https://supabase.com/docs/guides/database/postgres/row-level-security?utm_source=openai
- 2: https://supadmin.dev/blog/supabase-database-design-patterns-admin-dashboards?utm_source=openai
- 3: https://supabase.com/docs/guides/troubleshooting/rls-performance-and-best-practices-Z5Jjwv?utm_source=openai
- 4: https://supabase.com/docs/learn/auth-deep-dive/auth-row-level-security?utm_source=openai
- 5: https://supabase.com/docs/guides/troubleshooting/rls-performance-and-best-practices-Z5Jjwv?utm_source=openai
- 6: https://supabase.com/docs/guides/troubleshooting/rls-performance-and-best-practices-Z5Jjwv?utm_source=openai
- 7: https://supaexplorer.com/dev-notes/supabase-security-2025-whats-new-and-how-to-stay-secure.html?utm_source=openai
- 8: https://supabase.com/docs/guides/security/platform-audit-logs?utm_source=openai
Pin search_path in all three SECURITY DEFINER functions and explicitly lock down audit_logs table access.
Three functions in this migration are marked SECURITY DEFINER without pinned search_path: log_security_event() (line 32), user_has_project_access() (line 47), and user_has_canvas_access() (line 63). This is a known vulnerability; unbound search_path in SECURITY DEFINER functions can be exploited via search_path hijacking attacks. Additionally, audit_logs table has no RLS enabled and no explicit privilege controls—it will inherit default PUBLIC grants, potentially exposing sensitive audit data. The ip_address column is defined but never populated by the helper function.
Add SET search_path = pg_catalog, public; to all three function definitions:
$$ LANGUAGE plpgsql SECURITY DEFINER
SET search_path = pg_catalog, public;For audit_logs, enable RLS and lock down access:
REVOKE ALL ON TABLE audit_logs FROM PUBLIC;
ALTER TABLE audit_logs ENABLE ROW LEVEL SECURITY;
CREATE POLICY "audit_logs_deny_all" ON audit_logs FOR ALL USING (false) WITH CHECK (false);
GRANT SELECT ON audit_logs TO service_role; -- adjust role as neededSchema-qualify the table reference in log_security_event(): change line 29 from INSERT INTO audit_logs to INSERT INTO public.audit_logs.
🤖 Prompt for AI Agents
In `@apps/backend/supabase/migrations/0006_rls.sql` around lines 1 - 33, The
SECURITY DEFINER functions log_security_event, user_has_project_access, and
user_has_canvas_access lack a pinned search_path and must be hardened: add "SET
search_path = pg_catalog, public;" to each function definition, schema-qualify
internal table references (e.g., change INSERT INTO audit_logs to INSERT INTO
public.audit_logs in log_security_event), and ensure the log_security_event
function populates ip_address (add an ip_address INET parameter or capture
inet_client_addr() and insert it into the audit record). Lock down the
audit_logs table by revoking public rights, enabling row level security,
creating a deny-all policy (FOR ALL USING (false) WITH CHECK (false)), and then
explicitly grant minimal required privileges (e.g., GRANT SELECT to the
service_role or appropriate role).
| DROP POLICY IF EXISTS "canvas_deny_select_policy" ON canvas; | ||
| -- Deny SELECT access to unauthenticated users | ||
| CREATE POLICY "canvas_deny_select_policy" ON canvas | ||
| FOR SELECT | ||
| TO public | ||
| USING (false); | ||
|
|
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
cd apps/backend/supabase/migrations && wc -l 0006_rls.sqlRepository: onlook-dev/onlook
Length of output: 77
🏁 Script executed:
head -n 150 apps/backend/supabase/migrations/0006_rls.sql | tail -n +100Repository: onlook-dev/onlook
Length of output: 2057
🏁 Script executed:
cat -n apps/backend/supabase/migrations/0006_rls.sql | grep -A 50 "canvas"Repository: onlook-dev/onlook
Length of output: 9750
🏁 Script executed:
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" \) -not -path "*/node_modules/*" -not -path "*/.next/*" | head -20Repository: onlook-dev/onlook
Length of output: 1127
🏁 Script executed:
rg "canvas" --type ts --type js -i -l | grep -E "(server|backend|api)" | head -10Repository: onlook-dev/onlook
Length of output: 540
🏁 Script executed:
rg "from.*canvas|select.*canvas" --type ts --type js -i -A 2 | head -50Repository: onlook-dev/onlook
Length of output: 3242
🏁 Script executed:
cat -n apps/web/client/src/server/api/routers/project/frame.ts | head -100Repository: onlook-dev/onlook
Length of output: 3096
🏁 Script executed:
cat -n apps/web/client/src/server/api/routers/project/project.ts | head -80Repository: onlook-dev/onlook
Length of output: 3487
🏁 Script executed:
rg "supabase|createClient" --type ts --type js apps/web/client/src/server -A 3 | head -60Repository: onlook-dev/onlook
Length of output: 2555
🏁 Script executed:
find . -type f -name "client.ts" -path "*/db/*" | head -5Repository: onlook-dev/onlook
Length of output: 88
🏁 Script executed:
cat -n packages/db/src/client.tsRepository: onlook-dev/onlook
Length of output: 767
🏁 Script executed:
rg "createAdminClient|createClient" --type ts apps/web/client/src/utils/supabase -A 10Repository: onlook-dev/onlook
Length of output: 6509
🏁 Script executed:
cat -n apps/web/client/src/server/api/trpc.ts | head -50Repository: onlook-dev/onlook
Length of output: 1944
Consider using TO anon instead of TO public for clarity; however, note that the current policy does not affect internal backend operations.
The concern about privileged roles breaking is not applicable here—the backend accesses the database directly via a postgres connection (not RLS-aware), completely bypassing this policy. However, since the comment states the intent is to "Deny SELECT access to unauthenticated users," TO anon is more precise than TO public (which includes all roles). Use TO anon for clearer policy intent and better defense-in-depth practices.
Suggested adjustment
CREATE POLICY "canvas_deny_select_policy" ON canvas
FOR SELECT
-TO public
+TO anon
USING (false);🤖 Prompt for AI Agents
In `@apps/backend/supabase/migrations/0006_rls.sql` around lines 114 - 120, The
policy "canvas_deny_select_policy" currently targets TO public which is broader
than the intent; update the CREATE POLICY for "canvas_deny_select_policy" to
target TO anon instead of TO public so the rule clearly applies only to
unauthenticated users and matches the stated intent of denying SELECT to anon
users while preserving backend (postgres) access.
| // Rate limiter: simple in-memory store (in production, use Redis or similar) | ||
| const requestCounts = new Map<string, number[]>(); | ||
| const RATE_LIMIT_WINDOW_MS = 60000; // 1 minute | ||
| const RATE_LIMIT_MAX_REQUESTS = 5; // 5 requests per minute | ||
|
|
||
| function isRateLimited(clientIp: string): boolean { | ||
| const now = Date.now(); | ||
| const windowStart = now - RATE_LIMIT_WINDOW_MS; | ||
|
|
||
| if (!requestCounts.has(clientIp)) { | ||
| requestCounts.set(clientIp, []); | ||
| } | ||
|
|
||
| const timestamps = requestCounts.get(clientIp)!; | ||
| // Remove old timestamps outside the window | ||
| const recentTimestamps = timestamps.filter(t => t > windowStart); | ||
|
|
||
| if (recentTimestamps.length >= RATE_LIMIT_MAX_REQUESTS) { | ||
| return true; | ||
| } | ||
|
|
||
| recentTimestamps.push(now); | ||
| requestCounts.set(clientIp, recentTimestamps); | ||
| return false; | ||
| } |
There was a problem hiding this comment.
In-memory rate limiter has limitations in serverless/multi-instance deployments.
The comment acknowledges this, but there are additional concerns:
-
Memory leak: Old IP entries are never fully removed from the Map. While timestamps are filtered per-request, IPs that stop making requests remain in the Map forever.
-
Serverless ineffectiveness: In Next.js on Vercel or similar, each invocation may be a fresh instance, making this rate limiter ineffective.
Add periodic cleanup to prevent memory growth
+// Cleanup old entries periodically (every 100 requests)
+let requestCounter = 0;
+const CLEANUP_INTERVAL = 100;
+
+function cleanupOldEntries(): void {
+ const now = Date.now();
+ const windowStart = now - RATE_LIMIT_WINDOW_MS;
+ for (const [ip, timestamps] of requestCounts.entries()) {
+ const recent = timestamps.filter(t => t > windowStart);
+ if (recent.length === 0) {
+ requestCounts.delete(ip);
+ } else {
+ requestCounts.set(ip, recent);
+ }
+ }
+}
+
function isRateLimited(clientIp: string): boolean {
+ if (++requestCounter % CLEANUP_INTERVAL === 0) {
+ cleanupOldEntries();
+ }
const now = Date.now();📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Rate limiter: simple in-memory store (in production, use Redis or similar) | |
| const requestCounts = new Map<string, number[]>(); | |
| const RATE_LIMIT_WINDOW_MS = 60000; // 1 minute | |
| const RATE_LIMIT_MAX_REQUESTS = 5; // 5 requests per minute | |
| function isRateLimited(clientIp: string): boolean { | |
| const now = Date.now(); | |
| const windowStart = now - RATE_LIMIT_WINDOW_MS; | |
| if (!requestCounts.has(clientIp)) { | |
| requestCounts.set(clientIp, []); | |
| } | |
| const timestamps = requestCounts.get(clientIp)!; | |
| // Remove old timestamps outside the window | |
| const recentTimestamps = timestamps.filter(t => t > windowStart); | |
| if (recentTimestamps.length >= RATE_LIMIT_MAX_REQUESTS) { | |
| return true; | |
| } | |
| recentTimestamps.push(now); | |
| requestCounts.set(clientIp, recentTimestamps); | |
| return false; | |
| } | |
| // Rate limiter: simple in-memory store (in production, use Redis or similar) | |
| const requestCounts = new Map<string, number[]>(); | |
| const RATE_LIMIT_WINDOW_MS = 60000; // 1 minute | |
| const RATE_LIMIT_MAX_REQUESTS = 5; // 5 requests per minute | |
| // Cleanup old entries periodically (every 100 requests) | |
| let requestCounter = 0; | |
| const CLEANUP_INTERVAL = 100; | |
| function cleanupOldEntries(): void { | |
| const now = Date.now(); | |
| const windowStart = now - RATE_LIMIT_WINDOW_MS; | |
| for (const [ip, timestamps] of requestCounts.entries()) { | |
| const recent = timestamps.filter(t => t > windowStart); | |
| if (recent.length === 0) { | |
| requestCounts.delete(ip); | |
| } else { | |
| requestCounts.set(ip, recent); | |
| } | |
| } | |
| } | |
| function isRateLimited(clientIp: string): boolean { | |
| if (++requestCounter % CLEANUP_INTERVAL === 0) { | |
| cleanupOldEntries(); | |
| } | |
| const now = Date.now(); | |
| const windowStart = now - RATE_LIMIT_WINDOW_MS; | |
| if (!requestCounts.has(clientIp)) { | |
| requestCounts.set(clientIp, []); | |
| } | |
| const timestamps = requestCounts.get(clientIp)!; | |
| // Remove old timestamps outside the window | |
| const recentTimestamps = timestamps.filter(t => t > windowStart); | |
| if (recentTimestamps.length >= RATE_LIMIT_MAX_REQUESTS) { | |
| return true; | |
| } | |
| recentTimestamps.push(now); | |
| requestCounts.set(clientIp, recentTimestamps); | |
| return false; | |
| } |
| private validateFileUploads(files: Record<string, Blob>): void { | ||
| // Validate number of files | ||
| const fileCount = Object.keys(files).length; | ||
| if (fileCount === 0) { | ||
| throw new Error('No files provided'); | ||
| } | ||
| if (fileCount > MAX_FILES) { | ||
| throw new Error(`Too many files. Maximum allowed: ${MAX_FILES}`); | ||
| } | ||
|
|
||
| // Validate individual files and calculate total size | ||
| let totalSize = 0; | ||
| for (const [filename, file] of Object.entries(files)) { | ||
| // Validate file size | ||
| if (file.size > MAX_FILE_SIZE) { | ||
| throw new Error(`File "${filename}" exceeds maximum size of ${MAX_FILE_SIZE / 1024 / 1024}MB`); | ||
| } | ||
|
|
||
| // Validate file type | ||
| if (file.type && !ALLOWED_FILE_TYPES.includes(file.type)) { | ||
| throw new Error(`File type "${file.type}" for "${filename}" is not allowed`); | ||
| } | ||
|
|
||
| totalSize += file.size; | ||
| } | ||
|
|
||
| // Validate total size | ||
| if (totalSize > MAX_TOTAL_SIZE) { | ||
| throw new Error(`Total file size exceeds maximum of ${MAX_TOTAL_SIZE / 1024 / 1024}MB`); | ||
| } | ||
| } |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the file and check its structure
cd apps/web/client/src/server/api/routers/domain/adapters && pwd && ls -la | head -20Repository: onlook-dev/onlook
Length of output: 307
🏁 Script executed:
#!/bin/bash
# Look for DeploymentFile interface/type definition
rg -n 'interface DeploymentFile|type DeploymentFile' --type=ts -A 5Repository: onlook-dev/onlook
Length of output: 429
🏁 Script executed:
#!/bin/bash
# Check the freestyle.ts file for the method signature and imports
cat -n apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts | head -80Repository: onlook-dev/onlook
Length of output: 3172
🏁 Script executed:
#!/bin/bash
# Find DeploymentRequest interface
rg -n 'interface DeploymentRequest|type DeploymentRequest' --type=ts -A 10Repository: onlook-dev/onlook
Length of output: 724
🏁 Script executed:
#!/bin/bash
# Also check the freestyle-sandboxes SDK to understand what deployWeb expects
rg -n 'deployWeb' --type=ts -B 2 -A 2Repository: onlook-dev/onlook
Length of output: 563
Critical type mismatch breaks entire file validation logic.
The method signature expects Record<string, Blob> but receives Record<string, DeploymentFile> (from DeploymentRequest.files). These types are incompatible:
DeploymentFilehascontent: stringand optionalencodingBlobhassizeandtypeproperties
As a result, the validation logic fails at runtime:
- Line 41:
file.sizeisundefinedforDeploymentFileobjects - Line 46:
file.typeisundefined, so the condition always passes, allowing all files through
This breaks both file size validation and file type allowlisting. The method must either accept and validate DeploymentFile objects directly (checking content length and encoding), or convert them to Blob objects before validation.
🤖 Prompt for AI Agents
In `@apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts` around
lines 27 - 57, validateFileUploads currently types files as Record<string, Blob>
but DeploymentRequest.files supplies Record<string, DeploymentFile>, so change
validateFileUploads to accept Record<string, DeploymentFile> (or add an
overload) and perform validation on DeploymentFile fields: compute byte size
from file.content (if file.encoding === 'base64' compute decoded byte length via
base64 length formula or decode to bytes; otherwise use UTF-8 byte length),
enforce MAX_FILE_SIZE and MAX_TOTAL_SIZE using those computed sizes, and
validate file type/allowed list by either using a mime field on DeploymentFile
or by mapping known encodings to allowed types; alternatively, if you prefer
keeping Blob logic, convert each DeploymentFile to a Blob (new
Blob([decodedBytes], { type: inferredMime })) before running the existing
checks. Ensure you update the function signature and references to
validateFileUploads accordingly.
Security Fix
This PR addresses a CRITICAL severity vulnerability detected by our security scanner.
Security Impact Assessment
Evidence: Proof-of-Concept Exploitation Demo
This demonstration shows how the vulnerability could be exploited to help you understand its severity and prioritize remediation.
How This Vulnerability Can Be Exploited
The misconfigured Row-Level Security (RLS) policy in the Supabase migration allows any user, including unauthenticated ones, to perform SELECT operations on the 'canvas' table with 'USING (true)', effectively making all data in this table publicly readable. In the context of the Onlook repository, which appears to be a visual UI development tool where the 'canvas' table likely stores user-created project designs, components, and potentially sensitive code snippets or metadata, an attacker could exploit this by directly querying the Supabase database via its client library or API endpoints without needing authentication. This bypasses intended access controls, enabling data exfiltration from the 'canvas' table, which is central to the application's functionality.
The misconfigured Row-Level Security (RLS) policy in the Supabase migration allows any user, including unauthenticated ones, to perform SELECT operations on the 'canvas' table with 'USING (true)', effectively making all data in this table publicly readable. In the context of the Onlook repository, which appears to be a visual UI development tool where the 'canvas' table likely stores user-created project designs, components, and potentially sensitive code snippets or metadata, an attacker could exploit this by directly querying the Supabase database via its client library or API endpoints without needing authentication. This bypasses intended access controls, enabling data exfiltration from the 'canvas' table, which is central to the application's functionality.
Exploitation Impact Assessment
Vulnerability Details
V-001apps/backend/supabase/migrations/0006_rls.sqlChanges Made
This automated fix addresses the vulnerability by applying security best practices.
Files Modified
apps/backend/supabase/migrations/0006_rls.sqlapps/web/client/src/app/api/email-capture/route.tsapps/web/client/src/server/api/routers/domain/adapters/freestyle.tsVerification
This fix has been automatically verified through:
🤖 This PR was automatically generated.
Important
Fixes critical RLS vulnerability and enhances security with audit logging, input sanitization, rate limiting, and file upload validation.
0006_rls.sqlby replacingUSING (true)withUSING (user_id = auth.uid())forcanvastable.0006_rls.sqlfor security events.canvastable.route.tsto prevent XSS.route.tsto limit requests to 5 per minute.route.ts.freestyle.tsto check file size, type, and total size.This description was created by
for 3f6cb03. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Bug Fixes
✏️ Tip: You can customize this high-level summary in your review settings.