Potential fix for code scanning alert no. 9: Client-side cross-site scripting #14

EthanThePhoenix38 merged 1 commit into main
Conversation
…cripting Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Pull request overview
This pull request addresses a code scanning alert for client-side cross-site scripting (XSS) by implementing stricter URL validation for article loading. The fix adds a new validation function that restricts article URLs to same-host only, replacing the previous validation that only checked for HTTP/HTTPS protocols.
Key changes:
- Introduces an `isAllowedArticleUrl()` function that validates URLs against an allowlist of hostnames
- Restricts article URLs to the same hostname as the reader page (`window.location.hostname`)
- Updates the article loading condition to use the new stricter validation
```js
// If article parameters exist and URL is valid, show article view
if (articleUrl && isValidHttpUrl(articleUrl)) {

// Restrict which URLs are allowed to be used for article loading/navigation
function isAllowedArticleUrl(string) {
```
The parameter name `string` is not descriptive. Consider a more meaningful name like `url` or `urlString` to improve readability and stay consistent with the existing `isValidHttpUrl` function.
```js
// Allow only same-host articles by default. Extend this list if needed.
const allowedHosts = [window.location.hostname];
return allowedHosts.includes(url.hostname);
```
Restricting article URLs to same-host only (`window.location.hostname`) will break the core functionality of this reader page, which appears designed to load articles from external sources via URL parameters. This is evidenced by the iframe loading external content (line 511), the external link buttons (lines 503-504), and the general purpose of a reader page. The security fix is too restrictive and will block legitimate use cases. Consider either: 1) implementing a configurable allowlist of trusted domains, 2) using a more permissive validation that still prevents XSS attacks (e.g., ensuring proper URL structure and protocol while allowing external domains), or 3) documenting that this change fundamentally alters the application's functionality.
Suggested change:

```diff
-// Allow only same-host articles by default. Extend this list if needed.
-const allowedHosts = [window.location.hostname];
-return allowedHosts.includes(url.hostname);
+// Optional: define window.ALLOWED_ARTICLE_HOSTS = ['example.com', 'another.com'] elsewhere
+// If an allowlist is provided and non-empty, enforce it; otherwise allow any valid HTTP(S) URL.
+const globalAllowlist = Array.isArray(window.ALLOWED_ARTICLE_HOSTS) ? window.ALLOWED_ARTICLE_HOSTS : null;
+if (globalAllowlist && globalAllowlist.length > 0) {
+  const hostname = url.hostname.toLowerCase();
+  return globalAllowlist
+    .map(function (h) { return String(h).toLowerCase(); })
+    .includes(hostname);
+}
+return true;
```
```js
try {
  const url = new URL(string);
  // Allow only same-host articles by default. Extend this list if needed.
  const allowedHosts = [window.location.hostname];
  return allowedHosts.includes(url.hostname);
} catch (_) {
  return false;
}
```
The URL is being parsed twice - once in isValidHttpUrl (line 452) and again in isAllowedArticleUrl (line 465). While this is safe, it's inefficient. Since isAllowedArticleUrl already calls isValidHttpUrl, and then immediately parses the URL again in a try-catch block, the second try-catch is redundant. The URL has already been validated as parseable by isValidHttpUrl, so the try-catch at line 464 will never catch an error from the URL constructor. Consider restructuring to parse the URL once and perform all validations on that parsed object.
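The restructuring this comment asks for could be sketched as follows. This is a minimal illustration, not the actual `reader.html` code: the allowlist is passed in as a parameter so the helper can be exercised outside a browser, and the protocol check previously done by `isValidHttpUrl` is folded into the same function so the URL is parsed exactly once.

```javascript
// Sketch: parse the URL once and run every validation on the parsed object.
// `allowedHosts` is injected here (illustrative); in the page it could be
// [window.location.hostname] or a configured allowlist.
function isAllowedArticleUrl(urlString, allowedHosts) {
  let url;
  try {
    url = new URL(urlString); // single parse; malformed input is rejected here
  } catch (_) {
    return false;
  }
  // Protocol check formerly done by isValidHttpUrl, now on the same object.
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    return false;
  }
  return allowedHosts.includes(url.hostname);
}
```

With this shape there is no redundant second `try`/`catch`, because every check operates on the one parsed `URL` object.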
Potential fix for https://github.com/ThePhoenixAgency/AI-Pulse/security/code-scanning/9
In general, to fix this kind of issue you must not use raw user-controlled URLs directly in navigation-related attributes (`href`, `src`) without strong validation. Beyond checking that the scheme is `http` or `https`, you should restrict the destination to a safe allowlist (known domains or patterns) or otherwise strictly validate its structure and origin before using it. This avoids malicious links being injected simply by sharing a crafted URL to your reader page.

For this specific code, the minimal change that improves security without altering existing behavior for valid uses is:
- Introduce an `isAllowedArticleUrl` function that:
  - performs the existing `isValidHttpUrl` check;
  - checks the parsed URL's hostname against an allowlist of allowed hosts;
  - returns `false` if the URL does not meet these constraints.
- Use `isAllowedArticleUrl(articleUrl)` in place of `isValidHttpUrl(articleUrl)` in the `if` condition on line 460. If the URL is not allowed, skip setting `href`/`src` and do not open the article view.
- Keep using `DOMPurify` for the other parameters as already done; no change is needed there.

Since we must not assume project-wide context, we'll implement a conservative allowlist of hostnames using `window.location.hostname` as the only allowed host. This ensures the reader only opens content from the same host by default, which is a safe and common pattern. If your real app needs more domains, you can extend the allowlist array later without further code restructuring.

All changes are confined to `reader.html` in the shown script section: we will add the new helper function just after `isValidHttpUrl` and adjust the condition that gates the use of `articleUrl`.

Suggested fixes powered by Copilot Autofix. Review carefully before merging.
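Put together, the described fix can be sketched as below. This is an illustration of the pattern rather than the exact `reader.html` code: the body of `isValidHttpUrl` is inferred from the description above (it "only checked for HTTP/HTTPS protocols"), and a fallback host is used so the sketch runs outside a browser, where `window` is undefined.

```javascript
// Existing helper (inferred): accepts only http(s) URLs that parse cleanly.
function isValidHttpUrl(urlString) {
  try {
    const url = new URL(urlString);
    return url.protocol === 'http:' || url.protocol === 'https:';
  } catch (_) {
    return false;
  }
}

// New helper, added just after isValidHttpUrl: reuses the protocol check,
// then restricts the hostname to a conservative allowlist.
function isAllowedArticleUrl(urlString) {
  if (!isValidHttpUrl(urlString)) return false;
  const url = new URL(urlString);
  // In the browser the only allowed host is the reader's own;
  // the fallback ('reader.example') just keeps this sketch runnable elsewhere.
  const sameHost =
    typeof window !== 'undefined' ? window.location.hostname : 'reader.example';
  // Allow only same-host articles by default. Extend this list if needed.
  const allowedHosts = [sameHost];
  return allowedHosts.includes(url.hostname);
}

// Gating condition (around line 460 of reader.html), using the stricter check:
//   if (articleUrl && isAllowedArticleUrl(articleUrl)) { /* show article view */ }
```

If the URL fails the check, the page simply never assigns it to `href`/`src`, so a crafted link shared with a victim cannot steer the reader to an attacker-controlled destination.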