Add Generative Graphs UI and Data Analysis Tool #464
ngoiyaeric wants to merge 50 commits into main from
Conversation
Restored the branch to the expected head at commit 488b47c. This recovers several missing features and architectural improvements:
- Integrated HistoryToggleProvider and HistorySidebar.
- Integrated UsageToggleProvider and the new UsageView component.
- Removed legacy usage-sidebar.tsx.
- Restored pricing and feature updates in PurchaseCreditsPopup.
- Fixed layout structure in app/layout.tsx to support these new global state providers.

Resolved previous merge conflicts and incorrect force-push state.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…c380 Restore Branch Head to Commit 488b47c
- Reset branch HEAD to historical recovery point 488b47c.
- Applied latest code fixes from orphan commit 166aee9, including Mapbox rendering optimizations and GeoJSON streaming.
- Improved type safety in app/actions.tsx by removing 'as any' casts and properly typing message arrays.
- Refined getModel utility to support vision-aware model fallback.
- Documented non-blocking background task pattern in server actions.
- Updated feature/billing-integration branch with the restored state.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Reset branch HEAD to historical recovery point 488b47c.
- Applied latest code fixes from orphan commit 166aee9.
- Resolved TypeScript build error in app/actions.tsx by casting GeoJSON data to FeatureCollection.
- Improved type safety for AIState message filtering and content handling.
- Implemented background processing for resolution search with immediate UI streaming.
- Optimized chat history fetching to trigger only when the sidebar is open.
- Ensured Mapbox style loading is robust against re-renders.
- Cleaned up dead code and improved documentation in server actions.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Reset branch HEAD to historical recovery point 488b47c.
- Applied latest code fixes from orphan commit 166aee9.
- Resolved TypeScript build error in app/actions.tsx by casting GeoJSON data to FeatureCollection.
- Improved type safety for AIState message filtering and content handling.
- Implemented background processing for resolution search with immediate UI streaming.
- Re-enabled auto-opening of the pricing popup in components/header.tsx.
- Optimized chat history fetching to trigger only when the sidebar is open.
- Ensured Mapbox style loading is robust against re-renders.
- Cleaned up dead code and improved documentation in server actions.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Restored historical state from 488b47c and merged latest changes.
- Implemented streaming for resolution search in lib/agents/resolution-search.tsx and app/actions.tsx to improve response time.
- Re-enabled auto-opening pricing popup in components/header.tsx.
- Resolved all TypeScript build errors in app/actions.tsx.
- Restored Grok model support for vision tasks in lib/utils/index.ts.
- Optimized chat history loading in components/sidebar/chat-history-client.tsx.
- Improved Mapbox style loading robustness in components/map/mapbox-map.tsx.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Synchronized branch with origin/main, prioritizing main's code structure and latest features.
- Bridged the history with historical recovery point 488b47c.
- Implemented streaming for resolution search in lib/agents/resolution-search.tsx and app/actions.tsx to resolve performance issues.
- Restored the auto-opening Pricing Popup and Usage View in the Header component.
- Integrated the Timezone Clock and time context into the restored resolution search logic.
- Resolved TypeScript build errors with proper type casting and fixed a missing 'use client' directive in components/history.tsx.
- Ensured all required providers (History, Usage, etc.) are correctly wrapped in the root layout.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Synchronized branch with origin/main, prioritizing main's code structure and latest features.
- Bridged the history with historical recovery point 488b47c.
- Integrated UsageView and billing UI into the Chat component, toggled by the tent tree icon.
- Implemented streaming for resolution search in lib/agents/resolution-search.tsx and app/actions.tsx for better performance.
- Restored the auto-opening Pricing Popup in the Header component.
- Improved type safety across server actions and Mapbox components.
- Ensured mutual exclusion between Settings, Usage, and Map views.
- Fixed a missing 'use client' directive in components/history.tsx.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Integrated UsageView into the main Chat component for both mobile and desktop.
- Ensured mutual exclusion between Settings, Usage, and Map views in the UI panel.
- Updated ConditionalLottie to hide the loading animation when the Usage View is open.
- Synchronized with origin/main while prioritizing its code structure.
- Maintained historical recovery and performance optimizations for resolution search.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Synchronized with origin/main, prioritizing main branch code and features.
- Fully restored historical context and missing changes from commit 488b47c.
- Integrated Usage and Billing UI (UsageView) into the Chat component.
- Implemented streaming for resolution search analysis to improve perceived performance.
- Re-enabled auto-opening pricing popup in components/header.tsx.
- Refined ConditionalLottie visibility to avoid overlaps with the Usage UI.
- Ensured mutual exclusion between Settings, Usage, and Map views in the side panel.
- Improved type safety across server actions and Mapbox components.
- Resolved build failures related to missing client directives and type mismatches.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

…ability
- Recovered historical state from commit 488b47c and synchronized with main.
- Optimized resolution search by refactoring to stream partial summaries.
- Fixed Mapbox memory leaks by ensuring all event listeners are removed on cleanup.
- Improved StreamableValue stability by initializing with default values.
- Integrated Usage View with mutual exclusion logic and Lottie player visibility fixes.
- Refined model selection for Grok vision and Gemini 1.5 Pro.
- Integrated timezone-aware analysis using tz-lookup.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Fixed Type error in `app/actions.tsx` by casting `content` to `string` in `createStreamableValue`.
- Addressed ESLint warnings in `components/map/mapbox-map.tsx` and `components/chat-panel.tsx` by adding missing dependencies to `useEffect` and `useCallback` hooks.
- Ensured `relatedQueries` streamable value is initialized with an empty state for stability.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>

- Integrated Recharts for data visualization
- Added dataAnalysis tool with Zod schema for chart configuration
- Extended chat file uploads to support CSV and JSON files
- Updated researcher agent with guidelines for data visualization
- Integrated MapQueryHandler for geospatial data in analysis results
- Ensured real-time UI streaming for chart components

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Walkthrough
This PR introduces a data analysis and visualization feature with a new GraphSection component for rendering charts, a dataAnalysisTool for agents, and simplified resolutionSearch signatures, while adding support for CSV and JSON file inputs.
Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant App as App/Actions
    participant DataAnalysisTool
    participant UIStream
    participant GraphSection
    participant Recharts
    User->>App: Submit data for analysis
    App->>DataAnalysisTool: Execute with data
    activate DataAnalysisTool
    DataAnalysisTool->>UIStream: Create streamable value
    DataAnalysisTool->>UIStream: Append GraphSection component
    DataAnalysisTool->>GraphSection: Pass streamable result
    GraphSection->>GraphSection: Detect and resolve data
    GraphSection->>Recharts: Render chart (bar/line/pie/area/scatter)
    Recharts->>GraphSection: Display visualization
    DataAnalysisTool->>UIStream: Mark stream as done
    deactivate DataAnalysisTool
    UIStream-->>User: Display chart in UI
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
PR Compliance Guide 🔍
Below is a summary of compliance checks for this PR:
Compliance status legend:
- 🟢 - Fully Compliant
- 🟡 - Partially Compliant
- 🔴 - Not Compliant
- ⚪ - Requires Further Human Verification
- 🏷️ - Compliance label
PR Code Suggestions ✨
Explore these optional code suggestions:
Main blockers are in components/graph-section.tsx: conditional hook usage (useStreamableValue) is a Rules of Hooks violation and can break rendering, and unguarded JSON.parse can crash the UI. There are also correctness gaps in chart rendering (pie assumes series[0], scatter duplicates identical series), and the tool’s “streaming” path currently adds complexity without actual incremental streaming. Consider tightening the dataAnalysisSchema to reduce runtime failures from malformed model output.
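As a hedged sketch (not the PR's code), the Rules-of-Hooks concern can be addressed by keeping the `useStreamableValue` call unconditional at the top of `GraphSection` and branching only on its output:

```tsx
// Sketch only: assumes GraphSection keeps its current `result` prop and the ai/rsc hook.
// The hook is called on every render; only its argument and the later branching vary.
const streamable = isStreamable ? (result as StreamableValue<DataAnalysisResult>) : undefined
const [streamData, error, pending] = useStreamableValue(streamable)

// Resolve the effective data after the hook call, never around it.
const data = isStreamable ? streamData : (result as DataAnalysisResult)
```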
Additional notes (1)
- Performance
app/actions.tsx:298-308
The file ingestion now treats `application/json` uploads as plain text and prepends the entire file content into the model prompt. Large JSON/CSV files can easily blow prompt limits and degrade model performance or cause failures.
At minimum, implement size caps and/or truncation/summarization before injecting into `messageParts`, and consider attaching metadata (filename, row count) rather than full raw content by default.
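For illustration only, a minimal sketch of such a cap (the constant and helper names are hypothetical, not from this PR):

```ts
// Hypothetical helper: cap text uploads before injecting them into the prompt.
const MAX_UPLOAD_CHARS = 20_000 // illustrative budget; tune against the model's context window

function toPromptSnippet(fileName: string, textContent: string): string {
  const truncated =
    textContent.length > MAX_UPLOAD_CHARS
      ? `${textContent.slice(0, MAX_UPLOAD_CHARS)}\n…[truncated ${textContent.length - MAX_UPLOAD_CHARS} characters]`
      : textContent
  // Prepend lightweight metadata so the model knows what it is reading.
  return `File "${fileName}" (${textContent.length} characters total):\n${truncated}`
}
```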
Summary of changes
What this PR changes
- Adds a new `dataAnalysis` tool (`lib/agents/tools/data-analysis.tsx`) backed by a Zod schema (`lib/schema/data-analysis.tsx`) to generate structured chart specs.
- Introduces `GraphSection` UI (`components/graph-section.tsx`) using Recharts to render bar/line/area/pie/scatter charts, including support for streamed tool output.
- Extends file uploads to accept `text/csv` and `application/json` in the chat UI (`components/chat-panel.tsx`) and server action parsing (`app/actions.tsx`).
- Updates AI → UI mapping (`app/actions.tsx`) to render `GraphSection` for `dataAnalysis` tool results and trigger map updates when geospatial points are present.
- Updates researcher agent instructions (`lib/agents/researcher.tsx`) to include a decision path for `dataAnalysis`.
- Adds dependency: `recharts` in `package.json`.
- Adds a new exported type `DataAnalysisResult` (`lib/types/index.ts`).
components/graph-section.tsx (Outdated)

```tsx
const chartData: DataAnalysisResult | undefined = typeof data === 'string'
  ? JSON.parse(data)
  : data as DataAnalysisResult
```
JSON.parse(data) is performed without any error handling. If the AI/tool ever emits a partial string (streaming), malformed JSON, or a plain-text error message, this will throw during render and crash the client subtree.
This is especially risky given the component explicitly supports streamed and string inputs.
Suggestion
Guard JSON.parse with a try/catch and render a safe fallback UI (or return null) if parsing fails.
```tsx
let chartData: DataAnalysisResult | undefined
if (typeof data === 'string') {
  try {
    chartData = JSON.parse(data)
  } catch {
    chartData = undefined
  }
} else {
  chartData = data as DataAnalysisResult
}
```

Optionally show a small CardDescription like "Unable to render chart: invalid data" for debuggability.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.
components/graph-section.tsx (Outdated)

```tsx
const [streamData, error, pending] = useStreamableValue(isStreamable ? (result as any) : undefined)

const data = isStreamable ? streamData : result;

const chartData: DataAnalysisResult | undefined = typeof data === 'string'
  ? JSON.parse(data)
  : data as DataAnalysisResult

if (pending && !chartData) {
  return (
    <Section className="py-2">
      <div className="animate-pulse flex space-y-4 flex-col">
        <div className="h-4 bg-muted rounded w-3/4"></div>
        <div className="h-64 bg-muted rounded"></div>
      </div>
    </Section>
  )
}

if (!chartData) return null
```
The error value returned by useStreamableValue is computed but ignored. If the stream errors, the user will just see an empty section or partial UI with no explanation.
Also, pending && !chartData only handles the initial loading state; if the stream errors, you should surface that explicitly.
Suggestion
Handle the error case explicitly and render an error state.
```tsx
if (error) {
  return (
    <Section className="py-2">
      <Card>
        <CardHeader className="pb-2">
          <CardTitle className="text-lg font-medium">Chart unavailable</CardTitle>
          <CardDescription>{String(error)}</CardDescription>
        </CardHeader>
      </Card>
    </Section>
  )
}
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this error handling.
components/graph-section.tsx (Outdated)

```tsx
case 'pie':
  return (
    <PieChart>
      <Pie
        data={plotData}
        dataKey={config.series[0].key}
        nameKey={config.xAxisKey}
        cx="50%"
        cy="50%"
        outerRadius={80}
        label
      >
        {plotData.map((entry, index) => (
          <Cell key={`cell-${index}`} fill={COLORS[index % COLORS.length]} />
        ))}
      </Pie>
      <Tooltip />
      <Legend />
    </PieChart>
```
pie rendering assumes config.series[0] always exists. If the tool returns an empty series array, this will throw and crash rendering.
Even if the schema intends series to exist, model output can still be inconsistent at runtime.
Suggestion
Add a defensive guard for series.length === 0 and render a fallback message.
```tsx
case 'pie': {
  const first = config.series[0]
  if (!first) return <div>Missing series configuration for pie chart.</div>
  // ...use `first.key`
}
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this guard.
components/graph-section.tsx (Outdated)

```tsx
case 'scatter':
  return (
    <ScatterChart>
      <CartesianGrid strokeDasharray="3 3" />
      <XAxis type="number" dataKey={config.xAxisKey} name={config.xAxisKey} />
      <YAxis type="number" dataKey={config.yAxisKey} name={config.yAxisKey} />
      <Tooltip cursor={{ strokeDasharray: '3 3' }} />
      <Legend />
      {config.series.map((s, i) => (
        <Scatter key={s.key} name={s.name} data={plotData} fill={s.color || COLORS[i % COLORS.length]} />
      ))}
    </ScatterChart>
  )
```
scatter rendering currently maps config.series but each <Scatter ... data={plotData} /> uses the same plotData and does not use s.key at all. This likely renders duplicated scatter series with identical points, and the legend will be misleading.
If the intention is to support multiple series, each series should typically point to a different dataKey / shape, or you should render a single <Scatter> and use config for axes only.
Suggestion
Either:

1) Render a single scatter series:

```tsx
case 'scatter':
  return (
    <ScatterChart>
      ...
      <Scatter name={config.series[0]?.name ?? 'Series'} data={plotData} fill={...} />
    </ScatterChart>
  )
```

or 2) If you need multi-series scatter, change the schema/transform so each series references a distinct dataset (e.g. `series[i].dataKey` or separate arrays).
Reply with "@CharlieHelps yes please" if you'd like me to add a commit implementing option (1) with basic guards.
```tsx
export const dataAnalysisTool = ({ uiStream }: ToolProps) => ({
  description: 'Analyze data and generate a structured representation for visualization in a graph or chart. Use this tool when the user asks for a chart, graph, or data visualization, or when you have structured data (like from a CSV or search results) that would be better understood visually.',
  parameters: dataAnalysisSchema,
  execute: async (result: DataAnalysisResult) => {
    const streamResults = createStreamableValue<DataAnalysisResult>()

    uiStream.append(<GraphSection result={streamResults.value} />)

    streamResults.done(result)

    return result
  }
```
dataAnalysisTool.execute creates a streamable value, appends <GraphSection result={streamResults.value} />, and then immediately calls streamResults.done(result). This doesn’t actually stream incremental updates; it just publishes a single final payload.
If the intent is real-time rendering, this should either:
- append a placeholder and update it over time, or
- stream partial results (e.g., `streamResults.update(...)` as analysis progresses).
As-is, the extra stream plumbing adds complexity without benefit.
Suggestion
If you don’t plan to stream partial chart specs, simplify:
- remove the `createStreamableValue` usage
- append `<GraphSection result={result} />` directly

```tsx
execute: async (result: DataAnalysisResult) => {
  uiStream.append(<GraphSection result={result} />)
  return result
}
```

If you do plan to stream, change the tool to progressively update the streamable value with intermediate states.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit with the simplification.
```tsx
case 'dataAnalysis':
  return {
    id,
    component: (
      <>
        <GraphSection result={searchResults.value} />
        {toolOutput.geospatial && toolOutput.geospatial.length > 0 && (
          <MapQueryHandler
            toolOutput={{
              type: 'MAP_QUERY_TRIGGER',
              originalUserInput: JSON.stringify(toolOutput.geospatial[0]),
              timestamp: new Date().toISOString(),
              mcp_response: {
                location: {
                  latitude: toolOutput.geospatial[0].latitude,
                  longitude: toolOutput.geospatial[0].longitude,
                  place_name: toolOutput.geospatial[0].label
                }
              }
            }}
          />
        )}
```
This constructs a MAP_QUERY_TRIGGER with originalUserInput: JSON.stringify(toolOutput.geospatial[0]) and uses only the first geospatial point. If multiple points are returned, the map will ignore all but one.
Also, this code is doing a lot of ad-hoc shaping of a map tool payload inside UI-state mapping logic, which makes the contract fragile and hard to evolve.
Suggestion
At minimum, consider passing the entire geospatial array through and letting MapQueryHandler decide what to do, or explicitly document that only the first point is supported.
If MapQueryHandler expects a single location, add a small helper function (in this file) to build the payload so the shape is centralized and testable.
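A hedged sketch of what such a helper could look like (the name `buildMapTriggerPayload` is hypothetical):

```tsx
// Hypothetical helper: centralizes the MAP_QUERY_TRIGGER shape for a single geospatial point.
type GeoPoint = { latitude: number; longitude: number; label?: string }

function buildMapTriggerPayload(point: GeoPoint) {
  return {
    type: 'MAP_QUERY_TRIGGER' as const,
    originalUserInput: JSON.stringify(point),
    timestamp: new Date().toISOString(),
    mcp_response: {
      location: {
        latitude: point.latitude,
        longitude: point.longitude,
        place_name: point.label
      }
    }
  }
}

// Usage in the UI-state mapping (first point only, as today):
// <MapQueryHandler toolOutput={buildMapTriggerPayload(toolOutput.geospatial[0])} />
```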
Reply with "@CharlieHelps yes please" if you'd like me to add a commit that factors payload creation into a helper and (optionally) uses the first item explicitly with a comment.
```ts
export const dataAnalysisSchema = z.object({
  title: z.string().describe('The title of the chart'),
  description: z.string().optional().describe('A brief description of the chart'),
  chartType: z.enum(['bar', 'line', 'pie', 'area', 'scatter']).describe('The type of chart to render'),
  data: z.array(z.record(z.any())).describe('The data points for the chart'),
  config: z.object({
    xAxisKey: z.string().describe('The key in the data object to use for the X axis'),
    yAxisKey: z.string().optional().describe('The key in the data object to use for the Y axis (for scatter charts)'),
    series: z.array(z.object({
      key: z.string().describe('The key in the data object for this series'),
      name: z.string().describe('The display name for this series'),
      color: z.string().optional().describe('Optional hex color for this series')
    })).describe('The series to be plotted')
  }).describe('Configuration for the chart layout'),
  geospatial: z.array(z.object({
    latitude: z.number(),
    longitude: z.number(),
    label: z.string().optional()
  })).optional().describe('Optional geospatial data points to be displayed on a map')
})
```
DataAnalysisResult uses any[] and z.record(z.any()) in the schema. This makes the tool output effectively untyped and pushes failures into runtime rendering (e.g., missing keys for axes/series).
Given this is a core integration point between the model and UI, it’s worth tightening validation at least around required keys (xAxisKey, series keys existing in data, etc.).
Suggestion
Improve runtime robustness by refining the schema:
- use `z.record(z.unknown())` instead of `z.any()` to avoid implicit `any`
- add `min(1)` on `data` and `config.series`
- add a `superRefine` that verifies `xAxisKey` and each `series.key` appear in at least one data row

```ts
data: z.array(z.record(z.unknown())).min(1),
config: z.object({
  xAxisKey: z.string(),
  series: z.array(...).min(1),
}).superRefine((cfg, ctx) => { /* validate keys */ })
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that tightens the schema with min(1) and z.unknown() (and optionally key checks).
```ts
export const dataAnalysisSchema = z.object({
  title: z.string().describe('The title of the chart'),
  description: z.string().optional().describe('A brief description of the chart'),
  chartType: z.enum(['bar', 'line', 'pie', 'area', 'scatter']).describe('The type of chart to render'),
  data: z.array(z.record(z.any())).describe('The data points for the chart'),
  config: z.object({
    xAxisKey: z.string().describe('The key in the data object to use for the X axis'),
    yAxisKey: z.string().optional().describe('The key in the data object to use for the Y axis (for scatter charts)'),
    series: z.array(z.object({
      key: z.string().describe('The key in the data object for this series'),
      name: z.string().describe('The display name for this series'),
      color: z.string().optional().describe('Optional hex color for this series')
    })).describe('The series to be plotted')
  }).describe('Configuration for the chart layout'),
  geospatial: z.array(z.object({
    latitude: z.number(),
    longitude: z.number(),
    label: z.string().optional()
  })).optional().describe('Optional geospatial data points to be displayed on a map')
})
```
dataAnalysisSchema uses z.array(z.record(z.any())) for data. This forces the model/tool pipeline to accept arbitrarily-shaped values and makes it easy to return non-serializable objects (Dates, nested functions, etc.) or huge payloads.
Since the UI expects Recharts-friendly JSON, you should constrain this to JSON-serializable primitives (e.g., string | number | boolean | null) and objects/arrays thereof, or at least z.record(z.union([z.string(), z.number(), z.boolean(), z.null()])) for flat chart rows. This reduces runtime render errors and payload bloat.
Suggestion
Tighten the data schema to JSON-serializable and ideally flat row records.
Example (flat rows):
```ts
const cell = z.union([z.string(), z.number(), z.boolean(), z.null()])

export const dataAnalysisSchema = z.object({
  // ...
  data: z.array(z.record(cell)).describe('Flat row records for the chart')
  // ...
})
```

If you need nested structures, model them explicitly rather than `any`.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
```ts
export type DataAnalysisResult = {
  title: string;
  description?: string;
  chartType: 'bar' | 'line' | 'pie' | 'area' | 'scatter';
  data: any[];
  config: {
    xAxisKey: string;
    yAxisKey?: string;
    series: {
      key: string;
      name: string;
      color?: string;
    }[];
  };
  geospatial?: {
    latitude: number;
    longitude: number;
    label?: string;
  }[];
};
```
DataAnalysisResult in lib/types/index.ts uses any[] for data. This is type-valid but defeats type safety across the rendering surface and makes it easy to accidentally pass non-JSON/non-Recharts-friendly values into the chart components.
Given you already have a Zod schema for the tool, consider deriving the TS type from Zod (z.infer<typeof dataAnalysisSchema>) or at least using a safer record type (Record<string, string | number | boolean | null>).
Suggestion
Prefer deriving DataAnalysisResult from the Zod schema to keep runtime/compile-time aligned.
Example:
```ts
// lib/types/index.ts
import type { z } from 'zod'
import { dataAnalysisSchema } from '@/lib/schema/data-analysis'

export type DataAnalysisResult = z.infer<typeof dataAnalysisSchema>
```

Or if you want to avoid importing schema into types, at least replace `any[]` with `Array<Record<string, string | number | boolean | null>>`.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/agents/researcher.tsx (1)
- 24-75: ⚠️ Potential issue | 🟠 Major — Resolve conflict between "Geospatial Priority" and the new dataAnalysis step.
  The prompt says any location query must use `geospatialQueryTool`, but the decision flow now prioritizes `dataAnalysis` for charts. For location-based charts, the model may pick the wrong tool. Clarify precedence (e.g., "chart requests → dataAnalysis; maps/directions/places → geospatialQueryTool") to keep behavior deterministic.
🤖 Fix all issues with AI agents
In `@components/graph-section.tsx`:
- Around line 35-60: GraphSection currently ignores the error returned from
useStreamableValue and calls JSON.parse without try/catch; update the component
to first check the error from useStreamableValue (variable error) and render an
appropriate fallback (null or an error UI) when error exists, then wrap the
JSON.parse logic that produces chartData in a try/catch to safely handle
malformed strings and set chartData to undefined (or an error state) on parse
failure, and ensure pending logic still uses the safe chartData variable;
reference symbols: GraphSection, useStreamableValue, streamData, error, pending,
chartData, JSON.parse.
In `@lib/agents/tools/data-analysis.tsx`:
- Around line 10-18: The execute function should be made resilient by wrapping
the UI and stream operations in a try-catch: inside execute (which uses
createStreamableValue(), uiStream.append(<GraphSection ...>) and
streamResults.done(result)), wrap the uiStream.append and streamResults.done
calls in a try block and in the catch log the error (e.g., processLogger.error
or console.error) and call streamResults.error or streamResults.done with a safe
fallback to ensure the stream is completed; finally return the original result
(or a safe fallback) so the caller does not receive an unhandled exception.
In `@lib/schema/data-analysis.tsx`:
- Around line 4-23: Update dataAnalysisSchema to enforce non-empty series and
conditional yAxisKey rules: ensure config.series uses z.array(...).min(1) so
charts that access config.series[0].key (e.g., pie handling) cannot be empty,
and make chartType a discriminated union or add a z.refine on dataAnalysisSchema
that when chartType === 'scatter' requires config.yAxisKey to be
present/non-empty; reference dataAnalysisSchema, chartType, config.series, and
config.yAxisKey when implementing the min(1) and conditional requirement so
runtime accesses are safe.
- Line 25: The current type alias PartialDataAnalysis applies DeepPartial to the
schema instance instead of the inferred data shape; change the alias to apply
DeepPartial to the inferred type by using z.infer on dataAnalysisSchema (e.g.,
update PartialDataAnalysis to DeepPartial<z.infer<typeof dataAnalysisSchema>>),
referencing the existing dataAnalysisSchema and PartialDataAnalysis identifiers
so the type represents the actual data structure.
In `@lib/types/index.ts`:
- Around line 42-46: Rename the optional text field in the geospatial array to
match the MapData.markers interface: change geospatial's label?: string to
title?: string so it aligns with MapData.markers (referencing geospatial and
MapData.markers, and the label?/title? property names) and update any usages or
types that reference geospatial to use title instead of label.
- Line 32: The type declaration uses data: any[] which loses type safety; change
the DataAnalysisResult type to use a safer type such as data: Record<string,
unknown>[] or make it generic (e.g., export type DataAnalysisResult<T extends
Record<string, unknown> = Record<string, unknown>> = { ... data: T[]; ... }) and
update usages of DataAnalysisResult to pass the concrete T where appropriate
(refer to the DataAnalysisResult type name and the data property to locate the
change).
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- app/actions.tsx
- components/chat-panel.tsx
- components/graph-section.tsx
- lib/agents/researcher.tsx
- lib/agents/tools/data-analysis.tsx
- lib/agents/tools/index.tsx
- lib/schema/data-analysis.tsx
- lib/types/index.ts
- package.json
🧰 Additional context used
🧬 Code graph analysis (4)
lib/agents/tools/index.tsx (4)
- lib/agents/tools/geospatial.tsx (3): `tools` (318-318), `uiStream` (158-407), `queryType` (333-342)
- lib/agents/tools/search.tsx (1): `ToolProps` (9-77)
- lib/agents/tools/retrieve.tsx (1): `ToolProps` (8-67)
- lib/agents/tools/video-search.tsx (1): `ToolProps` (8-50)

lib/types/index.ts (3)
- components/map/map-query-handler.tsx (1): `GeospatialToolOutput` (18-23)
- components/map/map-data-context.tsx (2): `MapData` (15-32), `MapDataContextType` (34-37)
- mapbox_mcp/hooks.ts (1): `DistanceResult` (22-29)

components/graph-section.tsx (2)
- components/video-search-section.tsx (1): `VideoSearchSection` (14-35)
- components/section.tsx (1): `icon` (24-75)

app/actions.tsx (1)
- components/map/map-query-handler.tsx (5): `GeospatialToolOutput` (18-23), `prevData` (39-48), `toolOutput` (33-73), `prevData` (52-56), `MapQueryHandlerProps` (25-28)
🔇 Additional comments (6)
components/chat-panel.tsx (1)
- 213-218: Accept list now covers CSV/JSON uploads. This aligns with the server-side ingestion logic.

lib/agents/tools/index.tsx (1)
- 6-33: dataAnalysis tool is correctly wired into the tool registry. The new tool is integrated consistently with existing tooling patterns.

app/actions.tsx (2)
- 301-305: CSV/JSON file ingestion is handled alongside text/plain. Good alignment with the expanded upload accept list.
- 748-773: GraphSection + optional map update wiring looks consistent. The dataAnalysis tool output now renders graphs and updates the map when geospatial points are provided.

package.json (1)
- 89-89: No action required. Recharts 3.7.0 explicitly supports React 19 in its peer dependencies (react: ^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0) and is compatible with React 19.1.2 and Next.js 15. This is the latest stable version as of January 2026.

lib/agents/tools/data-analysis.tsx (1)
- 11-15: Note: Streamable value is immediately completed. The `createStreamableValue` is created and immediately marked as `done` on the next line. While this works correctly for one-shot rendering, it doesn't leverage the progressive streaming capability. If the intent is to support incremental chart updates (e.g., data loading in chunks), you'd need to call `streamResults.update()` with partial data before `done()`. If one-shot rendering is the intended behavior, this is fine as-is.
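If progressive rendering is ever wanted, a hedged sketch (keeping the existing `ai/rsc` `createStreamableValue` plumbing) could push a partial spec before completing:

```tsx
// Sketch only: illustrative intermediate update before the final payload.
execute: async (result: DataAnalysisResult) => {
  const streamResults = createStreamableValue<Partial<DataAnalysisResult>>()
  uiStream.append(<GraphSection result={streamResults.value} />)

  // e.g. surface the title and chart type first, then the full result.
  streamResults.update({ title: result.title, chartType: result.chartType })
  streamResults.done(result)
  return result
}
```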
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```tsx
execute: async (result: DataAnalysisResult) => {
  const streamResults = createStreamableValue<DataAnalysisResult>()

  uiStream.append(<GraphSection result={streamResults.value} />)

  streamResults.done(result)

  return result
}
```
🧹 Nitpick | 🔵 Trivial
Add error handling for robustness.
The execute function lacks error handling. If uiStream.append throws or if the result contains invalid data that causes GraphSection to fail, the error will propagate unhandled. Consider wrapping in a try-catch to ensure graceful degradation.
🛡️ Proposed error handling
```diff
 execute: async (result: DataAnalysisResult) => {
+  try {
     const streamResults = createStreamableValue<DataAnalysisResult>()
     uiStream.append(<GraphSection result={streamResults.value} />)
     streamResults.done(result)
     return result
+  } catch (error) {
+    console.error('Data analysis tool error:', error)
+    throw error
+  }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
execute: async (result: DataAnalysisResult) => {
  try {
    const streamResults = createStreamableValue<DataAnalysisResult>()
    uiStream.append(<GraphSection result={streamResults.value} />)
    streamResults.done(result)
    return result
  } catch (error) {
    console.error('Data analysis tool error:', error)
    throw error
  }
}
```
🤖 Prompt for AI Agents
In `@lib/agents/tools/data-analysis.tsx` around lines 10 - 18, The execute
function should be made resilient by wrapping the UI and stream operations in a
try-catch: inside execute (which uses createStreamableValue(),
uiStream.append(<GraphSection ...>) and streamResults.done(result)), wrap the
uiStream.append and streamResults.done calls in a try block and in the catch log
the error (e.g., processLogger.error or console.error) and call
streamResults.error or streamResults.done with a safe fallback to ensure the
stream is completed; finally return the original result (or a safe fallback) so
the caller does not receive an unhandled exception.
```ts
export const dataAnalysisSchema = z.object({
  title: z.string().describe('The title of the chart'),
  description: z.string().optional().describe('A brief description of the chart'),
  chartType: z.enum(['bar', 'line', 'pie', 'area', 'scatter']).describe('The type of chart to render'),
  data: z.array(z.record(z.any())).describe('The data points for the chart'),
  config: z.object({
    xAxisKey: z.string().describe('The key in the data object to use for the X axis'),
    yAxisKey: z.string().optional().describe('The key in the data object to use for the Y axis (for scatter charts)'),
    series: z.array(z.object({
      key: z.string().describe('The key in the data object for this series'),
      name: z.string().describe('The display name for this series'),
      color: z.string().optional().describe('Optional hex color for this series')
    })).describe('The series to be plotted')
  }).describe('Configuration for the chart layout'),
  geospatial: z.array(z.object({
    latitude: z.number(),
    longitude: z.number(),
    label: z.string().optional()
  })).optional().describe('Optional geospatial data points to be displayed on a map')
})
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for the files mentioned and check their content
rg -n "series\[0\]|yAxisKey" --max-count=20
```
Repository: QueueLab/QCX
Length of output: 447
🏁 Script executed:
```bash
#!/bin/bash
# Read GraphSection component around the usage lines
cat -n components/graph-section.tsx | sed -n '100,140p'
```
Repository: QueueLab/QCX
Length of output: 1811
🏁 Script executed:
```bash
#!/bin/bash
# Search for where dataAnalysisSchema is used
rg -n "dataAnalysisSchema|DataAnalysis" lib/types/index.ts components/graph-section.tsx -A 2 -B 2
```
Repository: QueueLab/QCX
Length of output: 1312
🏁 Script executed:
```bash
#!/bin/bash
# Get the full DataAnalysisResult type
sed -n '28,60p' lib/types/index.ts
```
Repository: QueueLab/QCX
Length of output: 650
Enforce series and yAxisKey constraints to prevent runtime errors.
GraphSection accesses config.series[0].key for pie charts (line 110) and config.yAxisKey for scatter charts (line 130) without guards. The current schema allows empty series and missing y-axis keys, which will cause crashes or render blank charts. Add a minimum series length constraint and conditional requirements for scatter and pie chart types.
🛡️ Suggested schema hardening
```diff
-export const dataAnalysisSchema = z.object({
+export const dataAnalysisSchema = z
+ .object({
title: z.string().describe('The title of the chart'),
description: z.string().optional().describe('A brief description of the chart'),
chartType: z.enum(['bar', 'line', 'pie', 'area', 'scatter']).describe('The type of chart to render'),
data: z.array(z.record(z.any())).describe('The data points for the chart'),
config: z.object({
xAxisKey: z.string().describe('The key in the data object to use for the X axis'),
yAxisKey: z.string().optional().describe('The key in the data object to use for the Y axis (for scatter charts)'),
series: z.array(z.object({
key: z.string().describe('The key in the data object for this series'),
name: z.string().describe('The display name for this series'),
color: z.string().optional().describe('Optional hex color for this series')
- })).describe('The series to be plotted')
+ })).min(1, 'At least one series is required').describe('The series to be plotted')
}).describe('Configuration for the chart layout'),
geospatial: z.array(z.object({
latitude: z.number(),
longitude: z.number(),
label: z.string().optional()
})).optional().describe('Optional geospatial data points to be displayed on a map')
-})
+ })
+ .superRefine((val, ctx) => {
+ if (val.chartType === 'scatter' && !val.config.yAxisKey) {
+ ctx.addIssue({
+ code: z.ZodIssueCode.custom,
+ path: ['config', 'yAxisKey'],
+ message: 'yAxisKey is required for scatter charts'
+ })
+ }
+ if (val.chartType === 'pie' && val.config.series.length !== 1) {
+ ctx.addIssue({
+ code: z.ZodIssueCode.custom,
+ path: ['config', 'series'],
+ message: 'Pie charts must define exactly one series'
+ })
+ }
+  })
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export const dataAnalysisSchema = z
  .object({
    title: z.string().describe('The title of the chart'),
    description: z.string().optional().describe('A brief description of the chart'),
    chartType: z.enum(['bar', 'line', 'pie', 'area', 'scatter']).describe('The type of chart to render'),
    data: z.array(z.record(z.any())).describe('The data points for the chart'),
    config: z.object({
      xAxisKey: z.string().describe('The key in the data object to use for the X axis'),
      yAxisKey: z.string().optional().describe('The key in the data object to use for the Y axis (for scatter charts)'),
      series: z.array(z.object({
        key: z.string().describe('The key in the data object for this series'),
        name: z.string().describe('The display name for this series'),
        color: z.string().optional().describe('Optional hex color for this series')
      })).min(1, 'At least one series is required').describe('The series to be plotted')
    }).describe('Configuration for the chart layout'),
    geospatial: z.array(z.object({
      latitude: z.number(),
      longitude: z.number(),
      label: z.string().optional()
    })).optional().describe('Optional geospatial data points to be displayed on a map')
  })
  .superRefine((val, ctx) => {
    if (val.chartType === 'scatter' && !val.config.yAxisKey) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ['config', 'yAxisKey'],
        message: 'yAxisKey is required for scatter charts'
      })
    }
    if (val.chartType === 'pie' && val.config.series.length !== 1) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ['config', 'series'],
        message: 'Pie charts must define exactly one series'
      })
    }
  })
```
🤖 Prompt for AI Agents
In `@lib/schema/data-analysis.tsx` around lines 4 - 23, Update dataAnalysisSchema
to enforce non-empty series and conditional yAxisKey rules: ensure config.series
uses z.array(...).min(1) so charts that access config.series[0].key (e.g., pie
handling) cannot be empty, and make chartType a discriminated union or add a
z.refine on dataAnalysisSchema that when chartType === 'scatter' requires
config.yAxisKey to be present/non-empty; reference dataAnalysisSchema,
chartType, config.series, and config.yAxisKey when implementing the min(1) and
conditional requirement so runtime accesses are safe.
```ts
  })).optional().describe('Optional geospatial data points to be displayed on a map')
})

export type PartialDataAnalysis = DeepPartial<typeof dataAnalysisSchema>
```
🧩 Analysis chain
🏁 Script executed:
```bash
# First, let's look at the file structure and content
find . -type f -name "data-analysis.tsx" -o -name "*analysis*" | head -20
```
Repository: QueueLab/QCX
Length of output: 442
🏁 Script executed:
```bash
# Check the current file content around line 25
cat -n lib/schema/data-analysis.tsx | head -40
```
Repository: QueueLab/QCX
Length of output: 1521
🏁 Script executed:
```bash
# Search for DeepPartial definition
rg "DeepPartial" -B 2 -A 2 --type ts --type tsx
```
Repository: QueueLab/QCX
Length of output: 83
🏁 Script executed:
```bash
# Check for PartialDataAnalysis usage
rg "PartialDataAnalysis" -B 2 -A 2
```
Repository: QueueLab/QCX
Length of output: 219
🏁 Script executed:
```bash
# Check the imports to understand Zod usage
cat -n lib/schema/data-analysis.tsx | grep -E "import|from" | head -20
```
Repository: QueueLab/QCX
Length of output: 126
🏁 Script executed:
```bash
# Search for all usages of PartialDataAnalysis
rg "PartialDataAnalysis" --type ts --type tsx -B 3 -A 3
```
Repository: QueueLab/QCX
Length of output: 83
🏁 Script executed:
```bash
# Also check in JavaScript/JSX files
rg "PartialDataAnalysis" -B 3 -A 3
```
Repository: QueueLab/QCX
Length of output: 334
🌐 Web query:
Vercel AI SDK DeepPartial type utility Zod schema inference
💡 Result:
In the Vercel AI SDK, DeepPartial<T> is used to type “partial” objects that arrive during streaming, not to change what Zod infers.
How it relates to Zod inference
- Your full, validated type is still the normal Zod inference:
type MyType = z.infer<typeof mySchema>;
- The AI SDK uses
DeepPartial<MyType>for partial (incomplete) streamed updates. For example, theOutputhelper returns anOutput<OBJECT, DeepPartial<OBJECT>>, where the second type parameter is the partial streaming shape. (ai-sdk.dev)
So if you want to explicitly type streamed partials alongside your Zod-inferred full type:
```ts
import type { DeepPartial } from 'ai';
import { z } from 'zod';

const mySchema = z.object({
  user: z.object({
    name: z.string(),
    age: z.number(),
  }),
});

type Full = z.infer<typeof mySchema>;
type Partial = DeepPartial<Full>; // what you'll see mid-stream
```

Tools: getting Zod input inference for execute
If your question is specifically about tool input inference: use the AI SDK’s tool() helper, because it’s what connects inputSchema (Zod) to the execute parameter type. (ai-sdk.dev)
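For orientation only, a minimal generic sketch (assuming AI SDK v5's `tool()` with `inputSchema`; the tool itself is hypothetical and not this repository's code):

```ts
import { tool } from 'ai'
import { z } from 'zod'

// Sketch: `tool()` ties the Zod inputSchema to the inferred type of `input` in execute.
const weatherTool = tool({
  description: 'Get the current temperature for a city',
  inputSchema: z.object({ city: z.string() }),
  execute: async (input) => {
    // `input` is inferred as { city: string } from the schema above.
    return { city: input.city, tempC: 21 }
  }
})
```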
Zod schemas in the AI SDK
You can pass Zod directly; internally the SDK converts it (via zodSchema()), and zodSchema() exists if you need options like useReferences. (ai-sdk.dev)
Citations:
- 1: https://ai-sdk.dev/docs/reference/ai-sdk-core/output?utm_source=openai
- 2: https://ai-sdk.dev/docs/reference/ai-sdk-core/tool?utm_source=openai
- 3: https://ai-sdk.dev/docs/reference/ai-sdk-core/zod-schema?utm_source=openai
🌐 Web query:
TypeScript DeepPartial ZodObject type vs inferred data type
💡 Result:
In Zod, a ZodObject is the schema object (a class instance with methods like .parse, .partial, etc.), while z.infer<typeof schema> (or z.output<typeof schema>) is the validated data type produced by that schema.
Why DeepPartial<typeof mySchema> is usually wrong
If you write:

```ts
type X = DeepPartial<typeof userSchema>;
```

you're making the schema instance type partial (its methods/properties), not the shape of the data you want to validate/handle. That's almost never what you intend.
The “data” deep-partial type
If you want a deep-partial TypeScript type of the parsed data:
```ts
type User = z.infer<typeof userSchema>; // output type (parsed result) ([npmjs.com](https://www.npmjs.com/package/zod/v/3.0.0-beta.4?utm_source=openai))
type UserPatch = DeepPartial<User>;
```

If your schema uses transforms, note that `z.infer` gives the output type; use `z.input<typeof schema>` if you want the pre-transform input type. (npmjs.com)
The “schema” deep-partial (runtime validation)
If you want a schema that accepts partial data at runtime, create a new schema:
```ts
const userPatchSchema = userSchema.deepPartial(); // deep optional at runtime ([npmjs.com](https://www.npmjs.com/package/zod/v/3.14.5?utm_source=openai))
type UserPatch = z.infer<typeof userPatchSchema>;
```

(For shallow optional fields, use `.partial()`.) (npmjs.com)
Citations:
- 1: https://www.npmjs.com/package/zod/v/3.0.0-beta.4?utm_source=openai
- 2: https://www.npmjs.com/package/zod/v/3.0.0-beta.4?utm_source=openai
- 3: https://www.npmjs.com/package/zod/v/3.14.5?utm_source=openai
- 4: https://www.npmjs.com/package/zod/v/3.14.5?utm_source=openai
Use z.infer to properly type the inferred data shape, not the schema instance.
DeepPartial<typeof dataAnalysisSchema> incorrectly makes the ZodObject schema type partial rather than the data shape. Per Vercel AI SDK and Zod best practices, DeepPartial should be applied to the inferred data type.
✅ Type-safe alias
```diff
+export type DataAnalysisInput = z.infer<typeof dataAnalysisSchema>
-export type PartialDataAnalysis = DeepPartial<typeof dataAnalysisSchema>
+export type PartialDataAnalysis = DeepPartial<DataAnalysisInput>
```

🤖 Prompt for AI Agents
In `@lib/schema/data-analysis.tsx` at line 25, The current type alias
PartialDataAnalysis applies DeepPartial to the schema instance instead of the
inferred data shape; change the alias to apply DeepPartial to the inferred type
by using z.infer on dataAnalysisSchema (e.g., update PartialDataAnalysis to
DeepPartial<z.infer<typeof dataAnalysisSchema>>), referencing the existing
dataAnalysisSchema and PartialDataAnalysis identifiers so the type represents
the actual data structure.
```ts
  title: string;
  description?: string;
  chartType: 'bar' | 'line' | 'pie' | 'area' | 'scatter';
  data: any[];
```
🧹 Nitpick | 🔵 Trivial
Consider stronger typing for data.
Using any[] loses type safety and allows malformed data to pass through undetected. Consider using Record<string, unknown>[] or a generic type parameter to provide better type checking while maintaining flexibility.
♻️ Proposed improvement
```diff
- data: any[];
+ data: Record<string, unknown>[];
```

Alternatively, for maximum flexibility with type safety:

```ts
export type DataAnalysisResult<T extends Record<string, unknown> = Record<string, unknown>> = {
  // ...
  data: T[];
  // ...
};
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
data: Record<string, unknown>[];
```
🤖 Prompt for AI Agents
In `@lib/types/index.ts` at line 32, The type declaration uses data: any[] which
loses type safety; change the DataAnalysisResult type to use a safer type such
as data: Record<string, unknown>[] or make it generic (e.g., export type
DataAnalysisResult<T extends Record<string, unknown> = Record<string, unknown>>
= { ... data: T[]; ... }) and update usages of DataAnalysisResult to pass the
concrete T where appropriate (refer to the DataAnalysisResult type name and the
data property to locate the change).
```ts
  geospatial?: {
    latitude: number;
    longitude: number;
    label?: string;
  }[];
```
🧹 Nitpick | 🔵 Trivial
Naming inconsistency with existing MapData.markers interface.
The geospatial array uses label?: string for the optional text field, but the existing MapData.markers interface in components/map/map-data-context.tsx uses title?: string for the same purpose. This inconsistency may cause confusion or require mapping logic when integrating with the map system.
Consider aligning on a single naming convention to simplify integration.
♻️ Proposed alignment with existing interface
```diff
 geospatial?: {
   latitude: number;
   longitude: number;
-  label?: string;
+  title?: string;
 }[];
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
geospatial?: {
  latitude: number;
  longitude: number;
  title?: string;
}[];
```
🤖 Prompt for AI Agents
In `@lib/types/index.ts` around lines 42 - 46, Rename the optional text field in
the geospatial array to match the MapData.markers interface: change geospatial's
label?: string to title?: string so it aligns with MapData.markers (referencing
geospatial and MapData.markers, and the label?/title? property names) and update
any usages or types that reference geospatial to use title instead of label.
…358342468692938721
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/actions.tsx (1)
- 319-331: 🧹 Nitpick | 🔵 Trivial — Consider adding file size validation for text-based uploads to prevent excessive token usage.
  While Next.js enforces a 200MB `bodySizeLimit` for server actions (configured in `next.config.mjs`), text files (CSV, JSON, plain text) should have more restrictive limits. Large files can exceed LLM token limits, causing failures or unexpected behavior. Consider adding a size check before processing:
  🛡️ Proposed validation
```diff
+ const MAX_TEXT_FILE_SIZE = 1024 * 1024; // 1MB limit for text files
  } else if (
    file.type === 'text/plain' ||
    file.type === 'text/csv' ||
    file.type === 'application/json'
  ) {
+   if (buffer.byteLength > MAX_TEXT_FILE_SIZE) {
+     throw new Error(`File too large. Maximum size is ${MAX_TEXT_FILE_SIZE / 1024}KB.`);
+   }
    const textContent = Buffer.from(buffer).toString('utf-8')
```
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 772-787: The code passes potentially undefined
latitude/longitude/label from toolOutput.geospatial[0] into MapQueryHandler; add
defensive checks before constructing the MAP_QUERY_TRIGGER payload: verify
toolOutput.geospatial exists and is a non-empty array, and validate that
toolOutput.geospatial[0] has numeric latitude and longitude and a non-empty
label (or provide safe defaults or skip rendering). Update the conditional
around MapQueryHandler (referencing toolOutput.geospatial and MapQueryHandler)
to only render when these validated properties are present, or transform the
values to safe fallbacks before passing them into the mcp_response.location
fields.
- Around line 766-791: The dataAnalysis branch reads toolOutput and accesses
nested fields without runtime validation; use the existing dataAnalysisSchema
(from lib/schema/data-analysis) to validate/parse toolOutput (e.g.,
dataAnalysisSchema.parse or safeParse) into a typed DataAnalysisResult before
rendering, then conditionally render MapQueryHandler only when the parsed result
has a non-empty geospatial array and use
parsed.geospatial[0].latitude/longitude/label for the mcp_response;
alternatively add explicit type guards for toolOutput.geospatial and its
elements to avoid direct indexing of potentially invalid data.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
app/actions.tsx
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```tsx
case 'dataAnalysis':
  return {
    id,
    component: (
      <>
        <GraphSection result={searchResults.value} />
        {toolOutput.geospatial && toolOutput.geospatial.length > 0 && (
          <MapQueryHandler
            toolOutput={{
              type: 'MAP_QUERY_TRIGGER',
              originalUserInput: JSON.stringify(toolOutput.geospatial[0]),
              timestamp: new Date().toISOString(),
              mcp_response: {
                location: {
                  latitude: toolOutput.geospatial[0].latitude,
                  longitude: toolOutput.geospatial[0].longitude,
                  place_name: toolOutput.geospatial[0].label
                }
              }
            }}
          />
        )}
      </>
    ),
    isCollapsed: isCollapsed.value
  }
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find the DataAnalysisResult type definition
echo "=== Searching for DataAnalysisResult type ==="
rg -n -A 20 'type DataAnalysisResult|DataAnalysisResult = ' --type=ts -g '!node_modules' | head -100

echo -e "\n=== Searching for schema definitions ==="
rg -n -B 2 -A 15 'dataAnalysisSchema' --type=ts -g '!node_modules' | head -150

echo -e "\n=== Checking app/actions.tsx context (around line 766) ==="
wc -l app/actions.tsx
```
Repository: QueueLab/QCX
Length of output: 4460
🏁 Script executed:
#!/bin/bash
echo "=== Reading app/actions.tsx around line 766 ==="
sed -n '740,810p' app/actions.tsx | cat -n
echo -e "\n=== Checking broader context (line 700-750) ==="
sed -n '700,750p' app/actions.tsx | cat -n
Repository: QueueLab/QCX
Length of output: 5172
Add runtime type validation for toolOutput in the dataAnalysis case.
The toolOutput is parsed from JSON without schema validation. While a DataAnalysisResult type and dataAnalysisSchema already exist (in lib/types and lib/schema/data-analysis.tsx), they are only used in the tool definition. Here, the code accesses nested properties like toolOutput.geospatial[0].latitude without confirming the structure at runtime. Consider reusing dataAnalysisSchema with parse() to validate before access, or add explicit type guards for the geospatial property structure.
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 766 - 791, The dataAnalysis branch reads
toolOutput and accesses nested fields without runtime validation; use the
existing dataAnalysisSchema (from lib/schema/data-analysis) to validate/parse
toolOutput (e.g., dataAnalysisSchema.parse or safeParse) into a typed
DataAnalysisResult before rendering, then conditionally render MapQueryHandler
only when the parsed result has a non-empty geospatial array and use
parsed.geospatial[0].latitude/longitude/label for the mcp_response;
alternatively add explicit type guards for toolOutput.geospatial and its
elements to avoid direct indexing of potentially invalid data.
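A minimal sketch of that approach, assuming dataAnalysisSchema is a Zod schema exported from lib/schema/data-analysis and that it mirrors DataAnalysisResult; the import path and handling below are illustrative, not the project's actual code:

import { dataAnalysisSchema } from '@/lib/schema/data-analysis' // assumed import path

// Validate the JSON-parsed tool output before reading nested fields.
const parsed = dataAnalysisSchema.safeParse(toolOutput)
if (parsed.success) {
  const analysis = parsed.data
  const point = analysis.geospatial?.[0]
  // Render <GraphSection result={analysis} />, and render <MapQueryHandler /> only
  // when `point` is present, using point.latitude / point.longitude / point.label.
} else {
  console.warn('dataAnalysis output failed schema validation:', parsed.error)
  // Fall back to rendering the graph section without the map trigger.
}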
{toolOutput.geospatial && toolOutput.geospatial.length > 0 && (
  <MapQueryHandler
    toolOutput={{
      type: 'MAP_QUERY_TRIGGER',
      originalUserInput: JSON.stringify(toolOutput.geospatial[0]),
      timestamp: new Date().toISOString(),
      mcp_response: {
        location: {
          latitude: toolOutput.geospatial[0].latitude,
          longitude: toolOutput.geospatial[0].longitude,
          place_name: toolOutput.geospatial[0].label
        }
      }
    }}
  />
)}
Add defensive checks for geospatial item properties.
The code accesses toolOutput.geospatial[0].latitude, .longitude, and .label without validating these properties exist. If the geospatial array contains a malformed item, undefined values will be passed to MapQueryHandler.
🛡️ Proposed defensive check
- {toolOutput.geospatial && toolOutput.geospatial.length > 0 && (
+ {toolOutput.geospatial &&
+ toolOutput.geospatial.length > 0 &&
+ toolOutput.geospatial[0].latitude != null &&
+ toolOutput.geospatial[0].longitude != null && (
<MapQueryHandler
toolOutput={{
type: 'MAP_QUERY_TRIGGER',
originalUserInput: JSON.stringify(toolOutput.geospatial[0]),
timestamp: new Date().toISOString(),
mcp_response: {
location: {
latitude: toolOutput.geospatial[0].latitude,
longitude: toolOutput.geospatial[0].longitude,
- place_name: toolOutput.geospatial[0].label
+ place_name: toolOutput.geospatial[0].label ?? ''
}
}
}}
/>
)}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{toolOutput.geospatial &&
  toolOutput.geospatial.length > 0 &&
  toolOutput.geospatial[0].latitude != null &&
  toolOutput.geospatial[0].longitude != null && (
    <MapQueryHandler
      toolOutput={{
        type: 'MAP_QUERY_TRIGGER',
        originalUserInput: JSON.stringify(toolOutput.geospatial[0]),
        timestamp: new Date().toISOString(),
        mcp_response: {
          location: {
            latitude: toolOutput.geospatial[0].latitude,
            longitude: toolOutput.geospatial[0].longitude,
            place_name: toolOutput.geospatial[0].label ?? ''
          }
        }
      }}
    />
)}
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 772 - 787, The code passes potentially
undefined latitude/longitude/label from toolOutput.geospatial[0] into
MapQueryHandler; add defensive checks before constructing the MAP_QUERY_TRIGGER
payload: verify toolOutput.geospatial exists and is a non-empty array, and
validate that toolOutput.geospatial[0] has numeric latitude and longitude and a
non-empty label (or provide safe defaults or skip rendering). Update the
conditional around MapQueryHandler (referencing toolOutput.geospatial and
MapQueryHandler) to only render when these validated properties are present, or
transform the values to safe fallbacks before passing them into the
mcp_response.location fields.
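A small runtime guard along those lines might look like the sketch below; the GeoPoint shape and the isGeoPoint name are illustrative, inferred only from how mcp_response.location is populated in this file:

interface GeoPoint {
  latitude: number
  longitude: number
  label: string
}

// Accept an item only when latitude/longitude are finite numbers and label is a non-empty string.
const isGeoPoint = (value: unknown): value is GeoPoint => {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  return (
    typeof v.latitude === 'number' && Number.isFinite(v.latitude) &&
    typeof v.longitude === 'number' && Number.isFinite(v.longitude) &&
    typeof v.label === 'string' && v.label.length > 0
  )
}

// Usage sketch: only render the map trigger when the first item passes the guard.
// const point = toolOutput.geospatial?.[0]
// if (isGeoPoint(point)) { /* build the MAP_QUERY_TRIGGER payload from point */ }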
@coderabbitai can you make the recharts graph generative UI graph from queries, what queries would be able to trigger that?
- Optimized resolution search with streaming. - Improved timezone handling and AI context. - Fixed UI mutual exclusion and loading state logic. - Stabilized Mapbox cleanup and event listeners. - Resolved build errors and ESLint warnings. Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…head features - Reverted branch to commit 6acfbe5. - Merged changes from main (e95a25b). - Preserved Tentree Icon (UsageView logic), History layout (HistoryContainer), and PurchaseCreditsPopup from 6acfbe5 as requested. - Updated Stripe checkout URL in PurchaseCreditsPopup to the official one (14A3cv7K72TR3go14Nasg02). - Re-applied bug fixes from 6acfbe5 (type casting in app/actions.tsx, ESLint dependencies in components/map/mapbox-map.tsx and components/chat-panel.tsx, and relatedQueries initialization). - Verified build and synchronization with main branch features (e.g., resolution search enhancements). Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…8' into jules-8488824498232079115-26d4e4cd
…ranch head features - Reverted branch to commit 6acfbe5 and merged with main (e95a25b). - Preserved Tentree Icon (Usage View), history layout, and PurchaseCreditsPopup from 6acfbe5. - Restored branch-specific UI components: Header, History button, Profile toggle, and Usage integration in Chat. - Synchronized all other logic with main, including resolution search enhancements. - Fixed an infinite re-render loop in Mapbox map by optimizing useEffect dependencies (removed cameraState from initialization effect). - Re-applied critical bug fixes from 6acfbe5: - Type casting in app/actions.tsx (getUIStateFromAIState). - Streamable value initialization for stability. - setSuggestions dependency in ChatPanel. - Updated Stripe checkout URL in PurchaseCreditsPopup to the official link. - Verified successful build with 'bun run build'. Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…p, and history toggle
Jules 8488824498232079115 26d4e4cd
Replace all occurrences of the old Stripe checkout link with https://buy.stripe.com/14A3cv7K72TR3go14Nasg02 in: - components/mobile-icons-bar.tsx - components/purchase-credits-popup.tsx - components/usage-view.tsx Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…514914606268144687 Update Stripe payment links
- Updated 'Daily refresh credits' to 'Yearly refresh credits' - Updated refresh credit value from 300 to 500 - Updated subtext to 'Refresh to 500 every year.' - Replaced task list with 'QCX-TERRA Crop yield Analysis', 'QCX-TERRA Flood predictions', and 'Green OS climate synchronization' - Set task dates to 'upcoming' and credit changes to single digits (-7, -5, -3) Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…resh-14763946411330416327 Update Usage View to Yearly Refresh
This commit refactors the desktop header icon container in `components/header.tsx` to ensure all icons are equidistant. Key changes: - Changed the icon container from `justify-between w-1/2` to `justify-center flex-1` with a fixed `gap-10`. - Removed the unused `<div id="timezone-clock-portal" />` which was causing irregular spacing between the TentTree and ModeToggle icons. - Applied `className="contents"` to the `<div id="header-search-portal" />` so that it doesn't affect the flex layout when empty. - These changes ensure that icons are always perfectly centered and equidistant regardless of the number of active icons. Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…1736717079589 Standardize Header Icon Spacing
- Modified `mapbox-map.tsx` to only show `NavigationControl` when in `DrawingMode` on desktop. - Fixed a bug in the mode change `useEffect` where previous state was being updated prematurely, preventing cleanup logic from executing correctly. - Updated `tests/map.spec.ts` to switch to `DrawingMode` before verifying zoom control visibility. Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…94765400062008 Conditional Map Zoom Controls and Mode Transition Fix
Update usage-view.tsx
…358342468692938721
Force-pushed from 6b80d53 to fdc7c5e
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 301-305: Add a server-side size guard for uploads matching the
file.type checks ('text/plain', 'text/csv', 'application/json') to prevent
large-memory malicious uploads: define a MAX_UPLOAD_BYTES constant and in the
same handler where file.type is inspected (the branch using file.type ===
'text/plain' || 'text/csv' || 'application/json') reject the request if
file.size (or a streamed byte counter if using streams) exceeds that limit,
returning/throwing an appropriate error before attempting to parse or buffer the
file; ensure the guard runs unconditionally on the server side so client-side
checks cannot be bypassed.
In `@components/graph-section.tsx`:
- Around line 216-242: The scatter branch currently assumes config.yAxisKey
exists and binds YAxis to undefined; update the 'scatter' case to validate that
config.yAxisKey is present before rendering the ScatterChart (e.g., early return
a fallback/null or render an informative placeholder/error UI), and ensure the
YAxis and any bindings (YAxis dataKey and Tooltip/XAxis names that reference
config.yAxisKey) only use config.yAxisKey when defined; locate the scatter
rendering in the switch case (the JSX using ScatterChart, XAxis, YAxis, Tooltip,
and the map over config.series) and add the guard/conditional rendering around
it using config.yAxisKey to prevent binding to undefined.
- Around line 29-241: Replace the static COLORS palette with a theme-aware
palette derived from CSS variables (e.g., --primary, --secondary, --accent,
--muted) inside GraphCard (or top-level used by GraphCard); read values via
getComputedStyle(document.documentElement).getPropertyValue, convert returned
CSS color strings to hex (handle hex, rgb(a), and hsl formats), build an array
like themePalette = [primaryHex, secondaryHex, accentHex, mutedHex] with a
fallback to the original COLORS, and then use s.color || themePalette[i %
themePalette.length] (and for Pie Cells) and for stroke/fill defaults across
Bar, Line, Area, Scatter and Cell renderings so charts adapt to light/dark/earth
themes while preserving existing fallbacks.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
verification/fix_verification.png is excluded by !**/*.png
📒 Files selected for processing (3)
app/actions.tsx
components/chat-panel.tsx
components/graph-section.tsx
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-31T16:53:16.122Z
Learning: The GraphSection component in components/graph-section.tsx should use theme-aware colors that match the site's current theme (light, dark, or earth) by reading CSS variables like --primary, --secondary, --accent, and --muted, and converting them to hex colors for Recharts compatibility.
📚 Learning: 2026-01-31T16:53:16.122Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-31T16:53:16.122Z
Learning: The GraphSection component in components/graph-section.tsx should use theme-aware colors that match the site's current theme (light, dark, or earth) by reading CSS variables like --primary, --secondary, --accent, and --muted, and converting them to hex colors for Recharts compatibility.
Applied to files:
components/graph-section.tsx, app/actions.tsx
📚 Learning: 2026-01-16T10:02:40.220Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.
Applied to files:
app/actions.tsx
🧬 Code graph analysis (1)
components/graph-section.tsx (5)
lib/types/index.ts (1)
DataAnalysisResult (28-47)
components/section.tsx (1)
Section (24-75)
components/ui/tooltip.tsx (1)
Tooltip (30-30)
components/tool-badge.tsx (1)
ToolBadge (11-26)
components/ui/card.tsx (5)
Card (79-79), CardHeader (79-79), CardTitle (79-79), CardDescription (79-79), CardContent (79-79)
🔇 Additional comments (7)
app/actions.tsx (3)
15-15: LGTM — agent imports are consistent.
30-30: LGTM — GraphSection import is correctly wired.
91-92: LGTM — direct resolutionSearch invocation reads cleanly.
components/graph-section.tsx (3)
35-51: LGTM — static vs. streamable detection is clear.
70-85: LGTM — JSON parsing is safely guarded.
254-270: LGTM — card layout and metadata rendering look solid.
components/chat-panel.tsx (1)
217-217: LGTM — file input now matches supported CSV/JSON uploads.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
} else if (
  file.type === 'text/plain' ||
  file.type === 'text/csv' ||
  file.type === 'application/json'
) {
Add a server-side file size guard for csv/json uploads.
Client-side checks are not sufficient; a malicious request can bypass them and force large memory allocations on the server.
🔧 Proposed fix
- if (file) {
- const buffer = await file.arrayBuffer()
+ if (file) {
+ const MAX_FILE_SIZE = 10 * 1024 * 1024
+ if (file.size > MAX_FILE_SIZE) {
+ throw new Error('File size must be less than 10MB')
+ }
+ const buffer = await file.arrayBuffer()
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 301 - 305, Add a server-side size guard for
uploads matching the file.type checks ('text/plain', 'text/csv',
'application/json') to prevent large-memory malicious uploads: define a
MAX_UPLOAD_BYTES constant and in the same handler where file.type is inspected
(the branch using file.type === 'text/plain' || 'text/csv' ||
'application/json') reject the request if file.size (or a streamed byte counter
if using streams) exceeds that limit, returning/throwing an appropriate error
before attempting to parse or buffer the file; ensure the guard runs
unconditionally on the server side so client-side checks cannot be bypassed.
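For the streamed variant mentioned above, a rough sketch of a byte-counting reader follows; it assumes the handler receives a standard web File whose stream() is available in the server runtime, and the 10 MB limit is illustrative:

const MAX_UPLOAD_BYTES = 10 * 1024 * 1024 // illustrative limit

// Reads the upload in chunks and aborts as soon as the running total exceeds the limit,
// so an oversized body is never fully buffered in memory.
async function readWithLimit(file: File): Promise<Uint8Array> {
  const reader = file.stream().getReader()
  const chunks: Uint8Array[] = []
  let total = 0
  while (true) {
    const { done, value } = await reader.read()
    if (done || !value) break
    total += value.byteLength
    if (total > MAX_UPLOAD_BYTES) {
      await reader.cancel()
      throw new Error('File size must be less than 10MB')
    }
    chunks.push(value)
  }
  const out = new Uint8Array(total)
  let offset = 0
  for (const chunk of chunks) {
    out.set(chunk, offset)
    offset += chunk.byteLength
  }
  return out
}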
const COLORS = ['#0088FE', '#00C49F', '#FFBB28', '#FF8042', '#8884d8', '#82ca9d']

interface GraphSectionProps {
  result: DataAnalysisResult | string | StreamableValue<DataAnalysisResult>
}

export function GraphSection({ result }: GraphSectionProps) {
  if (!result) return null;

  // Check if result is a static DataAnalysisResult object
  // A StreamableValue is an opaque object and shouldn't have these properties
  const isStatic = typeof result === 'object' && result !== null &&
    ('chartType' in (result as any) || 'title' in (result as any) || 'data' in (result as any));
  const isString = typeof result === 'string';

  if (isStatic || isString) {
    return <GraphCard data={result as any} />;
  }

  // Handle case where it might be a streamable value or something else
  // We use a safe wrapper to avoid crashing if useStreamableValue throws
  return <StreamedGraphSection result={result as any} />;
}

function StreamedGraphSection({ result }: { result: StreamableValue<any> }) {
  const [data, error, pending] = useStreamableValue(result);

  if (pending && !data) {
    return (
      <Section className="py-2">
        <div className="animate-pulse flex space-y-4 flex-col">
          <div className="h-4 bg-muted rounded w-3/4"></div>
          <div className="h-64 bg-muted rounded"></div>
        </div>
      </Section>
    );
  }

  return <GraphCard data={data} />;
}

function GraphCard({ data, pending }: { data: any, pending?: boolean }) {
  const chartData: DataAnalysisResult | undefined = React.useMemo(() => {
    if (!data) return undefined;
    if (typeof data === 'string') {
      try {
        return JSON.parse(data);
      } catch (e) {
        console.error('Error parsing graph data:', e);
        return undefined;
      }
    }
    return data as DataAnalysisResult;
  }, [data]);

  if (!chartData) return null;

  const { title, description, chartType, data: plotData, config } = chartData;

  const renderChart = () => {
    if (!plotData || !config) return <div className="flex items-center justify-center h-full text-muted-foreground italic">Missing chart data or configuration</div>;

    const themeColors = {
      text: 'hsl(var(--foreground))',
      grid: 'hsl(var(--border))',
      tooltip: {
        bg: 'hsl(var(--card))',
        text: 'hsl(var(--card-foreground))',
        border: 'hsl(var(--border))'
      }
    }

    const commonAxisProps = {
      stroke: themeColors.text,
      fontSize: 12,
      tickLine: false,
      axisLine: false,
    }

    switch (chartType) {
      case 'bar':
        return (
          <ResponsiveContainer width="100%" height="100%">
            <BarChart data={plotData} margin={{ top: 10, right: 10, left: -20, bottom: 20 }}>
              <CartesianGrid strokeDasharray="3 3" stroke={themeColors.grid} vertical={false} />
              <XAxis
                dataKey={config.xAxisKey}
                {...commonAxisProps}
                dy={10}
              />
              <YAxis {...commonAxisProps} />
              <Tooltip
                contentStyle={{
                  backgroundColor: themeColors.tooltip.bg,
                  color: themeColors.tooltip.text,
                  borderColor: themeColors.tooltip.border,
                  borderRadius: '8px'
                }}
              />
              <Legend wrapperStyle={{ paddingTop: '20px' }} />
              {config.series?.map((s, i) => (
                <Bar key={s.key} dataKey={s.key} name={s.name} fill={s.color || COLORS[i % COLORS.length]} radius={[4, 4, 0, 0]} />
              ))}
            </BarChart>
          </ResponsiveContainer>
        );
      case 'line':
        return (
          <ResponsiveContainer width="100%" height="100%">
            <LineChart data={plotData} margin={{ top: 10, right: 10, left: -20, bottom: 20 }}>
              <CartesianGrid strokeDasharray="3 3" stroke={themeColors.grid} vertical={false} />
              <XAxis
                dataKey={config.xAxisKey}
                {...commonAxisProps}
                dy={10}
              />
              <YAxis {...commonAxisProps} />
              <Tooltip
                contentStyle={{
                  backgroundColor: themeColors.tooltip.bg,
                  color: themeColors.tooltip.text,
                  borderColor: themeColors.tooltip.border,
                  borderRadius: '8px'
                }}
              />
              <Legend wrapperStyle={{ paddingTop: '20px' }} />
              {config.series?.map((s, i) => (
                <Line key={s.key} type="monotone" dataKey={s.key} name={s.name} stroke={s.color || COLORS[i % COLORS.length]} strokeWidth={2} dot={{ r: 4 }} activeDot={{ r: 6 }} />
              ))}
            </LineChart>
          </ResponsiveContainer>
        );
      case 'area':
        return (
          <ResponsiveContainer width="100%" height="100%">
            <AreaChart data={plotData} margin={{ top: 10, right: 10, left: -20, bottom: 20 }}>
              <CartesianGrid strokeDasharray="3 3" stroke={themeColors.grid} vertical={false} />
              <XAxis
                dataKey={config.xAxisKey}
                {...commonAxisProps}
                dy={10}
              />
              <YAxis {...commonAxisProps} />
              <Tooltip
                contentStyle={{
                  backgroundColor: themeColors.tooltip.bg,
                  color: themeColors.tooltip.text,
                  borderColor: themeColors.tooltip.border,
                  borderRadius: '8px'
                }}
              />
              <Legend wrapperStyle={{ paddingTop: '20px' }} />
              {config.series?.map((s, i) => (
                <Area key={s.key} type="monotone" dataKey={s.key} name={s.name} stroke={s.color || COLORS[i % COLORS.length]} fill={s.color || COLORS[i % COLORS.length]} fillOpacity={0.3} />
              ))}
            </AreaChart>
          </ResponsiveContainer>
        );
      case 'pie':
        return (
          <ResponsiveContainer width="100%" height="100%">
            <PieChart>
              <Pie
                data={plotData}
                dataKey={config.series?.[0]?.key}
                nameKey={config.xAxisKey}
                cx="50%"
                cy="50%"
                outerRadius={80}
                label={{ fill: themeColors.text, fontSize: 12 }}
              >
                {plotData.map((entry, index) => (
                  <Cell key={`cell-${index}`} fill={COLORS[index % COLORS.length]} />
                ))}
              </Pie>
              <Tooltip
                contentStyle={{
                  backgroundColor: themeColors.tooltip.bg,
                  color: themeColors.tooltip.text,
                  borderColor: themeColors.tooltip.border,
                  borderRadius: '8px'
                }}
              />
              <Legend />
            </PieChart>
          </ResponsiveContainer>
        );
      case 'scatter':
        return (
          <ResponsiveContainer width="100%" height="100%">
            <ScatterChart margin={{ top: 10, right: 10, left: -20, bottom: 20 }}>
              <CartesianGrid strokeDasharray="3 3" stroke={themeColors.grid} />
              <XAxis
                type="number"
                dataKey={config.xAxisKey}
                name={config.xAxisKey}
                {...commonAxisProps}
                dy={10}
              />
              <YAxis type="number" dataKey={config.yAxisKey} name={config.yAxisKey} {...commonAxisProps} />
              <Tooltip
                cursor={{ strokeDasharray: '3 3' }}
                contentStyle={{
                  backgroundColor: themeColors.tooltip.bg,
                  color: themeColors.tooltip.text,
                  borderColor: themeColors.tooltip.border,
                  borderRadius: '8px'
                }}
              />
              <Legend wrapperStyle={{ paddingTop: '20px' }} />
              {config.series?.map((s, i) => (
                <Scatter key={s.key} name={s.name} data={plotData} fill={s.color || COLORS[i % COLORS.length]} />
              ))}
🧩 Analysis chain
🏁 Script executed:
# First, explore the repository structure to understand the theme setup
find . -type f -name "*.css" -o -name "*.ts" -o -name "*.tsx" | head -20
git ls-files | grep -E "(theme|global|config)" | head -20
Repository: QueueLab/QCX
Length of output: 818
🏁 Script executed:
# Search for CSS variables definition - common locations
find . -type f \( -name "*.css" -o -name "globals.css" -o -name "*.scss" \) | xargs grep -l "primary\|secondary\|accent\|muted" 2>/dev/null | head -10
Repository: QueueLab/QCX
Length of output: 73
🏁 Script executed:
# Check if --primary, --secondary, --accent, --muted CSS variables are defined
rg "--primary|--secondary|--accent|--muted" --type css --type tsx --type ts | head -30Repository: QueueLab/QCX
Length of output: 496
🏁 Script executed:
# Verify the exact COLORS usage in graph-section.tsx
rg "COLORS" components/graph-section.tsx -A 2 -B 2Repository: QueueLab/QCX
Length of output: 1666
🏁 Script executed:
# Examine globals.css for CSS variables
cat app/globals.css | head -100
Repository: QueueLab/QCX
Length of output: 2468
🏁 Script executed:
# Check tailwind.config.ts for theme colors
cat tailwind.config.ts
Repository: QueueLab/QCX
Length of output: 2482
🏁 Script executed:
# Search for primary, secondary, accent, muted in CSS variables (correct rg syntax)
rg "primary|secondary|accent|muted" --type css -C 3Repository: QueueLab/QCX
Length of output: 3378
Use theme-aware series colors instead of hardcoded palette.
The COLORS array is static and won't adapt to light/dark/earth themes. Extract --primary, --secondary, --accent, and --muted from CSS variables, convert them to hex (since Recharts requires hex for SVG), and apply them to series rendering across all chart types.
🎨 Proposed fix (theme palette + hex conversion)
-const COLORS = ['#0088FE', '#00C49F', '#FFBB28', '#FF8042', '#8884d8', '#82ca9d']
+const FALLBACK_COLORS = ['#0088FE', '#00C49F', '#FFBB28', '#FF8042', '#8884d8', '#82ca9d']
+const THEME_COLOR_VARS = ['--primary', '--secondary', '--accent', '--muted']
+
+const hslToHex = (hsl: string) => {
+ const [h, s, l] = hsl.replace(/%/g, '').split(/\s+/).map(Number)
+ if ([h, s, l].some(Number.isNaN)) return ''
+ const sN = s / 100
+ const lN = l / 100
+ const c = (1 - Math.abs(2 * lN - 1)) * sN
+ const x = c * (1 - Math.abs((h / 60) % 2 - 1))
+ const m = lN - c / 2
+ let [r, g, b] = [0, 0, 0]
+ if (h < 60) [r, g, b] = [c, x, 0]
+ else if (h < 120) [r, g, b] = [x, c, 0]
+ else if (h < 180) [r, g, b] = [0, c, x]
+ else if (h < 240) [r, g, b] = [0, x, c]
+ else if (h < 300) [r, g, b] = [x, 0, c]
+ else [r, g, b] = [c, 0, x]
+ const toHex = (v: number) => Math.round((v + m) * 255).toString(16).padStart(2, '0')
+ return `#${toHex(r)}${toHex(g)}${toHex(b)}`
+}

 function GraphCard({ data, pending }: { data: any, pending?: boolean }) {
+  const [seriesColors, setSeriesColors] = React.useState(FALLBACK_COLORS)
+  React.useEffect(() => {
+    if (typeof window === 'undefined') return
+    const styles = getComputedStyle(document.documentElement)
+    const palette = THEME_COLOR_VARS
+      .map(v => hslToHex(styles.getPropertyValue(v).trim()))
+      .filter(Boolean)
+    if (palette.length) setSeriesColors(palette)
+  }, [])

- <Bar key={s.key} dataKey={s.key} name={s.name} fill={s.color || COLORS[i % COLORS.length]} radius={[4, 4, 0, 0]} />
+ <Bar key={s.key} dataKey={s.key} name={s.name} fill={s.color || seriesColors[i % seriesColors.length]} radius={[4, 4, 0, 0]} />

- <Line key={s.key} type="monotone" dataKey={s.key} name={s.name} stroke={s.color || COLORS[i % COLORS.length]} strokeWidth={2} dot={{ r: 4 }} activeDot={{ r: 6 }} />
+ <Line key={s.key} type="monotone" dataKey={s.key} name={s.name} stroke={s.color || seriesColors[i % seriesColors.length]} strokeWidth={2} dot={{ r: 4 }} activeDot={{ r: 6 }} />

- <Area key={s.key} type="monotone" dataKey={s.key} name={s.name} stroke={s.color || COLORS[i % COLORS.length]} fill={s.color || COLORS[i % COLORS.length]} fillOpacity={0.3} />
+ <Area key={s.key} type="monotone" dataKey={s.key} name={s.name} stroke={s.color || seriesColors[i % seriesColors.length]} fill={s.color || seriesColors[i % seriesColors.length]} fillOpacity={0.3} />

- <Cell key={`cell-${index}`} fill={COLORS[index % COLORS.length]} />
+ <Cell key={`cell-${index}`} fill={seriesColors[index % seriesColors.length]} />

- <Scatter key={s.key} name={s.name} data={plotData} fill={s.color || COLORS[i % COLORS.length]} />
+ <Scatter key={s.key} name={s.name} data={plotData} fill={s.color || seriesColors[i % seriesColors.length]} />
🤖 Prompt for AI Agents
In `@components/graph-section.tsx` around lines 29 - 241, Replace the static
COLORS palette with a theme-aware palette derived from CSS variables (e.g.,
--primary, --secondary, --accent, --muted) inside GraphCard (or top-level used
by GraphCard); read values via
getComputedStyle(document.documentElement).getPropertyValue, convert returned
CSS color strings to hex (handle hex, rgb(a), and hsl formats), build an array
like themePalette = [primaryHex, secondaryHex, accentHex, mutedHex] with a
fallback to the original COLORS, and then use s.color || themePalette[i %
themePalette.length] (and for Pie Cells) and for stroke/fill defaults across
Bar, Line, Area, Scatter and Cell renderings so charts adapt to light/dark/earth
themes while preserving existing fallbacks.
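If the palette helper also needs to cope with hex and rgb(a) values, not just the space-separated HSL triplets used by the theme variables, a more general converter could look like the sketch below; the parseCssColorToHex name is illustrative and the HSL handling mirrors the hslToHex idea above:

// Normalizes a CSS color string (hex, rgb()/rgba(), or "H S% L%" HSL triplet) to a hex string.
const parseCssColorToHex = (value: string): string => {
  const v = value.trim()
  if (!v) return ''
  if (v.startsWith('#')) return v
  const toHex = (n: number) => Math.round(n).toString(16).padStart(2, '0')
  const rgbMatch = v.match(/^rgba?\(\s*(\d+)[,\s]+(\d+)[,\s]+(\d+)/i)
  if (rgbMatch) {
    return `#${toHex(Number(rgbMatch[1]))}${toHex(Number(rgbMatch[2]))}${toHex(Number(rgbMatch[3]))}`
  }
  // Assume a bare HSL triplet such as "222.2 47.4% 11.2%" (shadcn-style CSS variables).
  const [h, s, l] = v.replace(/%/g, '').split(/[\s,]+/).map(Number)
  if ([h, s, l].some(Number.isNaN)) return ''
  const sN = s / 100
  const lN = l / 100
  const a = sN * Math.min(lN, 1 - lN)
  const channel = (n: number) => {
    const k = (n + h / 30) % 12
    return lN - a * Math.max(-1, Math.min(k - 3, Math.min(9 - k, 1)))
  }
  return `#${toHex(channel(0) * 255)}${toHex(channel(8) * 255)}${toHex(channel(4) * 255)}`
}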
case 'scatter':
  return (
    <ResponsiveContainer width="100%" height="100%">
      <ScatterChart margin={{ top: 10, right: 10, left: -20, bottom: 20 }}>
        <CartesianGrid strokeDasharray="3 3" stroke={themeColors.grid} />
        <XAxis
          type="number"
          dataKey={config.xAxisKey}
          name={config.xAxisKey}
          {...commonAxisProps}
          dy={10}
        />
        <YAxis type="number" dataKey={config.yAxisKey} name={config.yAxisKey} {...commonAxisProps} />
        <Tooltip
          cursor={{ strokeDasharray: '3 3' }}
          contentStyle={{
            backgroundColor: themeColors.tooltip.bg,
            color: themeColors.tooltip.text,
            borderColor: themeColors.tooltip.border,
            borderRadius: '8px'
          }}
        />
        <Legend wrapperStyle={{ paddingTop: '20px' }} />
        {config.series?.map((s, i) => (
          <Scatter key={s.key} name={s.name} data={plotData} fill={s.color || COLORS[i % COLORS.length]} />
        ))}
      </ScatterChart>
Guard missing yAxisKey for scatter charts.
yAxisKey is optional in the type but required for a scatter chart. Without it, the axis binds to undefined.
🛡️ Proposed fix
case 'scatter':
+ if (!config.yAxisKey) {
+ return (
+ <div className="flex items-center justify-center h-full text-muted-foreground italic">
+ Missing y-axis key for scatter chart
+ </div>
+ )
+ }
  return (
🤖 Prompt for AI Agents
In `@components/graph-section.tsx` around lines 216 - 242, The scatter branch
currently assumes config.yAxisKey exists and binds YAxis to undefined; update
the 'scatter' case to validate that config.yAxisKey is present before rendering
the ScatterChart (e.g., early return a fallback/null or render an informative
placeholder/error UI), and ensure the YAxis and any bindings (YAxis dataKey and
Tooltip/XAxis names that reference config.yAxisKey) only use config.yAxisKey
when defined; locate the scatter rendering in the switch case (the JSX using
ScatterChart, XAxis, YAxis, Tooltip, and the map over config.series) and add the
guard/conditional rendering around it using config.yAxisKey to prevent binding
to undefined.
User description
This PR introduces generative graphing capabilities to the QCX platform.
Key changes:
- Added recharts to dependencies to support Bar, Line, Pie, Area, and Scatter charts.
- New dataAnalysis tool that generates structured data for charts based on user prompts or uploaded files.
- File uploads now accept .csv and .json files, which are processed and made available to the AI for analysis.
- Updated the researcher agent and tool pipeline to stream chart components to the UI in real-time.
- Geospatial analysis results are rendered on the map via MapQueryHandler.
- New GraphSection component which handles both streaming and static (hydrated) data.
PR created automatically by Jules for task 17358342468692938721 started by @ngoiyaeric
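For orientation, a hypothetical payload of the kind the dataAnalysis tool streams to GraphSection might look like the sketch below; the field names follow how DataAnalysisResult is consumed in components/graph-section.tsx and how the geospatial array feeds MapQueryHandler, while the concrete values are invented for illustration and the authoritative shape remains lib/types and lib/schema/data-analysis:

// Hypothetical example payload, not taken from the repository.
const exampleAnalysis: DataAnalysisResult = {
  title: 'Crop yield by region',
  description: 'Yield (t/ha) aggregated from the uploaded CSV',
  chartType: 'bar',
  data: [
    { region: 'North', yield: 4.2 },
    { region: 'South', yield: 3.1 }
  ],
  config: {
    xAxisKey: 'region',
    series: [{ key: 'yield', name: 'Yield (t/ha)' }]
  },
  geospatial: [
    { latitude: -1.29, longitude: 36.82, label: 'Nairobi region' }
  ]
}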
PR Type
Enhancement
Description
Added generative graphing capabilities with Recharts integration
Created dataAnalysis tool for structured chart generation from data
Extended file upload support to CSV and JSON formats
Integrated geospatial awareness for location-based analysis results
Added GraphSection component for streaming and static chart rendering
Diagram Walkthrough
File Walkthrough
7 files
Added DataAnalysisResult type definition
Extended file support and added chart UI handling
Updated file input to accept CSV and JSON
New component for rendering Recharts visualizations
New tool for data analysis and chart generation
Registered dataAnalysis tool in tool pipeline
Zod schema for data analysis tool parameters
1 files
Added dataAnalysis tool guidelines to agent instructions
1 files
Added recharts dependency for chart rendering
Summary by CodeRabbit
Release Notes