Add LLM related operations to the Platform API #841

Open

Arshardh wants to merge 12 commits into wso2:main from Arshardh:aiwspc

Conversation

@Arshardh Arshardh commented Jan 28, 2026

Summary by CodeRabbit

  • New Features

    • Full REST management (CRUD, list, filter, pagination) for LLM provider templates, providers, and project proxies; provider creation supports upstreams, auth, token/model extraction, access controls, and policies; proxies enable project-level routing.
    • Bundled default templates for major LLMs (OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, Gemini, Mistral) with automatic seeding for new organizations.
    • Persistent storage and indexes for LLM entities.
  • Chores

    • Runtime now includes default templates and a configurable template path.

coderabbitai bot commented Jan 28, 2026

Walkthrough

Adds end-to-end LLM support: DB schema (templates, providers, proxies), domain models/DTOs, repositories, services, HTTP handlers, template loader and default provider templates, startup seeding, and configuration for managing LLM provider templates, providers, proxies, and policies.

Changes

Cohort / File(s) Summary
Configuration & Image
platform-api/Dockerfile, platform-api/src/config/config.go
Copy default LLM templates into the image and add LLM_TEMPLATE_DEFINITIONS_PATH env/config.
Errors
platform-api/src/internal/constants/error.go
Added exported LLM-related errors (exists / not found variants for templates, providers, proxies).
Database schemas
platform-api/src/internal/database/schema.sql, platform-api/src/internal/database/schema.postgres.sql, platform-api/src/internal/database/schema.sqlite.sql
Added tables llm_provider_templates, llm_providers, llm_proxies with FKs, UNIQUE constraints and multiple indexes; fixed a trailing-comma syntax issue in one schema file.
Models & DTOs
platform-api/src/internal/model/llm.go, platform-api/src/internal/dto/llm.go
New domain models and DTOs for templates, providers, proxies, policies, token extraction, upstream/auth, access control, and rate-limiting structures.
Repository interfaces & impl
platform-api/src/internal/repository/interfaces.go, platform-api/src/internal/repository/llm.go
New repository interfaces and concrete repos implementing CRUD, JSON (de)serialization, transactional policy management helpers, counts, exists, and paginated listing.
Services & Seeder
platform-api/src/internal/service/llm.go, platform-api/src/internal/service/llm_template_seeder.go, platform-api/src/internal/service/organization.go
New services for templates/providers/proxies with mapping/validation logic; LLMTemplateSeeder to bootstrap defaults per org; seeder wired into organization registration.
HTTP Handler / API
platform-api/src/internal/handler/llm.go
New REST endpoints under /api/v1 for CRUD and listings (templates, providers, proxies), including scoped listings and standard error mappings.
Server wiring
platform-api/src/internal/server/server.go
Wired new repositories, services, handler, and invoked template seeder at startup; added dependencies to constructors.
Utilities
platform-api/src/internal/utils/llm_provider_template_loader.go
YAML loader for default LLM provider templates with validation, mapping helpers, and descriptive errors; skips non-YAML files.
Default templates
platform-api/src/resources/default-llm-provider-templates/*.yaml
Added multiple default LlmProviderTemplate YAMLs (openai, azure-openai, azureai-foundry, anthropic, awsbedrock, gemini, mistralai) describing endpoints, auth, token/model extraction and logos.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant HTTP as "LLM HTTP Handler"
    participant Service as "LLM Service"
    participant Repo as "LLM Repository"
    participant DB as "Database"

    Client->>HTTP: POST /api/v1/llm/providers
    HTTP->>HTTP: extract org, bind & validate
    HTTP->>Service: Create(orgUUID, createdBy, req)
    Service->>Service: validate template/upstream, defaults
    Service->>Repo: Exists(providerID, orgUUID)
    Repo->>DB: SELECT llm_providers...
    DB-->>Repo: result
    Repo-->>Service: exists? / no
    Service->>Repo: Create(provider) + policies (transaction)
    Repo->>DB: INSERT llm_providers / INSERT llm_policies (tx)
    DB-->>Repo: success
    Repo-->>Service: created provider
    Service-->>HTTP: DTO
    HTTP-->>Client: 201 Created + DTO

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐇 I hop through YAML and seed with care,

templates tucked in folders, waiting there,
providers, proxies find a cozy nest,
policies penned, migrations dressed,
I nibble bugs and wish you tests.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning: The pull request description is completely empty, missing all required template sections including Purpose, Goals, Approach, User stories, Documentation, Automation tests, Security checks, Samples, and Test environment. Resolution: add a comprehensive pull request description following the provided template, covering purpose, goals, approach, documentation, test coverage, and security validation details.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 4.26%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)

  • Title check ✅ Passed: The title clearly and concisely summarizes the main change: adding LLM (Large Language Model) related operations to the Platform API.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 12

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
platform-api/src/internal/server/server.go (1)

155-176: Resolve merge conflict markers before merge — build is currently broken.
The conflict markers (<<<<<<<, =======, >>>>>>>) must be removed and the final code should include both deployment and LLM wiring as intended.

🛠️ Possible resolution sketch
-<<<<<<< HEAD
-deploymentService := service.NewDeploymentService(...)
-=======
-llmTemplateService := service.NewLLMProviderTemplateService(llmTemplateRepo)
-llmProviderService := service.NewLLMProviderService(llmProviderRepo, llmTemplateRepo, orgRepo, llmTemplateSeeder)
-llmProxyService := service.NewLLMProxyService(llmProxyRepo, llmProviderRepo, projectRepo)
->>>>>>> 4f804b8f (Add backend operations for the AI workspace)
+deploymentService := service.NewDeploymentService(...)
+llmTemplateService := service.NewLLMProviderTemplateService(llmTemplateRepo)
+llmProviderService := service.NewLLMProviderService(llmProviderRepo, llmTemplateRepo, orgRepo, llmTemplateSeeder)
+llmProxyService := service.NewLLMProxyService(llmProxyRepo, llmProviderRepo, projectRepo)

...
-deploymentHandler := handler.NewDeploymentHandler(deploymentService)
+deploymentHandler := handler.NewDeploymentHandler(deploymentService)
+llmHandler := handler.NewLLMHandler(llmTemplateService, llmProviderService, llmProxyService)

...
-deploymentHandler.RegisterRoutes(router)
+deploymentHandler.RegisterRoutes(router)
+llmHandler.RegisterRoutes(router)

Also applies to: 207-211

🤖 Fix all issues with AI agents
In `@platform-api/src/internal/database/schema.sql`:
- Around line 406-416: Add a foreign key constraint on
llm_policies.organization_uuid to reference organizations(uuid) so org integrity
is enforced and orphaned policies are prevented; update the CREATE TABLE
llm_policies (or use ALTER TABLE llm_policies ADD CONSTRAINT ...) to declare
FOREIGN KEY (organization_uuid) REFERENCES organizations(uuid) ON DELETE CASCADE
(and ensure the column types/lengths match the organizations.uuid definition).

In `@platform-api/src/internal/repository/llm.go`:
- Around line 245-281: The Create flow in LLMProviderRepo writes the main
llm_providers row and then calls replaceProviderPolicies separately, which can
leave partial state on failure; wrap the insert and policy replacement in a
single DB transaction: begin a transaction at the start of
LLMProviderRepo.Create, use the transaction handle for the INSERT (instead of
r.db.Exec), call a transactional variant of replaceProviderPolicies (e.g.
replaceProviderPoliciesTx(tx, orgUUID, providerID, policies) or add an extra tx
parameter to replaceProviderPolicies) so it executes its deletes/inserts on the
same tx, rollback the tx on any error and commit only after both main row and
policies succeed; apply the same transactional pattern to the Update/Create
flows that call replaceProviderPolicies/replaceProxyPolicies (and create
corresponding transactional helper variants) so provider/proxy writes and policy
replacement are atomic (a sketch of this transactional pattern follows this list).
- Around line 805-817: In scanPolicies, the json.Unmarshal call that decodes
pathsJSON into policy.Paths currently ignores errors; change it to capture and
return the unmarshal error (wrapped with context) so callers see corrupted
policy data; specifically, in function scanPolicies handle the error returned
from json.Unmarshal([]byte(pathsJSON.String), &policy.Paths) (using pathsJSON
and policy.Paths) and return that error instead of discarding it.

In `@platform-api/src/internal/service/llm.go`:
- Around line 650-655: validateUpstream currently only checks u.URL; update it
to also validate auth when present by ensuring u.Auth (on dto.LLMUpstream) is
non-nil and that u.Auth.Header and u.Auth.Value are non-empty strings, returning
constants.ErrInvalidInput for any missing/empty required auth fields; make these
checks inside validateUpstream so callers like any transformer or gateway won't
assume header/value exist (a sketch follows this list).

In
`@platform-api/src/resources/default-llm-provider-templates/awsbedrock-template.yaml`:
- Around line 36-41: The YAML uses unsupported lookaround regexes in the
requestModel and responseModel identifier fields; replace the
lookbehind/lookahead pattern (?<=model/)[a-zA-Z0-9.:-]+(?=/) with a
capture-based pattern such as model/([a-zA-Z0-9.:-]+)/ (or model/([^/]+)/) so
Go's regexp (RE2) can compile it; update both requestModel.identifier and
responseModel.identifier accordingly and ensure any code that reads the
identifier extracts the first capture group (a short Go regexp example follows this list).
- Around line 23-26: The AWS Bedrock template is missing region-specific
endpoint and authentication config and uses PCRE lookaround regexes incompatible
with Go's RE2; update the template to add metadata.endpointUrl with a region
placeholder (e.g., https://bedrock-runtime.{region}.amazonaws.com) and a
metadata.auth entry indicating aws-sigv4 (or document that SigV4 must be handled
externally if the auth schema doesn't support it), and replace the
requestModel/responseModel regexes that use lookbehind/lookahead with RE2-safe
patterns (e.g., use explicit capture groups like model/([^/]+) or simple
anchors) or add validation at template load to reject unsupported lookaround
patterns (refer to the metadata.endpointUrl, metadata.auth, requestModel, and
responseModel identifiers to locate the edits).

In
`@platform-api/src/resources/default-llm-provider-templates/azureopenai-template.yaml`:
- Around line 30-38: The Azure OpenAI template uses the wrong token field names;
update the identifiers for promptTokens and completionTokens in the YAML so they
match Azure OpenAI's response fields: change the promptTokens identifier from
$.usage.input_tokens to $.usage.prompt_tokens and change the completionTokens
identifier from $.usage.output_tokens to $.usage.completion_tokens (leave
totalTokens as $.usage.total_tokens); modify the entries for promptTokens and
completionTokens in the azureopenai-template.yaml accordingly.

In
`@platform-api/src/resources/default-llm-provider-templates/gemini-template.yaml`:
- Line 30: Replace the placeholder logo URL in the gemini-template.yaml by
updating the logoUrl key to point to a real hosted asset (mirror the pattern
used by other templates, e.g., GitHub-hosted images); locate the logoUrl entry
in the gemini-template.yaml and swap the example.com URL for the actual logo
file URL and ensure the asset is publicly accessible and uses HTTPS.
- Around line 44-45: The requestModel in the template uses location: pathParam
with identifier (?<=models/)[a-zA-Z0-9.\-]+ which won't work because the
analytics extractor (gateway/system-policies/analytics/v0.1.0/analytics.go
extract function) only supports "payload" and "header" and Go's regexp lacks
lookbehind; update the template to either switch to location: payload with an
appropriate JSONPath expression or change identifier to a simple capturing group
and move extraction into the request context (and update analytics extraction to
handle path params if you choose that route), ensuring you reference the
requestModel and its identifier field when applying the change.

In `@portals/ai-workspace/index.html`:
- Around line 9-10: The <body> tag currently includes an invalid HTML attribute
`width`; remove the `width="100%"` attribute from the <body> element and, if
needed, apply width styling via the inline style (e.g., keep
style="background-color: `#ffffff`; width: 100%;") or preferably in a stylesheet;
update the <body> element in the file and ensure the existing <div id="root"
style="width: 100%;"></div> remains as the width controller if you prefer
keeping width in the root element.

In `@portals/ai-workspace/package.json`:
- Line 21: Update the react-is dependency in package.json from "18.3.1" to a
React 19-compatible version (for example "19.2.3" or "^19.0.0") so it matches
the installed react/react-dom 19.1.2; modify the "react-is" entry in
package.json (the dependency key "react-is") and then run your package manager
(npm/yarn/pnpm) to reinstall/update lockfile and verify build/tests.

In `@portals/ai-workspace/src/App.tsx`:
- Around line 25-33: Replace the non-semantic clickable divs rendered in the
sidebar (the JSX expression using sidebarItems.map and the element with
className "sidebar-item" and the active check item === "Overview") with semantic
interactive controls (preferably a <button> or <a> per item) so they are
keyboard-focusable and operable; ensure the active item uses aria-current (e.g.,
aria-current="page" when item === "Overview") and keep the existing
"sidebar-item" and "active" class names for styling, and if you choose buttons,
reset default button styles in App.css to match the previous visual design
(background, border, text alignment, width).
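
🛠️ Illustrative sketch for the transactional Create fix in platform-api/src/internal/repository/llm.go above. This is a minimal, self-contained version of the suggested pattern; the table and column names, the simplified types, and the replaceProviderPoliciesTx signature are assumptions for illustration, not the repository's actual definitions.

package repository

import (
	"database/sql"
	"fmt"
)

// Minimal stand-ins for this sketch; the real model and repository types differ.
type llmPolicy struct {
	Name string
}

type llmProvider struct {
	ID          string
	DisplayName string
	Policies    []llmPolicy
}

type llmProviderRepo struct {
	db *sql.DB
}

// Create inserts the provider row and replaces its policies on one transaction,
// so a failure in either step leaves no partial state behind.
func (r *llmProviderRepo) Create(orgUUID string, p *llmProvider) error {
	tx, err := r.db.Begin()
	if err != nil {
		return fmt.Errorf("begin tx: %w", err)
	}
	defer tx.Rollback() // harmless after a successful Commit

	if _, err := tx.Exec(
		`INSERT INTO llm_providers (organization_uuid, handle, display_name) VALUES (?, ?, ?)`,
		orgUUID, p.ID, p.DisplayName,
	); err != nil {
		return fmt.Errorf("insert llm_provider: %w", err)
	}

	if err := r.replaceProviderPoliciesTx(tx, orgUUID, p.ID, p.Policies); err != nil {
		return fmt.Errorf("replace policies: %w", err)
	}

	return tx.Commit()
}

// replaceProviderPoliciesTx runs the policy delete/insert statements on the caller's transaction.
func (r *llmProviderRepo) replaceProviderPoliciesTx(tx *sql.Tx, orgUUID, handle string, policies []llmPolicy) error {
	if _, err := tx.Exec(
		`DELETE FROM llm_policies WHERE organization_uuid = ? AND target_handle = ?`,
		orgUUID, handle,
	); err != nil {
		return err
	}
	for _, pol := range policies {
		if _, err := tx.Exec(
			`INSERT INTO llm_policies (organization_uuid, target_handle, name) VALUES (?, ?, ?)`,
			orgUUID, handle, pol.Name,
		); err != nil {
			return err
		}
	}
	return nil
}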
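
🛠️ Illustrative sketch for the validateUpstream fix in platform-api/src/internal/service/llm.go above. The upstream/auth shape and the invalid-input error below are simplified stand-ins assumed for this example; the real code uses dto.LLMUpstream and returns constants.ErrInvalidInput.

package service

import "errors"

// Simplified stand-ins for dto.LLMUpstream and constants.ErrInvalidInput.
var errInvalidInput = errors.New("invalid input")

type llmUpstreamAuth struct {
	Header string
	Value  string
}

type llmUpstream struct {
	URL  string
	Auth *llmUpstreamAuth
}

// validateUpstream rejects a missing URL and, when auth is supplied, requires both
// the header name and the value to be non-empty so downstream components never see
// a half-configured credential.
func validateUpstream(u *llmUpstream) error {
	if u == nil || u.URL == "" {
		return errInvalidInput
	}
	if u.Auth != nil && (u.Auth.Header == "" || u.Auth.Value == "") {
		return errInvalidInput
	}
	return nil
}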
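
🛠️ Illustrative example for the lookaround issue flagged in the Bedrock and Gemini templates above. Go's regexp package (RE2) rejects lookbehind/lookahead, while a capture-group pattern compiles and yields the same model ID through the first submatch; the sample path below is made up for the demonstration.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Lookbehind/lookahead are not supported by RE2, so this pattern fails to compile.
	if _, err := regexp.Compile(`(?<=model/)[a-zA-Z0-9.:-]+(?=/)`); err != nil {
		fmt.Println("lookaround rejected:", err)
	}

	// RE2-safe alternative: capture the model segment and read submatch 1.
	re := regexp.MustCompile(`model/([a-zA-Z0-9.:-]+)/`)
	path := "/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke" // illustrative Bedrock-style path
	if m := re.FindStringSubmatch(path); len(m) > 1 {
		fmt.Println("model:", m[1]) // anthropic.claude-3-sonnet-20240229-v1:0
	}
}
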
🧹 Nitpick comments (14)
portals/ai-workspace/package.json (3)

24-24: Inconsistent version pinning for swagger-ui-react.

Most dependencies use exact versions for reproducibility, but swagger-ui-react uses a caret range (^5.31.0). Consider pinning to an exact version for consistency.

Suggested fix
-    "swagger-ui-react": "^5.31.0"
+    "swagger-ui-react": "5.31.0"

33-33: Inconsistent version pinning for @types/swagger-ui-react.

Same as above—consider pinning to an exact version for consistency with other dependencies.

Suggested fix
-    "@types/swagger-ui-react": "^5.18.0",
+    "@types/swagger-ui-react": "5.18.0",

6-11: Missing test script despite Jest being included.

Jest 30.2.0 is listed in devDependencies, but there's no test script defined. Consider adding one to enable running tests.

Suggested fix
   "scripts": {
     "dev": "vite",
     "build": "tsc -b && vite build",
     "lint": "eslint .",
-    "preview": "vite preview"
+    "preview": "vite preview",
+    "test": "jest"
   },
portals/ai-workspace/tsconfig.json (1)

3-4: Consider aligning target version with referenced configs.

The root tsconfig.json uses target: "ES2020" and lib: ["ES2020", ...] while tsconfig.app.json uses target: "ES2022" and lib: ["ES2022", ...]. While the referenced configs take precedence during compilation, aligning these values would improve consistency and IDE behavior.

Suggested fix
   "compilerOptions": {
-    "target": "ES2020",
-    "lib": ["ES2020", "DOM", "DOM.Iterable"],
+    "target": "ES2022",
+    "lib": ["ES2022", "DOM", "DOM.Iterable"],
portals/ai-workspace/src/App.css (1)

55-74: Add a keyboard focus style for sidebar items.

If these are interactive controls, the hover-only state misses keyboard users. Consider adding a visible :focus-visible style aligned with the hover/active look.

♻️ Suggested CSS addition
 .sidebar-item:not(.active):hover {
   background-color: #f5f5f5;
 }
+
+.sidebar-item:focus-visible {
+  outline: 2px solid #8c5eff;
+  outline-offset: 2px;
+  background-color: #f5f5f5;
+}
portals/ai-workspace/Dockerfile (5)

38-38: Port 5173 is unconventional for nginx.

Port 5173 is typically associated with Vite's development server. For a production nginx container, port 80 or 8080 is more conventional. Verify this aligns with your nginx.conf and deployment requirements.


31-54: Consider adding a HEALTHCHECK instruction.

Adding a HEALTHCHECK helps container orchestrators (Docker Swarm, Kubernetes) determine if the container is healthy and serving requests properly.

💡 Suggested improvement
 FROM nginx:alpine

+HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+    CMD wget --no-verbose --tries=1 --spider http://localhost:5173/ || exit 1
+
 COPY --from=builder /app/dist /usr/share/nginx/html

55-56: Remove trailing blank lines.

There are two unnecessary blank lines at the end of the file.

✂️ Remove trailing blank lines
 CMD ["nginx", "-g", "daemon off;"]
-
-

19-19: Pin base image versions for reproducibility.

Using floating tags like node:22-alpine and nginx:alpine can lead to non-reproducible builds when the underlying images are updated. Pin to specific versions (e.g., node:22.13.1-alpine3.21, nginx:1.27.3-alpine).

🔧 Suggested improvement
-FROM node:22-alpine AS builder
+FROM node:22.13.1-alpine3.21 AS builder
-FROM nginx:alpine
+FROM nginx:1.27.3-alpine

Also applies to: 32-32


25-25: Document or resolve the peer dependency conflict requiring --legacy-peer-deps.

The flag bypasses peer dependency resolution due to unresolved @babel/core peer dependencies from transitive @babel/* packages. Either add @babel/core as an explicit dependency or document in the Dockerfile why this flag is necessary.

platform-api/src/internal/service/organization.go (1)

91-97: Consider logging the organization ID instead of name for easier debugging.

The name variable may have been defaulted from handle (line 72), and the organization ID is more useful for tracing issues in logs.

Suggested change
 	// Seed default LLM provider templates for the new organization (best-effort)
 	if s.llmTemplateSeeder != nil {
 		if seedErr := s.llmTemplateSeeder.SeedForOrg(id); seedErr != nil {
-			log.Printf("[OrganizationService] Failed to seed default LLM templates for organization %s: %v", name, seedErr)
+			log.Printf("[OrganizationService] Failed to seed default LLM templates for organization %s (id=%s): %v", name, id, seedErr)
 		}
 	}
platform-api/src/internal/utils/llm_provider_template_loader.go (1)

121-129: Consider logging a warning when extraction identifiers are partially defined.

Returning nil silently when only Location or only Identifier is set could mask template configuration errors. A warning log would help operators debug incomplete template definitions.

Suggested change
+import "log"
+
 func mapExtractionIdentifier(in *extractionIdentifierYAML) *model.ExtractionIdentifier {
 	if in == nil {
 		return nil
 	}
 	if strings.TrimSpace(in.Location) == "" || strings.TrimSpace(in.Identifier) == "" {
+		if strings.TrimSpace(in.Location) != "" || strings.TrimSpace(in.Identifier) != "" {
+			log.Printf("[LLMTemplateLoader] Ignoring partial extraction identifier: location=%q, identifier=%q", in.Location, in.Identifier)
+		}
 		return nil
 	}
 	return &model.ExtractionIdentifier{Location: in.Location, Identifier: in.Identifier}
 }
platform-api/src/internal/service/llm_template_seeder.go (1)

60-68: Redundant map can be simplified.

Both existingByID and existingByHandle use t.ID as the key. The existence check at line 74 can be replaced with a nil-check on existingByHandle, eliminating the need for existingByID.

Suggested simplification
-	existingByID := make(map[string]struct{}, len(existing))
-	existingByHandle := make(map[string]*model.LLMProviderTemplate, len(existing))
+	existingByHandle := make(map[string]*model.LLMProviderTemplate, len(existing))
 	for _, t := range existing {
 		if t == nil {
 			continue
 		}
-		existingByID[t.ID] = struct{}{}
 		existingByHandle[t.ID] = t
 	}

 	for _, tpl := range s.templates {
 		if tpl == nil || tpl.ID == "" {
 			continue
 		}
-		if _, ok := existingByID[tpl.ID]; ok {
-			current := existingByHandle[tpl.ID]
+		if current, ok := existingByHandle[tpl.ID]; ok {
 			if current != nil && current.Metadata == nil && tpl.Metadata != nil {
platform-api/src/internal/dto/llm.go (1)

86-154: Align list item JSON field casing with the main DTOs.
List items use created_at/updated_at while the primary DTOs use createdAt/updatedAt. If the API contract is camelCase, this will produce inconsistent responses. Please confirm the expected casing and align accordingly.

♻️ Suggested alignment (apply to all list item structs)
 type LLMProviderTemplateListItem struct {
 	ID          string    `json:"id" yaml:"id"`
 	DisplayName string    `json:"displayName" yaml:"displayName"`
-	CreatedAt   time.Time `json:"created_at" yaml:"created_at"`
-	UpdatedAt   time.Time `json:"updated_at" yaml:"updated_at"`
+	CreatedAt   time.Time `json:"createdAt" yaml:"createdAt"`
+	UpdatedAt   time.Time `json:"updatedAt" yaml:"updatedAt"`
 }

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@platform-api/src/internal/database/schema.postgres.sql`:
- Around line 345-423: llm_policies currently only references organizations so
deleting an llm_providers or llm_proxies row can leave orphaned policies; fix by
splitting policies into two tables (e.g., llm_provider_policies and
llm_proxy_policies) that include organization_uuid and target_handle and add
FOREIGN KEY (organization_uuid, target_handle) REFERENCES
llm_providers(organization_uuid, handle) ON DELETE CASCADE for provider policies
and FOREIGN KEY (organization_uuid, target_handle) REFERENCES
llm_proxies(organization_uuid, handle) ON DELETE CASCADE for proxy policies (or
alternatively implement AFTER DELETE triggers on llm_providers and llm_proxies
that delete matching rows from llm_policies); update any code that reads/writes
llm_policies to use the new tables or ensure the trigger is deployed.

In `@platform-api/src/internal/database/schema.sqlite.sql`:
- Around line 345-423: The llm_policies table can become orphaned when a
provider/proxy is deleted because it only references organizations; to fix this,
split policies into two tables (llm_provider_policies and llm_proxy_policies) or
add explicit FK columns that reference the canonical resource UUIDs and cascade
deletes: create llm_provider_policies with FOREIGN KEY (organization_uuid,
provider_uuid) REFERENCES llm_providers(organization_uuid, uuid) ON DELETE
CASCADE and create llm_proxy_policies with FOREIGN KEY (organization_uuid,
proxy_uuid) REFERENCES llm_proxies(organization_uuid, uuid) ON DELETE CASCADE
(or alternatively add triggers to delete rows from llm_policies when
llm_providers or llm_proxies rows are removed); update any code paths that
insert into or query llm_policies to use the new tables/columns and ensure
application inserts include provider_uuid or proxy_uuid rather than relying only
on target_handle/target_type.

In `@platform-api/src/internal/repository/llm.go`:
- Around line 419-432: LLMProviderRepo.Delete currently only removes the
llm_providers row; update it to also delete associated policy rows (e.g., from
llm_provider_policies or the table that stores provider->policy links) within a
transaction: begin a tx, Exec DELETE FROM llm_provider_policies WHERE handle = ?
AND organization_uuid = ? using providerID and orgUUID, then Exec DELETE FROM
llm_providers ... check RowsAffected for the provider delete (and return
sql.ErrNoRows if zero), rollback on any error and commit on success; apply the
same transactional cleanup change to the other Delete implementation in the file
(the one around lines 702-715).

In `@platform-api/src/internal/server/server.go`:
- Around line 84-103: Trim whitespace from cfg.LLMTemplateDefinitionsPath, then
guard against treating "." or "src" (and empty strings) as valid for building a
fallbackPath before calling utils.LoadLLMProviderTemplatesFromDirectory;
specifically, update the fallback resolution logic around cleanPath and
fallbackPath so that if cleanPath == "" or cleanPath == "." or cleanPath ==
"src" you do not set fallbackPath = filepath.Join("src", cleanPath) or attempt
the fallback load, and only attempt the src-prefixed fallback when cleanPath is
a non-empty, non-"." non-"src" relative path; keep references to
cfg.LLMTemplateDefinitionsPath, cleanPath, fallbackPath and
utils.LoadLLMProviderTemplatesFromDirectory when implementing the guard (a sketch follows this list).

In `@platform-api/src/internal/utils/llm_provider_template_loader.go`:
- Around line 48-64: The YAML struct llmProviderTemplateYAML.Spec is missing a
description field so any "description" in template YAML is ignored; add a
Description string `yaml:"description"` to the Spec block in
llmProviderTemplateYAML and ensure the code that constructs
model.LLMProviderTemplate (the function that maps YAML ->
model.LLMProviderTemplate) populates model.Description from the new
Spec.Description field (or remove model.Description if you intend templates not
to provide it). Update references to Spec.Description in the converter/loader so
the database-backed model.Description is set from the YAML (a sketch follows this list).
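
🛠️ Illustrative sketch for the fallback-path guard in platform-api/src/internal/server/server.go above. A minimal, standalone version of the suggested check; the helper name and the exact wiring into cfg.LLMTemplateDefinitionsPath and utils.LoadLLMProviderTemplatesFromDirectory are assumptions for illustration.

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveTemplateFallback returns the "src"-prefixed fallback path only for a
// meaningful relative path; empty strings, ".", "src", and absolute paths yield no fallback.
func resolveTemplateFallback(configuredPath string) (string, bool) {
	cleanPath := filepath.Clean(strings.TrimSpace(configuredPath))
	if cleanPath == "" || cleanPath == "." || cleanPath == "src" || filepath.IsAbs(cleanPath) {
		return "", false
	}
	return filepath.Join("src", cleanPath), true
}

func main() {
	for _, p := range []string{"resources/default-llm-provider-templates", " . ", "src", ""} {
		if fallback, ok := resolveTemplateFallback(p); ok {
			fmt.Printf("%q -> fallback %q\n", p, fallback)
		} else {
			fmt.Printf("%q -> no fallback\n", p)
		}
	}
}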
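
🛠️ Illustrative sketch for the missing description field in platform-api/src/internal/utils/llm_provider_template_loader.go above. The structs here are cut-down stand-ins (assuming the gopkg.in/yaml.v3 package); the sketch only shows the new Description tag being carried from the YAML spec into the model.

package utils

import (
	"strings"

	"gopkg.in/yaml.v3"
)

// Cut-down stand-ins for this sketch; the real loader has many more fields.
type llmProviderTemplateYAML struct {
	Spec struct {
		DisplayName string `yaml:"displayName"`
		Description string `yaml:"description"` // previously unmapped, so YAML descriptions were dropped
	} `yaml:"spec"`
}

type llmProviderTemplate struct {
	DisplayName string
	Description string
}

func parseTemplate(data []byte) (*llmProviderTemplate, error) {
	var in llmProviderTemplateYAML
	if err := yaml.Unmarshal(data, &in); err != nil {
		return nil, err
	}
	return &llmProviderTemplate{
		DisplayName: strings.TrimSpace(in.Spec.DisplayName),
		Description: strings.TrimSpace(in.Spec.Description), // carry the YAML description into the model
	}, nil
}
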
🧹 Nitpick comments (4)
platform-api/src/internal/service/llm_template_seeder.go (1)

60-68: Redundant map or misleading naming.

Both existingByID and existingByHandle use t.ID as the key. The existingByID map (only used for existence checks) is redundant since the same lookup can be done via existingByHandle. Consider consolidating to a single map or renaming for clarity.

♻️ Suggested simplification
-	existingByID := make(map[string]struct{}, len(existing))
-	existingByHandle := make(map[string]*model.LLMProviderTemplate, len(existing))
+	existingByHandle := make(map[string]*model.LLMProviderTemplate, len(existing))
 	for _, t := range existing {
 		if t == nil {
 			continue
 		}
-		existingByID[t.ID] = struct{}{}
 		existingByHandle[t.ID] = t
 	}

Then replace the existence check on line 74:

-		if _, ok := existingByID[tpl.ID]; ok {
+		if _, ok := existingByHandle[tpl.ID]; ok {
platform-api/src/internal/utils/llm_provider_template_loader.go (2)

121-129: Consider trimming values when setting fields.

The function checks trimmed values but assigns the original (potentially whitespace-padded) values. For consistency, consider trimming on assignment as well.

♻️ Suggested fix
 func mapExtractionIdentifier(in *extractionIdentifierYAML) *model.ExtractionIdentifier {
 	if in == nil {
 		return nil
 	}
-	if strings.TrimSpace(in.Location) == "" || strings.TrimSpace(in.Identifier) == "" {
+	location := strings.TrimSpace(in.Location)
+	identifier := strings.TrimSpace(in.Identifier)
+	if location == "" || identifier == "" {
 		return nil
 	}
-	return &model.ExtractionIdentifier{Location: in.Location, Identifier: in.Identifier}
+	return &model.ExtractionIdentifier{Location: location, Identifier: identifier}
 }

135-142: Auth object created even when all fields are empty.

When in.Auth is non-nil but all its fields are empty strings after trimming, an auth object with empty strings is still created. Consider returning nil for auth if all fields are empty.

♻️ Suggested fix
 	var auth *model.LLMProviderTemplateAuth
 	if in.Auth != nil {
-		auth = &model.LLMProviderTemplateAuth{
-			Type:        strings.TrimSpace(in.Auth.Type),
-			Header:      strings.TrimSpace(in.Auth.Header),
-			ValuePrefix: in.Auth.ValuePrefix,
+		authType := strings.TrimSpace(in.Auth.Type)
+		authHeader := strings.TrimSpace(in.Auth.Header)
+		if authType != "" || authHeader != "" || in.Auth.ValuePrefix != "" {
+			auth = &model.LLMProviderTemplateAuth{
+				Type:        authType,
+				Header:      authHeader,
+				ValuePrefix: in.Auth.ValuePrefix,
+			}
 		}
 	}
platform-api/src/internal/database/schema.sql (1)

453-462: Some indexes may be redundant with UNIQUE constraints.

SQLite automatically creates indexes for UNIQUE constraints. The following explicit indexes overlap:

  • idx_llm_provider_templates_handle overlaps with UNIQUE(organization_uuid, handle)
  • idx_llm_providers_handle overlaps with UNIQUE(organization_uuid, handle)
  • idx_llm_proxies_handle overlaps with UNIQUE(organization_uuid, handle)

These redundant indexes consume storage without providing additional query performance benefits.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@platform-api/src/internal/repository/llm.go`:
- Around line 363-376: In the List method of llm.go, don't ignore json.Unmarshal
failures for upstreamAuthJSON and accessControlJSON; replace the discarded
unmarshals with error-checked calls (e.g., err :=
json.Unmarshal([]byte(upstreamAuthJSON.String), &p.UpstreamAuth) and err :=
json.Unmarshal([]byte(accessControlJSON.String), &p.AccessControl)) and return
the error (or wrap it) instead of discarding it so corrupted JSON surfaces to
the caller; locate these in the block around r.listProviderPolicies and amend
accordingly.
- Around line 548-550: The code currently ignores json.Unmarshal errors when
decoding accessControlJSON into p.AccessControl; update the LLMProxyRepo methods
GetByID, List, ListByProject, and ListByProvider so that after calling
json.Unmarshal([]byte(accessControlJSON.String), &p.AccessControl) you check the
returned error and propagate it (return it from the method, or wrap with
context) instead of discarding; ensure you handle only when
accessControlJSON.Valid && accessControlJSON.String != "" and include a helpful
message referencing the ID or row context if available.
- Around line 316-330: The json.Unmarshal calls for upstreamAuthJSON and
accessControlJSON are ignoring errors which can hide corrupted DB data; in the
function that populates p (where upstreamAuthJSON and accessControlJSON are
read), replace the discarded unmarshal calls with error checks: call
json.Unmarshal on []byte(upstreamAuthJSON.String) into p.UpstreamAuth and if it
returns an error, return nil and the error (or wrap it with context mentioning
"unmarshal upstreamAuth for provider" and the provider ID); do the same for
accessControlJSON into p.AccessControl, returning the error instead of
discarding it so callers (and logs) can detect malformed JSON.
- Around line 790-799: Remove the unused helper function
mapUniqueConstraintError from the file; locate the function definition named
mapUniqueConstraintError and delete it entirely (including its imports usage if
that function was the sole user of any imports like strings or fmt—remove those
imports only if they become unused after deletion) so no dead code remains.
🧹 Nitpick comments (2)
platform-api/src/internal/repository/llm.go (2)

62-70: Consider handling json.Marshal error.

The error from json.Marshal is discarded. While Marshal rarely fails for structs with basic types, propagating the error would make debugging easier if unexpected data causes serialization issues.

♻️ Suggested fix
-	configJSON, _ := json.Marshal(&llmProviderTemplateConfig{
+	configJSON, err := json.Marshal(&llmProviderTemplateConfig{
 		Metadata:         t.Metadata,
 		PromptTokens:     t.PromptTokens,
 		CompletionTokens: t.CompletionTokens,
 		TotalTokens:      t.TotalTokens,
 		RemainingTokens:  t.RemainingTokens,
 		RequestModel:     t.RequestModel,
 		ResponseModel:    t.ResponseModel,
 	})
+	if err != nil {
+		return fmt.Errorf("failed to marshal template config: %w", err)
+	}

801-819: Remove unused non-transactional policy helper functions.

replaceProviderPolicies and replaceProxyPolicies (non-tx versions) are unused throughout the codebase. All Create, Update, and Delete operations use the transactional variants (replaceProviderPoliciesTx and replaceProxyPoliciesTx). Remove these unused functions to reduce code duplication.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@platform-api/src/internal/repository/llm.go`:
- Around line 61-69: The code currently ignores errors from json.Marshal (e.g.,
where llmProviderTemplateConfig is marshaled) which can corrupt stored JSON;
update every json.Marshal call (including those in llmProviderTemplate creation,
LLMProvider/LLMProxy access control and upstream auth marshaling, and policy
path marshaling in replaceProviderPolicies, replaceProviderPoliciesTx,
replaceProxyPolicies, replaceProxyPoliciesTx) to check the returned error,
return it (or the enclosing function's error) immediately on failure, and avoid
writing or using the nil/empty byte slice; locate uses of json.Marshal around
symbols like llmProviderTemplateConfig, replaceProviderPolicies,
replaceProviderPoliciesTx, replaceProxyPolicies, replaceProxyPoliciesTx and add
standard error handling (if err != nil { return err }) before proceeding.
🧹 Nitpick comments (1)
platform-api/src/internal/repository/llm.go (1)

372-375: Avoid N+1 policy lookups in list endpoints.

Each list call triggers one extra query per row. Consider prefetching policies for all handles in the page (single IN (...) query) and mapping them in memory, or joining on llm_policies with aggregation to reduce query count.

Also applies to: 601-605, 644-648, 687-691
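
🛠️ Illustrative sketch for the N+1 concern above: prefetch policies for every handle on the page with a single IN (...) query and group them in memory. Table and column names (llm_policies, target_handle, name) are assumptions for illustration, not the actual schema.

package repository

import (
	"database/sql"
	"fmt"
	"strings"
)

// listPoliciesByHandles loads the policies for all handles on the current page in one
// query and returns them grouped by handle, instead of issuing one query per row.
func listPoliciesByHandles(db *sql.DB, orgUUID string, handles []string) (map[string][]string, error) {
	if len(handles) == 0 {
		return map[string][]string{}, nil
	}
	placeholders := make([]string, len(handles))
	args := make([]any, 0, len(handles)+1)
	args = append(args, orgUUID)
	for i, h := range handles {
		placeholders[i] = "?"
		args = append(args, h)
	}
	query := fmt.Sprintf(
		`SELECT target_handle, name FROM llm_policies
		 WHERE organization_uuid = ? AND target_handle IN (%s)`,
		strings.Join(placeholders, ", "),
	)

	rows, err := db.Query(query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	byHandle := make(map[string][]string)
	for rows.Next() {
		var handle, name string
		if err := rows.Scan(&handle, &name); err != nil {
			return nil, err
		}
		byHandle[handle] = append(byHandle[handle], name)
	}
	return byHandle, rows.Err()
}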

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
platform-api/src/internal/database/schema.postgres.sql (1)

91-99: ⚠️ Potential issue | 🔴 Critical

Missing comma causes syntax error in api_key_security table.

Line 93 is missing a comma after NOT NULL, which will cause the table creation to fail.

🐛 Proposed fix
 CREATE TABLE IF NOT EXISTS api_key_security (
     id SERIAL PRIMARY KEY,
-    api_uuid VARCHAR(40) NOT NULL
+    api_uuid VARCHAR(40) NOT NULL,
     enabled BOOLEAN,
     header VARCHAR(255),
     query VARCHAR(255),
     cookie VARCHAR(255),
     FOREIGN KEY (api_uuid) REFERENCES apis(uuid) ON DELETE CASCADE
 );
🧹 Nitpick comments (1)
platform-api/src/internal/service/llm.go (1)

32-36: Unused status constants.

llmStatusDeployed and llmStatusFailed are defined but never used in this file. Only llmStatusPending is referenced. Consider removing unused constants or adding a TODO if they're planned for future use.
