feat(editor): workspace skills with draft→publish versioning and blob storage #13156

DanielSLew merged 21 commits into main from
Conversation
🦋 Changeset detected — latest commit: 207b111. The changes in this PR will be included in the next version bump. This PR includes changesets to release 28 packages.
Note: Reviews paused. This branch is under active development, so CodeRabbit has automatically paused its review to avoid overwhelming the author with comments on an influx of new commits.
Walkthrough: Adds versioned workspace and skill storage, a content-addressable BlobStore API with multiple implementations, editor provider registries with built-in providers, editor workspace/skill namespaces (hydration & publish), server REST routes/schemas for stored workspaces/skills, and extensive tests.
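The walkthrough mentions a content-addressable BlobStore API. As a rough illustration of the idea only (the class and method names below are hypothetical, not the PR's actual API), content addressing keys each blob by a digest of its bytes, so identical content deduplicates and a key can be verified against the content it names:

```typescript
import { createHash } from "node:crypto";

// Minimal in-memory sketch of a content-addressable store.
// `put` derives the key from the content itself, so storing the
// same bytes twice yields the same hash and a single stored copy.
class InMemoryBlobSketch {
  private blobs = new Map<string, Buffer>();

  put(content: Buffer): string {
    const hash = createHash("sha256").update(content).digest("hex");
    this.blobs.set(hash, content); // idempotent for identical content
    return hash;
  }

  get(hash: string): Buffer | undefined {
    return this.blobs.get(hash);
  }
}
```

The deduplication property falls out for free: two callers storing the same file get the same hash back, and the store holds one copy.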
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 11
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
stores/pg/src/storage/domains/agents/index.ts (1)
481-497: ⚠️ Potential issue | 🟠 Major

Bug: `configFieldNames` in PG `update` is missing `workspace`, `skills`, `skillsFormat`.

The local `configFieldNames` array (lines 481–497) omits the three snapshot fields that all other storage backends include in their config detection:

- MongoDB includes them in `SNAPSHOT_FIELDS` (lines 49–51)
- LibSQL includes them in `SNAPSHOT_FIELDS` (lines 48–50)
- InMemory includes them in `configFieldNames` (lines 114–116)

Without these fields, PG's `update` won't detect changes to `workspace`, `skills`, or `skillsFormat` as config updates, so new agent versions won't be created when these fields change.

🐛 Proposed fix

```diff
 const configFieldNames = [
   'name',
   'description',
   'instructions',
   'model',
   'tools',
   'defaultOptions',
   'workflows',
   'agents',
   'integrationTools',
   'inputProcessors',
   'outputProcessors',
   'memory',
   'scorers',
   'mcpClients',
   'requestContextSchema',
+  'workspace',
+  'skills',
+  'skillsFormat',
 ];
```

Consider extracting a shared `SNAPSHOT_FIELDS` constant across all backends to prevent this kind of drift in the future.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `stores/pg/src/storage/domains/agents/index.ts` around lines 481-497: the PG storage's local `configFieldNames` array used in the update flow is missing the snapshot fields 'workspace', 'skills', and 'skillsFormat', so `update` won't treat changes to those as config updates. Add 'workspace', 'skills', and 'skillsFormat' to the `configFieldNames` array used by the update code, and consider replacing per-backend arrays with a shared `SNAPSHOT_FIELDS` constant (like the one used in the MongoDB/LibSQL modules) to avoid future drift.

stores/pg/src/storage/index.ts (1)
39-47: ⚠️ Potential issue | 🟡 Minor

New domains (`BlobsPG`, `SkillsPG`, `WorkspacesPG`) lack a `getExportDDL` implementation and should be added to `ALL_DOMAINS` for schema export.

The `ALL_DOMAINS` array at lines 39–47 is iterated by `exportSchemas()` to generate offline DDL. All existing domains in this array have a static `getExportDDL` method, but the three new domains do not yet implement it. Consequently, their tables won't appear in exported schema SQL, only at runtime via `init()`.

Add `getExportDDL` to `BlobsPG`, `SkillsPG`, and `WorkspacesPG`, then include them in `ALL_DOMAINS` so users doing managed migrations get complete DDL.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `stores/pg/src/storage/index.ts` around lines 39-47: add static `getExportDDL` implementations to the new domain classes `BlobsPG`, `SkillsPG`, and `WorkspacesPG` (matching the pattern used by existing domains like `MemoryPG`/`AgentsPG`) and then include these classes in the `ALL_DOMAINS` array so `exportSchemas()` will pick up their DDL. Locate the class definitions, implement a static `getExportDDL()` that returns the table DDL SQL string (consistent with other domains), and update the `ALL_DOMAINS` constant to include `BlobsPG`, `SkillsPG`, and `WorkspacesPG` alongside the existing entries.

packages/core/src/storage/mock.ts (1)
52-63: ⚠️ Potential issue | 🔴 Critical

Add missing `blobs` domain to `InMemoryStore`.

`LibSQLStore` includes a `blobs` domain, but `InMemoryStore` omits it despite `InMemoryBlobStore` existing. Although `blobs` is optional in `StorageDomains`, this inconsistency means any code path calling `getStore('blobs')` on an `InMemoryStore` will get `undefined`.

Proposed fix

Add the import:

```diff
+import { InMemoryBlobStore } from './domains/blobs/inmemory';
```

Add `blobs` to the stores object:

```diff
 workspaces: new InMemoryWorkspacesStorage({ db: this.#db }),
 skills: new InMemorySkillsStorage({ db: this.#db }),
+blobs: new InMemoryBlobStore(),
```

Note: `InMemoryBlobStore` takes no constructor arguments, unlike other domains.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/core/src/storage/mock.ts` around lines 52-63: the InMemory store initialization omits the blobs domain, so `getStore('blobs')` can return undefined. Add an import for `InMemoryBlobStore` and include `blobs: new InMemoryBlobStore()` in the `this.stores` object initialization (note `InMemoryBlobStore` requires no constructor args) to mirror `LibSQLStore`'s domains and prevent undefined returns from `getStore('blobs')`.

stores/libsql/src/storage/index.ts (1)
155-170: ⚠️ Potential issue | 🟠 Major

Add missing `blobs` domain to `InMemoryStore`.

The `InMemoryStore` omits the `blobs` domain despite wiring all other optional domains (`workspaces`, `skills`, `observability`, `agents`, `promptBlocks`, `scorerDefinitions`, `mcpClients`). This causes `getStore('blobs')` to return `undefined` for tests and development flows using `InMemoryStore`. The `InMemoryBlobStore` implementation is available in `packages/core/src/storage/domains/blobs/inmemory.ts` and should be added to maintain consistency with how other optional domains are handled.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `stores/libsql/src/storage/index.ts` around lines 155-170: the `InMemoryStore` is missing the blobs domain, so `getStore('blobs')` returns undefined. Modify the `InMemoryStore` initialization to include blobs by importing and instantiating `InMemoryBlobStore` (from `packages/core/src/storage/domains/blobs/inmemory.ts`) and adding it to the `this.stores` map alongside workspaces, skills, observability, agents, promptBlocks, scorerDefinitions, and mcpClients so that `getStore('blobs')` returns the `InMemoryBlobStore` instance.

packages/core/src/storage/base.ts (1)
213-224: ⚠️ Potential issue | 🔴 Critical

`blobs` domain is missing from the composition block — overrides will be silently dropped.

`StorageDomains` declares `blobs?: BlobStore` (line 28), and `init()` correctly initializes it (lines 300–302), but the composition block that builds `this.stores` from `default`/`domains` omits `blobs`. A user passing `domains: { blobs: myBlobStore }` will have it silently ignored.

Proposed fix

```diff
 this.stores = {
   memory: domainOverrides.memory ?? defaultStores?.memory,
   workflows: domainOverrides.workflows ?? defaultStores?.workflows,
   scores: domainOverrides.scores ?? defaultStores?.scores,
   observability: domainOverrides.observability ?? defaultStores?.observability,
   agents: domainOverrides.agents ?? defaultStores?.agents,
   promptBlocks: domainOverrides.promptBlocks ?? defaultStores?.promptBlocks,
   scorerDefinitions: domainOverrides.scorerDefinitions ?? defaultStores?.scorerDefinitions,
   mcpClients: domainOverrides.mcpClients ?? defaultStores?.mcpClients,
   workspaces: domainOverrides.workspaces ?? defaultStores?.workspaces,
   skills: domainOverrides.skills ?? defaultStores?.skills,
+  blobs: domainOverrides.blobs ?? defaultStores?.blobs,
 } as StorageDomains;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/storage/base.ts` around lines 213 - 224, The composition that builds this.stores omits the blobs domain so any domainOverrides.blobs or defaultStores?.blobs will be ignored; update the object literal in the constructor (the this.stores assignment) to include blobs: domainOverrides.blobs ?? defaultStores?.blobs (matching the StorageDomains `blobs` property), ensuring it aligns with the init() handling and other domain keys so passed-in blob stores aren’t silently dropped.
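The failure mode here is easy to see in isolation: a per-key merge only carries the keys it spells out. The sketch below uses simplified shapes (not the actual `StorageDomains` type) to show that omitting a key from the merged object literal silently drops any override for it:

```typescript
// Simplified stand-in for the domains record; the real type holds
// store instances rather than strings.
type Domains = { memory?: string; blobs?: string };

// Merge per key: an explicit override wins, otherwise fall back to
// the defaults. If `blobs` were omitted here (as in the flagged
// code), overrides for it would never reach the merged result.
function composeDomains(defaults: Domains, overrides: Domains): Domains {
  return {
    memory: overrides.memory ?? defaults.memory,
    blobs: overrides.blobs ?? defaults.blobs,
  };
}
```

Because the merge is an explicit object literal rather than a spread, every new optional domain must be added by hand, which is exactly the drift this comment flags.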
🟡 Minor comments (16)
.changeset/silly-cars-roll.md-1-12 (1)
1-12: ⚠️ Potential issue | 🟡 Minor

`@mastra/clickhouse` is listed in the frontmatter but not described in the changeset body.

Per coding guidelines, each package in the frontmatter should have its changes reflected in the description. The `@mastra/clickhouse` (patch) bump has no corresponding explanation. Similarly, `@mastra/gcs` (patch) and `@mastra/e2b` (patch) are only indirectly referenced.

Either add a brief note explaining what changed in those packages (e.g., "Updated storage domain interfaces for compatibility") or split them into a separate changeset if the changes are unrelated.

As per coding guidelines: "Check that the description inside the changeset file only applies to the packages listed in the frontmatter."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `.changeset/silly-cars-roll.md` around lines 1-12: the changeset frontmatter lists `@mastra/clickhouse` (patch), `@mastra/gcs` (patch) and `@mastra/e2b` (patch) but the body lacks descriptions for those packages. Update the body to include a short note for each listed package (e.g., "@mastra/clickhouse: small bugfix to query serialization", "@mastra/gcs: updated storage domain interfaces for compatibility", "@mastra/e2b: fixed edge-case encoding") or, if their changes are unrelated, move those package bumps into a separate changeset file so each changeset's description only applies to the packages in its frontmatter.

.changeset/clever-shirts-hide.md-6-27 (1)
6-27: ⚠️ Potential issue | 🟡 Minor

Description and code example reference packages not listed in the frontmatter.

Per coding guidelines, the changeset description should only cover packages listed in the frontmatter (`@mastra/e2b` and `@mastra/gcs`). Two issues:

- The opening line mentions "blob store packages" — neither `@mastra/e2b` nor `@mastra/gcs` is a blob store package.
- The code example imports `MastraEditor` and calls `editor.getFilesystemProviders()` / `editor.getSandboxProviders()`, which belong to the editor package (not listed in the frontmatter). The example should focus on what these two packages export (the provider descriptors themselves), or a separate changeset should be created for the editor package changes.

As per coding guidelines: "Check that the description inside the changeset file only applies to the packages listed in the frontmatter. Do not allow descriptions that mention changes to packages not listed in the frontmatter. In these cases, the user must create a separate changeset file for those packages."
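For reference, a correctly scoped changeset keeps its body limited to the packages named in the frontmatter. A sketch along those lines (the exact wording is illustrative; the provider names come from this review's suggested fix):

```md
---
'@mastra/gcs': patch
'@mastra/e2b': patch
---

`@mastra/gcs`: export the `gcsFilesystemProvider` descriptor and its `configSchema`.
`@mastra/e2b`: export the `e2bSandboxProvider` descriptor and its `configSchema`.
```

Editor-side APIs such as `getFilesystemProviders()` would then be documented in a separate changeset that lists the editor package in its own frontmatter.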
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `.changeset/clever-shirts-hide.md` around lines 6-27: the changeset description references packages and APIs not in the frontmatter. Remove or reword any lines that call or describe `MastraEditor`, `getFilesystemProviders`/`getSandboxProviders`, or generic "blob store packages" and instead limit the text to what `@mastra/gcs` and `@mastra/e2b` actually export (`gcsFilesystemProvider` and `e2bSandboxProvider` and their `configSchema` fields). If you need to document `MastraEditor` or provider discovery APIs, create a separate changeset for the editor package. Ensure the example only imports and demonstrates `gcsFilesystemProvider` and `e2bSandboxProvider` (or drop the example) so the changeset description only applies to the packages listed in the frontmatter.

stores/pg/src/storage/domains/skills/index.ts-753-762 (1)
753-762: ⚠️ Potential issue | 🟡 Minor

Potential `null` vs `undefined` mismatch from DB rows.

PostgreSQL returns `null` for missing column values, not `undefined`. The type assertions `as string | undefined` on `activeVersionId` and `authorId` won't convert `null` to `undefined`, so `StorageSkillType` consumers may receive `null` when they expect `undefined`.

🔧 Suggested fix

```diff
 private parseSkillRow(row: any): StorageSkillType {
   return {
     id: row.id as string,
     status: row.status as StorageSkillType['status'],
-    activeVersionId: row.activeVersionId as string | undefined,
-    authorId: row.authorId as string | undefined,
+    activeVersionId: row.activeVersionId ?? undefined,
+    authorId: row.authorId ?? undefined,
     createdAt: new Date(row.createdAtZ || row.createdAt),
     updatedAt: new Date(row.updatedAtZ || row.updatedAt),
   };
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `stores/pg/src/storage/domains/skills/index.ts` around lines 753-762: the `parseSkillRow` function can return nulls for `activeVersionId` and `authorId` because Postgres uses null, but the code asserts `as string | undefined`. Change the mapping in `parseSkillRow` to convert DB nulls to undefined for those keys (e.g., replace direct casts with a null-to-undefined check for `row.activeVersionId` and `row.authorId`) so `StorageSkillType` consumers always receive undefined instead of null; keep the rest of the field mappings (id, status, createdAt/updatedAt) unchanged.

stores/pg/src/storage/domains/skills/index.ts-764-784 (1)
764-784: ⚠️ Potential issue | 🟡 Minor

Same `null` vs `undefined` concern in `parseVersionRow`.

Fields like `description`, `instructions`, `license`, and `changeMessage` use `as string | undefined` but may be `null` from Postgres. Apply `?? undefined` to match the expected types.

🔧 Suggested fix

```diff
 private parseVersionRow(row: any): SkillVersion {
   return {
     id: row.id as string,
     skillId: row.skillId as string,
     versionNumber: row.versionNumber as number,
     name: row.name as string,
-    description: row.description as string,
-    instructions: row.instructions as string,
-    license: row.license as string | undefined,
+    description: row.description ?? undefined,
+    instructions: row.instructions ?? undefined,
+    license: row.license ?? undefined,
     compatibility: this.parseJson(row.compatibility, 'compatibility'),
     source: this.parseJson(row.source, 'source'),
     references: this.parseJson(row.references, 'references'),
     scripts: this.parseJson(row.scripts, 'scripts'),
     assets: this.parseJson(row.assets, 'assets'),
     metadata: this.parseJson(row.metadata, 'metadata'),
     tree: this.parseJson(row.tree, 'tree'),
     changedFields: this.parseJson(row.changedFields, 'changedFields'),
-    changeMessage: row.changeMessage as string | undefined,
+    changeMessage: row.changeMessage ?? undefined,
     createdAt: new Date(row.createdAtZ || row.createdAt),
   };
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `stores/pg/src/storage/domains/skills/index.ts` around lines 764-784: in `parseVersionRow`, some nullable DB columns (description, instructions, license, changeMessage) are being cast as `string | undefined` but can be null from Postgres. Update the returns for those properties to coerce null to undefined (e.g., use `row.description ?? undefined`, `row.instructions ?? undefined`, `row.license ?? undefined`, `row.changeMessage ?? undefined`) so the runtime values match the declared types, while leaving the existing `parseJson` calls unchanged.

packages/server/src/server/handlers/stored-skills.ts-296-302 (1)
296-302: ⚠️ Potential issue | 🟡 Minor

DELETE does not clean up associated versions or blobs.
When a skill is deleted, its version history and orphaned blobs in the blob store are not removed. Over time this will accumulate storage waste. The workspace DELETE handler should be checked for the same pattern.
Do you want me to open an issue to track orphan cleanup for deleted skills (version rows + unreferenced blobs)?
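A cleanup flow along the lines this comment suggests can be sketched with in-memory stand-ins (the store shapes and the `deleteSkillDeep` helper are hypothetical; the PR's actual store APIs may differ):

```typescript
// Hypothetical in-memory stand-ins for the skill, version, and blob stores.
const skills = new Map<string, { id: string }>();
const versions = new Map<string, { id: string; skillId: string; blobIds: string[] }>();
const blobs = new Map<string, Buffer>();

// Delete a skill together with its version rows and the blobs those
// versions reference, so nothing is orphaned once the skill row is gone.
function deleteSkillDeep(skillId: string): void {
  for (const [versionId, version] of versions) {
    if (version.skillId !== skillId) continue;
    for (const blobId of version.blobIds) blobs.delete(blobId);
    versions.delete(versionId);
  }
  skills.delete(skillId);
}
```

In a real content-addressable store, blobs can be shared across skills, so the blob deletion step would need a reference check first (the "deleteIfUnreferenced" idea from the AI prompt below) rather than an unconditional delete.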
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/server/src/server/handlers/stored-skills.ts` around lines 296-302: the DELETE handler only removes the skill row via `skillStore.delete(storedSkillId)` but leaves version rows and blob data behind. Update the handler to first fetch and remove all versions and any referenced blobs for `storedSkillId` (e.g., use `skillVersionStore.getBySkillId`/`removeBySkillId` or iterate versions returned by `skillVersionStore.list`/`getForSkill` to collect blob ids) and then delete those blob records from the blob store (`blobStore.delete` or `blobStore.deleteIfUnreferenced`) before calling `skillStore.delete(storedSkillId)`. Also mirror the same cleanup logic in the workspace DELETE handler to prevent orphaned versions/blobs.

packages/core/src/workspace/skills/publish.ts-174-186 (1)
174-186: ⚠️ Potential issue | 🟡 Minor

No validation that `frontmatter.name` and `frontmatter.description` exist.

`snapshot.name` and `snapshot.description` are pulled directly from frontmatter without checking they're defined. A SKILL.md missing these fields will produce a snapshot with `undefined` name/description, which could cause downstream failures or silent data corruption.

Consider validating these required fields after parsing:

Proposed validation

```diff
 const parsed = matter(skillMdFile.content);
 const frontmatter = parsed.data;
 const instructions = parsed.content.trim();

+if (!frontmatter.name || typeof frontmatter.name !== 'string') {
+  throw new Error(`SKILL.md in ${skillPath} is missing a required 'name' frontmatter field`);
+}
+if (!frontmatter.description || typeof frontmatter.description !== 'string') {
+  throw new Error(`SKILL.md in ${skillPath} is missing a required 'description' frontmatter field`);
+}
+
 // 4. Discover references/, scripts/, assets/ subdirectories
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/core/src/workspace/skills/publish.ts` around lines 174-186: validate that required frontmatter fields are present before building the snapshot. Check `frontmatter.name` and `frontmatter.description` (and optionally `frontmatter.license`/`compatibility` if required) after the frontmatter parsing step and before constructing the snapshot object; if either is missing, throw a clear error or return a rejected Promise indicating which field is missing so we never create a `StorageSkillSnapshotType` with undefined values. Update the code around the use of `frontmatter` and `snapshot` to perform these checks and fail fast with a descriptive message.

packages/editor/src/namespaces/skill.ts-71-86 (1)
71-86: ⚠️ Potential issue | 🟡 Minor

Two sequential updates introduce a non-atomic window.

The publish flow performs two separate `skillStore.update()` calls: one to persist the snapshot + tree (line 72) and another to set `activeVersionId` (line 82). If the process crashes between them, the skill will have a new version but `activeVersionId` won't point to it, leaving the skill in an inconsistent state.

Consider merging this into a single update if the storage layer supports it, or at minimum documenting this as a known limitation.

Proposed single-update approach

```diff
 // Update the skill with new version data + tree (creates a new version)
 await skillStore.update({
   id: skillId,
   ...snapshot,
   tree,
   status: 'published',
 });

 // Point activeVersionId to the newly created version
 const latestVersion = await skillStore.getLatestVersion(skillId);
 if (latestVersion) {
   await skillStore.update({
     id: skillId,
     activeVersionId: latestVersion.id,
+    status: 'published',
   });
 }
```

Ideally the storage layer would support setting `activeVersionId` in the same call that creates the version, avoiding the two-step entirely.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/editor/src/namespaces/skill.ts` around lines 71-86: the publish flow currently does two separate `skillStore.update` calls (one writing the new version with snapshot/tree/status, then another setting `activeVersionId` after calling `skillStore.getLatestVersion`), creating a non-atomic window. Change this to a single atomic update if the storage supports it by calling `skillStore.update` once with id, ...snapshot, tree, status: 'published', and `activeVersionId` set to the new version id (obtainable from the version creation result, or use a transactional API/`skillStore.transaction` wrapper if available). If the store cannot atomically set `activeVersionId` in the same call, wrap both operations in a transaction API or add a clear comment documenting the race condition and a retry/compensation strategy.

packages/server/src/server/handlers/stored-workspaces.ts-153-157 (1)
153-157: ⚠️ Potential issue | 🟡 Minor

TOCTOU race between existence check and create.

The `getById(id)` check on line 154 and the `create()` on line 159 are not atomic. A concurrent request could create a workspace with the same ID between these two calls, leading to a duplicate or a database error. This is largely mitigated if the storage layer enforces a unique constraint on the ID — the 409 check here mainly provides a friendlier error message.

If the storage layer doesn't enforce uniqueness, consider using an upsert or catching the duplicate key error from the DB.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/server/src/server/handlers/stored-workspaces.ts` around lines 153-157: the pre-check using `workspaceStore.getById(id)` creates a TOCTOU race with `workspaceStore.create(...)`. Remove the non-atomic existence check and instead attempt the create directly, then catch and translate a duplicate-key or unique-constraint error from the storage layer into the `HTTPException(409, ...)` (or implement/call an atomic upsert/create-with-unique-guard on `workspaceStore` if available). Update the error handling around `workspaceStore.create` to detect the DB-specific duplicate error and throw the same 409 message so concurrent requests yield a friendly conflict response.

packages/server/src/server/schemas/stored-skills.ts-104-114 (1)
104-114: ⚠️ Potential issue | 🟡 Minor

Same `z.union` branch ordering issue as in stored-workspaces.ts.

The first branch (minimal fields) is a subset of `storedSkillSchema`, so Zod will always match on it first, potentially stripping snapshot fields from full responses. Swap the order to put `storedSkillSchema` first.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/server/src/server/schemas/stored-skills.ts` around lines 104-114: the `z.union` branch ordering in `updateStoredSkillResponseSchema` causes the broader `storedSkillSchema` to be shadowed by the smaller object branch. Change the `z.union` to list `storedSkillSchema` first and the minimal fields object second so `storedSkillSchema` is matched before the subset.

packages/server/src/server/schemas/stored-workspaces.ts-134-145 (1)
134-145: ⚠️ Potential issue | 🟡 Minor

`z.union` branch ordering may silently strip fields from full responses.

The first branch of the union is a subset of `storedWorkspaceSchema`. When Zod evaluates a `z.union`, it returns the result of the first branch that parses successfully. Since `z.object` strips unknown keys by default, a full workspace response will match the first (minimal) branch and have its snapshot fields (`name`, `description`, `filesystem`, etc.) stripped away.

If this schema is only used for OpenAPI documentation and never for actual response parsing/validation, this is cosmetic. If the framework parses outgoing responses through this schema, full update responses will lose data.
Consider reordering so the more specific (fuller) schema comes first, or use `z.discriminatedUnion` if a discriminator field is available:

♻️ Proposed fix

```diff
 export const updateStoredWorkspaceResponseSchema = z.union([
-  z.object({
-    id: z.string(),
-    status: z.string(),
-    activeVersionId: z.string().optional(),
-    authorId: z.string().optional(),
-    metadata: z.record(z.string(), z.unknown()).optional(),
-    createdAt: z.coerce.date(),
-    updatedAt: z.coerce.date(),
-  }),
   storedWorkspaceSchema,
+  z.object({
+    id: z.string(),
+    status: z.string(),
+    activeVersionId: z.string().optional(),
+    authorId: z.string().optional(),
+    metadata: z.record(z.string(), z.unknown()).optional(),
+    createdAt: z.coerce.date(),
+    updatedAt: z.coerce.date(),
+  }),
 ]);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/server/src/server/schemas/stored-workspaces.ts` around lines 134-145: the union in `updateStoredWorkspaceResponseSchema` places a minimal `z.object` branch before `storedWorkspaceSchema`, so Zod will match the smaller branch and strip extra fields from full workspace responses. Fix by ordering the union with the more specific `storedWorkspaceSchema` first (i.e., `z.union([storedWorkspaceSchema, z.object({...})])`) or, if a discriminator exists, replace `z.union` with `z.discriminatedUnion` using that field to ensure full responses aren't truncated when parsed.

packages/editor/src/namespaces/agent.ts-1146-1164 (1)
1146-1164: ⚠️ Potential issue | 🟡 Minor

Add validation to verify the pinned skill version belongs to the expected skill.

When `isPinned` is true, `skillStore.getVersion(config.pin!)` fetches a version by its global version ID. The code does not verify that the returned version actually belongs to `skillId`. A misconfigured pin could silently mount a version from a different skill under the wrong `dirName`.

Add a check to validate the skill association:

🛡️ Proposed validation

```diff
 if (isPinned) {
   // Look up the specific pinned version
   version = await skillStore.getVersion(config.pin!);
+  if (version && version.skillId !== skillId) {
+    this.logger?.warn(
+      `[resolveAgentSkillSource] Pinned version '${config.pin}' belongs to skill '${version.skillId}', not '${skillId}'. Skipping.`,
+    );
+    continue;
+  }
   dirName = version?.name || skillId;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/editor/src/namespaces/agent.ts` around lines 1146-1164: when `isPinned` is true and you call `skillStore.getVersion(config.pin!)`, validate that the returned version actually belongs to the expected skill. Check the version's owning skill identifier (e.g., `version.skillId`) against `skillId` after the call to `getVersion`; if it doesn't match, treat it as invalid (warn and skip, or handle as missing) rather than proceeding to set `dirName = version.name`, so you don't mount a version from a different skill under the wrong `dirName`. Update the `isPinned` branch to include this verification and a clear failure path.

workspaces/s3/src/blob-store/index.ts-166-172 (1)
166-172: ⚠️ Potential issue | 🟡 Minor

`size` fallback is incorrect for zero-byte blobs.

`Number(metadata.size) || Buffer.byteLength(body, 'utf-8')` — if the stored size is `0`, `Number('0')` evaluates to `0` (falsy), so it incorrectly falls back to `Buffer.byteLength`. Use nullish coalescing instead.

Proposed fix

```diff
-size: Number(metadata.size) || Buffer.byteLength(body, 'utf-8'),
+size: metadata.size != null ? Number(metadata.size) : Buffer.byteLength(body, 'utf-8'),
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `workspaces/s3/src/blob-store/index.ts` around lines 166-172: the size fallback incorrectly treats a stored "0" as falsy. In the blob retrieval code (the returned object with hash, content, size, mimeType, createdAt), replace the logical OR used for size (`Number(metadata.size) || Buffer.byteLength(body, 'utf-8')`) with a nullish check so zero-byte blobs keep size 0: use `Buffer.byteLength(body, 'utf-8')` as the fallback only when `metadata.size` is null or undefined.

packages/core/src/storage/constants.ts-254-281 (1)
254-281: ⚠️ Potential issue | 🟡 Minor

`SKILLS_SCHEMA` omits `metadata` while `SKILL_VERSIONS_SCHEMA` marks `description` and `instructions` as NOT NULL.

Two observations:

- `SKILLS_SCHEMA` has no `metadata` column, which is intentional per the code comment in `SkillsLibSQL.list()`. However, `StorageListSkillsInput` accepts a `metadata` filter, and `MongoDBSkillsStorage.list()` filters on `metadata.*` against the skills collection. The MongoDB implementation will query a field that doesn't exist on the thin skill record, silently returning no results when metadata filters are used. Consider documenting this discrepancy or aligning the filter logic.
- `SKILL_VERSIONS_SCHEMA` declares `description` (line 268) and `instructions` (line 269) as `nullable: false`, but the LibSQL `createVersion` defensively passes `input.description ?? null`. While `StorageSkillSnapshotType` requires these fields, the `?? null` fallback would violate the NOT NULL constraint if the types were ever loosened.
SKILL_VERSIONS_SCHEMAdeclaresdescription(Line 268) andinstructions(Line 269) asnullable: false, but the LibSQLcreateVersiondefensively passesinput.description ?? null. WhileStorageSkillSnapshotTyperequires these fields, the?? nullfallback would violate the NOT NULL constraint if the types were ever loosened.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/core/src/storage/constants.ts` around lines 254-281: SKILLS_SCHEMA vs SKILL_VERSIONS_SCHEMA mismatch. Either add a `metadata` column to `SKILLS_SCHEMA` so `MongoDBSkillsStorage.list()` and `StorageListSkillsInput` metadata filters work against the thin skill record, or update `SkillsLibSQL.list()` / `MongoDBSkillsStorage.list()` to only filter metadata against skill versions and document the discrepancy. Additionally, make `description` and `instructions` in `SKILL_VERSIONS_SCHEMA` nullable or tighten `SkillsLibSQL.createVersion` to never pass null (remove the `?? null` fallback) so the NOT NULL constraint cannot be violated, and update `StorageSkillSnapshotType` if you change nullability to keep types consistent.

stores/libsql/src/storage/domains/skills/index.ts-158-165 (1)
158-165: ⚠️ Potential issue | 🟡 Minor

Same plain Error inconsistency in update() for "not found" — mirrors WorkspacesLibSQL.

Lines 163 and 195 throw plain Error instead of MastraError, causing the error to be re-wrapped as THIRD_PARTY instead of USER category. Same fix as suggested for WorkspacesLibSQL.update().

Proposed fix

  if (!existing) {
-   throw new Error(`Skill with id ${id} not found`);
+   throw new MastraError({
+     id: createStorageErrorId('LIBSQL', 'UPDATE_SKILL', 'NOT_FOUND'),
+     domain: ErrorDomain.STORAGE,
+     category: ErrorCategory.USER,
+     text: `Skill ${id} not found`,
+     details: { skillId: id },
+   });
  }

And similarly for line 195:

  if (!latestVersion) {
-   throw new Error(`No versions found for skill ${id}`);
+   throw new MastraError({
+     id: createStorageErrorId('LIBSQL', 'UPDATE_SKILL', 'NO_VERSIONS'),
+     domain: ErrorDomain.STORAGE,
+     category: ErrorCategory.USER,
+     text: `No versions found for skill ${id}`,
+     details: { skillId: id },
+   });
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@stores/libsql/src/storage/domains/skills/index.ts` around lines 158 - 165, In the update method (async update(input: StorageUpdateSkillInput): Promise<StorageSkillType>) replace the plain Error throws for "not found" with a MastraError so the error is categorized as USER instead of being re-wrapped as THIRD_PARTY; locate the two places that currently do throw new Error(`Skill with id ${id} not found`) (one after this.getById(id) and the other later around line 195) and throw new MastraError({ message: `Skill with id ${id} not found`, type: ErrorType.USER }) (or the project's equivalent MastraError constructor/enum) so callers receive a USER-category MastraError.

stores/libsql/src/storage/domains/workspaces/index.ts (1)
159-165: ⚠️ Potential issue | 🟡 Minor

update() throws plain Error for "not found" instead of MastraError.

Line 164 throws new Error(...) which is caught at Line 246 and re-wrapped as THIRD_PARTY category. The PG and MongoDB implementations throw MastraError with USER category for this case, allowing callers to distinguish user errors from system failures. This inconsistency could lead to incorrect error handling downstream (e.g., a 500 instead of a 404).

Proposed fix

  if (!existing) {
-   throw new Error(`Workspace with id ${id} not found`);
+   throw new MastraError({
+     id: createStorageErrorId('LIBSQL', 'UPDATE_WORKSPACE', 'NOT_FOUND'),
+     domain: ErrorDomain.STORAGE,
+     category: ErrorCategory.USER,
+     text: `Workspace ${id} not found`,
+     details: { workspaceId: id },
+   });
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@stores/libsql/src/storage/domains/workspaces/index.ts` around lines 159 - 165, The update method currently throws a plain Error when a workspace isn't found; change that to throw a MastraError with category USER so callers can distinguish user NOT_FOUND from system errors. In the update function (accepting StorageUpdateWorkspaceInput and returning StorageWorkspaceType, which calls getById), replace the new Error(...) for the missing workspace with a new MastraError({ message: `Workspace with id ${id} not found`, category: MastraErrorCategory.USER, code: 'NOT_FOUND' }) (or the project's MastraError constructor shape) so upstream error handling treats it as a client/user error rather than a THIRD_PARTY/system failure.

packages/core/src/storage/domains/skills/inmemory.ts (1)
216-223: ⚠️ Potential issue | 🟡 Minor

Metadata filter silently returns no results.

The filter at lines 219–222 unconditionally returns false, meaning any list() call with a metadata filter will always yield an empty array — with no error or warning. This could confuse callers since the StorageListSkillsInput type accepts a metadata parameter, implying it's supported.

Either throw an unsupported-filter error, log a warning, or omit metadata from the input type for skills if it's not applicable.
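The throw-on-unsupported-filter option can be sketched as follows. The interfaces here are simplified stand-ins for the actual Mastra types, not the shipped code:

```typescript
// Simplified stand-ins for StorageSkillType / StorageListSkillsInput; this is a
// sketch of the "throw an unsupported-filter error" option, not the shipped code.
interface SkillRow {
  id: string;
  authorId?: string;
}

interface ListSkillsInput {
  authorId?: string;
  metadata?: Record<string, unknown>;
}

function listSkills(rows: SkillRow[], input: ListSkillsInput): SkillRow[] {
  // Fail loudly instead of silently matching nothing.
  if (input.metadata && Object.keys(input.metadata).length > 0) {
    throw new Error('metadata filtering is not supported for skills: thin records carry no metadata');
  }
  return input.authorId ? rows.filter(r => r.authorId === input.authorId) : rows;
}

const rows: SkillRow[] = [{ id: 's1', authorId: 'a1' }, { id: 's2' }];
console.log(listSkills(rows, { authorId: 'a1' }).length); // 1
```

With this shape, a caller passing a metadata filter gets an immediate, descriptive error instead of a silently empty page.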
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/storage/domains/skills/inmemory.ts` around lines 216 - 223, The current metadata branch in the list() implementation for skills silently returns no results because it always returns false; change it to explicitly fail or ignore the filter: either (A) throw a clear unsupported-filter error (e.g., throw new Error("metadata filter not supported for skills") or a specific UnsupportedFilterError) when metadata is provided, or (B) log a warning and skip the metadata filtering so configs remain unchanged. Update the block inside the list() method that references configs and StorageSkillType to implement one of these behaviors and ensure callers get an explicit signal instead of an empty array.
export interface StorageSkillType {
  /** Unique, immutable identifier */
  id: string;
  /** Skill status: 'draft' on creation, 'published' when a version is activated */
  status: 'draft' | 'published' | 'archived';
  /** FK to skill_versions.id - the currently active version */
  activeVersionId?: string;
  /** Author identifier for multi-tenant filtering */
  authorId?: string;
  createdAt: Date;
  updatedAt: Date;
}
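Per the interface above, the thin record carries no content; a resolved view merges it with the active version snapshot. A simplified sketch of that merge (field sets trimmed, not the shipped implementation):

```typescript
// Sketch of resolving a thin skill record against its active version snapshot,
// approximating StorageResolvedSkillType = StorageSkillType & StorageSkillSnapshotType.
// Field sets are trimmed; this is not the shipped implementation.
interface ThinSkillRecord {
  id: string;
  status: 'draft' | 'published' | 'archived';
  activeVersionId?: string;
}

interface SkillSnapshot {
  name: string;
  description: string;
  instructions: string;
}

function resolveSkill(
  record: ThinSkillRecord,
  versions: Map<string, SkillSnapshot>,
): (ThinSkillRecord & SkillSnapshot) | undefined {
  // Drafts may have no active version yet.
  if (!record.activeVersionId) return undefined;
  const snapshot = versions.get(record.activeVersionId);
  return snapshot ? { ...record, ...snapshot } : undefined;
}

const versions = new Map<string, SkillSnapshot>([
  ['v1', { name: 'pdf-reader', description: 'Reads PDFs', instructions: 'Use the bundled script' }],
]);

const resolved = resolveSkill({ id: 's1', status: 'published', activeVersionId: 'v1' }, versions);
console.log(resolved?.name); // pdf-reader
```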
🧩 Analysis chain
🏁 Scripts executed:

# Locate the skills storage implementations
find . -type f -name "*.ts" | grep -i skill | head -20
cat -n ./packages/core/src/storage/domains/skills/base.ts
find . -type f -name "*.ts" | xargs rg "class SkillsPG" -l
# Inspect the SkillsPG list and create methods
rg -n "async list" ./stores/pg/src/storage/domains/skills/index.ts -A 50 | head -80
rg -n "async create" ./stores/pg/src/storage/domains/skills/index.ts -A 80 | head -120
sed -n '359,450p' ./stores/pg/src/storage/domains/skills/index.ts | cat -n
# Compare the relevant type definitions
rg -n "StorageListSkillsInput" ./packages/core/src/storage/types.ts -A 25 | head -35
rg -n "export interface StorageWorkspaceType" ./packages/core/src/storage/types.ts -A 15
rg -n "export interface StorageAgentType" ./packages/core/src/storage/types.ts -A 15
# Check the table schema
rg -n "SKILLS_SCHEMA" ./packages/core/src/storage/constants.ts -A 30
StorageListSkillsInput exposes metadata filtering, but it is silently ignored and StorageSkillType has no metadata field.
StorageSkillType (line 1549) lacks a metadata field, unlike StorageWorkspaceType and StorageAgentType. However, StorageListSkillsInput (line 1593) includes a metadata?: Record<string, unknown> filter option. The implementation confirms this inconsistency: the SkillsPG.list() method extracts only authorId from the input and never processes the metadata parameter. The skills table schema contains no metadata column, and the create method explicitly states "no metadata on entity."
This means the metadata filter in StorageListSkillsInput is silently ignored at runtime—developers passing a metadata filter will see no effect, violating the type contract. Align with other entity types: either add metadata?: Record<string, unknown> to StorageSkillType and implement metadata filtering in the list method, or remove the metadata filter from StorageListSkillsInput.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/storage/types.ts` around lines 1549 - 1560, The Storage API
types are inconsistent: StorageSkillType has no metadata but
StorageListSkillsInput exposes metadata which SkillsPG.list() ignores; fix by
removing metadata from StorageListSkillsInput and any callers/tests that pass it
(or, if you prefer to support metadata, add metadata?: Record<string, unknown>
to StorageSkillType, add a metadata column to the skills table, update
SkillsPG.create to persist metadata, and implement metadata filtering inside
SkillsPG.list()). Update the StorageListSkillsInput type/signature, adjust
SkillsPG.list() parameter handling, and modify related tests and docs to reflect
the chosen approach so types and runtime behavior match.
Claude says: @coderabbitai The metadata discrepancy is intentional — the thin skill record (SKILLS_SCHEMA) deliberately omits metadata because skills don't carry entity-level metadata (unlike workspaces). The StorageListSkillsInput.metadata filter exists for API symmetry with workspaces but is documented as unsupported for skills in the InMemory implementation. The description/instructions NOT NULL + ?? null fallback is defensive coding for type-level loosening — the constraint is correct as-is since StorageSkillSnapshotType requires both fields. No changes needed here.
🧩 Analysis chain
🏁 Scripts executed:

# Check the InMemory skills implementation for documentation about metadata
cat packages/core/src/storage/domains/skills/inmemory.ts | grep -A 10 -B 5 "metadata"
# Check comments in the list method
sed -n '/async list/,/^ }/p' packages/core/src/storage/domains/skills/inmemory.ts | head -60
# Check StorageListSkillsInput documentation
rg -A 5 "StorageListSkillsInput" packages/core/src/storage/types.ts | head -30
@DanielSLew, thank you for the clarification! I can confirm the documentation exists in the InMemory implementation:
// Filter by metadata if provided (AND logic) — skills don't have metadata on the record,
// but we support the filter interface for consistency
if (metadata && Object.keys(metadata).length > 0) {
  configs = configs.filter(_config => {
    // StorageSkillType doesn't have metadata on the thin record
    return false;
  });
}
However, I'd like to raise an API design concern: exposing a metadata filter parameter that is documented to always return zero results creates a confusing developer experience. Developers passing metadata will encounter unexpected empty result sets with no error or warning.
Alternatives to consider:

- Remove from API (cleanest): Remove metadata from StorageListSkillsInput since it's non-functional. API symmetry shouldn't come at the cost of exposing broken parameters.
- Add metadata support: If skills might need metadata filtering in the future, add metadata?: Record<string, unknown> to StorageSkillType and implement proper filtering.
- Runtime validation: Throw an error when metadata is passed, making the limitation explicit rather than silent.
If you're confident that "API symmetry" justifies keeping a non-functional parameter despite the developer experience tradeoff, I'll defer to your decision as the PR author. But I wanted to surface this design consideration for future reference.
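For reference, the "add metadata support" alternative's AND-semantics filtering could be sketched as follows, assuming a hypothetical metadata field on the thin record:

```typescript
// Hypothetical sketch of the "add metadata support" alternative: if a metadata
// field were added to the thin skill record, AND-semantics filtering could look
// like this. Types are simplified stand-ins.
interface SkillWithMetadata {
  id: string;
  metadata?: Record<string, unknown>;
}

function matchesMetadata(record: SkillWithMetadata, filter: Record<string, unknown>): boolean {
  // Every filter key must be present with a strictly equal value (AND logic).
  return Object.entries(filter).every(([key, value]) => record.metadata?.[key] === value);
}

const candidates: SkillWithMetadata[] = [
  { id: 's1', metadata: { team: 'search', tier: 1 } },
  { id: 's2', metadata: { team: 'search' } },
];

const matched = candidates.filter(s => matchesMetadata(s, { team: 'search', tier: 1 }));
console.log(matched.map(s => s.id)); // [ 's1' ]
```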
🧬 Code Graph Analysis Results
- File: packages/core/src/storage/types.ts
-
Summary: StorageSkillSnapshotType
- Purpose: Defines all fields that constitute a skill version snapshot (content-defining properties for a skill).
- Parameters/Fields: name (string), description (string), instructions (string), license (string | undefined), compatibility (unknown | undefined), source (StorageContentSource | undefined), references (string[] | undefined), scripts (string[] | undefined), assets (string[] | undefined), metadata (Record<string, unknown> | undefined), tree (SkillVersionTree | undefined).
- Returns/Usage: Used as the version-snapshot content stored alongside the thin skill record; not the thin record itself.
- Notes/Constraints: Metadata is present in the snapshot type; this type’s fields are consumed by the resolved skill type and by operations that create/update skill versions.
-
Summary: StorageSkillType
- Purpose: Thin skill record with only metadata fields; the actual skill definition/content lives in the version snapshot.
- Parameters/Fields: id (string), status ('draft' | 'published' | 'archived'), activeVersionId (string | undefined), authorId (string | undefined), createdAt (Date), updatedAt (Date).
- Returns/Usage: Represents the lightweight, stable identifier and lifecycle state for a skill; used when listing or querying skills alongside their snapshots.
- Notes/Constraints: Does not contain content fields; content is retrieved via the active version snapshot.
-
Summary: StorageResolvedSkillType
- Purpose: Combined type that includes the thin skill record plus its snapshot content.
- Parameters/Fields: All fields from StorageSkillType plus all fields from StorageSkillSnapshotType.
- Returns/Usage: Used when a caller needs a fully-resolved view of a skill (metadata + content).
- Notes/Constraints: Represents the runtime resolution result of getSkillByIdResolved / listSkillsResolved.
-
Summary: StorageCreateSkillInput
- Purpose: Input type for creating a new skill.
- Parameters/Fields: id (string), authorId (string | undefined), plus all fields from StorageSkillSnapshotType.
- Returns/Usage: Used to create the initial skill along with its first version snapshot.
- Notes/Constraints: Flat union of thin-record fields and initial content fields.
-
Summary: StorageUpdateSkillInput
- Purpose: Input type for updating a skill (metadata-level and optional content fields).
Parameters/Fields: id (string), authorId (string | undefined), activeVersionId (string | undefined), status ('draft' | 'published' | 'archived' | undefined), plus all fields from Partial<StorageSkillSnapshotType>.
- Returns/Usage: Used to apply metadata/content updates; may trigger a new version creation at higher layers.
- Notes/Constraints: Partial<...> for snapshot fields; allows updating descriptive fields without requiring full snapshot content.
-
Summary: StorageListSkillsInput
- Purpose: Pagination/metadata query input for listing skills.
- Parameters/Fields: perPage (number | false), page (number), orderBy (StorageOrderBy), authorId (string | undefined), metadata (Record<string, unknown> | undefined).
- Returns/Usage: Used to filter/paginate skill listings.
- Notes/Constraints: Metadata filtering uses AND semantics on key-value pairs.
-
Summary: StorageListSkillsOutput
- Purpose: Paginated list output for thin skill records.
- Parameters/Fields: Inherits PaginationInfo; skills: StorageSkillType[].
- Returns/Usage: Response type for listSkills queries returning thin skill rows.
-
Summary: StorageListSkillsResolvedOutput
- Purpose: Paginated list output for resolved skills.
- Parameters/Fields: Inherits PaginationInfo; skills: StorageResolvedSkillType[].
- Returns/Usage: Response type for listSkillsResolved queries returning fully resolved skill views.
-
Summary: SkillVersionTree
- Purpose: Represents the complete file tree manifest for a skill version.
- Parameters/Fields: entries: Record<string, SkillVersionTreeEntry>.
- Returns/Usage: Stored as JSONB on skill_version rows to map files to blob entries.
-
Summary: SkillVersionTreeEntry
- Purpose: Entry in a skill version’s manifest.
- Parameters/Fields: blobHash (string), size (number), mimeType (string | undefined).
- Returns/Usage: Keyed by relative file path in SkillVersionTree.entries.
-
Summary: StorageBlobEntry
- Purpose: Represents a stored content-addressable blob.
- Parameters/Fields: hash (string), content (string), size (number), mimeType (string | undefined), createdAt (Date).
- Returns/Usage: Backend storage primitive for skill assets referenced by SkillVersionTree.
-
Summary: StorageWorkspaceRef
- Purpose: Reference type used for embedding workspace configuration within agent snapshots.
- Variants:
- { type: 'id'; workspaceId: string }
- { type: 'inline'; config: StorageWorkspaceSnapshotType }
- Returns/Usage: Allows skills/agent configurations to point to a workspace by id or inline configuration as needed.
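The StorageBlobEntry summary above describes a content-addressable blob. A sketch of constructing one, assuming the hash is a SHA-256 hex digest of the content (the actual algorithm used by the blob store is not stated in this review):

```typescript
import { createHash } from 'node:crypto';

// Sketch of building a StorageBlobEntry-like object. Assumes the hash is a SHA-256
// hex digest of the content; the actual algorithm used by the blob store is not
// stated in this review.
interface BlobEntry {
  hash: string;
  content: string;
  size: number;
  mimeType?: string;
  createdAt: Date;
}

function makeBlobEntry(content: string, mimeType?: string): BlobEntry {
  return {
    hash: createHash('sha256').update(content).digest('hex'),
    content,
    size: Buffer.byteLength(content),
    mimeType,
    createdAt: new Date(),
  };
}

// Content addressing: identical content always maps to the same hash, so
// re-publishing an unchanged file can be deduplicated by the store.
const a = makeBlobEntry('# SKILL.md\n', 'text/markdown');
const b = makeBlobEntry('# SKILL.md\n', 'text/markdown');
console.log(a.hash === b.hash); // true
```

Content addressing is what lets the SkillVersionTree reference files by blobHash: two versions that share a file share its blob.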
✏️ Learnings added
Learnt from: DanielSLew
Repo: mastra-ai/mastra PR: 13156
File: packages/core/src/storage/types.ts:1549-1560
Timestamp: 2026-02-17T20:45:40.701Z
Learning: In the Mastra repository's storage types (packages/core/src/storage/types.ts), StorageSkillType intentionally omits a metadata field (unlike StorageWorkspaceType and StorageAgentType), but StorageListSkillsInput includes a metadata filter parameter for "API symmetry." The InMemory implementation documents this as unsupported and always returns false/empty results when metadata filtering is attempted. This design choice prioritizes API consistency over functional completeness.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: gmethvin
Repo: mastra-ai/mastra PR: 11464
File: stores/libsql/src/drizzle/schema.ts:92-123
Timestamp: 2025-12-29T10:04:20.133Z
Learning: The LibSQL and PostgreSQL stores in the Mastra repository have known schema differences, particularly in the mastraScorers table where several columns (scorerId, runId, scorer, score, input, output, source, createdAt, updatedAt) are nullable in LibSQL but notNull in PostgreSQL. This drift is expected and not a bug in Drizzle schema generation.
Learnt from: gmethvin
Repo: mastra-ai/mastra PR: 11464
File: stores/pg/src/drizzle/schema.ts:213-231
Timestamp: 2025-12-29T10:44:37.564Z
Learning: In the Mastra repository, the Drizzle schema generator (scripts/drizzle-schema-generator/index.mjs) strips `.op()` calls from auto-generated Drizzle schemas due to a drizzle-kit bug (https://github.com/drizzle-team/drizzle-orm/issues/5056) that swaps operator classes in composite indexes. PostgreSQL uses correct default operator classes based on column types, so this workaround is safe. The stripping code can be removed once the drizzle-kit bug is fixed.
Learnt from: taofeeq-deru
Repo: mastra-ai/mastra PR: 11486
File: packages/core/src/loop/workflows/agentic-execution/llm-execution-step.ts:643-678
Timestamp: 2025-12-30T15:03:01.474Z
Learning: In `packages/core/src/loop/workflows/agentic-execution/llm-execution-step.ts`, the system guarantees that `metadata.suspendedTools` and `metadata.pendingToolApprovals` never contain resumed tools, so no filtering is needed when using metadata directly.
invalidateAgentsReferencingSkill(skillId: string): void {
  for (const [agentId, agent] of this._cache.entries()) {
    const raw = (agent as Agent).toRawConfig?.();
    if (!raw?.skills) continue;

    const skills = raw.skills as Record<string, unknown>;
    if (skillId in skills) {
      this.logger?.debug(
        `[invalidateAgentsReferencingSkill] Evicting agent "${agentId}" (references skill "${skillId}")`,
      );
      this._cache.delete(agentId);
      this.onCacheEvict(agentId);
    }
  }
}
🧩 Analysis chain

🏁 Scripts executed:

# Explore the repository structure
git ls-files | grep -E "(agent|skill)" | head -20
# Find the Agent class and toRawConfig usage
fd -t f agent.ts | head -10
rg -n "toRawConfig" packages/editor/src/namespaces/agent.ts -A 10
rg -n "toRawConfig" --type=ts -B 3 -A 10 | head -100
rg -n "class Agent" --type=ts | head -10
# Understand the conditional variants structure
rg -n "StorageConditionalVariant" --type=ts | head -20
rg -n "interface StoredAgent" --type=ts -A 50 | head -100
rg -n "skills" packages/core/src/storage/types.ts -B 2 -A 2
# Check how stored skills are resolved when creating agents
sed -n '443,500p' packages/editor/src/namespaces/agent.ts
sed -n '1116,1200p' packages/editor/src/namespaces/agent.ts
rg -n "storedAgent\.skills" packages/editor/src/namespaces/agent.ts -B 2 -A 2
Handle conditional skill variants in invalidateAgentsReferencingSkill.
The method assumes raw.skills is always a Record, but it can be a StorageConditionalVariant<Record<string, StorageSkillConfig>>[] array when an agent uses conditional skills. When skills is an array, the skillId in skills check evaluates array indices (0, 1, 2…) rather than skill IDs, causing agents with conditional skill configs to never be invalidated when a referenced skill is published.
Add a check to handle both the Record and conditional variant array cases:
const skills = raw.skills as Record<string, unknown> | unknown[];
if (!Array.isArray(skills) && skillId in skills) {
  // Evict agent
}

Alternatively, extract the skill record from conditional variants if present before the check.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/editor/src/namespaces/agent.ts` around lines 118 - 132, The
invalidateAgentsReferencingSkill loop assumes raw.skills is a Record but it can
be an array of StorageConditionalVariant; update
invalidateAgentsReferencingSkill to handle both cases by detecting
Array.isArray(raw.skills) and either (a) iterating the conditional variants to
extract their skill records and checking for skillId in each extracted record,
or (b) treating raw.skills as a Record when not an array and using skillId in
skills; ensure you still call this.logger.debug, this._cache.delete(agentId) and
this.onCacheEvict(agentId) when a match is found so agents with conditional
skill configs are correctly evicted.
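The fix suggested above can be sketched as a small predicate. The exact shape of StorageConditionalVariant is an assumption here (a `value` payload wrapping the skill record):

```typescript
// Sketch of the suggested fix: treat raw.skills either as a plain record keyed by
// skill id, or as an array of conditional variants wrapping such records. The exact
// shape of StorageConditionalVariant is an assumption here (a `value` payload).
type SkillMap = Record<string, unknown>;

interface ConditionalVariant {
  value: SkillMap;
}

function referencesSkill(skills: SkillMap | ConditionalVariant[], skillId: string): boolean {
  if (Array.isArray(skills)) {
    // Check each variant's record, not the array indices.
    return skills.some(variant => skillId in variant.value);
  }
  return skillId in skills;
}

console.log(referencesSkill({ pdf: {} }, 'pdf'));              // true
console.log(referencesSkill([{ value: { pdf: {} } }], 'pdf')); // true
console.log(referencesSkill([{ value: { pdf: {} } }], '0'));   // false: no index confusion
```

The third call is the failure mode described above: with a bare `skillId in skills` check, '0' would match an array index rather than a skill id.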
handler: async ({ mastra, storedSkillId, skillPath }) => {
  try {
    const storage = mastra.getStorage();

    if (!storage) {
      throw new HTTPException(500, { message: 'Storage is not configured' });
    }

    const skillStore = await storage.getStore('skills');
    if (!skillStore) {
      throw new HTTPException(500, { message: 'Skills storage domain is not available' });
    }

    const blobStore = await storage.getStore('blobs');
    if (!blobStore) {
      throw new HTTPException(500, { message: 'Blob storage domain is not available' });
    }

    // Verify skill exists
    const existing = await skillStore.getById(storedSkillId);
    if (!existing) {
      throw new HTTPException(404, { message: `Stored skill with id ${storedSkillId} not found` });
    }

    // Use LocalSkillSource to read from the server filesystem
    const source = new LocalSkillSource();
    const { snapshot, tree } = await publishSkillFromSource(source, skillPath, blobStore);
Path traversal vulnerability — skillPath from request body is passed unsanitized to filesystem reads.
skillPath is user-supplied (from the request body) and passed directly to LocalSkillSource() + publishSkillFromSource() which reads files from the server's filesystem. An authenticated user could supply paths like ../../../etc to exfiltrate arbitrary server files into the blob store.
Add path validation to restrict reads to an allowed base directory:
Proposed mitigation
 handler: async ({ mastra, storedSkillId, skillPath }) => {
   try {
+    // Validate skillPath is within an allowed base directory
+    const resolvedPath = path.resolve(skillPath);
+    const allowedBase = path.resolve(process.env.SKILLS_BASE_DIR || process.cwd());
+    if (!resolvedPath.startsWith(allowedBase + path.sep) && resolvedPath !== allowedBase) {
+      throw new HTTPException(400, {
+        message: `skillPath must be within the allowed directory: ${allowedBase}`,
+      });
+    }
+
     const storage = mastra.getStorage();
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/server/handlers/stored-skills.ts` around lines 331 - 357,
The handler currently passes the user-supplied skillPath into
LocalSkillSource/publishSkillFromSource allowing path traversal; fix by
enforcing that skillPath is resolved against a configured allowed base directory
before use: reject absolute paths, compute const resolved = await
fs.realpath(path.resolve(ALLOWED_BASE_DIR, skillPath)) and ensure
resolved.startsWith(await fs.realpath(ALLOWED_BASE_DIR)) (or equivalent
normalized check) and throw HTTPException(400) if it fails; perform this check
in the handler just before creating LocalSkillSource or calling
publishSkillFromSource (references: handler, skillPath, LocalSkillSource,
publishSkillFromSource).
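A containment check based on path.relative is more robust than a bare startsWith prefix test, which would wrongly accept a sibling directory like /srv/skills-evil against base /srv/skills. A sketch, with the base path as a placeholder; production code should also resolve symlinks (fs.realpath), as the prompt above suggests:

```typescript
import * as path from 'node:path';

// Containment check via path.relative; a bare startsWith prefix test would wrongly
// accept a sibling like /srv/skills-evil against base /srv/skills. The base path is
// a placeholder; production code should also resolve symlinks (fs.realpath).
function isWithin(base: string, candidate: string): boolean {
  const resolvedBase = path.resolve(base);
  const resolvedTarget = path.resolve(resolvedBase, candidate);
  const rel = path.relative(resolvedBase, resolvedTarget);
  // Inside the base iff the relative path neither escapes upward nor is absolute.
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}

console.log(isWithin('/srv/skills', 'pdf-reader'));       // true
console.log(isWithin('/srv/skills', '../../etc/passwd')); // false
console.log(isWithin('/srv/skills', '/etc/passwd'));      // false
```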
export const publishStoredSkillBodySchema = z.object({
  skillPath: z.string().describe('Path to the skill directory on the server filesystem (containing SKILL.md)'),
});
🧩 Analysis chain

🏁 Scripts executed:

# Find usage of skillPath to understand how it's handled
rg -n 'skillPath' --type=ts -C 5 -g '!**/node_modules/**' | head -150
# Find publishSkillFromSource and read its implementation
rg -n 'publishSkillFromSource' --type=ts -B 2 -A 10 | head -100
cat packages/core/src/workspace/skills/publish.ts
# Read LocalSkillSource
rg -n 'class LocalSkillSource' --type=ts -A 30 | head -150
cat packages/core/src/workspace/skills/local-skill-source.ts
# Find assertSafeSkillName and confirm no skillPath validation in the handler
rg -n 'assertSafeSkillName' --type=ts -B 2 -A 10 | head -80
sed -n '320,375p' packages/server/src/server/handlers/stored-skills.ts
Validate and restrict skillPath to prevent directory traversal attacks.
The publishStoredSkillBodySchema accepts skillPath as an unrestricted string, and the handler passes it directly to publishSkillFromSource() without validation. Since LocalSkillSource uses path.resolve() without bounds checking, an attacker can supply relative paths like ../../etc/passwd to read arbitrary files from the server filesystem. Apply validation similar to the assertSafeSkillName() pattern used elsewhere in the codebase (workspace.ts:1150–1156) to restrict skillPath to a safe character set (alphanumeric, hyphens, underscores) or enforce it against a whitelist of allowed skill directories.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/server/schemas/stored-skills.ts` around lines 125 - 127,
The publishStoredSkillBodySchema currently allows arbitrary strings for
skillPath which enables directory traversal when later passed to
publishSkillFromSource/LocalSkillSource; update the schema to validate/restrict
skillPath (e.g., use z.string().regex(/^[A-Za-z0-9_-]+$/) or a refine that calls
the existing assertSafeSkillName helper pattern from workspace.ts) and ensure
the request handler enforces this schema before calling publishSkillFromSource
so only safe skill names or whitelisted directories are accepted.
```typescript
// Quote all column names to handle SQL reserved words (e.g. "references")
return colDef?.type === 'jsonb' ? `json("${parsedCol}") as "${parsedCol}"` : `"${parsedCol}"`;
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, find the TABLE_SCHEMAS definition in the LibSQL store
fd "schema.ts" stores/libsql/src -x cat -n {} | head -150
```

Repository: mastra-ai/mastra

Length of output: 42

🏁 Script executed:

```shell
# Also check the utils.ts file to confirm the quoting patterns in different functions
cat -n stores/libsql/src/storage/db/utils.ts | head -280
```

Repository: mastra-ai/mastra

Length of output: 11287

🏁 Script executed:

```shell
# Search for TABLE_SCHEMAS definition more specifically
rg "TABLE_SCHEMAS" stores/libsql/src -A 20 | head -100
```

Repository: mastra-ai/mastra

Length of output: 7327

🏁 Script executed:

```shell
# Find TABLE_SCHEMAS definition in core/storage
fd schema.ts -g "*/core/*" -x grep -l "TABLE_SCHEMAS" {} \;
```

Repository: mastra-ai/mastra

Length of output: 155

🏁 Script executed:

```shell
# Also check the core storage directory structure
find . -path "*/core/storage*" -name "*.ts" -type f | grep -E "(schema|storage)" | head -20
```

Repository: mastra-ai/mastra

Length of output: 127

🏁 Script executed:

```shell
# Look for TABLE_SCHEMAS export
rg "export.*TABLE_SCHEMAS|TABLE_SCHEMAS.*=" -A 30 --type=ts | head -150
```

Repository: mastra-ai/mastra

Length of output: 10378

🏁 Script executed:

```shell
# Find the core storage module
find . -path "*/core/storage*" -name "*.ts" -type f ! -path "*/node_modules/*" | head -20
```

Repository: mastra-ai/mastra

Length of output: 127

🏁 Script executed:

```shell
# Search for TABLE_SCHEMAS definition
rg "export.*TABLE_SCHEMAS\s*[=:{]" --type=ts -A 100 | head -200
```

Repository: mastra-ai/mastra

Length of output: 6394

🏁 Script executed:

```shell
# Get the full list of column names by finding all schema definitions
cat packages/core/src/storage/constants.ts | head -500
```

Repository: mastra-ai/mastra

Length of output: 19587

🏁 Script executed:

```shell
# Check specifically for any columns that might be SQL reserved words
rg "SCORERS_SCHEMA|SPAN_SCHEMA|AGENTS_SCHEMA|AGENT_VERSIONS_SCHEMA|PROMPT_BLOCKS_SCHEMA" --type=ts packages/core/src/storage/ -A 50 | head -300
```

Repository: mastra-ai/mastra

Length of output: 18791

🏁 Script executed:

```shell
# Check if the LibSQL store actually uses all these schemas
rg "TABLE_SKILL_VERSIONS|buildSelectColumns" stores/libsql/src -A 2 -B 2
```

Repository: mastra-ai/mastra

Length of output: 29406

🏁 Script executed:

```shell
# Verify which tables are actually used in LibSQL operations
grep -r "TABLE_" stores/libsql/src/storage/index.ts | head -40
```

Repository: mastra-ai/mastra

Length of output: 42

🏁 Script executed:

```shell
# Check if references column is ever used in WHERE or UPDATE for skills
rg "references" stores/libsql/src/storage/domains/skills/ -B 3 -A 3
```

Repository: mastra-ai/mastra

Length of output: 3366

🏁 Script executed:

```shell
# Also check the actual SKILL_VERSIONS_SCHEMA definition more carefully
rg "references.*jsonb" packages/core/src/storage/constants.ts -B 2 -A 2
```

Repository: mastra-ai/mastra

Length of output: 296
Column quoting is inconsistently applied across SQL builders — other operations remain unquoted.
The comment correctly identifies the issue: quoting in buildSelectColumns (lines 27-28) protects against reserved words like references, but the same parseSqlIdentifier-returned values are interpolated without double-quoting in prepareStatement (line 135), prepareUpdateStatement (line 170), buildCondition (lines 244, 254), and buildDateRangeCondition (lines 267, 272).
This is not theoretical—the SKILL_VERSIONS_SCHEMA has a references column (a SQLite reserved word), which is actively used. The skills domain already works around this for INSERT with raw SQL, but UPDATE and WHERE operations using reserved word columns would fail without proper quoting.
Apply the same quoting pattern to all SQL builders:
- `prepareStatement` line 135: quote columns in the INSERT column list
- `prepareUpdateStatement` line 170: quote columns in the SET clause
- `buildCondition` lines 244, 254: quote the column in IS NULL and equality checks
- `buildDateRangeCondition` lines 267, 272: quote the column in range comparisons
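One way to make the quoting uniform is a single shared helper that all builders call; this is a sketch (the helper name is ours), following the standard SQL rule of doubling embedded double quotes inside a quoted identifier:

```typescript
// Hypothetical shared helper: quote identifiers the way buildSelectColumns
// already does, doubling any embedded quotes per the SQL identifier rule.
function quoteIdentifier(parsedCol: string): string {
  return `"${parsedCol.replace(/"/g, '""')}"`;
}

// Reserved words like "references" become safe in any clause:
const insertColumnList = ['id', 'references'].map(quoteIdentifier).join(', ');
// insertColumnList === '"id", "references"'
```

With one helper, INSERT column lists, SET clauses, and WHERE conditions all get identical treatment, so a reserved-word column cannot work in one code path and fail in another.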
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@stores/libsql/src/storage/db/utils.ts` around lines 27 - 28, The SQL builders
inconsistently quote identifiers: mimic the quoting logic used in
buildSelectColumns (which wraps parsedCol in double-quotes and handles
json/jsonb specially) across all builders — update prepareStatement to wrap each
parsed column in the INSERT column list with double-quotes (use the same
parseSqlIdentifier output and jsonb handling), update prepareUpdateStatement to
quote columns in the SET clause, update buildCondition to quote the column in IS
NULL and equality checks, and update buildDateRangeCondition to quote the column
in range comparisons; ensure you reuse parseSqlIdentifier/parsedCol and apply
the jsonb special-case only where select expressions require it.
Claude says: @coderabbitai This is a valid observation, but the inconsistent quoting pattern is pre-existing across the entire LibSQL codebase and not introduced by this PR. The references column in SKILL_VERSIONS_SCHEMA works around this via raw SQL in the skills domain's INSERT operations. Fixing all SQL builders to consistently quote identifiers is a worthwhile improvement but should be done as a separate dedicated refactor across the full LibSQL store, not scoped to this PR.
@DanielSLew, you're absolutely right — this inconsistency pre-dates your PR. Looking at the diff, you're only modifying buildSelectColumns to add proper quoting, and the other SQL builders (prepareStatement, prepareUpdateStatement, etc.) are untouched and already have this pattern.
Since the workaround for the references column is already in place via raw SQL and this broader quoting issue exists across the entire LibSQL codebase, it makes sense to address it separately rather than expanding the scope of this PR.
Would you like me to create a follow-up issue to track systematically adding identifier quoting across all LibSQL SQL builders? That way this improvement can be properly scoped and tested as a dedicated refactor.
🧠 Learnings used
Learnt from: gmethvin
Repo: mastra-ai/mastra PR: 11464
File: stores/pg/src/drizzle/schema.ts:213-231
Timestamp: 2025-12-29T10:44:37.564Z
Learning: In the Mastra repository, the Drizzle schema generator (scripts/drizzle-schema-generator/index.mjs) strips `.op()` calls from auto-generated Drizzle schemas due to a drizzle-kit bug (https://github.com/drizzle-team/drizzle-orm/issues/5056) that swaps operator classes in composite indexes. PostgreSQL uses correct default operator classes based on column types, so this workaround is safe. The stripping code can be removed once the drizzle-kit bug is fixed.
Learnt from: gmethvin
Repo: mastra-ai/mastra PR: 11464
File: stores/libsql/src/drizzle/schema.ts:92-123
Timestamp: 2025-12-29T10:04:20.133Z
Learning: The LibSQL and PostgreSQL stores in the Mastra repository have known schema differences, particularly in the mastraScorers table where several columns (scorerId, runId, scorer, score, input, output, source, createdAt, updatedAt) are nullable in LibSQL but notNull in PostgreSQL. This drift is expected and not a bug in Drizzle schema generation.
```typescript
async create(input: { skill: StorageCreateSkillInput }): Promise<StorageSkillType> {
  const { skill } = input;
  try {
    const collection = await this.getCollection(TABLE_SKILLS);

    // Derive skill ID from name
    const id = skill.name
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, '-')
      .replace(/(^-|-$)/g, '');

    // Check if skill already exists
    const existing = await collection.findOne({ id });
    if (existing) {
      throw new MastraError({
        id: createStorageErrorId('MONGODB', 'CREATE_SKILL', 'ALREADY_EXISTS'),
        domain: ErrorDomain.STORAGE,
        category: ErrorCategory.USER,
        details: { id },
        text: `Skill with id ${id} already exists`,
      });
    }

    const now = new Date();

    // Create thin skill record
    const newSkill: StorageSkillType = {
      id,
      status: 'draft',
      activeVersionId: undefined,
      authorId: skill.authorId,
      createdAt: now,
      updatedAt: now,
    };

    await collection.insertOne(this.serializeSkill(newSkill));

    // Extract snapshot config from flat input
    const snapshotConfig: Record<string, any> = {};
    for (const field of SNAPSHOT_FIELDS) {
      if ((skill as any)[field] !== undefined) {
        snapshotConfig[field] = (skill as any)[field];
      }
    }

    // Create version 1
    const versionId = randomUUID();
    try {
      await this.createVersion({
        id: versionId,
        skillId: id,
        versionNumber: 1,
        ...snapshotConfig,
        changedFields: Object.keys(snapshotConfig),
        changeMessage: 'Initial version',
      } as CreateSkillVersionInput);
    } catch (versionError) {
      // Clean up the orphaned skill record
      await collection.deleteOne({ id });
      throw versionError;
    }

    return newSkill;
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check whether the LibSQL and PG implementations also derive ID from name or use skill.id
echo "=== LibSQL skills create ==="
rg -n -A 15 'async create\(input.*StorageCreateSkillInput' stores/libsql/src/storage/domains/skills/
echo ""
echo "=== PG skills create ==="
rg -n -A 15 'async create\(input.*StorageCreateSkillInput' stores/pg/src/storage/domains/skills/
echo ""
echo "=== InMemory skills create ==="
rg -n -A 15 'async create\(input.*StorageCreateSkillInput' packages/core/src/storage/domains/skills/
```

Repository: mastra-ai/mastra

Length of output: 4469
MongoDB skill creation derives ID from name instead of using the caller-provided ID, unlike all other storage implementations.
The MongoDB create() method derives the skill ID from the name using a regex transformation (lines 157–160), but LibSQL, PostgreSQL, and in-memory implementations all use skill.id directly from the input. A caller providing { skill: { id: 'explicit-id', name: 'My Skill', ... } } would receive inconsistent behavior depending on the storage backend:
- LibSQL, PG, InMemory: skill ID is `'explicit-id'`
- MongoDB: skill ID is `'my-skill'` (derived from name)
The caller-provided id field is silently ignored. Either align MongoDB with the other implementations by using skill.id, or document this intentional divergence and update the type contract accordingly.
Proposed fix (align with other implementations)

```diff
-    // Derive skill ID from name
-    const id = skill.name
-      .toLowerCase()
-      .replace(/[^a-z0-9]+/g, '-')
-      .replace(/(^-|-$)/g, '');
+    const id = skill.id;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@stores/mongodb/src/storage/domains/skills/index.ts` around lines 151 - 213,
The create method currently derives the skill id from the name, causing
divergence; change it to prefer the caller-provided id (skill.id) and only fall
back to the name-based slug when skill.id is absent or empty. Locate the create
function in this file (references: TABLE_SKILLS, SNAPSHOT_FIELDS,
this.createVersion, this.serializeSkill) and replace the id derivation so id =
skill.id || generatedSlugFrom(skill.name); keep the rest of the flow (existence
check, insertOne, createVersion, cleanup) unchanged.
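Extracted into standalone helpers (the names `slugify` and `resolveSkillId` are ours, for illustration), the agreed-on precedence looks like:

```typescript
// The same slug transformation used by the MongoDB create() method.
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/(^-|-$)/g, '');
}

// Prefer the caller-provided id; derive from the name only when absent.
function resolveSkillId(skill: { id?: string; name: string }): string {
  return skill.id || slugify(skill.name);
}

// resolveSkillId({ id: 'explicit-id', name: 'My Skill' }) === 'explicit-id'
// resolveSkillId({ name: 'My Skill' }) === 'my-skill'
```

This keeps the name-derived slug available for callers that omit an id, while restoring consistency with the LibSQL, PG, and in-memory backends.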
```typescript
async create(input: { workspace: StorageCreateWorkspaceInput }): Promise<StorageWorkspaceType> {
  const { workspace } = input;
  try {
    const collection = await this.getCollection(TABLE_WORKSPACES);

    // Derive workspace ID from name
    const id = workspace.name
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, '-')
      .replace(/(^-|-$)/g, '');

    // Check if workspace already exists
    const existing = await collection.findOne({ id });
    if (existing) {
      throw new MastraError({
        id: createStorageErrorId('MONGODB', 'CREATE_WORKSPACE', 'ALREADY_EXISTS'),
        domain: ErrorDomain.STORAGE,
        category: ErrorCategory.USER,
        details: { id },
        text: `Workspace with id ${id} already exists`,
      });
    }

    const now = new Date();

    // Create thin workspace record
    const newWorkspace: StorageWorkspaceType = {
      id,
      status: 'draft',
      activeVersionId: undefined,
      authorId: workspace.authorId,
      metadata: workspace.metadata,
      createdAt: now,
      updatedAt: now,
    };

    await collection.insertOne(this.serializeWorkspace(newWorkspace));

    // Extract snapshot config from flat input
    const snapshotConfig: Record<string, any> = {};
    for (const field of SNAPSHOT_FIELDS) {
      if ((workspace as any)[field] !== undefined) {
        snapshotConfig[field] = (workspace as any)[field];
      }
    }

    // Create version 1
    const versionId = randomUUID();
    try {
      await this.createVersion({
        id: versionId,
        workspaceId: id,
        versionNumber: 1,
        ...snapshotConfig,
        changedFields: Object.keys(snapshotConfig),
        changeMessage: 'Initial version',
      } as CreateWorkspaceVersionInput);
    } catch (versionError) {
      // Clean up the orphaned workspace record
      await collection.deleteOne({ id });
      throw versionError;
    }
```
Workspace name slugification can produce an empty id.
If workspace.name contains only non-alphanumeric characters (e.g., "!!!", "---"), the regex chain at lines 156–159 will produce an empty string "" as the id. This empty ID would be stored and matched by findOne({ id: "" }), silently corrupting lookups and colliding with any other empty-ID workspace.
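The failure mode is easy to reproduce by extracting the transformation into a standalone function:

```typescript
// The exact transformation used by create(): names with no alphanumeric
// characters collapse to the empty string.
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/(^-|-$)/g, '');
}

// slugify('!!!') === ''   slugify('---') === ''   slugify('My Workspace!') === 'my-workspace'
```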
Add a guard after slugification:
🐛 Proposed fix

```diff
   const id = workspace.name
     .toLowerCase()
     .replace(/[^a-z0-9]+/g, '-')
     .replace(/(^-|-$)/g, '');
+  if (!id) {
+    throw new MastraError({
+      id: createStorageErrorId('MONGODB', 'CREATE_WORKSPACE', 'INVALID_NAME'),
+      domain: ErrorDomain.STORAGE,
+      category: ErrorCategory.USER,
+      details: { name: workspace.name },
+      text: `Workspace name "${workspace.name}" produces an empty id after slugification`,
+    });
+  }
+
   // Check if workspace already exists
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@stores/mongodb/src/storage/domains/workspaces/index.ts` around lines 150 -
211, The slugification in create (function create in this file) can yield an
empty id from workspace.name (variable id); add a guard immediately after the
slugification block to detect id === '' and throw a MastraError (use
createStorageErrorId with a new code like 'INVALID_WORKSPACE_NAME' or
'INVALID_NAME', ErrorDomain.STORAGE, ErrorCategory.USER) that includes the
original workspace.name in details/text so callers get a clear validation error
instead of creating an empty-id workspace; keep the rest of the create flow
(findOne, insertOne, createVersion) unchanged.
```typescript
interface E2BProviderConfig {
  template?: string;
  timeout?: number;
  env?: Record<string, string>;
  metadata?: Record<string, unknown>;
  runtimes?: SandboxRuntime[];
  domain?: string;
  apiUrl?: string;
  apiKey?: string;
  accessToken?: string;
}
```
Sensitive fields (apiKey, accessToken) in serializable config may end up in plaintext storage.
The E2BProviderConfig is described as a "serializable subset … for editor storage." If workspace configs are persisted to the database (LibSQL/Postgres/MongoDB), apiKey and accessToken will be stored in plaintext alongside other workspace metadata.
Consider referencing secrets via environment variable names or a secrets manager key rather than embedding the raw values, or at a minimum marking these fields in the schema as writeOnly: true to signal they should not be returned in API responses.
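One possible shape for such a reference-based config, assuming env-var lookup at runtime (the type and helper names are illustrative, not the current API):

```typescript
// Assumption: secrets are referenced by environment-variable name and
// resolved at runtime, so the persisted config never holds raw credentials.
interface E2BProviderSecretRefs {
  apiKeyEnvVar?: string;
  accessTokenEnvVar?: string;
}

// Resolve a reference at the moment the provider is instantiated.
function resolveSecret(envVarName?: string): string | undefined {
  return envVarName ? process.env[envVarName] : undefined;
}

// The stored descriptor carries only the variable name:
const refs: E2BProviderSecretRefs = { apiKeyEnvVar: 'E2B_API_KEY' };
const apiKey = resolveSecret(refs.apiKeyEnvVar);
```

The same pattern extends to secret-manager keys: the persisted field names a lookup location, and the raw value exists only in process memory.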
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workspaces/e2b/src/provider.ts` around lines 21 - 31, The E2BProviderConfig
currently includes sensitive fields apiKey and accessToken which will be
persisted in plaintext; update the config shape so these are not raw secrets:
replace apiKey and accessToken with secret references (e.g., env var names or
secret manager keys) or remove them from the serializable editor config and
store them only in a secure secrets store, and mark them writeOnly in any
JSON/schema (or explicitly exclude them from read/serialization paths) so they
are never returned in API responses; update all code paths that read/write
E2BProviderConfig (including any persistence/serialization logic) to accept and
persist only the secret reference and to prevent leaking apiKey/accessToken.
Claude says: @coderabbitai Valid concern about sensitive fields in serializable config. However, this applies broadly to all provider configs (S3, GCS, E2B) and is a design-level decision about secrets management for the entire provider registry system. The current approach mirrors how users typically configure cloud SDKs via environment variables — the E2BProviderConfig is the descriptor shape, and users can populate it from env vars at runtime. A proper secrets management abstraction (env var references, vault keys, etc.) would be a cross-cutting feature across all providers and is better addressed in a dedicated follow-up.
@DanielSLew You're absolutely right that this is a cross-cutting concern. I can confirm the pattern is consistent across all three provider types introduced in this PR:
- E2B: `apiKey`, `accessToken`
- S3: `accessKeyId`, `secretAccessKey`
- GCS: `credentials` (object or string)
Since this applies uniformly to the provider registry system and fixing it properly would require a secrets management abstraction across all providers, deferring to a dedicated follow-up makes sense.
Recommendation: Consider opening a follow-up issue to track implementing a secrets management strategy (env var references, secret manager keys, or encrypted storage) for all provider configs, so this technical debt is captured and addressed systematically.
For now, users can mitigate by:
- Loading sensitive config values from environment variables at runtime
- Using restrictive database ACLs to protect stored configurations
- Avoiding embedding raw secrets in workspace definitions
The || operator incorrectly treats 0 as falsy; use ?? instead to fall back only when the value is null/undefined.
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
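Standalone, the difference this commit fixes:

```typescript
// || discards every falsy value; ?? falls back only on null/undefined.
const configured: number | undefined = 0;
const timeoutWithOr = configured || 3000;      // 3000: the explicit 0 is lost
const timeoutWithNullish = configured ?? 3000; // 0: the explicit 0 survives

const unset: number | undefined = undefined;
const fallback = unset ?? 3000;                // 3000: fallback still applies
```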
Skills and workspaces update() methods now throw MastraError with a NOT_FOUND code when the target record doesn't exist, consistent with other storage domain implementations.
PR comment: #13156 (comment)
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
The publish route now sets activeVersionId to the newly created version, matching the editor's publish flow. Also adds path traversal protection by validating skillPath against a regex.
PR comment: #13156 (comment)
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
If createVersion() throws during skill creation, the parent skill record is now cleaned up to avoid leaving an inconsistent state.
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
invalidateAgentsReferencingSkill now checks for StorageConditionalVariant arrays in agent skill configs, ensuring agents with conditional skills are properly invalidated when a skill is re-published.
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
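A sketch of the described check; the variant shape here is an assumption for illustration, not the exact StorageConditionalVariant type:

```typescript
// A skill is referenced either by a plain config record or by any
// conditional variant in an array of variants.
type SkillConfigs =
  | Record<string, unknown>
  | Array<{ skills: Record<string, unknown> }>;

function referencesSkill(skills: SkillConfigs, skillId: string): boolean {
  if (Array.isArray(skills)) {
    return skills.some(variant => skillId in variant.skills);
  }
  return skillId in skills;
}

// referencesSkill([{ skills: { 'web-search': {} } }], 'web-search') === true
// referencesSkill({ 'web-search': {} }, 'summarize') === false
```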
- Skills create() now uses skill.id when provided instead of always deriving from name, consistent with other storage backends.
- Workspaces create() falls back to workspace.id when slugification produces an empty string.

PR comment: #13156 (comment)
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
Binary files (PNG, JPEG, etc.) are now base64-encoded before storage
and decoded back to Buffer on read. This prevents data corruption
that would occur from naive UTF-8 string conversion.
- Added encoding field to SkillVersionTreeEntry ('utf-8' | 'base64')
- walkSkillDirectory detects binary MIME types and preserves Buffers
- hashContent hashes raw bytes for stable binary hashes
- VersionedSkillSource.readFile() decodes base64 back to Buffer
- 8 new tests covering binary round-trip through publish and read
PR comment: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
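The round-trip described in this commit can be sketched as follows (function names are ours; the encoding field mirrors the one added to SkillVersionTreeEntry):

```typescript
// Binary content is stored as base64 text; text stays utf-8. On read,
// base64 entries are decoded back to a Buffer.
type Encoding = 'utf-8' | 'base64';

function encodeEntry(content: Buffer | string): { data: string; encoding: Encoding } {
  return Buffer.isBuffer(content)
    ? { data: content.toString('base64'), encoding: 'base64' }
    : { data: content, encoding: 'utf-8' };
}

function decodeEntry(entry: { data: string; encoding: Encoding }): Buffer | string {
  return entry.encoding === 'base64' ? Buffer.from(entry.data, 'base64') : entry.data;
}

// A PNG magic-number prefix survives the round trip unchanged:
const png = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
const roundTripped = decodeEntry(encodeEntry(png)) as Buffer;
// roundTripped.equals(png) === true
```

Naive UTF-8 conversion would corrupt bytes like 0x89 that are not valid UTF-8 sequences; base64 keeps the byte stream intact.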
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/core/src/storage/base.ts (1)
215-230: ⚠️ Potential issue | 🟠 Major

Include `blobs` in composed domain wiring.

When `config.default` or `config.domains` is used, `blobs` is not copied into `this.stores`, so `getStore('blobs')` returns `undefined` even if a blob store is provided. Add it to the composed object.

🛠️ Suggested fix

```diff
   this.stores = {
     memory: domainOverrides.memory ?? defaultStores?.memory,
     workflows: domainOverrides.workflows ?? defaultStores?.workflows,
     scores: domainOverrides.scores ?? defaultStores?.scores,
     observability: domainOverrides.observability ?? defaultStores?.observability,
     agents: domainOverrides.agents ?? defaultStores?.agents,
     datasets: domainOverrides.datasets ?? defaultStores?.datasets,
     experiments: domainOverrides.experiments ?? defaultStores?.experiments,
     promptBlocks: domainOverrides.promptBlocks ?? defaultStores?.promptBlocks,
     scorerDefinitions: domainOverrides.scorerDefinitions ?? defaultStores?.scorerDefinitions,
     mcpClients: domainOverrides.mcpClients ?? defaultStores?.mcpClients,
     workspaces: domainOverrides.workspaces ?? defaultStores?.workspaces,
     skills: domainOverrides.skills ?? defaultStores?.skills,
+    blobs: domainOverrides.blobs ?? defaultStores?.blobs,
   } as StorageDomains;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/storage/base.ts` around lines 215 - 230, The composed stores object in the Storage base wiring is missing the blobs domain so getStore('blobs') can return undefined; update the composition where this.stores is built (the object literal assigned to this.stores in base.ts, typed as StorageDomains) to include blobs: domainOverrides.blobs ?? defaultStores?.blobs so blobs from config.default or config.domains is propagated into this.stores.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/storage/domains/skills/inmemory.ts`:
- Around line 69-79: Add the missing import from node:crypto and replace direct
global usage of crypto.randomUUID() with the imported function to match project
conventions: import { randomUUID } from 'node:crypto' and use randomUUID() where
version IDs are generated (the two places that currently call
crypto.randomUUID(), such as the versionId creation before createVersion and the
second occurrence around the later version creation); ensure both usages are
updated to the imported symbol.
In `@packages/editor/src/namespaces/agent.ts`:
- Around line 1134-1140: resolveAgentSkillSource currently returns undefined for
conditional (array) skills, causing pin/latest version resolution to be skipped;
update resolveAgentSkillSource to detect when skills is an array and merge the
variant entries into a single Skill config record (or, alternatively, log a
clear warning) before performing versioned resolution so conditional variants do
not bypass the version strategy. Specifically, inside resolveAgentSkillSource
handle the branch where Array.isArray(skills) by merging array elements into a
single Record<string, StorageSkillConfig> (resolving conflicts
deterministically, e.g., last-wins or explicit merge strategy) and then continue
using skillConfigs for the rest of the logic; reference the symbols
resolveAgentSkillSource, skills, skillConfigs, and StorageSkillConfig to locate
and update the code path.
---
Outside diff comments:
In `@packages/core/src/storage/base.ts`:
- Around line 215-230: The composed stores object in the Storage base wiring is
missing the blobs domain so getStore('blobs') can return undefined; update the
composition where this.stores is built (the object literal assigned to
this.stores in base.ts, typed as StorageDomains) to include blobs:
domainOverrides.blobs ?? defaultStores?.blobs so blobs from config.default or
config.domains is propagated into this.stores.
```typescript
const versionId = crypto.randomUUID();
try {
  await this.createVersion({
    id: versionId,
    skillId: skill.id,
    versionNumber: 1,
    ...snapshotConfig,
    changedFields: Object.keys(snapshotConfig),
    changeMessage: 'Initial version',
  });
} catch (error) {
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd packages/core && find . -name "inmemory.ts" | grep skills
```

Repository: mastra-ai/mastra

Length of output: 100

🏁 Script executed:

```shell
head -n 20 packages/core/src/storage/domains/skills/inmemory.ts
```

Repository: mastra-ai/mastra

Length of output: 599

🏁 Script executed:

```shell
sed -n '65,80p' packages/core/src/storage/domains/skills/inmemory.ts
```

Repository: mastra-ai/mastra

Length of output: 614

🏁 Script executed:

```shell
sed -n '165,180p' packages/core/src/storage/domains/skills/inmemory.ts
```

Repository: mastra-ai/mastra

Length of output: 548

🏁 Script executed:

```shell
rg "^import.*crypto" packages/core/src/storage/domains/skills/inmemory.ts
```

Repository: mastra-ai/mastra

Length of output: 42

🏁 Script executed:

```shell
cat packages/core/package.json | grep -A5 '"engines"'
```

Repository: mastra-ai/mastra

Length of output: 140

🏁 Script executed:

```shell
cat tsconfig.json | head -30
```

Repository: mastra-ai/mastra

Length of output: 860

🏁 Script executed:

```shell
rg "import.*from.*['\"]node:crypto['\"]|import.*from.*['\"]crypto['\"]" packages/core/src --type ts
```

Repository: mastra-ai/mastra

Length of output: 3337
Missing crypto import statement.
crypto.randomUUID() is used at lines 69 and 168 without an explicit import. While Node.js 22.13.0+ provides a global crypto object at runtime, the codebase consistently imports randomUUID from node:crypto for type safety and consistency. Add the import and update both usages.
🛠️ Suggested fix
+import { randomUUID } from 'node:crypto';
import { normalizePerPage, calculatePagination } from '../../base';
...
- const versionId = crypto.randomUUID();
+ const versionId = randomUUID();
...
- const newVersionId = crypto.randomUUID();
+ const newVersionId = randomUUID();
Also applies to: 168-175
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/storage/domains/skills/inmemory.ts` around lines 69 - 79,
Add the missing import from node:crypto and replace direct global usage of
crypto.randomUUID() with the imported function to match project conventions:
import { randomUUID } from 'node:crypto' and use randomUUID() where version IDs
are generated (the two places that currently call crypto.randomUUID(), such as
the versionId creation before createVersion and the second occurrence around the
later version creation); ensure both usages are updated to the imported symbol.
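As an illustration of the convention this fix enforces, a minimal standalone sketch (the `newVersionId` helper name is invented for this example and is not part of the repo):

```typescript
// Explicit import, matching the codebase convention the review points to,
// instead of relying on the global `crypto` object.
import { randomUUID } from 'node:crypto';

// Hypothetical helper name, for illustration only.
export function newVersionId(): string {
  // randomUUID() returns an RFC 4122 version-4 UUID string.
  return randomUUID();
}
```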
private async resolveAgentSkillSource(skills: StorageResolvedAgentType['skills']): Promise<SkillSource | undefined> {
  if (!skills || typeof skills !== 'object') return undefined;

  // Resolve conditional field to a plain record
  const skillConfigs = Array.isArray(skills) ? undefined : (skills as Record<string, StorageSkillConfig>);
  if (!skillConfigs || Object.keys(skillConfigs).length === 0) return undefined;
Claude says: @coderabbitai Intentional — conditional variant arrays represent a future feature where skill configs vary by condition. Merging multiple variant branches into a single CompositeVersionedSkillSource has complex semantics (which variant wins? how to combine?). The current behavior safely returns undefined, which falls back to no versioned skills for those agents. This is tracked for a follow-up when conditional skill variants are fully specified.
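For reference, one possible shape of the deferred merge, sketched under an assumed last-variant-wins policy (the types and names here are simplified stand-ins, not the PR's actual definitions, and the merge policy itself is exactly the open question the reply describes):

```typescript
// Simplified stand-in types; the real StorageSkillConfig is richer.
type StorageSkillConfig = { strategy?: 'live' | 'latest'; pin?: string };
type Variant = { skills: Record<string, StorageSkillConfig> };

// Flatten an array of conditional variants into a single record.
// Later variants overwrite earlier ones for the same skill key (last-wins).
export function mergeVariantSkillConfigs(variants: Variant[]): Record<string, StorageSkillConfig> {
  const merged: Record<string, StorageSkillConfig> = {};
  for (const variant of variants) {
    Object.assign(merged, variant.skills);
  }
  return merged;
}
```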
- Add workspace/skill/blob table entries to Cloudflare RecordTypes
- Fix WorkspaceConfig generic type params in editor workspace hydration
- Widen resolveStoredWorkspace return type for mount support

Co-Authored-By: Mastra Code <noreply@mastra.ai>
Actionable comments posted: 5
🧹 Nitpick comments (2)
packages/editor/src/namespaces/workspace.ts (1)
65-71: Sequential filesystem resolution in mount loop — parallelize with `Promise.all`.

Each mount's filesystem is resolved sequentially. When multiple mounts are configured, this adds unnecessary latency.
⚡ Proposed refactor to parallelize mount resolution
- if (snapshot.mounts) {
-   const mounts: Record<string, WorkspaceFilesystem> = {};
-   for (const [path, fsConfig] of Object.entries(snapshot.mounts)) {
-     mounts[path] = await this.resolveFilesystem(fsConfig);
-   }
-   config.mounts = mounts;
- }
+ if (snapshot.mounts) {
+   const entries = Object.entries(snapshot.mounts);
+   const resolved = await Promise.all(
+     entries.map(([path, fsConfig]) =>
+       this.resolveFilesystem(fsConfig).then(fs => [path, fs] as const),
+     ),
+   );
+   config.mounts = Object.fromEntries(resolved);
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/editor/src/namespaces/workspace.ts` around lines 65 - 71, The loop resolving snapshot.mounts is doing sequential awaits causing latency; instead, in the block that populates config.mounts, map over Object.entries(snapshot.mounts) and kick off all this.resolveFilesystem(fsConfig) promises, use Promise.all to await them, then reconstruct a Record<string, WorkspaceFilesystem> (assign to config.mounts) from the original paths and resolved values; update references to snapshot.mounts, resolveFilesystem, config.mounts and WorkspaceFilesystem accordingly.
packages/editor/src/namespaces/agent.ts (1)
1141-1148: Silent early returns in `resolveAgentSkillSource` make `pin`/`latest` failures invisible.

When either `skillStore` or — more critically — `blobStore` is absent, the method returns `undefined` with no log output. Agents configured with `strategy: 'latest'` or a pinned version will silently fall back to filesystem reads, which is very hard to diagnose in production.
🔍 Proposed observability improvement
  const skillStore = await storage.getStore('skills');
- if (!skillStore) return undefined;
+ if (!skillStore) {
+   this.logger?.warn('[resolveAgentSkillSource] Skills storage domain not available; versioned resolution skipped');
+   return undefined;
+ }
  const blobStore = await this.editor.resolveBlobStore();
- if (!blobStore) return undefined;
+ if (!blobStore) {
+   this.logger?.warn(
+     '[resolveAgentSkillSource] No blob store configured; agents with pin/latest strategy fall back to filesystem',
+   );
+   return undefined;
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/editor/src/namespaces/agent.ts` around lines 1141 - 1148, In resolveAgentSkillSource, avoid silent early returns: when this.mastra?.getStorage() is falsy, when await storage.getStore('skills') returns falsy (skillStore), or when await this.editor.resolveBlobStore() returns falsy (blobStore), emit a clear log/warn including context (function name resolveAgentSkillSource, agent id or name, requested skill pin/latest strategy) before returning undefined; use the existing logger available on this or this.editor (e.g., this.logger.warn or this.editor.logger.warn) and include which store is missing (storage/skills/blobStore) and relevant identifiers so failures to pin/latest are observable.
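A small standalone sketch of the guard-and-warn pattern this nitpick asks for, with an assumed minimal logger shape (not the repo's actual logger interface; `requireStore` is an invented helper name):

```typescript
// Assumed minimal logger shape for illustration.
type Logger = { warn: (msg: string) => void };

// Return the store if present; otherwise warn with context and return undefined,
// so a skipped versioned resolution is visible in logs instead of silent.
export function requireStore<T>(store: T | undefined, name: string, logger?: Logger): T | undefined {
  if (!store) {
    logger?.warn(`[resolveAgentSkillSource] ${name} not available; versioned resolution skipped`);
    return undefined;
  }
  return store;
}
```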
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/editor/src/namespaces/agent.ts`:
- Around line 1115-1118: The inline workspace path currently uses a unique
timestamp ID (`inline-${Date.now()}`) when calling hydrateSnapshotToWorkspace,
which causes resolveStoredWorkspace (called per request when
hasConditionalWorkspace is true) to create and register new Workspace instances
via addWorkspace into the `#workspaces` registry; replace the timestamp with a
stable deterministic ID (e.g., a hash of the serialized workspaceRef.config) so
repeated inline configs reuse the same workspace instance — compute the content
hash of workspaceRef.config and pass that as the workspace ID to
hydrateSnapshotToWorkspace to prevent unbounded growth of `#workspaces`.
In `@packages/editor/src/namespaces/workspace.ts`:
- Around line 181-182: The code dereferences this.mastra using a non-null
assertion in the call to getStorage which can throw if mastra is undefined;
change the logic to explicitly guard mastra before calling getStorage (e.g., if
(!this.mastra) throw new Error('Mastra is not configured'); const storage =
this.mastra.getStorage();), or use optional chaining to safely obtain storage
and handle the undefined case, ensuring the storage null-check that follows is
valid; update references in the workspace methods that use
this.mastra/getStorage accordingly.
- Around line 144-169: Both private resolvers are declared async but do not
await the factory calls; make the intent explicit by awaiting the provider
factory calls and add the missing explicit return type for resolveSandbox.
Specifically, in resolveFilesystem (function resolveFilesystem) change the
provider.createFilesystem(...) call to be awaited so the async wrapper is
meaningful; in resolveSandbox (function resolveSandbox) await
provider.createSandbox(...) and add an explicit return type (e.g.,
Promise<WorkspaceSandbox>) to satisfy strict TypeScript rules.
- Around line 19-27: The async method resolveSandbox is missing an explicit
return type—add : Promise<WorkspaceSandbox> to its signature (match
resolveFilesystem) and ensure the returned value conforms to WorkspaceSandbox;
in the mounts construction loop (the sequential awaits inside the mounts loop in
the same file) replace per-iteration await with mapping to promises and use
Promise.all() to parallelize mount resolution; and remove the non-null assertion
on this.mastra! by guarding the getter result (e.g., store const mastra =
this.mastra and check mastra !== undefined before using it) or rearrange the
logic so you only access mastra when definitely present.
In `@stores/cloudflare/src/storage/types.ts`:
- Around line 148-152: Replace the loose Record<string, any> types with the
concrete storage domain types: import and use StorageWorkspaceType for
TABLE_WORKSPACES, WorkspaceVersion for TABLE_WORKSPACE_VERSIONS,
StorageSkillType for TABLE_SKILLS, and SkillVersion for TABLE_SKILL_VERSIONS in
the file where TABLE_* are declared (leave TABLE_SKILL_BLOBS as Record<string,
any> only if intentional); also fix the breaking Workers binding by changing
CloudflareWorkersConfig.bindings (the { [key in TABLE_NAMES]: KVNamespace }
mapping) to allow missing/new tables (e.g. make the mapping Partial or make the
specific new table keys optional) so existing deployments that don’t provide
those KV namespaces keep compiling.
---
Duplicate comments:
In `@packages/editor/src/namespaces/agent.ts`:
- Around line 1134-1139: The code in resolveAgentSkillSource treats any
array-shaped skills as undefined, which skips versioned resolution for
conditional agents (so "pin"/"latest" strategies are ignored); update
resolveAgentSkillSource to detect when skills is an array of conditional
variants and merge those variant entries into a single Record<string,
StorageSkillConfig> (i.e., iterate the array, extract each variant key/value and
combine them into skillConfigs) so the subsequent logic uses the merged
skillConfigs for pin/latest resolution; keep references to the existing
resolveAgentSkillSource function, the skills parameter, and the skillConfigs
variable when making the change.
---
Nitpick comments:
In `@packages/editor/src/namespaces/agent.ts`:
- Around line 1141-1148: In resolveAgentSkillSource, avoid silent early returns:
when this.mastra?.getStorage() is falsy, when await storage.getStore('skills')
returns falsy (skillStore), or when await this.editor.resolveBlobStore() returns
falsy (blobStore), emit a clear log/warn including context (function name
resolveAgentSkillSource, agent id or name, requested skill pin/latest strategy)
before returning undefined; use the existing logger available on this or
this.editor (e.g., this.logger.warn or this.editor.logger.warn) and include
which store is missing (storage/skills/blobStore) and relevant identifiers so
failures to pin/latest are observable.
In `@packages/editor/src/namespaces/workspace.ts`:
- Around line 65-71: The loop resolving snapshot.mounts is doing sequential
awaits causing latency; instead, in the block that populates config.mounts, map
over Object.entries(snapshot.mounts) and kick off all
this.resolveFilesystem(fsConfig) promises, use Promise.all to await them, then
reconstruct a Record<string, WorkspaceFilesystem> (assign to config.mounts) from
the original paths and resolved values; update references to snapshot.mounts,
resolveFilesystem, config.mounts and WorkspaceFilesystem accordingly.
if (workspaceRef.type === 'inline') {
  // Hydrate the inline workspace config directly
  return workspaceNs.hydrateSnapshotToWorkspace(`inline-${Date.now()}`, workspaceRef.config, hydrateOptions);
}
❓ Verification inconclusive
Script executed:
rg -n "hydrateSnapshotToWorkspace" --type=ts -A 20
Repository: mastra-ai/mastra
Exit code: 0
stdout:
packages/editor/src/editor-workspaces.test.ts:294:describe('editor.workspace — hydrateSnapshotToWorkspace', () => {
packages/editor/src/editor-workspaces.test.ts-295- it('should hydrate a workspace with a local filesystem', async () => {
packages/editor/src/editor-workspaces.test.ts-296- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-297-
packages/editor/src/editor-workspaces.test.ts:298: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-hydrate-fs', {
packages/editor/src/editor-workspaces.test.ts-299- name: 'FS Workspace',
packages/editor/src/editor-workspaces.test.ts-300- filesystem: { provider: 'local', config: { basePath: '/tmp/hydrate' } },
packages/editor/src/editor-workspaces.test.ts-301- });
packages/editor/src/editor-workspaces.test.ts-302-
packages/editor/src/editor-workspaces.test.ts-303- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-304- expect(workspace.name).toBe('FS Workspace');
packages/editor/src/editor-workspaces.test.ts-305- expect(workspace.id).toBe('ws-hydrate-fs');
packages/editor/src/editor-workspaces.test.ts-306- });
packages/editor/src/editor-workspaces.test.ts-307-
packages/editor/src/editor-workspaces.test.ts-308- it('should hydrate a workspace with a local sandbox', async () => {
packages/editor/src/editor-workspaces.test.ts-309- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-310-
packages/editor/src/editor-workspaces.test.ts:311: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-hydrate-sb', {
packages/editor/src/editor-workspaces.test.ts-312- name: 'SB Workspace',
packages/editor/src/editor-workspaces.test.ts-313- sandbox: { provider: 'local', config: { workingDirectory: '/tmp/sb' } },
packages/editor/src/editor-workspaces.test.ts-314- });
packages/editor/src/editor-workspaces.test.ts-315-
packages/editor/src/editor-workspaces.test.ts-316- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-317- expect(workspace.name).toBe('SB Workspace');
packages/editor/src/editor-workspaces.test.ts-318- });
packages/editor/src/editor-workspaces.test.ts-319-
packages/editor/src/editor-workspaces.test.ts-320- it('should hydrate a workspace with skills (standalone)', async () => {
packages/editor/src/editor-workspaces.test.ts-321- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-322-
packages/editor/src/editor-workspaces.test.ts:323: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-hydrate-skills', {
packages/editor/src/editor-workspaces.test.ts-324- name: 'Skills Workspace',
packages/editor/src/editor-workspaces.test.ts-325- skills: ['skill-a', 'skill-b'],
packages/editor/src/editor-workspaces.test.ts-326- });
packages/editor/src/editor-workspaces.test.ts-327-
packages/editor/src/editor-workspaces.test.ts-328- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-329- });
packages/editor/src/editor-workspaces.test.ts-330-
packages/editor/src/editor-workspaces.test.ts-331- it('should hydrate a workspace with mounts', async () => {
packages/editor/src/editor-workspaces.test.ts-332- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-333-
packages/editor/src/editor-workspaces.test.ts:334: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-hydrate-mounts', {
packages/editor/src/editor-workspaces.test.ts-335- name: 'Mounts Workspace',
packages/editor/src/editor-workspaces.test.ts-336- mounts: {
packages/editor/src/editor-workspaces.test.ts-337- '/data': { provider: 'local', config: { basePath: '/tmp/mount-data' } },
packages/editor/src/editor-workspaces.test.ts-338- '/logs': { provider: 'local', config: { basePath: '/tmp/mount-logs', readOnly: true } },
packages/editor/src/editor-workspaces.test.ts-339- },
packages/editor/src/editor-workspaces.test.ts-340- });
packages/editor/src/editor-workspaces.test.ts-341-
packages/editor/src/editor-workspaces.test.ts-342- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-343- });
packages/editor/src/editor-workspaces.test.ts-344-
packages/editor/src/editor-workspaces.test.ts-345- it('should hydrate BM25 search config', async () => {
packages/editor/src/editor-workspaces.test.ts-346- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-347-
packages/editor/src/editor-workspaces.test.ts:348: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-hydrate-search', {
packages/editor/src/editor-workspaces.test.ts-349- name: 'Search Workspace',
packages/editor/src/editor-workspaces.test.ts-350- filesystem: { provider: 'local', config: { basePath: '/tmp/search' } },
packages/editor/src/editor-workspaces.test.ts-351- search: {
packages/editor/src/editor-workspaces.test.ts-352- bm25: { k1: 2.0, b: 0.8 },
packages/editor/src/editor-workspaces.test.ts-353- searchIndexName: 'test_search_idx',
packages/editor/src/editor-workspaces.test.ts-354- },
packages/editor/src/editor-workspaces.test.ts-355- });
packages/editor/src/editor-workspaces.test.ts-356-
packages/editor/src/editor-workspaces.test.ts-357- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-358- });
packages/editor/src/editor-workspaces.test.ts-359-
packages/editor/src/editor-workspaces.test.ts-360- it('should throw for unregistered filesystem provider', async () => {
packages/editor/src/editor-workspaces.test.ts-361- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-362-
packages/editor/src/editor-workspaces.test.ts-363- await expect(
packages/editor/src/editor-workspaces.test.ts:364: editor.workspace.hydrateSnapshotToWorkspace('ws-bad-fs', {
packages/editor/src/editor-workspaces.test.ts-365- name: 'Bad Provider WS',
packages/editor/src/editor-workspaces.test.ts-366- filesystem: { provider: 'nonexistent-fs', config: {} },
packages/editor/src/editor-workspaces.test.ts-367- }),
packages/editor/src/editor-workspaces.test.ts-368- ).rejects.toThrow('Filesystem provider "nonexistent-fs" is not registered');
packages/editor/src/editor-workspaces.test.ts-369- });
packages/editor/src/editor-workspaces.test.ts-370-
packages/editor/src/editor-workspaces.test.ts-371- it('should throw for unregistered sandbox provider', async () => {
packages/editor/src/editor-workspaces.test.ts-372- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-373-
packages/editor/src/editor-workspaces.test.ts-374- await expect(
packages/editor/src/editor-workspaces.test.ts:375: editor.workspace.hydrateSnapshotToWorkspace('ws-bad-sb', {
packages/editor/src/editor-workspaces.test.ts-376- name: 'Bad Provider WS',
packages/editor/src/editor-workspaces.test.ts-377- sandbox: { provider: 'nonexistent-sb', config: {} },
packages/editor/src/editor-workspaces.test.ts-378- }),
packages/editor/src/editor-workspaces.test.ts-379- ).rejects.toThrow('Sandbox provider "nonexistent-sb" is not registered');
packages/editor/src/editor-workspaces.test.ts-380- });
packages/editor/src/editor-workspaces.test.ts-381-
packages/editor/src/editor-workspaces.test.ts-382- it('should pass tools/autoSync/operationTimeout through to Workspace', async () => {
packages/editor/src/editor-workspaces.test.ts-383- const { editor } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-384-
packages/editor/src/editor-workspaces.test.ts:385: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-hydrate-opts', {
packages/editor/src/editor-workspaces.test.ts-386- name: 'Options Workspace',
packages/editor/src/editor-workspaces.test.ts-387- filesystem: { provider: 'local', config: { basePath: '/tmp/opts' } },
packages/editor/src/editor-workspaces.test.ts-388- tools: {
packages/editor/src/editor-workspaces.test.ts-389- enabled: true,
packages/editor/src/editor-workspaces.test.ts-390- requireApproval: true,
packages/editor/src/editor-workspaces.test.ts-391- tools: {
packages/editor/src/editor-workspaces.test.ts-392- mastra_workspace_read_file: { enabled: true },
packages/editor/src/editor-workspaces.test.ts-393- mastra_workspace_write_file: { enabled: false },
packages/editor/src/editor-workspaces.test.ts-394- },
packages/editor/src/editor-workspaces.test.ts-395- },
packages/editor/src/editor-workspaces.test.ts-396- autoSync: true,
packages/editor/src/editor-workspaces.test.ts-397- operationTimeout: 10000,
packages/editor/src/editor-workspaces.test.ts-398- });
packages/editor/src/editor-workspaces.test.ts-399-
packages/editor/src/editor-workspaces.test.ts-400- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-401- });
packages/editor/src/editor-workspaces.test.ts-402-});
packages/editor/src/editor-workspaces.test.ts-403-
packages/editor/src/editor-workspaces.test.ts-404-// =============================================================================
packages/editor/src/editor-workspaces.test.ts-405-// Provider Registry Tests
--
packages/editor/src/editor-workspaces.test.ts:482: const workspace = await editor.workspace.hydrateSnapshotToWorkspace('ws-custom', {
packages/editor/src/editor-workspaces.test.ts-483- name: 'Custom FS Workspace',
packages/editor/src/editor-workspaces.test.ts-484- filesystem: { provider: 'mock-fs', config: { endpoint: 'http://localhost:9000' } },
packages/editor/src/editor-workspaces.test.ts-485- });
packages/editor/src/editor-workspaces.test.ts-486-
packages/editor/src/editor-workspaces.test.ts-487- expect(workspace).toBeInstanceOf(Workspace);
packages/editor/src/editor-workspaces.test.ts-488- });
packages/editor/src/editor-workspaces.test.ts-489-});
packages/editor/src/editor-workspaces.test.ts-490-
packages/editor/src/editor-workspaces.test.ts-491-// =============================================================================
packages/editor/src/editor-workspaces.test.ts-492-// Agent + Workspace Integration Tests
packages/editor/src/editor-workspaces.test.ts-493-// =============================================================================
packages/editor/src/editor-workspaces.test.ts-494-
packages/editor/src/editor-workspaces.test.ts-495-describe('editor.agent — workspace integration', () => {
packages/editor/src/editor-workspaces.test.ts-496- it('should create an agent with an inline workspace', async () => {
packages/editor/src/editor-workspaces.test.ts-497- const { editor, storage } = await createSetup();
packages/editor/src/editor-workspaces.test.ts-498-
packages/editor/src/editor-workspaces.test.ts-499- const agentsStore = await storage.getStore('agents');
packages/editor/src/editor-workspaces.test.ts-500- await agentsStore!.create({
packages/editor/src/editor-workspaces.test.ts-501- agent: {
packages/editor/src/editor-workspaces.test.ts-502- id: 'agent-inline-ws',
--
packages/editor/src/namespaces/workspace.ts:40: async hydrateSnapshotToWorkspace(
packages/editor/src/namespaces/workspace.ts-41- id: string,
packages/editor/src/namespaces/workspace.ts-42- snapshot: StorageWorkspaceSnapshotType,
packages/editor/src/namespaces/workspace.ts-43- options?: { skillSource?: SkillSource },
packages/editor/src/namespaces/workspace.ts-44- ): Promise<Workspace<WorkspaceFilesystem | undefined, WorkspaceSandbox | undefined, Record<string, WorkspaceFilesystem> | undefined>> {
packages/editor/src/namespaces/workspace.ts-45- const config: WorkspaceConfig<
packages/editor/src/namespaces/workspace.ts-46- WorkspaceFilesystem | undefined,
packages/editor/src/namespaces/workspace.ts-47- WorkspaceSandbox | undefined,
packages/editor/src/namespaces/workspace.ts-48- Record<string, WorkspaceFilesystem> | undefined
packages/editor/src/namespaces/workspace.ts-49- > = {
packages/editor/src/namespaces/workspace.ts-50- id,
packages/editor/src/namespaces/workspace.ts-51- name: snapshot.name,
packages/editor/src/namespaces/workspace.ts-52- };
packages/editor/src/namespaces/workspace.ts-53-
packages/editor/src/namespaces/workspace.ts-54- // Resolve primary filesystem
packages/editor/src/namespaces/workspace.ts-55- if (snapshot.filesystem) {
packages/editor/src/namespaces/workspace.ts-56- config.filesystem = await this.resolveFilesystem(snapshot.filesystem);
packages/editor/src/namespaces/workspace.ts-57- }
packages/editor/src/namespaces/workspace.ts-58-
packages/editor/src/namespaces/workspace.ts-59- // Resolve sandbox
packages/editor/src/namespaces/workspace.ts-60- if (snapshot.sandbox) {
--
packages/editor/src/namespaces/agent.ts:1112: return workspaceNs.hydrateSnapshotToWorkspace(workspaceRef.workspaceId, resolved, hydrateOptions);
packages/editor/src/namespaces/agent.ts-1113- }
packages/editor/src/namespaces/agent.ts-1114-
packages/editor/src/namespaces/agent.ts-1115- if (workspaceRef.type === 'inline') {
packages/editor/src/namespaces/agent.ts-1116- // Hydrate the inline workspace config directly
packages/editor/src/namespaces/agent.ts:1117: return workspaceNs.hydrateSnapshotToWorkspace(`inline-${Date.now()}`, workspaceRef.config, hydrateOptions);
packages/editor/src/namespaces/agent.ts-1118- }
packages/editor/src/namespaces/agent.ts-1119-
packages/editor/src/namespaces/agent.ts-1120- return undefined;
packages/editor/src/namespaces/agent.ts-1121- }
packages/editor/src/namespaces/agent.ts-1122-
packages/editor/src/namespaces/agent.ts-1123- /**
packages/editor/src/namespaces/agent.ts-1124- * Resolve agent-level skill configurations into a CompositeVersionedSkillSource.
packages/editor/src/namespaces/agent.ts-1125- *
packages/editor/src/namespaces/agent.ts-1126- * For each skill in the agent's `skills` map, checks the resolution strategy:
packages/editor/src/namespaces/agent.ts-1127- * - `pin: '<versionId>'` → reads the specific version's tree from the DB
packages/editor/src/namespaces/agent.ts-1128- * - `strategy: 'latest'` → reads the skill's active version tree
packages/editor/src/namespaces/agent.ts-1129- * - `strategy: 'live'` or no strategy → skips (uses filesystem-based discovery)
packages/editor/src/namespaces/agent.ts-1130- *
packages/editor/src/namespaces/agent.ts-1131- * Returns a CompositeVersionedSkillSource if any versioned skills were resolved,
packages/editor/src/namespaces/agent.ts-1132- * or undefined if all skills use filesystem-based discovery.
packages/editor/src/namespaces/agent.ts-1133- */
packages/editor/src/namespaces/agent.ts-1134- private async resolveAgentSkillSource(skills: StorageResolvedAgentType['skills']): Promise<SkillSource | undefined> {
packages/editor/src/namespaces/agent.ts-1135- if (!skills || typeof skills !== 'object') return undefined;
packages/editor/src/namespaces/agent.ts-1136-
packages/editor/src/namespaces/agent.ts-1137- // Resolve conditional field to a plain record
Script executed:
cat -n packages/editor/src/namespaces/workspace.ts | head -100
Repository: mastra-ai/mastra
Exit code: 0
stdout:
1 import { Workspace } from '@mastra/core/workspace';
2 import type { WorkspaceFilesystem, WorkspaceSandbox, SkillSource } from '@mastra/core/workspace';
3 import type { WorkspaceConfig } from '@mastra/core/workspace';
4 import type {
5 StorageCreateWorkspaceInput,
6 StorageUpdateWorkspaceInput,
7 StorageListWorkspacesInput,
8 StorageListWorkspacesOutput,
9 StorageResolvedWorkspaceType,
10 StorageListWorkspacesResolvedOutput,
11 StorageWorkspaceSnapshotType,
12 StorageFilesystemConfig,
13 StorageSandboxConfig,
14 } from '@mastra/core/storage';
15
16 import { CrudEditorNamespace } from './base';
17 import type { StorageAdapter } from './base';
18
19 export class EditorWorkspaceNamespace extends CrudEditorNamespace<
20 StorageCreateWorkspaceInput,
21 StorageUpdateWorkspaceInput,
22 StorageListWorkspacesInput,
23 StorageListWorkspacesOutput,
24 StorageListWorkspacesResolvedOutput,
25 StorageResolvedWorkspaceType,
26 StorageResolvedWorkspaceType
27 > {
28 protected override onCacheEvict(_id: string): void {
29 // Workspaces are not registered in Mastra runtime from the CRUD namespace.
30 // Agent hydration handles runtime workspace registration independently.
31 }
32
33 /**
34 * Hydrate a stored workspace snapshot config into a runtime Workspace instance.
35 * Resolves provider strings to actual instances using the editor's registries.
36 *
37 * This is NOT called from the CrudEditorNamespace flow — it is a public utility
38 * used by EditorAgentNamespace during agent hydration.
39 */
40 async hydrateSnapshotToWorkspace(
41 id: string,
42 snapshot: StorageWorkspaceSnapshotType,
43 options?: { skillSource?: SkillSource },
44 ): Promise<Workspace<WorkspaceFilesystem | undefined, WorkspaceSandbox | undefined, Record<string, WorkspaceFilesystem> | undefined>> {
45 const config: WorkspaceConfig<
46 WorkspaceFilesystem | undefined,
47 WorkspaceSandbox | undefined,
48 Record<string, WorkspaceFilesystem> | undefined
49 > = {
50 id,
51 name: snapshot.name,
52 };
53
54 // Resolve primary filesystem
55 if (snapshot.filesystem) {
56 config.filesystem = await this.resolveFilesystem(snapshot.filesystem);
57 }
58
59 // Resolve sandbox
60 if (snapshot.sandbox) {
61 config.sandbox = await this.resolveSandbox(snapshot.sandbox);
62 }
63
64 // Resolve mounted filesystems
65 if (snapshot.mounts) {
66 const mounts: Record<string, WorkspaceFilesystem> = {};
67 for (const [path, fsConfig] of Object.entries(snapshot.mounts)) {
68 mounts[path] = await this.resolveFilesystem(fsConfig);
69 }
70 config.mounts = mounts;
71 }
72
73 // Search configuration
74 if (snapshot.search) {
75 if (snapshot.search.bm25) {
76 config.bm25 = snapshot.search.bm25;
77 }
78 if (snapshot.search.searchIndexName) {
79 config.searchIndexName = snapshot.search.searchIndexName;
80 }
81 if (snapshot.search.autoIndexPaths) {
82 config.autoIndexPaths = snapshot.search.autoIndexPaths;
83 }
84
85 // Resolve vector store from Mastra's registered vectors
86 if (snapshot.search.vectorProvider && this.mastra) {
87 const vectors = this.mastra.listVectors();
88 const vectorStore = vectors?.[snapshot.search.vectorProvider];
89 if (vectorStore) {
90 config.vectorStore = vectorStore;
91 } else {
92 this.logger?.warn(
93 `Vector provider "${snapshot.search.vectorProvider}" not found in Mastra instance. ` +
94 `Workspace search will be limited to BM25 only.`,
95 );
96 }
97 }
98
99 // Embedder resolution: workspace expects an Embedder function (text: string) => Promise<number[]>.
100 // The stored config has embedderProvider/embedderModel strings but there is no embedder registry
Script executed:
cat -n packages/editor/src/namespaces/workspace.ts | sed -n '100,150p'
Repository: mastra-ai/mastra
Exit code: 0
stdout:
100 // The stored config has embedderProvider/embedderModel strings but there is no embedder registry
101 // in Mastra yet. For now, skip — BM25-only search works without an embedder.
102 // Vector search requires both vectorStore and embedder to be configured.
103 if (config.vectorStore && !config.embedder) {
104 this.logger?.warn(
105 `Workspace has a vector store configured but no embedder. ` +
106 `Vector/hybrid search will not be available. Configure an embedder to enable semantic search.`,
107 );
108 }
109 }
110
111 // Skills are stored as entity IDs in the workspace config.
112 // When a versioned skill source is provided (from agent-level resolution),
113 // it takes precedence — skills are served from the blob store.
114 // Otherwise, skill entity IDs are passed as paths for filesystem-based discovery.
115 if (options?.skillSource) {
116 config.skillSource = options.skillSource;
117 // When using a versioned source, the skills resolver points to the root ('.')
118 // since the CompositeVersionedSkillSource mounts each skill as a subdirectory.
119 config.skills = ['.'];
120 } else if (snapshot.skills && snapshot.skills.length > 0) {
121 config.skills = snapshot.skills;
122 }
123
124 // Workspace tool configuration maps directly
125 if (snapshot.tools) {
126 config.tools = snapshot.tools;
127 }
128
129 if (snapshot.autoSync !== undefined) {
130 config.autoSync = snapshot.autoSync;
131 }
132
133 if (snapshot.operationTimeout !== undefined) {
134 config.operationTimeout = snapshot.operationTimeout;
135 }
136
137 return new Workspace(config);
138 }
139
140 /**
141 * Resolve a stored filesystem config to a runtime WorkspaceFilesystem instance.
142 * Looks up the provider by ID in the editor's registry (which includes built-in providers).
143 */
144 private async resolveFilesystem(fsConfig: StorageFilesystemConfig): Promise<WorkspaceFilesystem> {
145 const provider = this.editor.__filesystems.get(fsConfig.provider);
146 if (!provider) {
147 throw new Error(
148 `Filesystem provider "${fsConfig.provider}" is not registered. ` +
149 `Register it via new MastraEditor({ filesystems: [yourProvider] })`,
150 );
Script executed:
rg -n "_cache|CrudEditorNamespace" packages/editor/src/namespaces/base.ts -A 5 | head -80
Repository: mastra-ai/mastra
Exit code: 0
stdout:
10: * to a generic CRUD interface. Each CrudEditorNamespace subclass implements
11- * this to map its storage domain's methods.
12- */
13-export interface StorageAdapter<
14- TCreateInput extends { id: string },
15- TUpdateInput extends { id: string },
--
66:export abstract class CrudEditorNamespace<
67- TCreateInput extends { id: string },
68- TUpdateInput extends { id: string },
69- TListInput,
70- TListOutput,
71- TListResolvedOutput,
--
75: protected _cache = new Map<string, THydrated>();
76-
77- /**
78- * Each subclass must provide a storage adapter that maps
79- * entity-specific storage method names to the generic interface.
80- */
--
111: this._cache.set(input.id, hydrated);
112- return hydrated;
113- }
114-
115- async getById(id: string, options?: GetByIdOptions): Promise<THydrated | null> {
116- this.ensureRegistered();
--
121: const cached = this._cache.get(id);
122- if (cached) {
123- this.logger?.debug(`[getById] Cache hit for "${id}"`);
124- return cached;
125- }
126- }
--
137: this._cache.set(id, hydrated);
138- }
139- return hydrated;
140- }
141-
142- async update(input: TUpdateInput): Promise<THydrated> {
--
146: this._cache.delete(input.id);
147- this.onCacheEvict(input.id);
148- const resolved = await adapter.getByIdResolved(input.id);
149- if (!resolved) {
150- throw new Error(`Failed to resolve entity ${input.id} after update`);
151- }
--
153: this._cache.set(input.id, hydrated);
154- return hydrated;
155- }
156-
157- async delete(id: string): Promise<void> {
158- this.ensureRegistered();
--
161: this._cache.delete(id);
162- this.onCacheEvict(id);
163- }
164-
165- async list(args?: TListInput): Promise<TListOutput> {
166- this.ensureRegistered();
--
183: this._cache.delete(id);
184- this.onCacheEvict(id);
185- this.logger?.debug(`[clearCache] Cleared cache for "${id}"`);
186- } else {
187: for (const cachedId of Array.from(this._cache.keys())) {
188- this.onCacheEvict(cachedId);
189- }
190: this._cache.clear();
191- this.logger?.debug('[clearCache] Cleared all cached entities');
192- }
193- }
194-}
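The excerpt above shows the cache discipline the CRUD namespace follows: updates and deletes evict before re-reading, and every eviction funnels through an `onCacheEvict` hook. A standalone sketch of that pattern (class and method names here are illustrative, not the actual CrudEditorNamespace):

```typescript
// Minimal cache-with-eviction-hook pattern, mirroring the excerpt above.
// SimpleCache/onEvict are illustrative names, not Mastra APIs.
class SimpleCache<T> {
  private cache = new Map<string, T>();
  readonly evictions: string[] = [];

  set(id: string, value: T): void {
    this.cache.set(id, value);
  }

  get(id: string): T | undefined {
    return this.cache.get(id);
  }

  // Update path: evict the stale entry (firing the hook) before re-caching.
  update(id: string, value: T): void {
    this.cache.delete(id);
    this.onEvict(id);
    this.cache.set(id, value);
  }

  // Clear one entry, or all entries — every removal goes through the hook.
  clear(id?: string): void {
    if (id !== undefined) {
      this.cache.delete(id);
      this.onEvict(id);
      return;
    }
    for (const cachedId of Array.from(this.cache.keys())) {
      this.onEvict(cachedId);
    }
    this.cache.clear();
  }

  protected onEvict(id: string): void {
    // Subclasses would tear down hydrated resources here.
    this.evictions.push(id);
  }
}
```

Routing all removals through one hook is what lets subclasses release hydrated resources (sandboxes, filesystems) without scattering teardown logic across every CRUD method.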
Script executed:
cat -n packages/editor/src/namespaces/agent.ts | sed -n '460,480p'
Repository: mastra-ai/mastra
Exit code: 0
stdout:
460 // Workspace: resolve stored workspace reference (ID or inline) to a runtime Workspace.
461 // When conditional, wrapped in a DynamicArgument function resolved at request time.
462 const workspace = hasConditionalWorkspace
463 ? async ({ requestContext }: { requestContext: RequestContext }) => {
464 const ctx = requestContext.toJSON();
465 const resolvedRef = this.accumulateObjectVariants(
466 storedAgent.workspace as StorageConditionalVariant<StorageWorkspaceRef>[],
467 ctx,
468 );
469 return this.resolveStoredWorkspace(resolvedRef, skillSource);
470 }
471 : await this.resolveStoredWorkspace(storedAgent.workspace as StorageWorkspaceRef | undefined, skillSource);
472
473 const skillsFormat = storedAgent.skillsFormat;
474
475 // Cast to `any` to avoid TS2589 "excessively deep" errors caused by the
476 // complex generic inference of Agent<TTools, TRequestContext, …>. The
477 // individual field values have already been validated above.
478 const agent = new Agent({
479 id: storedAgent.id,
480 name: storedAgent.name,
Script executed:
rg -n "register|_cache" packages/editor/src/namespaces/workspace.ts
Repository: mastra-ai/mastra
Exit code: 0
stdout:
29: // Workspaces are not registered in Mastra runtime from the CRUD namespace.
85: // Resolve vector store from Mastra's registered vectors
148: `Filesystem provider "${fsConfig.provider}" is not registered. ` +
164: `Sandbox provider "${sandboxConfig.provider}" is not registered. ` +
Script executed:
rg -n "class Workspace" packages/core/src/workspace --type=ts -A 30 | head -100
Repository: mastra-ai/mastra
Exit code: 0
stdout:
packages/core/src/workspace/workspace.ts:359:export class Workspace<
packages/core/src/workspace/workspace.ts-360- TFilesystem extends WorkspaceFilesystem | undefined = WorkspaceFilesystem | undefined,
packages/core/src/workspace/workspace.ts-361- TSandbox extends WorkspaceSandbox | undefined = WorkspaceSandbox | undefined,
packages/core/src/workspace/workspace.ts-362- TMounts extends Record<string, WorkspaceFilesystem> | undefined = undefined,
packages/core/src/workspace/workspace.ts-363-> {
packages/core/src/workspace/workspace.ts-364- readonly id: string;
packages/core/src/workspace/workspace.ts-365- readonly name: string;
packages/core/src/workspace/workspace.ts-366- readonly createdAt: Date;
packages/core/src/workspace/workspace.ts-367- lastAccessedAt: Date;
packages/core/src/workspace/workspace.ts-368-
packages/core/src/workspace/workspace.ts-369- private _status: WorkspaceStatus = 'pending';
packages/core/src/workspace/workspace.ts-370- private readonly _fs?: WorkspaceFilesystem;
packages/core/src/workspace/workspace.ts-371- private readonly _sandbox?: WorkspaceSandbox;
packages/core/src/workspace/workspace.ts-372- private readonly _config: WorkspaceConfig<TFilesystem, TSandbox, TMounts>;
packages/core/src/workspace/workspace.ts-373- private readonly _searchEngine?: SearchEngine;
packages/core/src/workspace/workspace.ts-374- private _skills?: WorkspaceSkills;
packages/core/src/workspace/workspace.ts-375-
packages/core/src/workspace/workspace.ts-376- constructor(config: WorkspaceConfig<TFilesystem, TSandbox, TMounts>) {
packages/core/src/workspace/workspace.ts-377- this.id = config.id ?? this.generateId();
packages/core/src/workspace/workspace.ts-378- this.name = config.name ?? `workspace-${this.id.slice(0, 8)}`;
packages/core/src/workspace/workspace.ts-379- this.createdAt = new Date();
packages/core/src/workspace/workspace.ts-380- this.lastAccessedAt = new Date();
packages/core/src/workspace/workspace.ts-381-
packages/core/src/workspace/workspace.ts-382- this._config = config;
packages/core/src/workspace/workspace.ts-383- this._sandbox = config.sandbox;
packages/core/src/workspace/workspace.ts-384-
packages/core/src/workspace/workspace.ts-385- // Setup mounts - creates CompositeFilesystem and informs sandbox
packages/core/src/workspace/workspace.ts-386- if (config.mounts && Object.keys(config.mounts).length > 0) {
packages/core/src/workspace/workspace.ts-387- // Validate: can't use both filesystem and mounts
packages/core/src/workspace/workspace.ts-388- if (config.filesystem) {
packages/core/src/workspace/workspace.ts-389- throw new WorkspaceError('Cannot use both "filesystem" and "mounts"', 'INVALID_CONFIG');
--
packages/core/src/workspace/skills/workspace-skills.ts:73:export class WorkspaceSkillsImpl implements WorkspaceSkills {
packages/core/src/workspace/skills/workspace-skills.ts-74- readonly #source: SkillSourceInterface;
packages/core/src/workspace/skills/workspace-skills.ts-75- readonly #skillsResolver: SkillsResolver;
packages/core/src/workspace/skills/workspace-skills.ts-76- readonly #searchEngine?: SkillSearchEngine;
packages/core/src/workspace/skills/workspace-skills.ts-77- readonly #validateOnLoad: boolean;
packages/core/src/workspace/skills/workspace-skills.ts-78-
packages/core/src/workspace/skills/workspace-skills.ts-79- /** Map of skill name -> full skill data */
packages/core/src/workspace/skills/workspace-skills.ts-80- #skills: Map<string, InternalSkill> = new Map();
packages/core/src/workspace/skills/workspace-skills.ts-81-
packages/core/src/workspace/skills/workspace-skills.ts-82- /** Whether skills have been discovered */
packages/core/src/workspace/skills/workspace-skills.ts-83- #initialized = false;
packages/core/src/workspace/skills/workspace-skills.ts-84-
packages/core/src/workspace/skills/workspace-skills.ts-85- /** Promise for ongoing initialization (prevents concurrent discovery) */
packages/core/src/workspace/skills/workspace-skills.ts-86- #initPromise: Promise<void> | null = null;
packages/core/src/workspace/skills/workspace-skills.ts-87-
packages/core/src/workspace/skills/workspace-skills.ts-88- /** Timestamp of last skills discovery (for staleness check) */
packages/core/src/workspace/skills/workspace-skills.ts-89- #lastDiscoveryTime = 0;
packages/core/src/workspace/skills/workspace-skills.ts-90-
packages/core/src/workspace/skills/workspace-skills.ts-91- /** Currently resolved skills paths (used to detect changes) */
packages/core/src/workspace/skills/workspace-skills.ts-92- #resolvedPaths: string[] = [];
packages/core/src/workspace/skills/workspace-skills.ts-93-
packages/core/src/workspace/skills/workspace-skills.ts-94- /** Cached glob-resolved directories and per-pattern resolve timestamps */
packages/core/src/workspace/skills/workspace-skills.ts-95- #globDirCache: Map<string, string[]> = new Map();
packages/core/src/workspace/skills/workspace-skills.ts-96- #globResolveTimes: Map<string, number> = new Map();
packages/core/src/workspace/skills/workspace-skills.ts-97- static readonly GLOB_RESOLVE_INTERVAL = 5_000; // Re-walk glob dirs every 5s
packages/core/src/workspace/skills/workspace-skills.ts-98-
packages/core/src/workspace/skills/workspace-skills.ts-99- constructor(config: WorkspaceSkillsImplConfig) {
packages/core/src/workspace/skills/workspace-skills.ts-100- this.#source = config.source;
packages/core/src/workspace/skills/workspace-skills.ts-101- this.#skillsResolver = config.skills;
packages/core/src/workspace/skills/workspace-skills.ts-102- this.#searchEngine = config.searchEngine;
packages/core/src/workspace/skills/workspace-skills.ts-103- this.#validateOnLoad = config.validateOnLoad ?? true;
--
packages/core/src/workspace/errors.ts:13:export class WorkspaceError extends Error {
packages/core/src/workspace/errors.ts-14- constructor(
packages/core/src/workspace/errors.ts-15- message: string,
packages/core/src/workspace/errors.ts-16- public readonly code: string,
packages/core/src/workspace/errors.ts-17- public readonly workspaceId?: string,
packages/core/src/workspace/errors.ts-18- ) {
packages/core/src/workspace/errors.ts-19- super(message);
packages/core/src/workspace/errors.ts-20- this.name = 'WorkspaceError';
packages/core/src/workspace/errors.ts-21- }
packages/core/src/workspace/errors.ts-22-}
packages/core/src/workspace/errors.ts-23-
packages/core/src/workspace/errors.ts-24-// =============================================================================
packages/core/src/workspace/errors.ts-25-// Availability Errors
packages/core/src/workspace/errors.ts-26-// =============================================================================
packages/core/src/workspace/errors.ts-27-
packages/core/src/workspace/errors.ts-28-export class FilesystemNotAvailableError extends WorkspaceError {
packages/core/src/workspace/errors.ts-29- constructor() {
packages/core/src/workspace/errors.ts-30- super('Workspace does not have a filesystem configured', 'NO_FILESYSTEM');
packages/core/src/workspace/errors.ts-31- this.name = 'FilesystemNotAvailableError';
packages/core/src/workspace/errors.ts-32- }
packages/core/src/workspace/errors.ts-33-}
packages/core/src/workspace/errors.ts-34-
packages/core/src/workspace/errors.ts-35-export class SandboxNotAvailableError extends WorkspaceError {
packages/core/src/workspace/errors.ts-36- constructor(message?: string) {
packages/core/src/workspace/errors.ts-37- super(message ?? 'Workspace does not have a sandbox configured', 'NO_SANDBOX');
packages/core/src/workspace/errors.ts-38- this.name = 'SandboxNotAvailableError';
packages/core/src/workspace/errors.ts-39- }
packages/core/src/workspace/errors.ts-40-}
packages/core/src/workspace/errors.ts-41-
packages/core/src/workspace/errors.ts-42-export class SandboxFeatureNotSupportedError extends WorkspaceError {
packages/core/src/workspace/errors.ts-43- constructor(feature: 'executeCommand' | 'installPackage') {
--
packages/core/src/workspace/errors.ts:60:export class WorkspaceNotReadyError extends WorkspaceError {
packages/core/src/workspace/errors.ts-61- constructor(workspaceId: string, status: WorkspaceStatus) {
packages/core/src/workspace/errors.ts-62- super(`Workspace is not ready (status: ${status})`, 'NOT_READY', workspaceId);
packages/core/src/workspace/errors.ts-63- this.name = 'WorkspaceNotReadyError';
Script executed:
rg -n "workspaces.*cache|workspace.*Map|workspace.*registry" packages/editor/src --type=ts -B 2 -A 2
Repository: mastra-ai/mastra
Exit code: 1
Script executed:
rg -n "this\.workspace" packages/editor/src/namespaces/agent.ts | head -20
Repository: mastra-ai/mastra
Exit code: 0
Script executed:
rg -n "workspace" packages/core/src/agent/agent.ts | head -30
Repository: mastra-ai/mastra
Exit code: 0
stdout:
60:import type { Workspace } from '../workspace';
61:import { createWorkspaceTools } from '../workspace';
62:import type { SkillFormat } from '../workspace/skills';
156: #workspace?: DynamicArgument<Workspace | undefined>;
283: if (config.workspace) {
284: this.#workspace = config.workspace;
312: * Gets the skills processors to add to input processors when workspace has skills.
319: // Check if workspace has skills configured
320: const workspace = await this.getWorkspace({ requestContext: requestContext || new RequestContext() });
321: if (!workspace?.skills) {
333: // Create new SkillsProcessor using workspace
334: return [new SkillsProcessor({ workspace, format: this.#skillsFormat })];
834: * Checks if this agent has its own workspace configured.
839: * const workspace = await agent.getWorkspace();
844: return Boolean(this.#workspace);
848: * Gets the workspace instance for this agent, resolving function-based workspace if necessary.
849: * The workspace provides filesystem and sandbox capabilities for file operations and code execution.
853: * const workspace = await agent.getWorkspace();
854: * if (workspace) {
855: * await workspace.writeFile('/data.json', JSON.stringify(data));
856: * const result = await workspace.executeCode('console.log("Hello")');
863: // If agent has its own workspace configured, use it
864: if (this.#workspace) {
865: if (typeof this.#workspace !== 'function') {
866: return this.#workspace;
869: const result = this.#workspace({ requestContext, mastra: this.#mastra });
876: // Propagate logger to factory-resolved workspace
879: // Auto-register dynamically created workspace with Mastra for lookup via listWorkspaces()/getWorkspaceById()
887: // Fall back to Mastra's global workspace
1565: // Propagate logger to workspace if it's a direct instance (not a factory function)
Script executed:
rg -n "registerWorkspace|listWorkspaces|getWorkspaceById" packages/editor/src --type=ts -B 2 -A 5
Repository: mastra-ai/mastra
Exit code: 1
Script executed:
rg -n "registerWorkspace|listWorkspaces|getWorkspaceById" packages/core/src --type=ts -B 2 -A 5 | head -80
Repository: mastra-ai/mastra
Exit code: 0
stdout:
packages/core/src/storage/types.ts-1446-/**
packages/core/src/storage/types.ts-1447- * Resolved workspace type that combines the thin record with version snapshot config.
packages/core/src/storage/types.ts:1448: * Returned by getWorkspaceByIdResolved and listWorkspacesResolved.
packages/core/src/storage/types.ts-1449- */
packages/core/src/storage/types.ts-1450-export type StorageResolvedWorkspaceType = StorageWorkspaceType & StorageWorkspaceSnapshotType;
packages/core/src/storage/types.ts-1451-
packages/core/src/storage/types.ts-1452-/**
packages/core/src/storage/types.ts-1453- * Input for creating a new workspace. Flat union of thin record fields
--
packages/core/src/mastra/workspace-registration.test.ts-42- }
packages/core/src/mastra/workspace-registration.test.ts-43- const start = Date.now();
packages/core/src/mastra/workspace-registration.test.ts:44: while (!mastra.listWorkspaces()[id]) {
packages/core/src/mastra/workspace-registration.test.ts-45- if (Date.now() - start > timeout) {
packages/core/src/mastra/workspace-registration.test.ts-46- throw new Error(`Workspace ${id} not registered within ${timeout}ms`);
packages/core/src/mastra/workspace-registration.test.ts-47- }
packages/core/src/mastra/workspace-registration.test.ts-48- await new Promise(r => setTimeout(r, 5));
packages/core/src/mastra/workspace-registration.test.ts-49- }
--
packages/core/src/mastra/workspace-registration.test.ts-82-
packages/core/src/mastra/workspace-registration.test.ts-83- // Should also be in the registry
packages/core/src/mastra/workspace-registration.test.ts:84: const registeredWorkspace = mastra.getWorkspaceById('global-workspace');
packages/core/src/mastra/workspace-registration.test.ts-85- expect(registeredWorkspace).toBe(workspace);
packages/core/src/mastra/workspace-registration.test.ts-86- });
packages/core/src/mastra/workspace-registration.test.ts-87-
packages/core/src/mastra/workspace-registration.test.ts:88: it('should list global workspace in listWorkspaces', () => {
packages/core/src/mastra/workspace-registration.test.ts-89- const workspace = createWorkspace('global-workspace');
packages/core/src/mastra/workspace-registration.test.ts-90-
packages/core/src/mastra/workspace-registration.test.ts-91- const mastra = new Mastra({
packages/core/src/mastra/workspace-registration.test.ts-92- logger: false,
packages/core/src/mastra/workspace-registration.test.ts-93- workspace,
packages/core/src/mastra/workspace-registration.test.ts-94- });
packages/core/src/mastra/workspace-registration.test.ts-95-
packages/core/src/mastra/workspace-registration.test.ts:96: const workspaces = mastra.listWorkspaces();
packages/core/src/mastra/workspace-registration.test.ts-97- expect(Object.keys(workspaces)).toHaveLength(1);
packages/core/src/mastra/workspace-registration.test.ts-98- expect(workspaces['global-workspace']).toBe(workspace);
packages/core/src/mastra/workspace-registration.test.ts-99- });
packages/core/src/mastra/workspace-registration.test.ts-100- });
packages/core/src/mastra/workspace-registration.test.ts-101-
--
packages/core/src/mastra/workspace-registration.test.ts-107- mastra.addWorkspace(workspace);
packages/core/src/mastra/workspace-registration.test.ts-108-
packages/core/src/mastra/workspace-registration.test.ts:109: const retrieved = mastra.getWorkspaceById('dynamic-workspace');
packages/core/src/mastra/workspace-registration.test.ts-110- expect(retrieved).toBe(workspace);
packages/core/src/mastra/workspace-registration.test.ts-111- });
packages/core/src/mastra/workspace-registration.test.ts-112-
packages/core/src/mastra/workspace-registration.test.ts-113- it('should use custom key when provided', () => {
packages/core/src/mastra/workspace-registration.test.ts-114- const mastra = new Mastra({ logger: false });
--
packages/core/src/mastra/workspace-registration.test.ts-117- mastra.addWorkspace(workspace, 'custom-key');
packages/core/src/mastra/workspace-registration.test.ts-118-
packages/core/src/mastra/workspace-registration.test.ts:119: const retrieved = mastra.getWorkspaceById('custom-key');
packages/core/src/mastra/workspace-registration.test.ts-120- expect(retrieved).toBe(workspace);
packages/core/src/mastra/workspace-registration.test.ts-121- });
packages/core/src/mastra/workspace-registration.test.ts-122-
packages/core/src/mastra/workspace-registration.test.ts-123- it('should skip duplicate workspaces', () => {
packages/core/src/mastra/workspace-registration.test.ts-124- const mastra = new Mastra({ logger: false });
--
packages/core/src/mastra/workspace-registration.test.ts-131-
packages/core/src/mastra/workspace-registration.test.ts-132- // Should keep the first one
packages/core/src/mastra/workspace-registration.test.ts:133: const retrieved = mastra.getWorkspaceById('duplicate-id');
packages/core/src/mastra/workspace-registration.test.ts-134- expect(retrieved.name).toBe('First');
packages/core/src/mastra/workspace-registration.test.ts-135- });
packages/core/src/mastra/workspace-registration.test.ts-136-
packages/core/src/mastra/workspace-registration.test.ts-137- it('should throw when adding undefined workspace', () => {
packages/core/src/mastra/workspace-registration.test.ts-138- const mastra = new Mastra({ logger: false });
--
packages/core/src/mastra/workspace-registration.test.ts-148- });
packages/core/src/mastra/workspace-registration.test.ts-149-
packages/core/src/mastra/workspace-registration.test.ts:150: describe('getWorkspaceById', () => {
packages/core/src/mastra/workspace-registration.test.ts-151- it('should return workspace by id', () => {
packages/core/src/mastra/workspace-registration.test.ts-152- const workspace = createWorkspace('test-workspace');
packages/core/src/mastra/workspace-registration.test.ts-153-
packages/core/src/mastra/workspace-registration.test.ts-154- const mastra = new Mastra({
packages/core/src/mastra/workspace-registration.test.ts-155- logger: false,
--
packages/core/src/mastra/workspace-registration.test.ts-157- });
packages/core/src/mastra/workspace-registration.test.ts-158-
packages/core/src/mastra/workspace-registration.test.ts:159: const retrieved = mastra.getWorkspaceById('test-workspace');
packages/core/src/mastra/workspace-registration.test.ts-160- expect(retrieved).toBe(workspace);
packages/core/src/mastra/workspace-registration.test.ts-161- });
Script executed:
rg -n "addWorkspace|getWorkspaceById|listWorkspaces" packages/core/src/mastra/mastra.ts -B 2 -A 10
Repository: mastra-ai/mastra
Exit code: 2
stderr:
packages/core/src/mastra/mastra.ts: No such file or directory (os error 2)
Script executed:
find packages/core/src/mastra -name "*.ts" -type f | head -20
Repository: mastra-ai/mastra
Exit code: 0
stdout:
packages/core/src/mastra/config-spread.test.ts
packages/core/src/mastra/custom-gateway-integration.test.ts
packages/core/src/mastra/hook.test.ts
packages/core/src/mastra/hooks.ts
packages/core/src/mastra/idgenerator.test.ts
packages/core/src/mastra/index.ts
packages/core/src/mastra/mastra-workflow-types.test-d.ts
packages/core/src/mastra/processor-workflow-registration.test.ts
packages/core/src/mastra/remove-agent.test.ts
packages/core/src/mastra/scorer-registration.test.ts
packages/core/src/mastra/server-app-access.test.ts
packages/core/src/mastra/workspace-registration.test.ts
Script executed:
rg -n "addWorkspace|getWorkspaceById|listWorkspaces" packages/core/src/mastra/index.ts -B 2 -A 10
Repository: mastra-ai/mastra
Exit code: 0
stdout:
633- this.#workspace = config.workspace;
634- // Also register in the workspaces registry for direct lookup by ID
635: this.addWorkspace(config.workspace);
636- }
637-
638- if (config?.scorers) {
639- Object.entries(config.scorers).forEach(([key, scorer]) => {
640- if (scorer != null) {
641- this.addScorer(scorer, key);
642- }
643- });
644- }
645-
--
903- .then(workspace => {
904- if (workspace) {
905: this.addWorkspace(workspace);
906- }
907- })
908- .catch(err => {
909- this.#logger?.debug(`Failed to register workspace for agent ${agentKey}:`, err);
910- });
911- }
912-
913- // Register scorers from the agent to the Mastra instance
914- // This makes agent-level scorers discoverable via mastra.getScorer()/getScorerById()
915- mastraAgent
--
1195- * @example
1196- * ```typescript
1197: * const workspace = mastra.getWorkspaceById('workspace-123');
1198- * const files = await workspace.filesystem.readdir('/');
1199- * ```
1200- */
1201: public getWorkspaceById(id: string): Workspace {
1202- const workspace = this.#workspaces[id];
1203- if (!workspace) {
1204- const error = new MastraError({
1205- id: 'MASTRA_GET_WORKSPACE_BY_ID_NOT_FOUND',
1206- domain: ErrorDomain.MASTRA,
1207- category: ErrorCategory.USER,
1208- text: `Workspace with id ${id} not found`,
1209- details: {
1210- status: 404,
1211- workspaceId: id,
--
1224- * @example
1225- * ```typescript
1226: * const workspaces = mastra.listWorkspaces();
1227- * for (const [id, workspace] of Object.entries(workspaces)) {
1228- * console.log(`Workspace ${id}: ${workspace.name}`);
1229- * }
1230- * ```
1231- */
1232: public listWorkspaces(): Record<string, Workspace> {
1233- return { ...this.#workspaces };
1234- }
1235-
1236- /**
1237- * Adds a new workspace to the Mastra instance.
1238- *
1239- * This method allows dynamic registration of workspaces after the Mastra instance
1240- * has been created. Workspaces are keyed by their ID.
1241- *
1242- * @example
--
1247- * filesystem: new LocalFilesystem({ rootPath: './workspace' })
1248- * });
1249: * mastra.addWorkspace(workspace);
1250- * ```
1251- */
1252: public addWorkspace(workspace: Workspace, key?: string): void {
1253- if (!workspace) {
1254- throw createUndefinedPrimitiveError('workspace', workspace, key);
1255- }
1256- const workspaceKey = key || workspace.id;
1257- if (this.#workspaces[workspaceKey]) {
1258- const logger = this.getLogger();
1259- logger.debug(`Workspace with key ${workspaceKey} already exists. Skipping addition.`);
1260- return;
1261- }
1262-
Script executed:
rg -n "#workspaces" packages/core/src/mastra/index.ts -B 2 -A 5 | head -100
Repository: mastra-ai/mastra
Exit code: 0
stdout:
325- #memory?: TMemory;
326- #workspace?: Workspace;
327: #workspaces: Record<string, Workspace> = {};
328- #server?: ServerConfig;
329- #serverAdapter?: MastraServerBase;
330- #mcpServers?: TMCPServers;
331- #bundler?: BundlerConfig;
332- #idGenerator?: MastraIdGenerator;
--
1200- */
1201- public getWorkspaceById(id: string): Workspace {
1202: const workspace = this.#workspaces[id];
1203- if (!workspace) {
1204- const error = new MastraError({
1205- id: 'MASTRA_GET_WORKSPACE_BY_ID_NOT_FOUND',
1206- domain: ErrorDomain.MASTRA,
1207- category: ErrorCategory.USER,
--
1210- status: 404,
1211- workspaceId: id,
1212: availableIds: Object.keys(this.#workspaces).join(', '),
1213- },
1214- });
1215- this.#logger?.trackException(error);
1216- throw error;
1217- }
--
1231- */
1232- public listWorkspaces(): Record<string, Workspace> {
1233: return { ...this.#workspaces };
1234- }
1235-
1236- /**
1237- * Adds a new workspace to the Mastra instance.
1238- *
--
1255- }
1256- const workspaceKey = key || workspace.id;
1257: if (this.#workspaces[workspaceKey]) {
1258- const logger = this.getLogger();
1259- logger.debug(`Workspace with key ${workspaceKey} already exists. Skipping addition.`);
1260- return;
1261- }
1262-
1263: this.#workspaces[workspaceKey] = workspace;
1264- }
1265-
1266- /**
1267- * Retrieves a registered workflow by its ID.
1268- *
inline-${Date.now()} as workspace ID leaks workspace instances in the conditional-workspace path.
When hasConditionalWorkspace is true, resolveStoredWorkspace runs inside an async closure at line 469 on every request. Each invocation with a unique timestamp ID creates a new Workspace instance and registers it with the Mastra instance via addWorkspace() (line 905), accumulating indefinitely in the #workspaces registry.
Use a stable, deterministic ID for inline workspaces — e.g., a content hash of the serialized config — to ensure the same workspace is reused across requests.
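A minimal sketch of the suggested fix — deriving the inline workspace ID from a content hash of the serialized config rather than a timestamp, so repeated requests with the same inline config reuse one ID. The helper names (stableStringify, inlineWorkspaceId) are illustrative, not from the Mastra codebase:

```typescript
import { createHash } from 'node:crypto';

// Serialize with sorted keys so logically-equal configs produce identical strings.
function stableStringify(value: unknown): string {
  if (value === null || typeof value !== 'object') return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(stableStringify).join(',')}]`;
  const obj = value as Record<string, unknown>;
  const entries = Object.keys(obj)
    .sort()
    .map(key => `${JSON.stringify(key)}:${stableStringify(obj[key])}`);
  return `{${entries.join(',')}}`;
}

// Stable, deterministic ID: the same inline config always hashes to the same ID,
// so per-request resolution re-registers the same key instead of leaking new ones.
function inlineWorkspaceId(config: unknown): string {
  const digest = createHash('sha256').update(stableStringify(config)).digest('hex');
  return `inline-${digest.slice(0, 16)}`;
}
```

Because addWorkspace skips keys that already exist, a content-derived ID turns the per-request registration into an idempotent no-op after the first request.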
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/editor/src/namespaces/agent.ts` around lines 1115 - 1118, The inline
workspace path currently uses a unique timestamp ID (`inline-${Date.now()}`)
when calling hydrateSnapshotToWorkspace, which causes resolveStoredWorkspace
(called per request when hasConditionalWorkspace is true) to create and register
new Workspace instances via addWorkspace into the `#workspaces` registry; replace
the timestamp with a stable deterministic ID (e.g., a hash of the serialized
workspaceRef.config) so repeated inline configs reuse the same workspace
instance — compute the content hash of workspaceRef.config and pass that as the
workspace ID to hydrateSnapshotToWorkspace to prevent unbounded growth of
`#workspaces`.
Actionable comments posted: 1
🧹 Nitpick comments (1)
workspaces/s3/src/blob-store/index.test.ts (1)
316-340: No test for paginated dangerouslyClearAll

The implementation in index.ts has a do…while (continuationToken) loop that handles truncated S3 list responses. The mock always returns IsTruncated: false, so the multi-page path (setting NextContinuationToken and looping) is never exercised.

✅ Suggested additional test case

it('should handle paginated listing and delete all pages', async () => {
  const store = createStore();
  // First page
  mockBucket.set('mastra_skill_blobs/hash1', { Body: 'a', Metadata: {}, ContentType: 'text/plain' });
  mockBucket.set('mastra_skill_blobs/hash2', { Body: 'b', Metadata: {}, ContentType: 'text/plain' });

  let callCount = 0;
  const originalImpl = mockSendFn.getMockImplementation()!;
  mockSendFn.mockImplementation((cmd: any) => {
    if (cmd._type === 'List' && callCount++ === 0) {
      // Simulate a truncated first page returning only hash1
      return Promise.resolve({
        Contents: [{ Key: 'mastra_skill_blobs/hash1' }],
        IsTruncated: true,
        NextContinuationToken: 'page2token',
      });
    }
    return originalImpl(cmd);
  });

  await store.dangerouslyClearAll();
  expect(mockBucket.size).toBe(0);
});

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/s3/src/blob-store/index.test.ts` around lines 316 - 340, Add a unit test that exercises the paginated branch of dangerouslyClearAll by mocking the S3 list response to return IsTruncated: true on the first List call and a NextContinuationToken, then ensure subsequent List calls return the remaining keys; update the existing test file to wrap this behavior by using createStore() and mockBucket, override mockSendFn to return a truncated first page (including NextContinuationToken) on the first List command and delegate to the original implementation thereafter, call store.dangerouslyClearAll(), and assert mockBucket.size is 0 to verify all pages were processed and deleted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workspaces/s3/src/blob-store/index.test.ts`:
- Around line 205-218: The computed size in S3BlobStore.get() incorrectly uses
Number(metadata.size) ?? fallback, which yields NaN and bypasses the
nullish-coalescing fallback; update the size calculation in the get method of
workspaces/s3/src/blob-store/index.ts (look for S3BlobStore.get or the get(...)
function) to first check metadata.size for != null before calling Number(), or
call Number(...) and then guard with Number.isFinite/Number.isNaN to use
Buffer.byteLength(body, 'utf-8') when the numeric conversion is invalid,
ensuring the fallback is used whenever metadata.size is missing or not a valid
number.
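The NaN pitfall in the finding above is easy to demonstrate: `??` only falls back on null or undefined, and `Number(undefined)` is NaN, which is neither. A minimal repro with a hypothetical fallback size of 1024:

```typescript
// `??` fires only for null/undefined — NaN slips through it.
const metadataSize: string | undefined = undefined;

const broken = Number(metadataSize) ?? 1024; // NaN: Number(undefined) is NaN, not nullish

// Option 1: check for null/undefined before converting.
const guarded = metadataSize != null ? Number(metadataSize) : 1024; // 1024

// Option 2: convert first, then reject NaN (and Infinity) with Number.isFinite.
const parsed = Number(metadataSize);
const fixed = Number.isFinite(parsed) ? parsed : 1024; // 1024
```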
---
Duplicate comments:
In `@packages/editor/src/namespaces/workspace.ts`:
- Around line 71-77: The loop over snapshot.mounts performs sequential awaits
causing slow startup; parallelize resolution by mapping
Object.entries(snapshot.mounts) to an array of promises that call
this.resolveFilesystem(fsConfig) and then await Promise.all, then reconstruct
the mounts Record and assign to config.mounts so resolveFilesystem is executed
concurrently for all mounts. Reference: snapshot.mounts, this.resolveFilesystem,
mounts, config.mounts.
- Around line 187-188: The code uses the non-null assertion this.mastra! before
calling getStorage, which can throw a TypeError if this.mastra is undefined;
change it to explicitly guard this.mastra first (e.g., capture const mastra =
this.mastra and throw if !mastra) and then call mastra.getStorage(), or use
optional chaining (this.mastra?.getStorage()) followed by the existing if
(!storage) throw, referencing the this.mastra property and getStorage method so
the storage guard is effective.
- Around line 150-175: Both resolver methods are declared async but don't use
await and resolveSandbox lacks an explicit return type; either remove async if
provider.createFilesystem/createSandbox are synchronous, or keep async and add
explicit awaits and types. Update resolveFilesystem (method resolveFilesystem)
to either remove the async keyword or add "await
provider.createFilesystem(config)" so the intent is clear and keep its return
type Promise<WorkspaceFilesystem>. Update resolveSandbox (method resolveSandbox)
to include an explicit return type (e.g., Promise<WorkspaceSandbox>), and either
remove async if createSandbox is synchronous or use "return await
provider.createSandbox(sandboxConfig.config)" to properly await the factory;
reference provider.createFilesystem and provider.createSandbox when making the
change.
---
Nitpick comments:
In `@workspaces/s3/src/blob-store/index.test.ts`:
- Around line 316-340: Add a unit test that exercises the paginated branch of
dangerouslyClearAll by mocking the S3 list response to return IsTruncated: true
on the first List call and a NextContinuationToken, then ensure subsequent List
calls return the remaining keys; update the existing test file to wrap this
behavior by using createStore() and mockBucket, override mockSendFn to return a
truncated first page (including NextContinuationToken) on the first List command
and delegate to the original implementation thereafter, call
store.dangerouslyClearAll(), and assert mockBucket.size is 0 to verify all pages
were processed and deleted.
Replace Record<string, any> with proper types for workspace/skill/blob tables in RecordTypes: - TABLE_WORKSPACES → StorageWorkspaceType - TABLE_WORKSPACE_VERSIONS → WorkspaceVersion - TABLE_SKILLS → StorageSkillType - TABLE_SKILL_VERSIONS → SkillVersion - TABLE_SKILL_BLOBS → StorageBlobEntry Co-Authored-By: Mastra Code <noreply@mastra.ai>
Actionable comments posted: 5
🧹 Nitpick comments (2)
workspaces/s3/src/blob-store/index.ts (2)
19-27: Mandatory credentials block IAM role / default credential provider chain
`accessKeyId` and `secretAccessKey` are required, so the class cannot be used on EC2/ECS/Lambda where credentials come from the instance/task role via IMDS. When both fields are absent from the `S3Client` config, AWS SDK v3 automatically walks the full default credential provider chain (environment variables → `~/.aws/credentials` → IMDS → container roles).

♻️ Suggested fix

```diff
 export interface S3BlobStoreOptions {
   bucket: string;
   region: string;
-  accessKeyId: string;
-  secretAccessKey: string;
+  /** Omit to use the SDK default credential provider chain (IAM roles, env vars, etc.) */
+  accessKeyId?: string;
+  secretAccessKey?: string;
   endpoint?: string;
   forcePathStyle?: boolean;
   prefix?: string;
 }
```

```diff
 private getClient(): S3Client {
   if (this._client) return this._client;
+  const credentials =
+    this.accessKeyId && this.secretAccessKey
+      ? { accessKeyId: this.accessKeyId, secretAccessKey: this.secretAccessKey }
+      : undefined;
   this._client = new S3Client({
     region: this.region,
-    credentials: {
-      accessKeyId: this.accessKeyId,
-      secretAccessKey: this.secretAccessKey,
-    },
+    ...(credentials ? { credentials } : {}),
     endpoint: this.endpoint,
     forcePathStyle: this.forcePathStyle,
   });
   return this._client;
 }
```

Also applies to: 113-121
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspaces/s3/src/blob-store/index.ts` around lines 19 - 27, The S3BlobStoreOptions interface currently requires accessKeyId and secretAccessKey which prevents using the AWS default credential provider chain (IMDS/container roles); make accessKeyId and secretAccessKey optional on S3BlobStoreOptions, and change the S3 client construction (in the S3BlobStore class/constructor or wherever new S3Client(...) is created) to only pass credentials when both values are provided (otherwise omit the credentials field so the AWS SDK v3 can use the default provider chain); apply the same optionality/conditional-passing fix for the duplicate block referenced around the S3BlobStore methods at the other location (lines ~113-121) so both places allow instance/task role credentials.
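The conditional-credentials pattern behind this suggestion can be illustrated in isolation. This is a hedged sketch: `FakeS3Config` and `buildConfig` are illustrative stand-ins, not the real `S3ClientConfig` or store code.

```typescript
// Only include a `credentials` field when both values are present; otherwise
// omit the key entirely so the AWS SDK default provider chain takes over.
// `FakeS3Config`/`buildConfig` are hypothetical, simplified shapes.
interface FakeS3Config {
  region: string;
  credentials?: { accessKeyId: string; secretAccessKey: string };
}

function buildConfig(region: string, accessKeyId?: string, secretAccessKey?: string): FakeS3Config {
  const credentials =
    accessKeyId && secretAccessKey ? { accessKeyId, secretAccessKey } : undefined;
  // Spreading a conditional object means the key is absent, not `undefined`.
  return { region, ...(credentials ? { credentials } : {}) };
}

console.log('credentials' in buildConfig('us-east-1')); // false
```

Omitting the key (rather than passing `credentials: undefined`) matters because some SDKs treat an explicitly `undefined` field differently from a missing one.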
172-172: `createdAt` falls back to "now" when S3 metadata is missing — consider `LastModified` instead

If `createdat` metadata is absent (e.g., blobs written externally or metadata stripped), the returned `createdAt` silently equals the time of retrieval, not the time of creation. `response.LastModified` from `GetObjectCommand` is a more reliable fallback because S3 sets it to the upload time.

💡 Suggested improvement

```diff
-      createdAt: metadata.createdat ? new Date(metadata.createdat) : new Date(),
+      createdAt: metadata.createdat
+        ? new Date(metadata.createdat)
+        : (response.LastModified ?? new Date()),
```
Verify each finding against the current code and only fix it if needed. In `@workspaces/s3/src/blob-store/index.ts` at line 172, The createdAt fallback currently uses new Date() when metadata.createdat is missing; change this to prefer the S3 object's response.LastModified value before falling back to now. Update the assignment that builds createdAt (where metadata.createdat is read) to parse metadata.createdat if present, otherwise use response.LastModified (converted to a Date) and only use new Date() as the final fallback; reference the metadata.createdat check and response.LastModified in your fix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/editor/src/namespaces/skill.ts`:
- Around line 66-69: The code calls this.editor.resolveBlobStore() and
immediately passes the result (blobStore) into publishSkillFromSource without
guarding for null/undefined; add a null-check after calling resolveBlobStore
(e.g., if (!blobStore) { throw or return an informative error) so
publishSkillFromSource is never invoked with a null blobStore. Locate the call
site around resolveBlobStore / blobStore and ensure you either early-return or
throw a clear error before calling publishSkillFromSource(source, skillPath,
blobStore).
- Around line 72-90: The two-step publish can leave status:'published' without
updating activeVersionId; change the publish flow to perform the version
creation and activation atomically or at least fail fast: after calling
skillStore.update({...snapshot, tree, status:'published'}) immediately call
getLatestVersion(skillId) and if it returns null throw an error (do not silently
skip), then perform the second update to set activeVersionId and wrap both
operations in a single storage-level atomic operation if available (add/use a
new skillStore.publish(skillId, snapshot, tree) that creates the version and
sets activeVersionId in one transaction); also ensure any transient update
errors bubble up so callers can rollback or retry rather than leaving a partial
published state.
In `@workspaces/s3/src/blob-store/index.ts`:
- Around line 197-209: The delete(hash: string) method currently performs a
has(hash) pre-check before sending DeleteObjectCommand, which introduces a
TOCTOU race but is intentional because S3 returns 204 for both deleting an
existing or non-existent object and the caller expects a meaningful boolean; add
a concise inline comment above the has() check in delete explaining that the
pre-check is deliberate (acknowledging the TOCTOU/race possibility), that for
content-addressed blobs this is benign, and that the DeleteObjectCommand call
using this.toKey(hash) and this.bucket intentionally does not re-check the
result because S3's semantics make the server response ambiguous.
- Around line 245-255: The code passes possible undefined keys from
listResponse.Contents into DeleteObjectsCommand via objects.map(obj => ({ Key:
obj.Key })); filter out entries where obj.Key is undefined first (e.g.,
objects.filter(o => o.Key != null)) and then map to DeleteObjectsCommand's
Delete.Objects (or use a non-null assertion when mapping) so only { Key: string
} entries are sent; target the block using listResponse.Contents, objects.map,
and DeleteObjectsCommand in this change.
- Around line 211-230: putMany and getMany currently call Promise.all on the
entire input which can overwhelm S3 and cause SlowDown errors; limit concurrency
by implementing a simple in-file semaphore or batching loop (no new dependency)
and process at most N concurrent requests (e.g., CONCURRENCY = 10) when calling
this.put(entry) in putMany and this.get(hash) in getMany, ensuring you await
completion of each batch or acquire/release the semaphore per operation; update
the methods putMany and getMany to use that concurrency cap so large batches are
processed safely without changing their signatures.
---
Duplicate comments:
In `@packages/editor/src/namespaces/agent.ts`:
- Around line 1115-1118: The current code generates a fresh ID using
`inline-${Date.now()}` when `workspaceRef.type === 'inline'`, causing
`hydrateSnapshotToWorkspace` (which triggers `mastra.addWorkspace()`) to create
a new workspace on every request and leak into `#workspaces`; replace the
volatile timestamp ID with a stable deterministic ID derived from the serialised
`workspaceRef.config` (e.g., JSON.stringify + a content hash like SHA-256 or an
existing stableHash util) so repeated calls from `resolveStoredWorkspace` / the
per-request lambda when `hasConditionalWorkspace` is true reuse the same
workspace instead of accumulating entries. Ensure you compute the hash
synchronously before calling `hydrateSnapshotToWorkspace` and use that
hash-based string as the workspace id.
- Around line 1134-1140: resolveAgentSkillSource currently returns undefined for
array-typed conditional skills (StorageConditionalVariant[]), silently bypassing
versioned resolution; change it to detect Array.isArray(skills) and either
(preferred) merge the conditional variants into a single
Record<string,StorageSkillConfig> before continuing or (minimum) emit a warning
so the fallback is observable. Implement merging by iterating the
StorageConditionalVariant[] and composing/overwriting per-skill config entries
into skillConfigs (later variants win), then proceed with the existing
resolution logic in resolveAgentSkillSource; if merging fails, log a clear
warning referencing the agent skills input. Ensure you reference
resolveAgentSkillSource, StorageResolvedAgentType['skills'], StorageSkillConfig
and StorageConditionalVariant in your changes.
In `@packages/editor/src/namespaces/workspace.ts`:
- Around line 73-75: The loop is awaiting resolveFilesystem sequentially; change
it to run all resolves in parallel by collecting entries =
Object.entries(snapshot.mounts), building an array of promises via
entries.map(([path, fsConfig]) => this.resolveFilesystem(fsConfig)), awaiting
Promise.all on that array, and then re-associating results back into
mounts[path]; update the code around snapshot.mounts, resolveFilesystem, and
mounts to use this Promise.all pattern so mount resolutions happen concurrently.
- Around line 150-175: Both resolver methods are marked async but never
await—remove the unnecessary async keyword from resolveFilesystem and
resolveSandbox; also add an explicit return type to resolveSandbox (e.g.
Promise<WorkspaceSandbox>) to satisfy strict TypeScript. Keep the existing
returns from provider.createFilesystem(config) and
provider.createSandbox(sandboxConfig.config), and ensure types reference
WorkspaceFilesystem and WorkspaceSandbox respectively.
In `@stores/cloudflare/src/storage/types.ts`:
- Around line 99-101: The bindings mapped type is currently exhaustive and will
break existing deployments when TABLE_NAMES grows; update
CloudflareWorkersConfig.bindings and CloudflareDomainBindingsConfig.bindings to
allow missing/new keys by making the mapping optional (e.g., use a
Partial/optional mapped variant instead of a fully required [key in
TABLE_NAMES]: KVNamespace) so new table constants from TABLE_NAMES do not force
every deployment to include those KV namespaces.
---
Nitpick comments:
In `@workspaces/s3/src/blob-store/index.ts`:
- Around line 19-27: The S3BlobStoreOptions interface currently requires
accessKeyId and secretAccessKey which prevents using the AWS default credential
provider chain (IMDS/container roles); make accessKeyId and secretAccessKey
optional on S3BlobStoreOptions, and change the S3 client construction (in the
S3BlobStore class/constructor or wherever new S3Client(...) is created) to only
pass credentials when both values are provided (otherwise omit the credentials
field so the AWS SDK v3 can use the default provider chain); apply the same
optionality/conditional-passing fix for the duplicate block referenced around
the S3BlobStore methods at the other location (lines ~113-121) so both places
allow instance/task role credentials.
- Line 172: The createdAt fallback currently uses new Date() when
metadata.createdat is missing; change this to prefer the S3 object's
response.LastModified value before falling back to now. Update the assignment
that builds createdAt (where metadata.createdat is read) to parse
metadata.createdat if present, otherwise use response.LastModified (converted to
a Date) and only use new Date() as the final fallback; reference the
metadata.createdat check and response.LastModified in your fix.
```ts
const blobStore = await this.editor.resolveBlobStore();

// Collect and store blobs
const { snapshot, tree } = await publishSkillFromSource(source, skillPath, blobStore);
```
Missing null guard on blobStore before calling publishSkillFromSource.
resolveBlobStore() can return null/undefined (see resolveAgentSkillSource in agent.ts lines 1147–1148 for the same call with an explicit guard). Without this check, a missing blob store configuration silently passes null into publishSkillFromSource, causing a runtime error inside that function.
🛡️ Proposed fix

```diff
 const blobStore = await this.editor.resolveBlobStore();
+if (!blobStore) throw new Error('No blob store is configured. Register one via new MastraEditor({ blobStores: [...] })');
 // Collect and store blobs
 const { snapshot, tree } = await publishSkillFromSource(source, skillPath, blobStore);
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
const blobStore = await this.editor.resolveBlobStore();
if (!blobStore) throw new Error('No blob store is configured. Register one via new MastraEditor({ blobStores: [...] })');

// Collect and store blobs
const { snapshot, tree } = await publishSkillFromSource(source, skillPath, blobStore);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/editor/src/namespaces/skill.ts` around lines 66 - 69, The code calls
this.editor.resolveBlobStore() and immediately passes the result (blobStore)
into publishSkillFromSource without guarding for null/undefined; add a
null-check after calling resolveBlobStore (e.g., if (!blobStore) { throw or
return an informative error) so publishSkillFromSource is never invoked with a
null blobStore. Locate the call site around resolveBlobStore / blobStore and
ensure you either early-return or throw a clear error before calling
publishSkillFromSource(source, skillPath, blobStore).
```ts
await skillStore.update({
  id: skillId,
  ...snapshot,
  tree,
  status: 'published',
});

// Point activeVersionId to the newly created version
const latestVersion = await skillStore.getLatestVersion(skillId);
if (latestVersion) {
  await skillStore.update({
    id: skillId,
    activeVersionId: latestVersion.id,
  });
}

// Fetch and return the resolved skill
const resolved = await skillStore.getByIdResolved(skillId);
if (!resolved) throw new Error(`Failed to resolve skill ${skillId} after publish`);
```
Two-step non-atomic publish leaves the skill in a partial state on failure.
The publish workflow performs two separate update() calls:
1. `update({ ...snapshot, tree, status: 'published' })` — creates the new version record.
2. `update({ activeVersionId: latestVersion.id })` — points the skill at the new version.
If getLatestVersion returns null (line 81 branch not taken), or if the second update fails (e.g., transient error), the skill is left with status: 'published' but activeVersionId still pointing to the previous (or no) version. The getLatestVersion fallback in resolveAgentSkillSource partially masks this for strategy: 'latest', but it breaks explicit rollback semantics — after a rollback, a failed publish would cause getLatestVersion to serve the new (failed) version instead of the rolled-back one.
Consider whether the storage layer can expose an atomic publish(skillId, snapshot, tree) method that creates the version and updates activeVersionId in a single operation, or at minimum throw an error if latestVersion is null rather than silently skipping the pointer update.
🛡️ Minimal guard to prevent silent partial state

```diff
 const latestVersion = await skillStore.getLatestVersion(skillId);
-if (latestVersion) {
+if (!latestVersion) {
+  throw new Error(`Failed to retrieve version after publishing skill "${skillId}"`);
+}
+{
   await skillStore.update({
     id: skillId,
     activeVersionId: latestVersion.id,
   });
 }
```
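For the atomic-publish alternative suggested above, a minimal in-memory sketch shows the intended invariant: version creation and the `activeVersionId` flip happen in one step. All names here (`InMemorySkillStore`, `SkillRecord`, `SkillVersion`) are hypothetical, not the real Mastra storage API; a SQL-backed store would wrap the two writes in a transaction instead of a synchronous map update.

```typescript
interface SkillVersion { id: string; skillId: string; tree: unknown }
interface SkillRecord { id: string; status: 'draft' | 'published'; activeVersionId?: string }

class InMemorySkillStore {
  private skills = new Map<string, SkillRecord>();
  private versions: SkillVersion[] = [];
  private seq = 1;

  create(id: string): void {
    this.skills.set(id, { id, status: 'draft' });
  }

  // Atomic publish: both writes happen in one synchronous critical section,
  // so no reader can observe status='published' without a matching version.
  publish(skillId: string, tree: unknown): SkillVersion {
    const skill = this.skills.get(skillId);
    if (!skill) throw new Error(`unknown skill ${skillId}`);
    const version: SkillVersion = { id: `v${this.seq++}`, skillId, tree };
    this.versions.push(version);
    this.skills.set(skillId, { ...skill, status: 'published', activeVersionId: version.id });
    return version;
  }

  get(skillId: string): SkillRecord | undefined {
    return this.skills.get(skillId);
  }
}
```

With this shape, the caller would make a single `publish(...)` call instead of two `update()` calls, removing the partial-state window entirely.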
Verify each finding against the current code and only fix it if needed.
In `@packages/editor/src/namespaces/skill.ts` around lines 72 - 90, The two-step
publish can leave status:'published' without updating activeVersionId; change
the publish flow to perform the version creation and activation atomically or at
least fail fast: after calling skillStore.update({...snapshot, tree,
status:'published'}) immediately call getLatestVersion(skillId) and if it
returns null throw an error (do not silently skip), then perform the second
update to set activeVersionId and wrap both operations in a single storage-level
atomic operation if available (add/use a new skillStore.publish(skillId,
snapshot, tree) that creates the version and sets activeVersionId in one
transaction); also ensure any transient update errors bubble up so callers can
rollback or retry rather than leaving a partial published state.
```ts
async delete(hash: string): Promise<boolean> {
  const existed = await this.has(hash);
  if (!existed) return false;

  const client = this.getClient();
  await client.send(
    new DeleteObjectCommand({
      Bucket: this.bucket,
      Key: this.toKey(hash),
    }),
  );
  return true;
}
```
delete() has an inherent TOCTOU between has() and the actual delete
DeleteObjectCommand returns a 204 for both a successful delete and a delete of a non-existent object, so a pre-check is necessary to return a meaningful boolean. However, the two-call sequence means a concurrent delete could cause both callers to read existed = true and both return true. For content-addressable blobs this is benign, but a brief comment would help the next reader understand why the pre-check is intentional.
📝 Suggested inline comment

```diff
 async delete(hash: string): Promise<boolean> {
+  // S3 DeleteObject returns 204 for both existing and non-existing objects,
+  // so we must HEAD-check first to return an accurate boolean.
+  // Inherent TOCTOU: concurrent deletions may both observe existed=true.
   const existed = await this.has(hash);
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
async delete(hash: string): Promise<boolean> {
  // S3 DeleteObject returns 204 for both existing and non-existing objects,
  // so we must HEAD-check first to return an accurate boolean.
  // Inherent TOCTOU: concurrent deletions may both observe existed=true.
  const existed = await this.has(hash);
  if (!existed) return false;

  const client = this.getClient();
  await client.send(
    new DeleteObjectCommand({
      Bucket: this.bucket,
      Key: this.toKey(hash),
    }),
  );
  return true;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workspaces/s3/src/blob-store/index.ts` around lines 197 - 209, The
delete(hash: string) method currently performs a has(hash) pre-check before
sending DeleteObjectCommand, which introduces a TOCTOU race but is intentional
because S3 returns 204 for both deleting an existing or non-existent object and
the caller expects a meaningful boolean; add a concise inline comment above the
has() check in delete explaining that the pre-check is deliberate (acknowledging
the TOCTOU/race possibility), that for content-addressed blobs this is benign,
and that the DeleteObjectCommand call using this.toKey(hash) and this.bucket
intentionally does not re-check the result because S3's semantics make the
server response ambiguous.
```ts
async putMany(entries: StorageBlobEntry[]): Promise<void> {
  if (entries.length === 0) return;
  // S3 doesn't have a batch PUT, so we parallelize individual puts.
  // Content-addressable means duplicate writes are idempotent.
  await Promise.all(entries.map(entry => this.put(entry)));
}

async getMany(hashes: string[]): Promise<Map<string, StorageBlobEntry>> {
  const result = new Map<string, StorageBlobEntry>();
  if (hashes.length === 0) return result;

  // Parallelize individual gets
  const entries = await Promise.all(hashes.map(hash => this.get(hash)));
  for (const entry of entries) {
    if (entry) {
      result.set(entry.hash, entry);
    }
  }
  return result;
}
```
Claude says: @coderabbitai Valid concern in the general case, but premature for this use case. Skill version trees typically contain <20 files (a SKILL.md, a few source files, maybe some images). The parallelism is bounded by the tree size, not by external input. Adding a concurrency limiter would introduce complexity without practical benefit. If we ever need to store thousands of blobs per skill, we can add batching then.
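If batching is ever needed, the dependency-free concurrency cap the review suggests could look like this sketch. `mapWithConcurrency` is a hypothetical helper, not existing Mastra code: `limit` workers pull indices from a shared counter, so at most `limit` calls are in flight at once.

```typescript
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0; // safe in single-threaded JS: read and incremented before any await
  const workers = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  });
  await Promise.all(workers);
  return results;
}

// putMany could then become, roughly:
//   await mapWithConcurrency(entries, 10, entry => this.put(entry));
```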
```ts
const objects = listResponse.Contents;
if (objects && objects.length > 0) {
  await client.send(
    new DeleteObjectsCommand({
      Bucket: this.bucket,
      Delete: {
        Objects: objects.map(obj => ({ Key: obj.Key })),
        Quiet: true,
      },
    }),
  );
```
obj.Key may be undefined — filter before passing to DeleteObjectsCommand
The AWS SDK v3 types _Object.Key as string | undefined. Passing { Key: undefined } to DeleteObjects will produce a malformed request. Add a filter to guard against this.
🛡️ Proposed fix

```diff
-        Objects: objects.map(obj => ({ Key: obj.Key })),
+        Objects: objects.filter(obj => obj.Key != null).map(obj => ({ Key: obj.Key! })),
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
const objects = listResponse.Contents;
if (objects && objects.length > 0) {
  await client.send(
    new DeleteObjectsCommand({
      Bucket: this.bucket,
      Delete: {
        Objects: objects.filter(obj => obj.Key != null).map(obj => ({ Key: obj.Key! })),
        Quiet: true,
      },
    }),
  );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workspaces/s3/src/blob-store/index.ts` around lines 245 - 255, The code
passes possible undefined keys from listResponse.Contents into
DeleteObjectsCommand via objects.map(obj => ({ Key: obj.Key })); filter out
entries where obj.Key is undefined first (e.g., objects.filter(o => o.Key !=
null)) and then map to DeleteObjectsCommand's Delete.Objects (or use a non-null
assertion when mapping) so only { Key: string } entries are sent; target the
block using listResponse.Contents, objects.map, and DeleteObjectsCommand in this
change.
Replace global crypto.randomUUID() with imported randomUUID() from node:crypto to match codebase conventions for type safety. PR: #13156 (comment) Co-Authored-By: Mastra Code <noreply@mastra.ai>
… in publish

- Guard against undefined blobStore before calling publishSkillFromSource
- Throw an error if latestVersion is null after publish instead of silently skipping activeVersionId update

PR: #13156 (comment)
PR: #13156 (comment)

Co-Authored-By: Mastra Code <noreply@mastra.ai>
Add explicit Promise<WorkspaceSandbox> return type and await the provider call for clarity. Also add await to resolveFilesystem for consistency. PR: #13156 (comment) Co-Authored-By: Mastra Code <noreply@mastra.ai>
- Filter out objects with undefined Key in dangerouslyClearAll to avoid malformed DeleteObjectsCommand requests
- Add comment explaining intentional TOCTOU pre-check in delete()

PR: #13156 (comment)
PR: #13156 (comment)

Co-Authored-By: Mastra Code <noreply@mastra.ai>
Replace inline-${Date.now()} with a content-hash-based ID to prevent
unbounded workspace instance accumulation in the registry when
resolveStoredWorkspace is called repeatedly with the same inline config.
PR: #13156 (comment)
Co-Authored-By: Mastra Code <noreply@mastra.ai>
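The content-hash approach this commit describes can be sketched as follows. `inlineWorkspaceId` is an illustrative helper, not the actual Mastra implementation; also note that `JSON.stringify` is only order-stable if configs are built with consistent key order, so a canonical serializer may be needed for dynamically assembled configs.

```typescript
import { createHash } from 'node:crypto';

// Derive the inline workspace id from a hash of the serialised config instead
// of Date.now(), so repeated hydrations of the same config reuse one registry
// entry rather than accumulating new workspaces.
function inlineWorkspaceId(config: unknown): string {
  const digest = createHash('sha256').update(JSON.stringify(config)).digest('hex');
  return `inline-${digest.slice(0, 16)}`;
}
```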
Adds workspace and skill storage domains to the editor with a git-like versioning model for skills.
Skills live on a filesystem (draft) and get snapshotted into content-addressable blob storage on publish. Agents resolve skills via
`strategy: 'latest'`, `pin: '<versionId>'`, or `strategy: 'live'` (direct filesystem reads). Publishing a skill auto-invalidates cached agents that reference it.

New storage domains (`workspaces`, `skills`, `blobs`) implemented for LibSQL, Postgres, and MongoDB. S3 blob store added to `@mastra/s3`. Server endpoints for workspace/skill CRUD and skill publish.

~500 tests across core unit tests (blob store, versioned skill sources, composite sources, publish flow) and editor integration tests (publish→agent→skill discovery, draft vs published, pin vs latest, live strategy, version rollback, agent cache invalidation).
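The three resolution modes described above might be modeled roughly like this; the field names are assumptions drawn from the PR description, not the exact Mastra types.

```typescript
// Rough, hypothetical model of the skill-resolution modes.
type SkillRef =
  | { strategy: 'latest' } // newest published version
  | { pin: string }        // a specific published versionId
  | { strategy: 'live' };  // read the draft filesystem directly

function describeRef(ref: SkillRef): string {
  if ('pin' in ref) return `pinned to version ${ref.pin}`;
  return ref.strategy === 'live' ? 'live draft reads' : 'latest published version';
}

console.log(describeRef({ pin: 'v42' })); // → "pinned to version v42"
```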
Summary by CodeRabbit
New Features
Tests