I've been looking at how you structured the social media data extraction—the tokenization approach is clever, but I'm curious whether you considered the trade-offs between flexibility and performance when handling rate-limited APIs.
The TL;DR
You're at 71/100, solidly in C territory. This is based on Anthropic's best practices for agentic skills. Your Utility pillar is strong (16/20)—the ROI calculations and multi-platform support actually solve real problems. But Progressive Disclosure Architecture (19/30) and Ease of Use (17/25) are dragging you down. The good news? These are fixable with some structural changes.
What's Working Well
- Utility is legit (16/20) - You're addressing a real marketing need with concrete metrics (engagement rate, ROI, CTR). The scripts have actual calculation logic and benchmarks that add value.
- Supporting files exist - You've got `HOW_TO_USE.md`, `sample_input.json`, and actual Python scripts. That's more than most skills have.
- Navigation is clean - SKILL.md has clear section headers and logical flow. No confusing structure here.
- Validation thinking - Your scripts include safeguards like `safe_divide` for zero-division errors. That's defensive coding done right.
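To make the defensive-coding point concrete, here's a minimal sketch of the calculation pattern described above — a `safe_divide` guard feeding engagement-rate and ROI formulas. The function signatures and field names here are illustrative assumptions, not the skill's actual code:

```python
# Hypothetical sketch of the defensive calculation pattern described above.
# Names and formulas are illustrative, not the skill's actual implementation.

def safe_divide(numerator, denominator, default=0.0):
    """Return numerator/denominator, or `default` when the denominator is zero."""
    return numerator / denominator if denominator else default

def engagement_rate(likes, comments, shares, impressions):
    """Engagement rate as a percentage of impressions."""
    return safe_divide(likes + comments + shares, impressions) * 100

def roi(revenue, spend):
    """Return on investment as a percentage of spend."""
    return safe_divide(revenue - spend, spend) * 100

print(engagement_rate(120, 30, 50, 10_000))  # 2.0
print(roi(1500, 500))                        # 200.0
print(roi(0, 0))                             # 0.0 -- safe_divide avoids ZeroDivisionError
```

The guard matters because zero impressions or zero spend are routine in real exports (paused campaigns, organic posts), so a bare division would crash the whole analysis run.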
The Big One: Missing Reference Architecture
Here's what's holding back your PDA score: SKILL.md doesn't reference the supporting files that exist in your skill package. Users reading SKILL.md have no idea that HOW_TO_USE.md, sample_input.json, and the scripts are available. They discover them by accident, if at all.
Why it matters: Progressive Disclosure means guiding users from overview → details → implementation. Right now you're dumping everything in one place.
Concrete fix: Add a references section to SKILL.md that links everything together:
## References
- **[How to Use](./HOW_TO_USE.md)** - Step-by-step examples
- **[calculate_metrics.py](./calculate_metrics.py)** - Core calculation engine
- **[analyze_performance.py](./analyze_performance.py)** - Performance analysis
- **[sample_input.json](./sample_input.json)** - Expected input format
- **[expected_output.json](./expected_output.json)** - Output structure

This alone gets you +4 points toward that PDA pillar.
Other Things Worth Fixing
- Add trigger phrases to the description - Right now it's vague. Add: "Use when asked to 'analyze campaign', 'calculate engagement rate', 'measure ROI', or 'compare platform performance'." That's a +2 point swing and helps discoverability.
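For instance, the SKILL.md frontmatter could embed those trigger phrases directly in the description. This is a sketch only — the skill name and exact wording are assumptions:

```markdown
---
name: social-media-metrics
description: >
  Calculates engagement metrics and ROI across social platforms.
  Use when asked to "analyze campaign", "calculate engagement rate",
  "measure ROI", or "compare platform performance".
---
```

Putting the phrases in the description (rather than only in the body) is what the agent actually matches against when deciding whether to load the skill.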
- Document a workflow, not just examples - Your "How to Use" section shows phrases like "Analyze this Facebook campaign data" but no numbered steps. Add a 4-5 step workflow (Prepare data → Invoke → Review metrics → Apply recommendations → Track changes). This fixes your Ease of Use scoring.
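A workflow section along these lines would cover it — the step wording below is a suggestion, not prescriptive:

```markdown
## Workflow

1. **Prepare data** - Export campaign data matching the `sample_input.json` format.
2. **Invoke the skill** - e.g. "Analyze this Facebook campaign data".
3. **Review metrics** - Check engagement rate, ROI, and CTR against the benchmarks.
4. **Apply recommendations** - Adjust targeting, creative, or budget as suggested.
5. **Track changes** - Re-run the analysis after the next reporting period.
```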
- Trim the marketing language - Phrases like "comprehensive analysis," "actionable insights," and "data-driven marketing decisions" sound like sales copy. Keep it instructional: "Calculates engagement metrics and ROI across platforms" is cleaner.
- Terminology consistency - You mix "engagement rate" vs "engagement metrics," "ROI analysis" vs "ROI calculations." Pick one per concept and stick with it.
Quick Wins
- Add reference links to supporting files (+4 points)
- Include explicit trigger phrases (+2 points)
- Document workflow steps (+2 points)
- Remove marketing language, tighten writing (+1-2 points)
These changes could push you toward 80+ without major refactoring. The foundation is solid—it's just about connecting the dots and clarifying how users actually invoke this thing.
Check out your skill here: [SkillzWave.ai](https://skillzwave.ai) | [SpillWave](https://spillwave.com). We have an agentic skill installer that installs skills on 14+ coding agent platforms. Check out this guide on how to improve your agentic skills.