Strict validator for auditing skills against the agentskills.io specification (directory structure, YAML, progressive disclosure, and best practices).
Skill Validator Standards
Purpose
This skill provides the logic required to audit and enforce compliance for ANY agent skill within the serenity-android project according to agentskills.io standards.
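A minimal sketch of the audit logic this skill describes, assuming the agentskills.io layout of a `SKILL.md` interface file with YAML frontmatter plus optional `scripts/` and `references/` subdirectories. The function name and the exact checks are illustrative, not mandated by the spec:

```python
import os
import re

def validate_skill(skill_dir):
    """Audit one skill folder: SKILL.md must exist, carry YAML
    frontmatter with name/description, and no unexpected entries
    may sit at the top level of the skill directory."""
    errors = []
    skill_md = os.path.join(skill_dir, "SKILL.md")
    if not os.path.isfile(skill_md):
        errors.append("missing SKILL.md")
        return errors
    with open(skill_md, encoding="utf-8") as f:
        text = f.read()
    # Frontmatter must open and close with '---' lines at the top of the file.
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        errors.append("SKILL.md lacks YAML frontmatter")
    else:
        frontmatter = match.group(1)
        for field in ("name", "description"):
            if not re.search(rf"^{field}:", frontmatter, re.MULTILINE):
                errors.append(f"frontmatter missing '{field}'")
    # Only SKILL.md plus the optional subfolders are allowed at the top level.
    allowed = {"SKILL.md", "scripts", "references"}
    for entry in os.listdir(skill_dir):
        if entry not in allowed:
            errors.append(f"unexpected entry: {entry}")
    return errors
```

An empty return list means the skill passes; otherwise each string names one violation, which keeps the validator usable both interactively and in CI.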
The following is part of the system/developer-level prompt that the Android Studio Panda 3 agent sends to whatever LLM it is working with. This content is new and distinct from prior information, and it is sent only if the agent can discover skills at either the .skills or .agent/skills locations. Unlike Copilot or Claude Code, this approach implements skill discovery, the decision of when to use a skill, and the skill's execution as a system-level prompt rather than as a tool.
This has several advantages:
It works across LLM models.
It is functionally agent-agnostic, apart from the initial injection and discovery of skills that the agent performs on behalf of the LLM.
It allows the skills implementation to evolve more easily: as the standard evolves, you change the prompt.
Underneath the "magic" of an autonomous AI agent, the functionality is driven by a series of highly specialized, context-aware prompt-orchestration wrappers. My abilities, which often feel like autonomous intelligence, are built upon several core "prompt-based" mechanisms:
1. The "TodoWrite" Manager
This is a structured prompt-management system. It forces me to maintain a specific, persistent list format to track "in-progress" states across multiple conversation turns. Without this, I would lose track of my own progress.
Implement the agentskills.io standard into the project's Phased Spec-Driven Development (SDD) protocol. This enables modular, pull-on-demand expertise discovery and activation, ensuring context economy while providing specialized agent capabilities using a universal root .skills directory.
Context
Current State: The project is transitioning skills from prompts/skills/ to a root .skills/ directory for better cross-system compatibility.
Constraints: Must work with non-native skill loaders (Android Studio Gemini/Otter), Claude Code, OpenHands, and Opencode.
Using the protocol in prompts/sdd_implementation_v1.md as a base, perform a "Phase 0: Agent Skills Integration" update to that file.
Update the document to Version 2.2.0 with the following mandatory architectural changes to support the agentskills.io standard:
FORMAL AGENT SKILLS DISCOVERY (v2.2.0):
Protocol Update: Define a specific "Skill Discovery" phase for agents that do NOT natively support external skill loaders (e.g., Android Studio Otter, Gemini, Copilot).
Storage Standard: All skills must reside in prompts/skills/<skill-name>/.
File Structure: Each folder MUST contain a single SKILL.md file (the interface) and optional scripts/ or references/ directories.
The Discovery Step: At the start of every session, the agent MUST execute list_files on prompts/skills/ to identify available expertise.
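The discovery step above can be sketched as follows, with a plain filesystem walk standing in for the agent's `list_files` capability (the function name is illustrative):

```python
import os

def discover_skills(root="prompts/skills"):
    """Session-start discovery: list the skills root and collect
    every folder that exposes a SKILL.md interface file."""
    skills = {}
    if not os.path.isdir(root):
        return skills  # no skills directory discovered
    for name in sorted(os.listdir(root)):
        skill_md = os.path.join(root, name, "SKILL.md")
        if os.path.isfile(skill_md):
            skills[name] = skill_md  # skill name -> interface file path
    return skills
```

Folders without a `SKILL.md` are silently skipped here; a stricter variant could flag them as violations of the storage standard instead.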
To systematically discover and document all features, tools, MCP servers, sub-agent support, skills (agentskills.io), and commands that the current agent environment reports to the LLM.
Generate structured reports for each discovered category in the discovered_artifacts directory.
Context
Current State: The agent has access to a variety of tools (standard and MCP-based), but there is no centralized documentation of these capabilities within the project for reference.
Constraints: Must follow SDD v2.0 protocol. Reports must be generated without violating "Reporting Constraints" (except as requested for this specific discovery task).
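One way the report generation could look; the JSON layout and one-file-per-category convention inside `discovered_artifacts` are assumptions based on the text, not part of any spec:

```python
import json
import os

def write_reports(discovered, out_dir="discovered_artifacts"):
    """Write one structured JSON report per discovered category
    (e.g. tools, MCP servers, skills, commands)."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for category, items in discovered.items():
        path = os.path.join(out_dir, f"{category}.json")
        with open(path, "w", encoding="utf-8") as f:
            json.dump({"category": category, "items": items}, f, indent=2)
        paths.append(path)
    return paths
```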
Write me a spec that can be used to test the functionality of @sdd_implementation_v1.md or any general SDD implementation. The spec should generate a test suite that can be run to validate that the spec implementation works as designed or intended. It should also be able to update any existing SDD test suite when the underlying spec implementation changes.
License
MIT License
Copyright (c) 2026 David Carver and NineWorlds
Formalize "Tribal Knowledge" into "Golden Standards" through a heuristic-based discovery process. This plan identifies architectural "Gravity Wells," documents anti-patterns, and maps cross-module protocols to ensure consistency and formalize project intelligence in a language/framework-agnostic manner.
Context
Current State: The project has implementation-defined standards that may be implicitly understood but lack formal documentation in AI system prompts.
Constraints: This plan is language and framework agnostic. The executing agent must determine the specific tech stack through probing and adapt its discovery methodology accordingly.
SDD Memory: All findings, surprising dependencies, and architectural debt must be logged in prompts/plans/context_discovery/memory/.
Scope: This protocol applies to all AI-assisted development (features, refactors, migrations).
Summary: To eliminate "Vibe Coding" and ensure consistency, this plan implements a Spec-First Protocol. Agents must generate a Phased GSD (Goal, Steps, Deliverables) document before writing any production code for complex tasks.
Context & Memory: v2.0 introduces persistent context management. Each plan resides in its own directory, with a dedicated memory/ sub-folder to record findings, unexpected dependencies, and state changes across execution phases.