AI Agent Skills

Living instruction files that AI coding agents (Claude Code, OpenClaw, Cline) can load for source-verified, actionable guidance. Each skill is built automatically from verified Twitter signals and updates itself as new sources arrive.

How it works

Signal matching

PureFeed watches your Twitter signals for tweets that match a skill template's topic.

Fact-checked

Each tweet passes community trust scoring and a three-model research consensus (GPT, Claude, Gemini) before it's accepted.
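The two-gate acceptance logic described above can be sketched as follows. This is an illustrative sketch only: the threshold value, the verdict labels, and the unanimity rule are assumptions, not documented behavior.

```python
def accept(trust_score: float, verdicts: dict[str, str], threshold: float = 0.7) -> bool:
    """Accept a tweet only when both gates pass: community trust scoring
    and unanimous agreement across the three research models."""
    community_ok = trust_score >= threshold
    consensus_ok = all(v == "verified" for v in verdicts.values())
    return community_ok and consensus_ok
```

For example, a tweet with a trust score of 0.9 and three "verified" verdicts is accepted; a single dissenting model rejects it.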

Skill format

Claude Opus writes the verified facts as imperative rules in standard SKILL.md format — ready for any AI agent to load.
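A generated file in this layout might look like the sketch below. The frontmatter fields and the rule shown are illustrative, not taken from a real generated skill; SKILL.md conventionally pairs YAML frontmatter (name, description) with an imperative-rule body.

```markdown
---
name: example-skill
description: Use this skill when the user asks about <topic>.
---

# Example Skill

- ALWAYS cite the verified source when stating a fact from this skill.
- NEVER present unverified claims as confirmed.
```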

Versioned + API

Every update creates a new version. Fetch via GET /api/v1/skills/{slug} in JSON, Markdown, Claude, or OpenClaw format.
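A minimal sketch of composing that endpoint URL. Only the `GET /api/v1/skills/{slug}` path and the four output formats come from the text above; the host and the `format` query-parameter name are assumptions.

```python
from urllib.parse import urlencode

BASE = "https://example.com"  # placeholder — the real API host is not given here

def skill_url(slug: str, fmt: str = "json") -> str:
    """Compose the versioned-skill endpoint URL for one output format."""
    formats = {"json", "markdown", "claude", "openclaw"}
    if fmt not in formats:
        raise ValueError(f"unknown format: {fmt!r}")
    return f"{BASE}/api/v1/skills/{slug}?{urlencode({'format': fmt})}"
```

For example, `skill_url("prompt-engineering", "markdown")` yields the Markdown variant of that skill's latest version.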

All Skills

Stock market trading intelligence
No summary yet
0 sources · v0
Prompt engineering for AI and LLM models
Use this skill when the user asks about practical prompting techniques and patterns for AI and LLM models — chain-of-thought, few-shot, system prompt design, structured output, model-specific quirks and tips, token optimization, prompt injection mitigations, community-validated templates, and tool-specific prompt shortcuts (e.g., Claude Code slash commands). Covers what works and what doesn't across Claude, GPT, Gemini, and open-source models. Includes version-specific guidance when prompting behavior changes between model releases. ALWAYS distinguish between model-level prompting (applicable in any interface) and tool-specific commands (applicable only in a particular IDE or CLI).
1 source · v1

Openclaw security measures
Use this skill when the user asks about OpenClaw security topics — runtime sandboxing, permission and capability models, secrets handling, supply-chain protections for skills, vulnerability reports, hardening practices, security-focused features and patches, audit mechanisms, incident reports related to OpenClaw and its agent runtime, or enterprise-secure alternatives that position against OpenClaw's security model. OpenClaw has been involved in multiple supply-chain incidents: the Clinejection attack via Cline CLI (February 2026), a broader axios npm compromise weaponized via OpenClaw-impersonating packages (March 2026), and malicious skills discovered on ClawHub. OpenClaw also has a documented reasoning-leak vulnerability where internal chain-of-thought text is exposed to end users on messaging channels unless explicitly suppressed. OpenClaw's exec-approval system operates as a two-layer policy: both `exec-approvals.json` and `tools.exec.*` in `openclaw.json` must agree — the stricter of the two wins. Always verify the provenance of any package that installs or depends on OpenClaw; audit transitive dependency trees, not just top-level packages; lock down reasoning/thinking visibility defaults for production deployments; explicitly configure exec-approval policies after any upgrade; and evaluate whether OpenClaw's broad-permission design is appropriate for the user's threat model. When advising on OpenClaw updates, new security-relevant features, or enterprise alternatives, verify against official release notes and documentation.
11 sources · v11
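The "stricter of the two wins" merge described in the OpenClaw card can be sketched as below. Only the two-layer model (`exec-approvals.json` plus `tools.exec.*` in `openclaw.json`) and the stricter-wins rule come from the description; the policy value names and their ordering are illustrative assumptions.

```python
# Assumed policy values, ordered from most to least permissive
STRICTNESS = {"allow": 0, "ask": 1, "deny": 2}

def effective_policy(exec_approvals: str, openclaw_json: str) -> str:
    """Merge the two policy layers: whichever layer is stricter wins."""
    return max(exec_approvals, openclaw_json, key=STRICTNESS.__getitem__)
```

So if `exec-approvals.json` says `allow` but `openclaw.json` says `deny`, the command is denied — which is why exec-approval policies should be re-checked in both files after any upgrade.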