This skill updates itself automatically. PureFeed monitors Twitter signals for relevant tweets, fact-checks each one through community trust scoring and a three-model research consensus, then merges verified findings into this living document. Version 3 · 3 verified sources.
Prompt engineering for AI and LLM models
Use this skill when the user asks about practical prompting techniques and patterns for AI and LLM models — chain-of-thought, few-shot, system prompt design, structured output, named prompt frameworks (RAIN, CLAR, FLOW, PIVO, SEED, RTF, RISE, etc.), model-specific quirks and tips, token optimization, prompt injection mitigations, community-validated templates, tool-specific prompt shortcuts (e.g., Claude Code slash commands), and using LLM prompts to configure AI agents in automation platforms like n8n. Covers what works and what doesn't across Claude, GPT, Gemini, and open-source models. Includes version-specific guidance when prompting behavior changes between model releases. ALWAYS distinguish between model-level prompting (applicable in any interface) and tool-specific commands (applicable only in a particular IDE or CLI).
When to use this skill
- Apply this skill when the user asks about prompt engineering techniques (chain-of-thought, few-shot, system prompts, structured output, named prompt frameworks), model-specific prompting quirks across Claude/GPT/Gemini/open-source, token optimization strategies, prompt injection mitigations, community-validated prompt templates, tool-specific prompt shortcuts such as Claude Code slash commands and custom skills, or writing system prompts for LLM-powered agents in automation platforms (e.g., n8n AI Agent nodes). (source, source, source)
Critical rules
- DO NOT conflate Claude Code slash commands with prompting techniques for the Claude.ai web chat or API. Slash commands like `/compact`, `/clear`, `/cost`, `/model`, and `/plan` are ONLY available in Claude Code (Anthropic's CLI coding agent). If a user references these in the context of claude.ai web chat, correct them immediately. (source, source, source)
- Not all built-in slash commands are visible to every user — some depend on platform or subscription plan. Before recommending a specific slash command, advise the user to type `/` in their Claude Code session to see available commands for their plan. (source, source)
- When recommending ANY named prompt framework (RAIN, CLAR, FLOW, PIVO, SEED, RTF, RISE, RISEN, RHODES, etc.), ALWAYS note that different models (GPT, Claude, Gemini) may respond better to different structural formats. Instruct the user to A/B test outputs and iterate — no single framework is universally optimal across all LLMs. (source, source, source)
- Place the task instruction or role assignment at the BEGINNING of the prompt, before contextual information or data. This aligns with Microsoft's prompt engineering research and is the structural pattern used by RAIN (Role first) and CLAR (Context first). DO NOT bury the core instruction deep in a wall of context. (source, source)
Structured prompt frameworks — selection guide
- Structured prompt frameworks work because they force the user to specify role, context, constraints, and success criteria BEFORE sending the prompt, giving the model all the information needed to deliver higher-quality results. The more detailed the instruction, the more likely the output meets expectations. When a user's prompt is vague or underperforming, recommend one of the frameworks below based on the task type rather than offering generic "be more specific" advice. (source, source, source, source)
- Use R-A-I-N (Role, Aim, Input, Numeric target, Format) for KPI-driven, measurable tasks — dashboards, conversion analyses, data summaries with specific targets. Template:
  You are a [ROLE].
  Aim: [AIM]
  Input: [DATA or reference to data]
  Numeric target: [TARGET, e.g., "increase CTR by 15%"]
  Format: [OUTPUT FORMAT, e.g., "CSV table + bullet recommendations"]
Tradeoff: highly effective for quantifiable deliverables but too rigid for exploratory or creative work. Pair with few-shot examples or a JSON schema to reduce hallucination on numeric claims. (source, source, source, source)
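As a concrete illustration, a RAIN prompt can be assembled programmatically before it is sent to any model. This is a minimal sketch: the helper name and the example slot values are hypothetical, not part of the framework itself.

```python
def build_rain_prompt(role, aim, input_data, numeric_target, output_format):
    """Assemble a RAIN prompt with the role first, per the framework."""
    return "\n".join([
        f"You are a {role}.",
        f"Aim: {aim}",
        f"Input: {input_data}",
        f"Numeric target: {numeric_target}",
        f"Format: {output_format}",
    ])

# Illustrative slot values for a KPI-driven analysis task.
prompt = build_rain_prompt(
    role="marketing data analyst",
    aim="diagnose why the newsletter click-through rate dropped",
    input_data="weekly CTR figures for Q1 (pasted below)",
    numeric_target="recover CTR to 3.5% within two sends",
    output_format="CSV table + bullet recommendations",
)
```

Filling every slot with concrete values like this is what gives the framework its leverage; an empty or vague slot defeats the purpose.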
- Use C-L-A-R (Context, Limits, Action, Result) for constrained diagnostics, incident analysis, or business communication where hard boundaries matter (word count, audience, tone). Template:
  Context: [SITUATION and background]
  Limits: [CONSTRAINTS — word count, audience, time, scope]
  Action: [WHAT the model should do]
  Result: [DESIRED output format and emotional/informational outcome]
Example: framing a company-wide announcement with a 200-word limit and a reassuring tone. (source, source, source)
- Use F-L-O-W (Function, Level, Output, Win metric) for audience-tailored content generation — blog posts, tutorials, marketing copy. The "Level" component targets the audience tier (beginner / expert / executive), and "Win metric" defines a measurable success signal (keyword density, CTA inclusion, readability score). Template:
  Function: [WHAT the model should produce]
  Level: [TARGET audience tier]
  Output: [FORMAT — blog post, slide deck, email sequence]
  Win metric: [MEASURABLE success signal]
(source, source, source)
- Use P-I-V-O (Problem, Insights, Voice, Outcome) for strategic planning, persuasive writing, or tasks where tone control is critical. The "Voice" component explicitly controls style — e.g., "confident, solution-focused" or "empathetic, data-driven". Template:
  Problem: [PROBLEM statement]
  Insights: [KEY data, research, or context the model should incorporate]
  Voice: [TONE and style descriptors]
  Outcome: [DESIRED deliverable and its purpose]
(source, source, source)
- Use S-E-E-D (Situation, End goal, Examples, Deliverables) for curriculum design, roadmaps, onboarding programs, or any multi-layered structured output that benefits from worked examples. Template:
  Situation: [CURRENT state and audience]
  End goal: [WHAT success looks like]
  Examples: [SAMPLE outputs, reference formats, or few-shot examples]
  Deliverables: [EXACT artifacts to produce — e.g., "5-module syllabus + quiz bank"]
(source, source, source)
- These five frameworks (RAIN, CLAR, FLOW, PIVO, SEED) are NOT the only named prompt frameworks. They belong to a broader landscape that includes RTF (Role, Task, Format), RISE (Role, Input, Steps, Expectation), RISEN (Role, Input, Steps, Expectation, Novelty), RHODES (Role, Objective, Details, Examples, Sense Check), and others. If the user's task does not map cleanly to one of the five above, check whether RTF, RISE, or a simpler role+task+format pattern fits better. (source, source)
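For reference, the slot lists of these frameworks can be kept in a small lookup table so a skeleton can be generated for whichever one fits the task. This is an illustrative sketch: the registry and function names are invented for this example.

```python
# Named frameworks and their slots, as listed in this skill.
FRAMEWORKS = {
    "RAIN":  ["Role", "Aim", "Input", "Numeric target", "Format"],
    "CLAR":  ["Context", "Limits", "Action", "Result"],
    "FLOW":  ["Function", "Level", "Output", "Win metric"],
    "PIVO":  ["Problem", "Insights", "Voice", "Outcome"],
    "SEED":  ["Situation", "End goal", "Examples", "Deliverables"],
    "RTF":   ["Role", "Task", "Format"],
    "RISE":  ["Role", "Input", "Steps", "Expectation"],
    "RISEN": ["Role", "Input", "Steps", "Expectation", "Novelty"],
}

def skeleton(framework: str) -> str:
    """Emit an empty, fill-in-the-slots template for the chosen framework."""
    return "\n".join(f"{slot}: [{slot.upper()}]" for slot in FRAMEWORKS[framework])
```

For example, `skeleton("RTF")` produces the lightest-weight template, a good starting point when none of the five heavier frameworks maps cleanly.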
Applying prompt frameworks effectively
- When filling any framework template, follow these steps: (1) Copy the framework skeleton into the system or user message. (2) Fill EVERY slot with concrete, specific values — DO NOT leave placeholders or vague descriptions. (3) Include at least one few-shot example or a desired output schema (JSON, CSV, markdown table) to anchor the model's format. (4) Send the prompt, evaluate the output, then iterate: adjust constraints, add examples, or switch frameworks if the output type does not match. (5) For production prompts, version them and run A/B tests or evals to measure quality changes. (source, source, source)
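Step (5), versioning and A/B testing, can be sketched as follows. The scoring function is a deliberately crude stand-in: real evaluation would use evals, human review, or an LLM judge rather than a format check.

```python
# Versioned prompts: keep every variant, never overwrite in place.
prompt_versions = {
    "summarize-v1": "Summarize the text below in 3 bullets.",
    "summarize-v2": (
        "You are a technical editor. Summarize the text below in exactly "
        "3 bullets, each under 20 words. Output a markdown list."
    ),
}

def score(output: str) -> int:
    """Placeholder metric: reward outputs that respect the 3-bullet format."""
    return sum(1 for line in output.splitlines() if line.startswith("- "))

def pick_winner(outputs: dict) -> str:
    """Given {version: model_output}, return the highest-scoring version."""
    return max(outputs, key=lambda v: score(outputs[v]))

# Stub outputs standing in for real model responses to each version.
winner = pick_winner({
    "summarize-v1": "The text discusses prompts.",
    "summarize-v2": "- point one\n- point two\n- point three",
})
```

The point is the loop, not the metric: collect outputs per version, score them the same way, and promote the winner before the next iteration.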
- Role prompting — assigning the AI a specific persona or expert role — is the single most common element across RAIN, FLOW, PIVO, SEED, RTF, RISE, and RISEN. When a user is new to prompt engineering, start with role prompting before introducing a full framework. Place the role in the system message when using an API, or lead with "You are a [ROLE]" in a chat interface. (source, source, source)
- For API-based usage (OpenAI, Anthropic, Gemini APIs): place role assignments and persistent instructions in the system message; place framework-filled templates and few-shot examples in the user message. Request structured output (JSON mode or a JSON schema) to reduce hallucination and enforce format compliance. Version your prompt strings for reproducibility. (source, source)
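The placement advice above can be sketched with the chat-message shape shared by the major APIs. No request is sent here; the schema, ticket text, and few-shot example are illustrative, and the exact structured-output parameter differs per provider.

```python
import json

# Persistent role + behavioral rules go in the system message.
system_prompt = "You are a support-ticket triage assistant. Always answer in JSON."

# A desired output schema anchors format and reduces drift.
output_schema = {"category": "string", "priority": "low|medium|high", "summary": "string"}

# One few-shot example pairing an input with a conforming output.
few_shot = (
    'Ticket: "App crashes on login"\n'
    '{"category": "bug", "priority": "high", "summary": "Login crash"}'
)

# Framework-filled template, examples, and the task go in the user message.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": (
        f"Return JSON matching this schema: {json.dumps(output_schema)}\n\n"
        f"Example:\n{few_shot}\n\n"
        'Ticket: "Invoice totals are wrong"'
    )},
]
```

This `messages` list is what you would pass to the provider's chat endpoint, alongside its native JSON-mode or schema option where available.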
Claude Code slash commands reference
- Use `/compact [focus instructions]` to compress session history and free context tokens. Pass optional instructions to preserve specific context, e.g. `/compact keep the database schema and unit tests`. Tradeoff: compaction extends session length but MAY drop older context and reduce fidelity of very old details or previously invoked skills. Use this proactively before hitting token limits on long sessions. (source, source, source)
- Use `/clear` to wipe the conversation history and context window, effectively starting a fresh session without restarting the CLI. (source, source)
- Use `/cost` to check token usage and spend for the current session. Output detail varies by subscription plan. (source, source, source)
- Use `/model` to switch the active Claude model (e.g., Sonnet, Haiku, Opus) mid-session without restarting. (source, source)
- Use `/plan [description]` to enter plan mode, which instructs Claude to outline steps before executing. Plan mode can also be activated with `--plan` at startup or toggled mid-session with Shift+Tab. (source, source)
- Use `/memory` to open CLAUDE.md memory files for editing. Use `/init` to initialize a project with a CLAUDE.md guide if one does not exist. (source, source, source)
- Use `/mcp` to manage MCP server connections and OAuth authentication. Use `/permissions` to manage allow, ask, and deny rules for tool access. Use `/agents` to manage AI sub-agent configurations. (source, source, source)
- Use `/usage` to check plan usage limits and current subscription status (distinct from `/cost`, which shows session-level token counts). (source, source)
Claude Code session management
- Run `/rename my-session-name` to set a human-readable session title (omit the name to auto-generate one). Resume later with `/resume <name>` in-session or `claude --resume <name>` from the CLI. Best practice: ALWAYS rename important sessions before running `/clear` or heavy compaction so you can reliably resume them. (source, source)
- Use `--add-dir <path>` at startup or `/add-dir <path>` during a session to grant Claude file access in additional folders. Note: added directories grant file access but do NOT become full configuration roots — however, skills in `.claude/skills/` ARE loaded from added directories. (source, source, source)
Claude Code custom skills and plugins
- Create reusable prompt shortcuts (custom slash commands) by adding a file at `~/.claude/skills/<skill-name>/SKILL.md` with YAML frontmatter (`name`, `description`, `allowed-tools`, etc.). Use `$ARGUMENTS` or `$ARGUMENTS[N]` for positional parameters. Example skeleton:
  ---
  name: explain-code
  description: Explain this file with diagrams.
  ---
  Explain $ARGUMENTS.
  This makes `/explain-code <args>` available in all sessions. Use `/skills` to list available skills. (source, source)
- Use `/plugin` to open the interactive plugin manager. Add marketplaces with `/plugin marketplace add anthropics/claude-code`, install plugins with `claude plugin install <name>@<marketplace>`, then run `/reload-plugins` to apply changes without restarting the session. Plugins can extend Claude Code with custom commands, agents, hooks, skills, and MCP servers. (source, source, source)
- Slash commands and skills accept inline arguments and can be invoked mid-message — the `/` autocomplete can trigger at the cursor position, not strictly at the beginning of a message. As of 2026-04-13, verify this behavior against the latest Claude Code changelog, as autocomplete mechanics evolve between releases. (source, source) (medium confidence)
If the user asks about prompt frameworks, do this
- If the user asks "which prompt framework should I use?", ask what type of task they are doing, then recommend:
  • Measurable/KPI-driven tasks → RAIN
  • Constrained diagnostics or business comms → CLAR
  • Audience-tailored content creation → FLOW
  • Persuasion, strategy, or tone-sensitive writing → PIVO
  • Curriculum, roadmaps, or multi-layered programs → SEED
  • Simple role+task+format needs → RTF (lighter-weight alternative)
  Provide the matching template from the "Structured prompt frameworks" section and walk the user through filling each slot. (source, source, source)
- If the user's task is exploratory, creative brainstorming, or open-ended ideation, DO NOT recommend RAIN — its rigid numeric-target structure constrains creative output. Suggest PIVO (for voice/tone control) or a simple role+task prompt instead. For open-ended research, chain-of-thought prompting without a framework often outperforms template-based approaches. (source, source)
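The decision table above can be captured in a small selector. This is a sketch: the task-type labels are invented for illustration, and real intake should come from asking the user, as recommended.

```python
# Task type -> recommended framework, per the guidance above.
RECOMMENDED = {
    "kpi-driven": "RAIN",
    "constrained-comms": "CLAR",
    "audience-content": "FLOW",
    "persuasion": "PIVO",
    "curriculum": "SEED",
    "creative": "PIVO",  # NOT RAIN: its numeric-target structure constrains creative work
}

def recommend_framework(task_type: str) -> str:
    # Default to the lightest role+task+format pattern when nothing maps cleanly.
    return RECOMMENDED.get(task_type, "RTF")
```

Falling back to RTF mirrors the advice above: when a task does not map cleanly to one of the five heavier frameworks, a simple role+task+format pattern usually fits better.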
Writing system prompts for n8n AI agents
- n8n's AI Agent node accepts a system prompt that defines the agent's behavior, persona, and tool-usage rules — the same prompt engineering principles (role assignment, structured constraints, few-shot examples) apply here as in any LLM system message. When a user asks how to write a prompt for an n8n agent, treat it as a system prompt design task: place role + behavioral rules in the system prompt field of the AI Agent node, and use any of the named frameworks (RAIN, CLAR, PIVO, etc.) to structure the instructions. Using ChatGPT or Claude to draft the system prompt before pasting it into n8n is a documented, effective workflow. (source, source, source)
- n8n's AI Agent node supports multiple LLM backends: OpenAI, Anthropic Claude, Google Gemini, Groq, DeepSeek, Mistral, and Azure OpenAI. When writing a system prompt for an n8n agent, tailor the prompt style to the selected backend — Claude responds well to XML-structured instructions, GPT to numbered lists, Gemini to concise directives. The same A/B-test-across-models advice from the frameworks section applies here. (source, source)
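The backend-tailoring advice can be sketched as a formatter that renders the same rules differently per backend. The formatting preferences encoded here are heuristics to A/B test, not guarantees, and the backend labels are illustrative.

```python
def render_rules(rules: list, backend: str) -> str:
    """Render the same behavioral rules in the style the backend tends to prefer."""
    if backend == "claude":  # XML-structured instructions
        body = "\n".join(f"  <rule>{r}</rule>" for r in rules)
        return f"<rules>\n{body}\n</rules>"
    if backend == "gpt":     # numbered lists
        return "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return "\n".join(rules)  # gemini and others: concise directives

rules = ["Answer in English.", "Cite the ticket ID."]
```

Keeping the rules as data and rendering them per backend means switching the n8n agent's LLM backend does not require rewriting the system prompt by hand.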
- n8n agents are action-taking, not just text-generating — they can query APIs, update CRMs, send emails, and file reports autonomously. When crafting the system prompt, EXPLICITLY enumerate which tools the agent may use and under what conditions. Include guardrails in the prompt such as "DO NOT send emails without user confirmation" to prevent unintended side effects. n8n also supports human-in-the-loop approval (via Chat, Slack, or Telegram) before executing sensitive tool actions — instruct the user to enable this for any agent that modifies external systems. (source, source, source)
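A system prompt that enumerates tools and guardrails, as recommended above, might be assembled like this. The tool names and usage conditions are hypothetical examples, not n8n built-ins.

```python
def agent_system_prompt(role, tools, guardrails):
    """Build an agent system prompt that explicitly scopes tool use."""
    lines = [f"You are {role}.", "", "You may use ONLY these tools:"]
    lines += [f"- {name}: {condition}" for name, condition in tools.items()]
    lines += ["", "Guardrails:"]
    lines += [f"- {g}" for g in guardrails]
    return "\n".join(lines)

# Hypothetical CRM agent: every tool is listed with its usage condition.
prompt = agent_system_prompt(
    role="a CRM triage agent",
    tools={
        "search_crm": "look up a contact before any update",
        "send_email": "only after explicit user confirmation",
    },
    guardrails=[
        "DO NOT send emails without user confirmation.",
        "DO NOT modify records outside the leads table.",
    ],
)
```

Pair a prompt like this with n8n's human-in-the-loop approval so the confirmation guardrail is enforced by the workflow, not just requested of the model.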
Warnings — n8n AI agent prompting pitfalls
- DO NOT tell users that n8n agents have persistent memory out of the box. n8n's memory sub-node does NOT persist between sessions by default — seamless stateful agents require explicit configuration with a Redis or Postgres backend. If a user's system prompt assumes the agent remembers prior conversations, warn them to configure a persistent memory store or design the prompt to be self-contained per invocation. (source, source)
- DO NOT claim that n8n AI agents bypass LangChain. Under the hood, n8n implements its AI Agent functionality through a hierarchical node system built on the LangChain JavaScript framework. If a user asks whether n8n uses LangChain, confirm that it does — despite marketing claims to the contrary. This matters for prompt engineering because LangChain's agent executor has its own prompt template wrapping the user's system prompt, and debugging may require understanding that layer. (source, source)
- DO NOT imply that a single ChatGPT-generated prompt will produce a fully functional n8n agent with zero manual configuration. Credential setup (API keys for OpenAI, Claude, Gmail, Airtable, etc.) MUST be configured manually per service in n8n's credential store. Workflow wiring — connecting trigger nodes, tool nodes, and the AI Agent node — requires understanding n8n's visual builder, which has a meaningful learning curve. Advise users that the prompt is only ONE piece; they also need to configure triggers (Webhook, Chat, schedule), connect tool nodes, set up credentials, and configure error handling. (source, source, source)
- For production n8n agents, configure Error Workflows and enable 'Retry on Fail' on LLM-calling nodes. LLM API calls are inherently unreliable (rate limits, timeouts, model errors). Include in the system prompt instructions for graceful degradation — e.g., "If you cannot complete the task, respond with a structured error message rather than guessing." This pairs the prompt-level safeguard with n8n's infrastructure-level retry logic. (source, source)
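The graceful-degradation pairing can be sketched as an instruction plus a downstream validator (for example, in an n8n Code node). The error-envelope fields are an assumed convention for this example, not an n8n standard.

```python
import json

# Prompt-level safeguard: tell the model exactly what shape a failure takes.
ERROR_INSTRUCTION = (
    'If you cannot complete the task, respond ONLY with: '
    '{"status": "error", "reason": "<short explanation>"}'
)

def is_graceful_error(raw: str) -> bool:
    """True if the model output is the agreed-upon structured error envelope."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return data.get("status") == "error" and isinstance(data.get("reason"), str)

ok = is_graceful_error('{"status": "error", "reason": "missing CRM credentials"}')
```

Downstream workflow logic can then branch on the envelope (retry, alert a human, or stop) instead of parsing free-text failure prose.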
If the user asks about n8n AI agents, do this
- If the user asks "how do I build an AI agent in n8n?", guide them through these steps: (1) Choose a trigger node (Webhook, 'On Chat Message', schedule, or platform-specific trigger). (2) Add an AI Agent node and select the LLM backend (OpenAI, Claude, Gemini, etc.). (3) Write a system prompt using any of the frameworks in this skill — CLAR works well for constrained agent tasks, PIVO for tone-sensitive agents. (4) Connect tool nodes for the actions the agent should perform (send email, query database, update CRM). (5) Configure credentials for each connected service. (6) Enable human-in-the-loop approval for any destructive or external-facing tool. (7) Set up Error Workflows and Retry on Fail. (8) For persistent memory, add a Redis or Postgres memory sub-node. Remind the user that n8n self-hosted is free with unlimited workflows; n8n Cloud starts at ~$20/month as of 2026-04-17. (source, source, source, source, source)
Last updated: 2026-04-17T12:13:00.282Z