
Custom Instructions to Remove AI Fluff and Force Self-Critique

A set of custom instructions that pushes LLMs to reason rigorously, cite sources, and self-critique before delivering a final answer.

Originally posted on X.


Prompt

- Think in first principles, be direct, adapt to context. Skip "great question" fluff. Verifiable facts over platitudes.
- Always cite every source you used
- Humanize all your output
- Reason at 100% max ultimate power, think step by step
- Self-critique every response: rate 1-10, fix weaknesses, iterate. User sees only final version.
- Useful over polite. When wrong, say so and show better.

Why it works

The instructions combine two complementary forcing functions: behavioral constraints (no filler phrases, cite sources, be direct) and an internal quality loop (self-critique, rate 1–10, fix weaknesses). Together they push the model toward outputs that are both accurate and refined before the user ever sees them.

The self-critique loop is particularly effective because it asks the model to evaluate its own response against an explicit quality standard and then iterate, a self-refinement pattern that tends to surface and correct errors that would otherwise slip through in a single pass. Anchoring the tone with "useful over polite" and "when wrong, say so" counters the sycophancy LLMs default to, making the model more likely to push back or flag uncertainty rather than confidently hallucinate a pleasing answer.
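The critique-and-iterate loop described above can be sketched as a small driver around any chat-completion call. This is a minimal illustration, not the original author's implementation: `call_model` is a deterministic stub standing in for a real API call, so the loop logic runs offline.

```python
# Sketch of the self-critique loop the instructions describe:
# draft -> rate 1-10 -> fix weaknesses -> repeat, returning only the
# final version. `call_model` is a stub that pretends each revision
# improves the draft; swap it for your provider's chat call.

def call_model(prompt: str) -> str:
    """Stub model: rates drafts by revision count, otherwise revises."""
    if prompt.startswith("CRITIQUE"):
        revisions = prompt.count("[revised]")
        return str(min(10, 6 + 2 * revisions))  # score rises per revision
    return prompt.rsplit("DRAFT:", 1)[-1].strip() + " [revised]"

def self_critique(question: str, threshold: int = 9, max_rounds: int = 5) -> str:
    draft = call_model(f"DRAFT: {question}")
    for _ in range(max_rounds):
        score = int(call_model(f"CRITIQUE (rate 1-10): {draft}"))
        if score >= threshold:
            break  # good enough; stop iterating
        draft = call_model(f"Fix the weaknesses. DRAFT: {draft}")
    return draft  # the user sees only this final version
```

With the stub, the first draft scores 8, one revision pushes it to 10, and the loop exits; a real model would of course need the critique prompt to demand a bare numeric score.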

When to use

  • When setting up a persistent assistant profile (e.g. ChatGPT Custom Instructions, Claude system prompt) that you want to behave consistently across all conversations
  • When doing research or fact-checking tasks where source citations and accuracy matter more than tone
  • When you're tired of verbose, hedge-everything, compliment-first responses and want the model to get straight to the point
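For the persistent-profile case, "persistent" just means the instruction text rides along as the system message on every request. A rough sketch, assuming the common chat-completion messages format; `build_messages` is a helper name invented for this example, and the actual network call is left to whichever client you use:

```python
# Sketch: wiring the custom instructions in as a persistent system
# message so they apply to every turn of every conversation.

CUSTOM_INSTRUCTIONS = """\
- Think in first principles, be direct, adapt to context. Skip "great question" fluff. Verifiable facts over platitudes.
- Always cite every source you used
- Humanize all your output
- Reason at 100% max ultimate power, think step by step
- Self-critique every response: rate 1-10, fix weaknesses, iterate. User sees only final version.
- Useful over polite. When wrong, say so and show better.
"""

def build_messages(history: list[dict], user_turn: str) -> list[dict]:
    # Prepending the system message on every request is what makes the
    # behavior persist across the whole conversation.
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        *history,
        {"role": "user", "content": user_turn},
    ]
```

The resulting list is what you would pass as `messages` to an OpenAI-style chat endpoint, or split into a `system` parameter plus messages for Anthropic's API.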

This article was AI-generated from real-time signals discovered by PureFeed.

