
GPT-5.5 System Prompt Checklist: 12 Core Rules

A concise checklist for structuring system prompts to control GPT-5.5's personality, reasoning style, output format, and factual reliability.

posted on X




Example (text)

TL;DR GPT-5.5 Prompting Guide

> Set personality: how the assistant sounds.
> Set work style: how it asks, assumes, decides, and handles risk.
> Use preambles: give quick updates before long/tool-heavy tasks.
> Prompt outcomes: define the goal, not every step.
> Add stop rules: stop once enough evidence exists.
> Limit strict rules: use ALWAYS/NEVER only for true requirements.
> Control format: specify length, bullets, tables, or JSON.
> Require citations: cite factual claims when needed.
> Set search limits: search again only if key evidence is missing.
> Separate facts from copy: don't invent metrics, dates, or claims.
> Check work: run tests or inspect outputs when possible.
> Use phases: commentary for updates, final_answer for final response.
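The twelve rules above map naturally onto separate, named sections of a system prompt. As a minimal sketch (the section names and wording here are illustrative, not an official GPT-5.5 template), each concern gets its own block so that a failing behavior can be traced back to one instruction:

```python
# Illustrative system-prompt sections, one per checklist concern.
# Wording and section names are assumptions, not an official template.
SECTIONS = {
    "personality": "You are a concise, direct technical assistant.",
    "work_style": (
        "Ask at most one clarifying question; otherwise state your "
        "assumptions and proceed. Flag risky or irreversible actions."
    ),
    "stop_rule": "Stop gathering evidence once every claim is supported.",
    "format": "Answer in short bullet points; use a table for comparisons.",
    "citations": "Cite a source for every factual claim.",
    "facts": "NEVER invent metrics, dates, or quotes.",  # strict rule: a true requirement
}

def build_system_prompt(sections: dict) -> str:
    """Join named sections into one prompt, one heading per concern."""
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())

prompt = build_system_prompt(SECTIONS)
print(prompt.splitlines()[0])  # → ## personality
```

Keeping the blocks separate also makes the "limit strict rules" principle visible: only the `facts` section uses NEVER, because it is the one true requirement.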

Why it works

This checklist works because it separates concerns that are easy to conflate in a single monolithic prompt. By distinguishing personality from work style, format from citations, and facts from generated copy, each instruction targets a specific failure mode (tone drift, over-searching, hallucinated metrics, or runaway verbosity) rather than trying to fix everything with one vague instruction.

The stop-rule and search-limit principles are particularly powerful. LLMs tend to keep gathering evidence or generating content past the point of usefulness. Explicitly encoding a termination condition ("stop once enough evidence exists") prevents over-completion and reduces cost, while "search again only if key evidence is missing" guards against redundant tool calls in agentic pipelines.

Using phases (commentary vs. final_answer) gives the model a structured output contract, making it easier for downstream consumers, whether humans or code, to parse intermediate reasoning from the actual deliverable. This mirrors chain-of-thought separation and reduces the chance that a stray reasoning step is mistaken for the final result.
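On the consumer side, the phase contract can be enforced with a small splitter. A hedged sketch, assuming each model message carries a `channel` tag named after the checklist's two phases (the message shape itself is an assumption, not a documented GPT-5.5 format):

```python
# Separate "commentary" progress updates from the "final_answer" deliverable.
# The channel names follow the checklist; the dict shape is an assumption.
def split_phases(messages):
    """Return (commentary_list, final_answer) from channel-tagged messages."""
    commentary = [m["text"] for m in messages if m["channel"] == "commentary"]
    finals = [m["text"] for m in messages if m["channel"] == "final_answer"]
    if not finals:
        raise ValueError("response contained no final_answer message")
    return commentary, finals[-1]  # last final_answer wins

msgs = [
    {"channel": "commentary", "text": "Searching the docs..."},
    {"channel": "commentary", "text": "Found two relevant sections."},
    {"channel": "final_answer", "text": "Use the v2 endpoint."},
]
updates, answer = split_phases(msgs)
```

Raising on a missing final_answer, rather than falling back to the last commentary message, is exactly the point of the contract: a stray reasoning step is never silently promoted to the deliverable.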

When to use

  • Writing or refining a system prompt for a GPT-5.5 assistant or agent deployment
  • Debugging an existing prompt where the model is hallucinating facts, over-searching, or producing inconsistent formats
  • Building agentic workflows where you need clear handoffs between reasoning steps and final outputs
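For the second use case (a model that over-searches), the stop-rule and search-limit principles can be encoded as an explicit guard in the agent loop. A minimal sketch, where the function name, evidence shape, and search budget are all assumptions for illustration:

```python
# Search again only when a required piece of evidence is still missing,
# and never beyond a fixed budget. Names and shapes are illustrative.
def should_search_again(evidence: dict, required: set,
                        searches_done: int, max_searches: int = 3) -> bool:
    missing = required - {k for k, v in evidence.items() if v}
    return bool(missing) and searches_done < max_searches

# One key piece of evidence is still missing and budget remains: search.
ev = {"pricing": "from docs page", "release_date": None}
assert should_search_again(ev, {"pricing", "release_date"}, searches_done=1)
# Everything required is in hand: stop, per the stop rule.
assert not should_search_again(ev, {"pricing"}, searches_done=1)
```

The guard makes the termination condition inspectable: the agent stops either because the evidence set is complete or because the budget is spent, never because the model happened to feel done.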

This article was AI-generated from real-time signals discovered by PureFeed.
