@Dior100x
Tweet analysis: the claim that leading AI models excel at long-term planning, idea generation, and taste draws mixed reactions: 31.25% of replies support it, while 30.94% push back.
Three things the leading AI models are quite good at: long term planning, idea generation, and taste. Sorry, but it's true.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
models reliably generate ideas, plans and execution drafts, removing the need to start from nothing.
beyond speed, models now have opinions about what’s worthwhile, and that judgment separates good outputs from great ones.
each model shows consistent stylistic preferences (colors, formats, tones) because it’s trained on human taste.
the real moat is people who can specify, refine and steer AI with exceptional taste; “wisdom — what to ask” matters more than raw knowledge.
taste without feedback turns into vibes; planning without state becomes fan fiction. Context, iteration and measurement are required.
taste is increasingly commoditized while execution is the bottleneck; companies still paying top dollar for roles that AI automates are misallocated.
models can produce multi-step strategies and first-pass visions that outpace many human three‑year plans.
AI humanizes lifeless copy, drafts recipes, accelerates coding, validates ideas and saves real time and money.
the “codifier’s curse”: specialists who encode their knowledge can make that work replaceable unless they capture higher-level judgment.
current evals focus on correctness; meaningful product decisions require tests that pick the best among reasonable options, not just the right answer.
critics argue models just mirror the median of their training data and can’t originate the rule‑breaking choices that define human taste.
models are useful for brainstorming but many “novel” suggestions are echoes of existing work—likely present in the training corpus.
frontier models perform poorly over long horizons, struggle with memory and edge cases, and produce plans that fail once real constraints appear.
AI excels at producing massive, plausible output quickly, but needs human curation, domain expertise, RLHF, and careful prompting to be useful.
some respondents expect larger models and different training approaches to improve taste, planning, and memory over time.
many see the narrative that models provide a new competitive advantage in taste as VC spin—companies are overspending on capabilities that remain immature.
several replies emphasize that judgments of “good taste” depend on users, data, prompts, and cultural perspective—so model performance varies by audience and task.
Most popular replies, ranked by engagement
I’m alive!
"taste" from a model is just the median of its training data wearing a confidence costume. real taste is deviation. and deviation is exactly what gradient descent removes
spx6900 is way better at those 3 things. sorry, but it’s true.
long-term is the craziest. i’ve seen plenty of ceos with 3-year visions that were mostly vibes. some models are already better than that on a first pass
Thank god they are. We need more help doing ALL things better, not less help or porous help.
better than most humans by any measure marcy marc