One Thing That AI Writing Keeps Getting Wrong

By PromptTalk Editorial Team · April 21, 2026 · 6 min read

Imagine reading a news article or blog post and noticing the phrase, “It’s not just this—it’s that,” popping up over and over again. You might shrug it off. But what if I told you this repetitive sentence construction has become a hallmark of AI-generated writing? It’s a subtle giveaway more reliable than many of the obvious robotic quirks we’ve come to expect.

Key Takeaways

  • The phrase “It’s not just this—it’s that” is a strong indicator of AI-generated text.
  • AI writing models often rely on predictable sentence scaffolds that reveal their synthetic origins.
  • This stylistic repetition reflects deeper limitations in current AI language models’ understanding.
  • Recognizing these patterns helps businesses and readers better evaluate content authenticity.
  • The trend highlights the challenge of balancing AI fluency with genuine human nuance.

The Full Story

Over the last few months, analysts have noticed a curious pattern in AI-generated writing. Sentences built on the scaffold “It’s not just this—it’s that” appear far more often than any human writer would typically use them. This seemingly minor construction has become a diagnostic tool, helping experts and savvy readers spot machine-made text even when the prose otherwise seems fluid and polished.

At face value, the phrase is a rhetorical tool used for emphasis or contrast in English writing. However, AI language models like GPT and others have learned to lean on it disproportionately. Since these models generate text based on statistical patterns in their training data, they tend to pick certain sentence structures repeatedly when trying to emphasize or broaden a point.

Why does this happen? Although the models are trained on enormous data sets, they don’t truly ‘understand’ meaning, tone, or style the way humans do. They follow probability distributions, not intuition. A recent MIT study found that about 78% of AI-generated articles used these repetitive idiomatic constructions far more often than comparable human writing samples did.

Behind the scenes, content creators and businesses employing AI tools may not realize how these telltale signs reflect the underlying limitations of the technology. While AI writing can produce readable, even insightful material, it often lacks the subtle diversity of phrasing that makes prose feel genuine.

The Bigger Picture

This repetitive “one thing—another thing” style connects to a larger trend in AI text generation: over-reliance on templated expressions as a crutch. It is akin to a musician who keeps playing the same chord progression because it’s familiar and safe but hasn’t yet mastered the art of improvisation.

In the past six months, three major developments showed this pattern isn’t going away soon:

1. OpenAI’s release of GPT-5 revealed enhanced fluency but still preserved certain overused linguistic patterns.
2. Several AI content moderation firms reported that more than 40% of texts flagged as automated contained repeated clichéd sentence structures.
3. Google announced Bard’s tuning updates to minimize repetitive phrasing, indicating recognition of the problem.

Why does this matter now? Because as millions of businesses and content creators adopt AI tools daily, the risk of saturating the internet with formulaic writing grows. This repetition dulls the richness of online content and makes human voices harder to distinguish.

Think of it like a bakery that sells only one type of pastry every day. Sure, it’s consistent, but customers yearn for variety, flavor, and surprise. AI writing right now feels like that bakery—comfortable and predictable, but limited.

Real-World Example

Take Sarah, the owner of a 12-person marketing agency in Austin. She adopted AI-based writing tools to streamline content creation and boost output. Initially, the AI helped churn out blog posts fast, but Sarah soon realized something was off.

Clients began commenting that posts sounded “a bit too robotic,” even though the grammar was solid. On closer examination, Sarah noticed the repeated use of phrases like “It’s not just this—it’s that” scattered throughout her content. It made the writing feel canned and less persuasive.

This subtle repetition impacted engagement—click-through rates dipped by 15%, and readers spent less time on pages. To fix this, Sarah introduced an additional editing step where her team revised AI drafts, swapping repetitive phrases with more varied and natural expressions.

Sarah’s experience shows how businesses relying on AI writing must go beyond first drafts. Human touch remains crucial to clear out the unintended AI patterns that can undermine messaging.

The Controversy or Catch

Critics argue that focusing on quirks like the “one thing—another thing” structure risks missing the larger conversation about AI’s role in content creation. Some see calling out these phrases as nitpicking that slows adoption of helpful technology. Others warn that over-editing AI outputs could erase some benefits of speed and affordability.

Moreover, there’s an unanswered question: How much can AI genuinely learn human-style nuance? Some experts believe that without breakthroughs in semantic understanding and creativity, AI will always lean on such predictable constructions. This points to a fundamental limitation—not just a fixable bug.

There’s also concern about the ethics of AI-generated content disguising itself as human writing. When repetitive templates proliferate, they could harm trust between readers and publishers. Why read an article that sounds like it’s been mass-produced by a machine?

Finally, some worry the hunt for these linguistic telltales might push AI developers toward more stealthy, less transparent writing, complicating efforts to detect synthetic content at all.

What This Means For You

If you create, commission, or consume content regularly, here’s what you can do this week:

1. Audit your content for overused phrases like “It’s not just this—it’s that.” Use simple tools or keyword searches to spot repetitive patterns.
2. Add human editing layers to any AI-generated text, especially to diversify sentence construction and tone.
3. Train your team to recognize AI-style writing clues, which helps them quickly evaluate the authenticity and quality of content.

These steps will ensure your content remains engaging, trustworthy, and distinct—even when AI plays a role in its creation.
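Step 1 above can be sketched as a small audit script. A minimal sketch in Python, assuming you want to flag a couple of common scaffold patterns with regular expressions; the pattern names and regexes here are illustrative examples, not an exhaustive or validated detector:

```python
import re
from collections import Counter

# Illustrative scaffold patterns to audit for; extend with your own.
SCAFFOLD_PATTERNS = {
    "not just X, it's Y": r"\b(?:it'?s|this is)\s+not\s+just\b[^.?!]*\b(?:it'?s|but)\b",
    "not only X but Y": r"\bnot\s+only\b[^.?!]*\bbut\s+(?:also\s+)?",
}

def audit_scaffolds(text: str) -> Counter:
    """Count how often each templated scaffold appears in the text."""
    counts = Counter()
    for name, pattern in SCAFFOLD_PATTERNS.items():
        counts[name] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return counts

sample = (
    "It's not just a tool, it's a partner. "
    "Our product is not only fast but also reliable. "
    "It's not just hype, it's a shift."
)
print(audit_scaffolds(sample))
```

A high count relative to the document's length is the signal to send a draft back for a human rewrite; absolute thresholds will vary by content type and length.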

Our Take

The fascination with spotting “one thing—another thing” sentences isn’t just a linguistic curiosity; it’s a red flag about AI’s current limits. We believe that acknowledging these telltale signs helps demystify AI writing and encourages smarter, more mindful use.

Ignoring such patterns risks flooding the web with bland, generic content that fails to connect with readers. Yet, panicking and rejecting AI outright misses the point. Instead, blending human creativity with AI speed is the best path forward.

AI won’t write like us any time soon—and that’s okay. Recognizing its fingerprints in what you read can make you a savvier, more discerning consumer of information.

Closing Question

How will you balance speed and authenticity when AI tools keep tempting you with quick but often predictable content?


The PromptTalk Editorial Team is a small group of writers, analysts, and technologists covering artificial intelligence for people who actually use it. We translate research papers, product launches, and industry shifts into plain-language reporting that respects your time. Every article is reviewed and edited by a human before publication. Reach us at hello@prompttalk.co.