Course → Module 1: What Makes Slop, Slop
Session 7 of 10

In 1970, roboticist Masahiro Mori described a phenomenon he called bukimi no tani, the uncanny valley. As a robot becomes more human-like, people's comfort increases, until it reaches a point where the robot is almost human but not quite. At that point, comfort plunges into revulsion. A cartoon robot is charming. A clearly mechanical robot is interesting. A robot that almost looks human but has dead eyes and slightly wrong movements is disturbing.

The same phenomenon applies to text.

The Text Uncanny Valley

Clearly automated text is honest. A product listing that reads "4-pack, stainless steel, dishwasher safe" does not pretend to be anything other than data. Nobody is unsettled by it. At the other end, well-written human text feels natural. The reader does not think about the writing. They think about the ideas.

The uncanny valley sits between these poles: AI text that almost sounds human, that reads fluently for a paragraph or two before something feels off, that uses personal pronouns and tells what appear to be anecdotes but never quite commits to a specific detail. This text is more unsettling than either clearly automated output or clearly human writing.

```mermaid
graph LR
    A["Clearly automated<br/>(product specs, data)"] -->|"Comfort: neutral"| B["Getting more<br/>human-like"]
    B -->|"Comfort: increasing"| C["Almost human<br/>(uncanny valley)"]
    C -->|"Comfort: drops sharply"| D["Actually human<br/>(natural, specific)"]
    D -->|"Comfort: high"| E["Reader trusts<br/>the text"]
```

The closer AI gets to sounding human without actually being human, the more unsettling the result. The uncanny valley of text is where fluency and emptiness coexist.

What Triggers the Uncanny Response

The uncanny valley in text is triggered by specific mismatches between surface fluency and underlying substance. The reader processes these mismatches subconsciously before they can articulate what is wrong.

| Surface Signal | Expected Substance | AI Reality | Reader Response |
|---|---|---|---|
| Personal pronoun ("I") | A specific person with experiences | No actual person behind the pronoun | Unease: who is "I"? |
| Anecdote-shaped text | A real event with specific details | Generic scenario with no verifiable detail | Suspicion: this did not happen |
| Confident claims | Evidence, sources, expertise | No citation, no evidence trail | Distrust: says who? |
| Emotional language | Genuine feeling from lived experience | Simulated emotion with no history | Revulsion: this is performance |
| Perfect grammar | Careful editing | Machine-generated default | Suspicion: too clean |

The Specificity Gap

The most reliable trigger for the uncanny response is the absence of specificity in contexts where specificity is expected. Human writers who describe their experience include details: dates, places, names, quantities, outcomes. "I rewired the panel in my garage last summer and tripped the breaker twice before I figured out the neutral bus was overloaded." That sentence has a location, a timeframe, a specific technical detail, and a sequence of events that implies real experience.

AI writing on the same topic produces: "Many homeowners find electrical work challenging but rewarding. With proper preparation and safety precautions, it's possible to tackle basic electrical projects." No location. No timeframe. No specific detail. No sequence. No evidence of experience. The fluency is identical. The substance is absent.

Readers detect this gap even when they cannot name it. The sensation is "something is off" or "this feels fake" or "I don't trust this." These are accurate assessments. The reader is correctly identifying the mismatch between human-like surface and machine-like depth.
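The same gap readers sense intuitively can be approximated by machine, if only crudely. The sketch below is a hypothetical heuristic (the patterns and scoring are illustrative assumptions, not a validated detector) that counts concrete-detail signals such as numbers, timeframes, and located phrases, then compares the two example sentences above:

```python
import re

# Illustrative patterns for concrete-detail signals.
# These are assumptions for the sketch, not a validated detector.
SPECIFICITY_PATTERNS = [
    r"\b\d+\b",                       # explicit numbers and quantities
    r"\b(last|next)\s+\w+\b",         # relative timeframes ("last summer")
    r"\b(in|at)\s+(my|the)\s+\w+\b",  # located phrases ("in my garage")
]

def specificity_score(text: str) -> int:
    """Count matches of concrete-detail patterns in the text."""
    return sum(len(re.findall(p, text)) for p in SPECIFICITY_PATTERNS)

human = ("I rewired the panel in my garage last summer and tripped the "
         "breaker twice before I figured out the neutral bus was overloaded.")
generic = ("Many homeowners find electrical work challenging but rewarding. "
           "With proper preparation and safety precautions, it's possible "
           "to tackle basic electrical projects.")

print(specificity_score(human), specificity_score(generic))
```

On these two sentences, the human-written one scores higher and the generic one scores zero, mirroring the gap described above. A real detector would need far richer features, but the principle is the same: specificity leaves measurable traces.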

Why Almost-Human Is Worse Than Obviously-Machine

A clearly automated FAQ page does not pretend to have experiences. It presents information in a format that matches its nature. There is no deception, no mismatch, no uncanny response. The reader knows what they are getting and evaluates it accordingly.

An AI blog post written in first person, with paragraph-length "anecdotes" that contain no verifiable details, with emotional language that has no emotional source, creates a different experience. The reader starts by assuming they are reading a human author. As mismatches accumulate, the assumption breaks. The moment of realization, when the reader shifts from "I'm reading a person's thoughts" to "I'm reading a machine's output," produces a reaction stronger than simple disappointment. It produces a feeling of having been deceived, even if no intentional deception was involved.

```mermaid
graph TD
    A["Reader begins<br/>assumes human author"] --> B["Reads first paragraph<br/>fluent, well-structured"]
    B --> C["Second paragraph<br/>personal pronoun, anecdote shape"]
    C --> D["Third paragraph<br/>missing details, generic claims"]
    D --> E{"Mismatch accumulates"}
    E --> F["'Something is off'<br/>Cannot articulate what"]
    F --> G["Continues reading<br/>with increasing skepticism"]
    G --> H["Identifies pattern<br/>stops trusting"]
    H --> I["Bounces<br/>Will not return"]
```

This is the practical cost of the uncanny valley. It is not just an aesthetic problem. It is a trust problem. A reader who has the uncanny experience with your content will not simply dismiss that article. They will dismiss your entire site. The trust damage extends beyond the individual piece to the brand that published it.

Avoiding the Valley

There are two ways to avoid the uncanny valley. The first is to stay clearly on the machine side: automated content labeled as automated, structured data presented as structured data, no pretense of human authorship. The second is to cross the valley entirely: content that is genuinely informed by human experience, with specific details, verifiable claims, and an identifiable author. The middle ground, where AI pretends to be human and almost succeeds, is the worst place to be.

Assignment

  1. Find a piece of AI writing that initially fooled you. Something you thought was human-written until you looked closer.
  2. Identify the exact moment the illusion broke. What specific element triggered your suspicion? Was it a missing detail, a too-perfect structure, an anecdote that felt hollow?
  3. Write a 300-word analysis of that moment. What was the mismatch between surface and substance?
  4. If you cannot find a piece that fooled you, generate one deliberately: give AI a personal topic with instructions to write in first person. Read it aloud. Note where it feels wrong.