Module 8: The Pipeline
Session 9 of 10

The Permanent Boundaries

Every session in this module has discussed where AI adds value. This session discusses where it must not be used. These are not temporary limitations that will improve with better models. They are architectural boundaries that exist because certain decisions require things AI does not have: judgment, ethics, accountability, and lived experience.

Ignoring these boundaries does not just produce bad content. It produces content that is fundamentally dishonest, because it presents machine-generated judgments as human ones.

Five Categories of Exclusion

```mermaid
flowchart TD
    A["AI Exclusion Zones"] --> B["Topic Selection"]
    A --> C["Ethical Judgments"]
    A --> D["Self-Verification"]
    A --> E["Personal Experience"]
    A --> F["Publication Approval"]
    B --> B1["Requires audience knowledge"]
    C --> C1["Requires moral reasoning"]
    D --> D1["Cannot detect own hallucinations"]
    E --> E1["Has no lived experience"]
    F --> F1["Has no standards to apply"]
    style A fill:#222221,stroke:#c8a882,color:#ede9e3
    style B fill:#222221,stroke:#c47a5a,color:#ede9e3
    style C fill:#222221,stroke:#c47a5a,color:#ede9e3
    style D fill:#222221,stroke:#c47a5a,color:#ede9e3
    style E fill:#222221,stroke:#c47a5a,color:#ede9e3
    style F fill:#222221,stroke:#c47a5a,color:#ede9e3
    style B1 fill:#222221,stroke:#8a8478,color:#ede9e3
    style C1 fill:#222221,stroke:#8a8478,color:#ede9e3
    style D1 fill:#222221,stroke:#8a8478,color:#ede9e3
    style E1 fill:#222221,stroke:#8a8478,color:#ede9e3
    style F1 fill:#222221,stroke:#8a8478,color:#ede9e3
```

1. Topic Selection

AI does not know your audience. It does not know what they struggle with, what they have already read, what keeps them up at night, or what they will pay attention to. It can suggest topics based on search volume or trending keywords. Those suggestions are generic by definition, because they come from aggregate data, not from understanding your specific readers.

Choosing what to write about is a strategic decision. It determines what your body of work looks like in aggregate. Let AI make that decision, and your content catalog becomes indistinguishable from every other AI-suggested content plan in your niche.

2. Ethical Judgments

Should you publish this piece? Is this claim fair to the person or company being discussed? Does this recommendation account for the risk the reader might face if they follow it? Is this the right time to publish on this topic?

These are ethical questions. AI models are trained to be "helpful, harmless, and honest," which in practice means they avoid controversy and default to safe, generic answers. That is not ethics. Ethics requires weighing competing values and making a decision you can defend. AI cannot do that. It can only follow patterns in its training data.

| Decision | Why AI Fails | What You Do Instead |
| --- | --- | --- |
| Publishing timing | Cannot assess social or market context | Apply your judgment about audience and circumstances |
| Fairness of criticism | Defaults to vague balance rather than honest assessment | Decide what is fair based on evidence and your values |
| Risk to readers | Cannot model real-world consequences of advice | Consider what happens if readers follow the advice and it fails |
| Handling sensitive topics | Over-qualifies to the point of saying nothing | Address the topic directly with appropriate care |

3. Self-Verification

AI cannot reliably detect its own hallucinations. When a model generates a plausible-sounding statistic, it has no internal mechanism to verify whether that statistic is real. Asking the same model "Is this fact correct?" produces a confidence assessment based on the same training data that generated the fact, not an independent verification.

This is why the "AI checks AI" approach fails at the fact-checking stage. Use search APIs for automated fact-checking. Use human judgment for final verification. Never let the same model that generated a claim also verify it.
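The separation rule above can be made structural rather than a matter of discipline. This is a minimal sketch, not a real API: `Claim`, `verify_claim`, and the `verify_fn` callback are hypothetical names, standing in for whatever generation and fact-checking tools your pipeline actually uses.

```python
# Hypothetical sketch: enforce that the model which generated a claim
# can never be the one that verifies it. All names here are placeholders.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Claim:
    text: str
    source_model: str  # records which model produced this claim


def verify_claim(claim: Claim, verifier_name: str,
                 verify_fn: Callable[[str], bool]) -> bool:
    """Run an independent check, refusing self-verification outright."""
    if verifier_name == claim.source_model:
        # Asking the generator to grade itself is not verification.
        raise ValueError("self-verification is not allowed")
    return verify_fn(claim.text)
```

The point of the `source_model` field is that provenance travels with the claim, so the "never verify your own output" rule can be checked at every stage instead of trusted to memory.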

4. Representing Personal Experience

AI has no experiences. It has text about experiences. The difference matters enormously in content. "I spent three years building supply chain software and the hardest part was not the technology" is a statement only a person who built supply chain software can make. AI can generate sentences that look like personal experience. Every one of them is fabricated.

Your perspective bank from Module 6 exists for this reason. Personal experience is injected into content from your own documented observations, not generated by AI. If a piece of content includes personal anecdotes, those anecdotes must be yours.

5. Publication Approval

The final decision to publish is a human decision. It incorporates everything upstream: Is it accurate? Is it well-written? Is it on-brand? Is it timely? Does it meet your standards? AI cannot answer these questions because AI does not have standards. It has parameters. Standards require caring about the outcome, which requires being a person who has something at stake.

These five boundaries are not limitations of current models. They are limitations of the architecture. Better models will generate better prose, but they will not acquire judgment, ethics, accountability, or lived experience. These remain human territory permanently.

Documenting Your Boundaries

Write your "No AI" list. Post it where you work. When the pressure to ship faster tempts you to let AI make a decision it should not make, the list is your anchor. Standards only work if they hold when it is inconvenient to follow them.
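If your pipeline is partly automated, the list can also live in the pipeline itself. This is an illustrative sketch only: the task names and the `require_human` helper are hypothetical, and the exclusions shown are the five categories from this session.

```python
# Hypothetical sketch: a "No AI" list encoded as data, so an automated
# pipeline step can refuse excluded tasks instead of relying on memory.
NO_AI_TASKS = {
    "topic_selection": "requires audience knowledge",
    "ethical_judgment": "requires moral reasoning",
    "self_verification": "cannot detect own hallucinations",
    "personal_anecdotes": "has no lived experience",
    "publication_approval": "has no standards to apply",
}


def require_human(task: str) -> None:
    """Raise if a pipeline step tries to hand an excluded task to AI."""
    if task in NO_AI_TASKS:
        raise PermissionError(f"{task} is human-only: {NO_AI_TASKS[task]}")
```

Encoding the list as data means the boundary fails loudly when crossed, which is exactly what a posted list on the wall cannot do.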

Assignment

Write a "No AI" list for your pipeline. Include at least 5 specific tasks where AI is permanently excluded. For each item:

  1. Name the task
  2. Explain in one sentence why AI is excluded
  3. Describe what you do instead

Post this list in your workspace. If you work with others who use your pipeline, share it with them. A boundary that exists only in your head is a boundary that will be crossed.