FROM PROMPT TO PLUGIN
AI Communication in Plain Language
As I struggled to learn the value of Claude skills, workflows, and automation, I often found myself drowning in engineering terminology that made a simple concept feel like a foreign language.
Then, during a recent AI class that I was teaching to beginners, something gelled for me. A student asked a question I couldn’t answer in the language I’d been given. So I answered it in my own. And it worked.
It occurred to me, sitting there, that we never needed the engineering lingo in the first place. We can communicate with AI, direct it, and build workflows with it using words we already know. We don't need markdown files; we need Word docs. We don't need metadata; we need names and descriptions. We don't need to learn programming and coding terms to give good directions and get good responses from AI. We need clarity in a common language.
Most people learn to prompt. They get good at asking AI the right questions, framing the right requests, and giving enough context to get useful output. That’s the first layer. Call it what it is: providing direction. You tell it what you need. It responds. Simple.
But then the session ends. And the next time you open the tool, it remembers nothing. You start over. You re-explain everything. You provide the same direction again, slightly differently, hoping this time the output is closer to what you need.
That’s not a workflow. That’s a wheel spinning in the same rut.
The people who’ve moved past that frustration figured out something the documentation rarely says plainly: you can write your instructions down once, save them, and load them whenever you need them. Not code. Not a program. A plain text file. A Word document. Something a ten-year-old could read and understand.
That’s a common task. What the engineers call a skill. What it actually is: a set of instructions you write once so you never have to repeat yourself again.
Here’s what that looks like in practice.
Say you plan your family’s dinners every week. Same problem, every time. What do we eat? Who won’t eat what? What goes on the grocery list? You open AI, explain the family, the budget, the preferences — again — and hope the output is useful.
Or you write this down once:
COMMON TASK: Weekly Family Dinner Planning
Family: Two adults, two kids.
Jake won’t eat shellfish. Sara is a vegetarian.
Budget: $150 per week for five dinners.
Format: Give me a weekly menu and a grocery list together, organized by store section.
That’s it. That’s a common task file. Plain language. Saved in a folder on your desktop. Loaded when you need it. Every week, one instruction: load dinner planning. No re-explaining. No starting over. The wheel stops spinning.
Now compare that to what most documentation shows you:
## SKILL: Dinner Planning
**Constraints:**
- Jake: no shellfish
- Sara: vegetarian
**Parameters:**
- budget: $150
- nights: 5
**Output format:** menu + grocery list
Same information. Completely different language and format. One is written for a developer. One is written for a regular person. For the tasks regular people actually need, Claude handles both equally well. The format was never the point. The clarity of the instructions is the point.
I am not claiming plain language is superior to markdown for all use cases. Format carries weight in complex workflows. But for most of what regular people actually need, plain language is sufficient — and far more likely to get written and used.
This is the thing the engineering world forgot to tell you: the tool doesn’t require their language. It requires yours.
Once you understand common tasks, common workflows follow naturally.
A workflow is just common tasks running in sequence. You load the dinner planning task. You load the grocery budget task. You give one instruction — run the dinner workflow — and both run together. Menu, grocery list, budget breakdown. One prompt. Complete output.
That’s what the engineers call a plugin. What it actually is: a set of tasks you’ve already written, chained together so the whole thing runs at once.
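Written down, a workflow file is just as plain as a task file. The wording below is illustrative, not a required format; it simply names the tasks you have already saved and the order they run in:

COMMON WORKFLOW: Dinner Workflow
Step 1: Run the weekly family dinner planning task.
Step 2: Run the grocery budget task.
Output: Weekly menu, grocery list, and budget breakdown, all in one response.

One file, one instruction, and both saved tasks run together.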
The three tiers are this simple:
Providing direction is what you do in the moment.
Common tasks are what you write once and reuse.
Common workflows are common tasks running together.
You already do all three in your regular life. You give directions. You follow routines. You run processes. AI configuration is the same thing, written down.
I know this works because I tested it.
Over the course of a single working session, I built a content repurposing system using nothing but plain text files and plain English instructions. A file that captures my writing voice. A file that governs my brand. A file that knows the rules of LinkedIn. A file that knows how Substack Notes work. Workflows that take a finished essay and produce LinkedIn posts and Substack Notes without me managing every step.
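For a sense of scale, none of those files needs to be long. The details below are invented for illustration, not my actual file, but a platform-rules file can be as simple as this:

COMMON TASK: LinkedIn Post Rules
Length: Under 200 words.
Tone: Conversational, first person.
Structure: Strong first line, one idea per post.
Never use: Hashtag walls, jargon, engagement bait.

A few plain sentences, saved once, loaded every time the workflow runs.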
When the system produced output with too many em dashes, a telltale sign of AI-generated writing, I caught it, named the rule, and the system revised its own instruction file to enforce it going forward.
It didn’t just work. It learned. Not because the files were sophisticated. Because the instructions were clear.
I don't use markdown files, pound signs, asterisks, code blocks, or, God forbid, YAML to communicate with AI. I use folders, files, and plain English. And you can too.
The gap between people who use AI and people who configure it is not a skill gap. It’s a language gap.
We already have the language. We just forgot we were allowed to use it.
— Tim Moon


