A Step-by-Step Guide to Teaching AI Systems How You Actually Think
This guide walks you through the process of creating a personalized operating manual for AI systems, based on the method I used to build "How to Work With John Lovett."
Most people treat AI like a search engine: ask a question, get an answer, start over.
But AI systems have memory. They accumulate context. They learn patterns in how you think and work.
The problem: They learn implicitly, without structure or validation.
The solution: Teach them explicitly how you think, what you value, and how they should fail.
Start with the simplest possible prompt in your primary AI tool (ChatGPT, Claude, Gemini, etc.):
Who am I and what do you know about me? Give me all the gory details.
What you're looking for:
Why this matters: The AI will surprise you. It knows things about how you work that you've never explicitly stated. This is your baseline.
Don't trust a single AI system. Run the same prompt in at least two different models:
Example:
What you're doing:
This isn't about speed. You're thinking through what each model tells you, identifying when it's right and when it's wrong, then refining through conversation.
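If you want to script this battle step rather than paste prompts by hand, here is a minimal Python sketch. The model callables are stand-ins you would wire to real SDK calls (ChatGPT, Claude, etc.); every name and the overall structure here are illustrative assumptions, not a required setup:

```python
from typing import Callable, Dict


def battle(prompt: str, models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Send the same prompt to every model and collect the answers side by side."""
    return {name: ask(prompt) for name, ask in models.items()}


def disagreements(answers: Dict[str, str]) -> bool:
    """True when the models' answers are not identical.

    Divergence is the interesting case: it marks exactly where you should
    compare, question, and refine through conversation.
    """
    return len(set(answers.values())) > 1
```

In practice you would pass something like `{"chatgpt": ask_openai, "claude": ask_anthropic}`, where each function wraps that provider's SDK.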
Once you have insights from multiple models, ask them to convert their understanding into a structured format you can reuse:
Can you give this (and more) to me in markdown format that I can use to train other AI systems and have them learn as much as you know about me?
What you get:
Pro tip: Save this as a baseline. You'll iterate on it.
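One simple way to keep that baseline as you iterate is to version it on disk instead of overwriting it. This sketch does that; the folder name and filename pattern are assumptions you can change:

```python
from pathlib import Path


def save_baseline(markdown: str, directory: str = "profiles") -> Path:
    """Write the profile as a new numbered version so earlier baselines survive.

    Each call creates profile-v1.md, profile-v2.md, ... letting you diff
    iterations later instead of losing your starting point.
    """
    folder = Path(directory)
    folder.mkdir(exist_ok=True)
    version = len(list(folder.glob("profile-v*.md"))) + 1
    path = folder / f"profile-v{version}.md"
    path.write_text(markdown, encoding="utf-8")
    return path
```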
This is where most people stop. Don't.
The structured profile tells AI who you are. Now teach it how to work with you.
Create a second document: "How to Work With [Your Name]"
Not what sounds good, but what you actually prioritize in practice.
Examples:
Where do AI systems consistently get it wrong when working with you?
Examples:
How should AI prove it's not hallucinating or making things up?
Examples:
What should AI never say or assume in your field?
Examples from analytics:
What language do you hate? Be explicit.
Examples:
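The sections above can be scaffolded programmatically so you always start from the same structure. This sketch generates an empty manual skeleton; the section titles paraphrase the questions above and are assumptions, not a fixed specification:

```python
# Section headings, one per question the manual should answer.
MANUAL_SECTIONS = [
    "What I Value",
    "Where AI Gets It Wrong With Me",
    "How to Prove You're Not Hallucinating",
    "Never Say or Assume",
    "Language I Hate",
]


def manual_skeleton(name: str) -> str:
    """Assemble an empty operating manual with one heading per section."""
    lines = [f"# How to Work With {name}", ""]
    for section in MANUAL_SECTIONS:
        lines += [f"## {section}", "", "- ...", ""]
    return "\n".join(lines)
```

Fill in the bullets under each heading with your own examples, then feed the result to the battle-testing step.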
Once you've drafted your operating manual, test it:
Process:
I've created an operating manual for how AI should work with me. Read it and tell me:
1. What's unclear or contradictory
2. What's missing that would be valuable
3. Where I should be more specific
4. What would make this more actionable
Synthesize the feedback: Take the best suggestions from each model and refine your manual.
Your operating manual means nothing until you use it.
Test protocol:
Iteration is the goal. Your first version will be wrong. That's fine.
When you're done, you should have:
Purpose: Provides context so AI doesn't start from zero
Purpose: Aligns AI behavior with your actual needs
Purpose: Tactical reference for specific workflows
One AI system will miss things. Always battle at least two.
Your operating manual should reflect how you actually work, not how you wish you worked.
"I value accuracy" is useless. "I value methodology over conclusions" is actionable.
If you don't test your manual in practice, it's just creative writing.
Your operating manual is a living document. Update it as you learn.
Reason 1: Accumulated Context Has Value
Every conversation you've had with an AI system has taught it something about you. Capture that before it's lost.
Reason 2: Competitive Outputs Reduce Error
No single AI is correct. Multiple models stress-testing each other surface blind spots.
Reason 3: Explicit Beats Implicit
AI will learn your patterns anyway. Teaching it explicitly gives you control.
Reason 4: Validation Prevents Hallucination
AI systems will confidently make things up. Your operating manual defines how they should prove they're not.
Reason 5: Systems Beat Tools
The "best" AI tool changes every 6 months. A good system works regardless of which tool you're using.
Don't just battle models for validation; design workflows that use their different strengths:
Example workflow:
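One way to sketch such a workflow in code: one model drafts, a second model critiques the draft against your operating manual, and you revise with whichever model you trust for synthesis. Which real models fill those roles is up to you, so both are passed in as callables; the prompt wording is an assumption:

```python
from typing import Callable, Dict


def draft_and_critique(task: str,
                       drafter: Callable[[str], str],
                       critic: Callable[[str], str]) -> Dict[str, str]:
    """Two-stage workflow: draft with one model, critique with another."""
    draft = drafter(task)
    critique = critic(
        f"Critique this draft against my operating manual:\n{draft}"
    )
    return {"draft": draft, "critique": critique}
```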
Your operating manual isn't just for new conversations; it's also for switching tools.
When to use:
If your team works with AI, create team-level operating manuals:
Team manual includes:
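Whatever your team decides to include, assembling the combined document can be mechanical. This sketch concatenates individual manuals under a shared-conventions section; the structure and headings are assumptions you can reshape:

```python
from typing import Dict


def team_manual(team_name: str, member_manuals: Dict[str, str]) -> str:
    """Combine individual operating manuals into one team document.

    Shared conventions come first (left for the team to write), followed by
    each member's manual under their own heading.
    """
    parts = [f"# How to Work With {team_name}", "",
             "## Shared Conventions", "", "- ..."]
    for member, manual in member_manuals.items():
        parts += ["", f"## {member}", "", manual]
    return "\n".join(parts)
```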
You'll know your operating manual is working when:
Most people optimize for fast answers.
You should optimize for sound reasoning.
Your operating manual should make AI systems:
Speed is not the goal. Correctness is.
Who am I and what do you know about me? Give me all the gory details.
Can you give this to me in markdown format that I can use to train other AI systems?
I've created an operating manual for working with me. What's missing? What's unclear? Where should I be more specific?
[Paste your operating manual]
Now help me with [real task]. Follow the operating manual strictly and tell me when you're not sure how to apply it.
This guide is based on my process of building "How to Work With John Lovett", an operating manual created by battling ChatGPT and Claude with identical prompts, then synthesizing their best outputs.
The full manual includes:
If you want to see the full example, reach out. I'm happy to share it as a template.
Building an AI operating manual isn't about controlling AI systems.
It's about building systems that don't depend on any single AI being correct.
It's about capturing what you've already taught AI implicitly and making it explicit.
It's about validation through competitive outputs, not trust in a single source.
And it's about recognizing that these systems know more about how you work than most people you've worked with, so you might as well teach them intentionally.