Why Your Smartest People Are Using AI the Worst Way Possible
The CEO of a mid-size technology firm told me something that stopped me mid-sentence.
"I never give it context upfront. I don't want to enter my biases."
He was describing his process with AI. When he needed strategic analysis, he'd open a chat, type a bare question, then have the AI ask him fifty to eighty clarifying questions before producing anything. The whole exchange could take hours.
He was proud of this method. It felt rigorous. Scientific, even, like controlling for variables in an experiment.
It was also the single biggest reason his AI outputs were mediocre.
The Sophistication Trap
Here's the pattern I see across sessions with experienced AI users: the more someone has experimented with these tools on their own, the more likely they've developed habits that feel smart but produce worse results.
The most common one is context withholding. The logic sounds reasonable: if I give AI my perspective upfront, I'll just get my own ideas reflected back. Better to let it work from a blank slate and see what it comes up with independently.
This misunderstands how language models work at a fundamental level. An LLM without context is like a consultant you hired but refused to brief. They'll produce something, it'll sound professional, and it'll be completely generic. You wouldn't fly a strategy firm out to your offices and then refuse to tell them what industry you're in. But that's exactly what context withholding does to AI.
The CEO I mentioned? When I showed him the same task done with rich context upfront (his role, his company's position, the specific strategic question, the constraints he was operating under), the output quality jumped immediately. His reaction: "That's... actually useful."
The fifty-question process he'd been running for months was a workaround for a problem that didn't need to exist.
Where the Instinct Comes From
This isn't stupidity. It's a perfectly rational behavior imported from the wrong domain.
When you use Google, you keep queries short and generic because the search engine matches keywords, not context. "Best CRM for small business" works better than a paragraph explaining your specific situation. We've been trained over two decades to strip context from our information requests.
Language models work the opposite way. They don't match keywords. They generate responses by predicting what should come next given everything you've told them. The richer the input, the more specific the output. A vague prompt gets a vague answer, not because the AI is limited, but because you've given it nothing to work with.
One executive I coached, a professor with thousands of research documents, had tested four different AI models on a niche academic question. All four produced surface-level responses. He concluded AI wasn't ready for serious intellectual work.
The issue wasn't the models. He was using free tiers with cold prompts, no context about his field, his theoretical framework, or what would constitute a good answer in his domain. When we set up a properly calibrated research environment with his documents loaded and a system prompt that defined expertise in his specific area, the same question that had produced generic results now surfaced connections and terminology he hadn't thought to search for.
His reaction went from "AI is limited" to "this changes how I do research."
Same tools. Radically different input. Completely different output.
The Three Levels
I demonstrate this in every session because seeing it is the only way to believe it. The progression is simple:
Level one: the cold prompt. Ask your question with no context. "Explain the competitive dynamics in enterprise software." You get a Wikipedia-level answer. Accurate, broad, useless for actual work.
Level two: the contextual prompt. Add who you are, what you're working on, why you're asking, what a good answer looks like for your specific purpose. The output sharpens dramatically. One executive told me at this stage: "That's impressive, and we haven't even started yet."
Level three: the expert prompt. Have the AI generate system instructions for an expert in your exact domain, then iterate on those instructions. Now you're not just asking a question. You've hired a specialist who understands your field, your vocabulary, your standards. The outputs at this level consistently surprise even skeptical users.
The gap between level one and level three isn't incremental. It's a different category of output entirely. And most people, including sophisticated users, are stuck at level one because nobody showed them the other levels exist.
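For readers who script their AI workflows, the three levels can be sketched as prompt templates. This is a minimal illustration, not a prescription: the function names, field labels, and message format below are assumptions modeled on common chat-style APIs, and you should adapt them to whatever tool you actually use.

```python
# Sketch of the three prompt levels as reusable templates.
# All names and the message structure are illustrative, not tied
# to any particular AI provider's API.

def cold_prompt(question: str) -> list[dict]:
    """Level one: the bare question, no context."""
    return [{"role": "user", "content": question}]

def contextual_prompt(question: str, who: str, project: str,
                      good_answer: str) -> list[dict]:
    """Level two: the same question, wrapped in working context."""
    context = (
        f"Who I am: {who}\n"
        f"What I'm working on: {project}\n"
        f"What a good answer looks like: {good_answer}\n\n"
        f"Question: {question}"
    )
    return [{"role": "user", "content": context}]

def expert_prompt(question: str, system_instructions: str,
                  who: str, project: str, good_answer: str) -> list[dict]:
    """Level three: a system prompt defining the specialist,
    plus the contextual question from level two."""
    system = {"role": "system", "content": system_instructions}
    return [system] + contextual_prompt(question, who, project, good_answer)

# Hypothetical example: the enterprise-software question at level three.
messages = expert_prompt(
    question="Explain the competitive dynamics in enterprise software.",
    system_instructions=(
        "You are a strategy analyst specializing in enterprise software "
        "markets. Answer with specifics, not survey prose."
    ),
    who="CEO of a mid-size B2B software firm",
    project="a board memo on competitive positioning",
    good_answer="three concrete dynamics, each tied to our segment",
)
```

The point of the template is structural: levels two and three don't change the question, they change everything surrounding it, which is exactly what the model conditions on.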
Why Beginners Sometimes Get Better Results Than Experts
There's an irony here that's worth sitting with.
A junior employee who watched one YouTube tutorial on prompting and now includes basic context in every request ("I'm a marketing coordinator preparing a competitive brief for our Q2 planning meeting, here's what I know so far...") is getting better AI outputs than the C-suite executive who's been "experimenting" for months but treats every interaction like a test of the AI's independent capabilities.
The beginner's advantage is naivety. They don't have a theory about how AI should work. They just describe their situation and ask for help, which happens to be exactly the right approach.
The expert's disadvantage is overthinking. They've built mental models (avoid bias, test independence, don't lead the witness) that make sense for managing humans but are counterproductive with language models. An AI doesn't have biases to protect or independence to test. It has a context window that performs better when you fill it with relevant information.
The Fix
This is where the article gets practical.
Stop withholding context. Tell the AI who you are, what you're working on, what constraints you're operating under, and what a good output looks like. You're not biasing it. You're calibrating it.
Stop testing it with trick questions. If you ask an AI a question you already know the answer to, just to see if it gets it right, you're doing QA testing, not working. Use it for the things you actually need help with. You'll learn its capabilities faster from real tasks than from quizzes.
Think hiring, not searching. The mental model shift that works best: you're not Googling something. You're briefing a very capable new hire on their first day. What would you tell a brilliant person who just joined your team and needed to help you with this specific task? Tell the AI that.
I use an analogy in sessions that tends to land: imagine you walk into your company's HR department and say "I need an expert." They're not going to find you one. They're going to ask you to describe what you need so they can write a job description. The context you provide is the job description. Without it, you'll get a generalist. With it, you get a specialist.
The Bigger Implication
The gap between naive AI usage and calibrated AI usage is enormous. It's not a 10-20% improvement. It's the difference between a tool you use for drafting emails and a tool that fundamentally changes how you approach complex work.
And the uncomfortable truth is that this gap correlates with seniority. The most senior people in an organization, the ones whose AI adoption matters most strategically, are often the ones with the most ingrained habits from twenty years of search engine use.
This is why self-directed AI learning fails for executives. You can watch every YouTube tutorial on prompting and still come away with the wrong mental model. What you need is someone who can sit across from you, watch how you interact with these tools, and say: "You're doing this thing that feels smart but is actually costing you."
That's the conversation most executives haven't had. And it's the one that changes everything.
Written by
Sacha Windisch
Sacha Windisch is the founder of Inference Associates, providing personalized AI coaching for executives and business leaders. 20+ years in technology transformation. MIT AI Product Design. Based in Montreal, working globally.
Ready to See How AI Applies to Your Work?
Every session starts with understanding your specific context. No sales pitch. Just an honest conversation about whether this makes sense for you.
Book Your Session