Asking AI to Think with You
Tapping into AI’s reasoning is more powerful than just asking it to write
“AI is quite stupid.”
That was a common response when I told people how excited I was about AI just a few months ago. Mostly from intelligent folks who thought they had it figured out. They tried ChatGPT, asked a few questions, got a wrong answer, and that was that.
Those same people are now trying to catch up on learning to use AI because…AI has moved up quite a few notches since then.
PromptCraft is quite outdated.
When I wrote PromptCraft, I was teaching people to work around AI.
OK. That’s not fair, but in some ways, it was true. PromptCraft, just six months ago, was mostly about managing gaps: Keep inputs within the context window. Fact-check everything, because hallucination was a problem. Construct prompts that are specific enough that the output is useful, not generic. AI was capable. It organised your thoughts and drafted outlines from anything you gave it. It helped you find structure in tangled material. Research, though, was a bit of a gamble. Confidence rarely matched accuracy. You might trust the reasoning, but always, always verify...
Then it changed. Web search for almost all AI tools has become standard. Accuracy improved…a lot. The hallucination risk didn’t completely disappear, but it stopped being the main problem. Gemini and Perplexity started quoting sources. It was easier to verify.
I found myself prompting quite differently: less “engineering” and more conversation. Even discussion. Sometimes I ask Claude or Gemini to help me design a prompt after specifying the outcome I am looking for, and I even get suggestions on how to improve that outcome.
I also asked it to question my assumptions. Stress-test. Tear it down like you are my worst opponent with a PhD in Logic. Although still useful, the C.A.S.T. scaffolding I had built seemed less necessary.
AI has levelled up indeed.
It started in a conversation.
I was working through an idea and getting the usual polite (but reasonable) answers. They were helpful, but lacked depth. So I asked: “Be brutally honest with me. Tell me what might be wrong with this idea. I want deep thinking, not run-of-the-mill answers.”
The response was different. A little too real for me, initially. And a lot less reserved and polite.
I now know what the real constraint was: AI defaults to being agreeable rather than rigorous. That’s just a design choice, but I think it limits the real value AI can offer as a thinking companion.
And I kind of stumbled into it.
Understanding AI better
Here’s what I’ve learned.
AI, at its best, can be a thinking companion with no agenda. “Objective” is not quite the right word, since AI still carries biases, like any tool shaped by human input. I think a more accurate description is disinterest. AI (hopefully) has no ego to protect and no relationships to manage. When a friend reviews your plan, their feedback is filtered through wanting to be kind, old assumptions, and the value of the friendship. AI, when you’ve given it permission to be honest, carries no such weight.
This quality is genuinely rare. And can be extremely valuable. Most feedback we receive is tempered by social friction. AI’s feedback doesn’t have to be. But it will be, by design, until you tell it otherwise.
With that in mind, this is how it changed the way I work.
First, approach with an open mind. What if I am wrong? What if the whole plan really sucks? Maybe this idea is not so original after all.
Before asking for answers, I brainstorm (using AI) with a genuinely open mind. What if I’m approaching this the wrong way? What if my audience isn’t who I think they are? What would a completely unconventional take look like? When I begin the exploration out of genuine curiosity rather than cocksureness, I get ideas I wouldn’t have found on my own. Or wouldn’t have dared to consider.
Once you have a viable idea or hypothesis, the second stage is to apply pressure. Stress-test is the phrase I use with Claude and Gemini. Stress-test this idea, concept, or system until it fails. If you are my nemesis, how would you destroy me in this presentation? I always ask for the honest version. Be brutal. Tell me where the weaknesses are. What would a smart sceptic say? What am I not seeing? This is where AI can be really useful.
One creates ideas. The other sharpens them (or tears them down).
What this actually looks like
When I was developing the CoT Accumulation System, a position-trading methodology built on Commitments of Traders data, I was working from a set of assumptions. Some were quite solid. Others were wishful thinking. And a few were completely wrong. Ideas surfaced from my fear, greed, and impatience. From my experience and knowledge. Some were nuggets of gold. Others were just shite. The problem is that you can’t always tell which is which, especially when you are too close to it.
I used Claude to filter them. Challenge each of these assumptions. Show me the evidence, and where you think I am likely to be wrong. Be direct. So I back-tested against historical data. I ran walk-forward optimisation and Monte Carlo stress testing to assess whether the results were genuinely robust, or just an artefact of a specific sequence of market conditions. Claude helped frame what to test and, more importantly, what the results would mean.
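If you are curious what the Monte Carlo part of that stress test looks like in practice, here is a minimal sketch: shuffle the order of back-tested trades many times and see how bad the drawdown can get once the lucky sequence disappears. The trade returns below are made-up numbers purely for illustration, not figures from my system.

```python
import random

# Hypothetical per-trade returns (in %) from a back-test -- illustrative only.
trade_returns = [1.8, -0.9, 2.4, -1.2, 0.6, 3.1, -2.0, 1.1, -0.4, 2.7]

def monte_carlo_drawdown(returns, runs=10_000, seed=42):
    """Shuffle trade order many times and record the worst peak-to-trough
    drawdown in each run, to gauge how much the original result depends
    on one particular sequence of market conditions."""
    rng = random.Random(seed)
    worst = []
    for _ in range(runs):
        shuffled = returns[:]
        rng.shuffle(shuffled)
        equity = peak = max_dd = 0.0
        for r in shuffled:
            equity += r
            peak = max(peak, equity)
            max_dd = max(max_dd, peak - equity)
        worst.append(max_dd)
    worst.sort()
    # Median and 95th-percentile drawdown across all shuffled runs.
    return worst[runs // 2], worst[int(runs * 0.95)]

median_dd, p95_dd = monte_carlo_drawdown(trade_returns)
print(f"Median max drawdown: {median_dd:.1f}%, 95th percentile: {p95_dd:.1f}%")
```

The point isn’t the code itself; it’s that AI helped me decide this was worth running, and what a scary 95th-percentile number would actually mean for the system.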
This type of feedback is hard to find elsewhere. Anywhere. Colleagues mostly aim to please. A competitor may want you to fail. AI, however, operates without agenda or social friction.
Useful Prompts
These work in any context where assumptions, sometimes hidden, are driving a plan or project:
Filtering assumptions: “Here are my working assumptions about [X]. Separate them into: well-supported, worth testing, and likely wrong. For each, explain your reasoning. Be direct.”
The pre-mortem: “Imagine this plan failed completely in six months. Walk me through the most likely failure scenarios. What went wrong and why?”
Finding the unconventional angle: “Give me perspectives on this that I haven’t considered. Include at least one that would make me uncomfortable.”
The devil’s advocate: “Argue against this idea as strongly as you can. Make the best case for why this is wrong.”
Stress-testing your reasoning: “Here is my reasoning: [X]. Where is it weakest? What assumptions am I making that I may not be aware of?”
None of these requires technical knowledge. They require honesty about the fact that your thinking, like all thinking, has gaps.
Mirror, mirror, on the wall…
Remember that the point isn’t just better writing, but also sharper thinking. That’s what changes when you ask for honest, challenging feedback from AI. It is also what makes AI a truly valuable tool.
AI is like a mirror. A good one, if used properly. It can show you what actually is, not what you want to see.
Honest reflection requires something most of us instinctively resist: the willingness to consider that we might be wrong.
So, ask for it anyway. Be direct. Tell me where the holes are. Tell me the truth, even if it hurts. And look at it.
Switch off the nice default. Tell AI to stop being polite. Stop the flattery. Be truthful because I can take it. That instruction will change the quality of responses you get.
Remember that AI has no agenda. Nothing to protect, nothing to gain. No ego that gets in the way. It can offer something even a trusted advisor sometimes cannot: an honest opinion with no strings attached.
Ask for an honest answer.
Try it this week
Take something you’re working with. Struggling with, maybe. A decision, an idea, a plan that feels suspiciously right. Open with “what if.” Then, if the answers start feeling too agreeable, tell AI: Be brutally honest with me. I want to test this thinking, not receive encouragement.
The real opportunity with using AI is no longer ‘Can you write…’ but ‘Can you help me think through this?’ That shift unlocks the true value of AI.
For now.
Who knows what else will change in the next few months?