The Thinking Trail
Why outputs prove nothing anymore - and what to do about it.
I wrote about this on LinkedIn recently. The short version: a client dismissed my well-thought-out work because he couldn’t see the thinking behind it. The output was clean. The reasoning was invisible. He had no way to evaluate what was real and what was assembled with AI.
Neither would I, in his position.
That’s not a client problem. It’s a structural one.
AI systems produce outputs that look the same regardless of the reasoning behind them. A deck built on three weeks of challenged assumptions and rejected directions looks identical to one built in an afternoon on the first plausible path the model offered. Output quality is no longer a signal of reasoning quality. We’ve decoupled them - and we haven’t noticed yet.
The difference shows up later. When the strategy meets the market. When someone asks a question the deck can’t answer. When the assumptions break. The deck that was thought through survives. The one that was assembled doesn’t.
Doctors document differential diagnosis - not just what they concluded, but what they ruled out and why. Engineers document design rationale so the next person can modify the work without invalidating the thinking behind it. Finance has investment memoranda that separate the decision from the reasoning that produced it.
In software, this already has a name. Traces - the record of what happened, in what order, and why. Every serious system has them. Knowledge work still doesn’t.
We haven’t built this for AI-assisted knowledge work. And we’re moving fast enough that the cost is already here - we’re just not accounting for it.
I started calling this the Thinking Trail.
Not a product. Not a platform. A practitioner standard. Five elements. Concise enough to actually use.
The five elements of a Trail:
Context used - what inputs and constraints shaped the reasoning.
Alternatives considered - what directions were explored and why they were rejected.
Assumptions - what has to be true for the output to hold.
Points of challenge - where the reasoning was questioned, revised, or contradicted.
Confidence and known gaps - where the output is reliable, and where it isn’t.
This is not the model explaining itself. It’s you making the work auditable.
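If it helps to see the five elements as a concrete artifact, here is a minimal sketch in Python - a structured record that renders as a short appendix. All names and fields here are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ThinkingTrail:
    """A minimal sketch of a Trail as a structured record.
    Field names are illustrative, not a standard."""
    context_used: list[str] = field(default_factory=list)             # inputs and constraints
    alternatives_considered: list[str] = field(default_factory=list)  # rejected directions, with reasons
    assumptions: list[str] = field(default_factory=list)              # what must be true for the output to hold
    points_of_challenge: list[str] = field(default_factory=list)      # where reasoning was questioned or revised
    confidence_and_gaps: list[str] = field(default_factory=list)      # where the output is reliable, and where not

    def to_markdown(self) -> str:
        """Render the trail as a short appendix to attach to the deck."""
        sections = [
            ("Context used", self.context_used),
            ("Alternatives considered", self.alternatives_considered),
            ("Assumptions", self.assumptions),
            ("Points of challenge", self.points_of_challenge),
            ("Confidence and known gaps", self.confidence_and_gaps),
        ]
        lines = ["## Thinking Trail"]
        for title, items in sections:
            lines.append(f"### {title}")
            # An empty section is itself a signal - record that nothing was captured.
            lines.extend(f"- {item}" for item in (items or ["(none recorded)"]))
        return "\n".join(lines)
```

The point of the structure is not the tooling - a plain text file with five headings does the same job. What matters is that an empty section is visible, not invisible.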
If you're working with AI, you don't have to build this from scratch. The AI was in the room. You can ask it to generate a first version of the trail from your conversation - but treat it as a draft, not ground truth.
A Trail should accompany every deck, every strategy, every recommendation that came out of a prompt.
Trail, not certificate. The distinction matters. A certificate asks: Who approved this? A trail asks: Can I follow how you got here?
I don’t know if this makes sense. I just know that afternoon, something was missing.
The work was real. The thinking was real. I just couldn’t prove it.
About SG
I run Dobby Ads, an AI Creative Agency. I'm an over-thinker. This is where that overthinking goes. Connect with me on LinkedIn.

