I Wish My AI Could Talk to Your AI
What if the trail were already being recorded?
I recently wrote about something I called the Thinking Trail - a way to document the reasoning behind AI-assisted work. Five elements: context, alternatives, assumptions, challenges, gaps.
It’s useful. I still do it.
But it has a problem. People can fake it.
A well-written trail looks the same whether the thinking was real or performed. It’s self-reported. And anything self-reported can be gamed.
So the question that kept nagging me: what if the trail didn’t depend on the person at all?
What if AI at work was designed to keep track?
In software, this problem was solved a long time ago. It even has a name: traces, the system-level record of what actually happened.
Nobody argues about who wrote what code. The system knows. Git tracks every commit, every change, every decision. Server logs record what happened and when. The trace is automatic. Much harder to game. You can’t easily fake a history of wrestling with a problem if you didn’t.
Knowledge work has no equivalent. Yet.
But think about where AI is going. It’s moving from a tool you visit to the environment you work in. When that happens - when the work itself happens inside AI - the system will keep the trace.
Not a journal you write about your thinking. Not a performance review someone fills out once a quarter. Not what you say happened. What actually happened - an automatic record of how you actually worked.
What questions did you ask? Did you challenge the first answer or accept it? Did you bring your own data, your own context - or just say “make me a strategy”? How many times did you push back? Where did you override the AI because something felt wrong?
That trace will look completely different for someone who thought versus someone who assembled. Not perfect. But directionally obvious.
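To make the idea concrete, here is a minimal sketch of what such a trace might look like as data. Everything in it is hypothetical - the event names, the schema, and the scoring heuristic are inventions for illustration, not any real product's design. The point is only that even a crude signal separates the two patterns.

```python
from dataclasses import dataclass

@dataclass
class TraceEvent:
    # Hypothetical event kinds a work environment might log:
    # "ask", "accept", "challenge", "override", "context_added"
    kind: str

def engagement_signal(events: list[TraceEvent]) -> float:
    """Crude, illustrative signal: the share of events that show active
    engagement (challenging, overriding, supplying your own context)
    rather than passive acceptance of the first output."""
    if not events:
        return 0.0
    active = sum(e.kind in {"challenge", "override", "context_added"} for e in events)
    return active / len(events)

# Someone who thought: asked, brought context, pushed back, overrode once.
thinker = [TraceEvent(k) for k in
           ["ask", "context_added", "challenge", "override", "accept"]]

# Someone who assembled: asked once, accepted everything.
assembler = [TraceEvent(k) for k in ["ask", "accept", "accept", "accept"]]

print(engagement_signal(thinker) > engagement_signal(assembler))  # → True
```

A real system would be far richer than a ratio of event counts, but even this toy version shows why the trace is "directionally obvious": the assembler's record contains no challenge, no override, no context at all.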
And here’s where it gets uncomfortable. The trace doesn’t just make good thinking visible.
It makes the absence of thinking visible too.
The person who asks shallow questions, accepts the first output, never pushes back - that’s not a suspicion anymore. It’s a data pattern. Permanent. Reviewable.
With the old proxies - degrees, titles, years of experience - you could hide. Everyone got to fake it a little. That’s bad for the system but it’s protective for the individual.
With traces, there’s less room to hide.
And who owns that data? Not you. Your employer. The platform. The system.
We wanted a world where real thinking gets recognized. We might get a world where every gap in your reasoning is logged and scored by a system you don’t control.
That line I said as a wish - "I wish my AI could talk to your AI" - won't stay a wish for long.
The question is whether we’ll like what it reveals.
About SG
I run Dobby Ads, an AI creative agency. I am an over-thinker. This is where that overthinking goes. Connect with me on LinkedIn.

