AI – The Infinite Intern
All around us, AI tools are growing sharper by the day and handling more and more complexity with ease. Artificial Intelligence has always promised to take on the heavy lifting and free up our time for high-level strategic and creative pursuits. Yet some of us (myself included) are discovering that the more we integrate these systems into deep analytical work, the more exhausting it can feel!
Of course, if you use these tools for low-stakes activities like generating a fun image or summarizing a meeting transcript, the experience can be quite seamless. The real friction begins when you ask a bot to step into the role of a collaborator in complex problem-solving or deep analysis.
The Context Gap
One way to think about an AI model is as an incredibly fast, remarkably well-read, but ultimately context-blind “intern”. This intern never sleeps and can draft an email in seconds, but has no real ‘skin in the game’: it does not understand the politics of your office, the history of your brand, or the long-term impact of a specific business decision.
Because the tool lacks this context, the burden of supplying it remains entirely on you. Only you can feed it the right information, set the boundaries it needs to follow, and constantly correct its course.
When we think through a problem on our own, we typically maintain a single thread of logic. When we work with a bot, we have to manage a continuous loop of prompt, result, critique, and refinement. This ‘supervisory effort’ can often be more taxing than simply doing the work ourselves.
The Verification Tax
A second and significant portion of this exhaustion stems from the Trust Deficit built into today’s systems. Unlike a human colleague whose reliability you can (somewhat accurately) gauge over time, an AI model can be brilliantly right one moment and confidently wrong the next. This pushes us, as users, into a state of hyper-vigilance: we can end up spending more mental energy verifying the work than we would have spent generating the ideas ourselves.
As Yuval Noah Harari noted in a talk on the nature of these systems, “AI is the first technology in history that can make decisions and create new ideas by itself. It’s not a tool, it’s an agent.” When your agent requires constant auditing to ensure it hasn’t hallucinated a fact or missed a crucial detail, the efficiency gains quickly begin to evaporate.
The Choice Paradox
A third source of fatigue is ‘infinite choice’. In the interest of user stickiness (and probably to learn from user behavior), these tools are designed to offer an endless loop of possibilities. If you do not like a result, you can simply regenerate it. And again. And again. Even if the response is spot on, it tempts you with a “Would you like me to…?” invitation at the end of every iteration!
This “one more click” mentality leads to the paradox of choice. We may find ourselves losing sight of the original objective and branching into more and more detail, simply because the tool makes it so easy to do so. The only way to manage this shift without burning out is to change how we engage with AI – we will need to keep the goal in mind at all times and establish clear boundaries.
The crazy part is that even if you overcome these three challenges by giving the bot robust context, sticking to high-quality, credible sources of information, and defining clear limits in sync with your overall goals, there remains the task of processing the output it churned out in minutes by scouring 300+ websites! With each iteration in the thread, the job of making sense of all that data and analysis falls squarely on you.
It is no secret that modern AI models are evolving at a scorching pace and feel like almost-magical, multi-purpose tools that can do anything with ease. The question is: “Will we be able to keep up?”