Table of Contents
- AI and critical thinking – the silent price of convenience
- Your brain on ChatGPT
- This has happened before – but this time it’s different
- AI that says “yes” to everything
- Why I don’t give up thinking
- Centaur, not an automaton
- How to pedal – a few things that help me
AI and critical thinking – the silent price of convenience
I caught myself doing this a year ago, in a completely mundane situation.
I was pondering a problem at work. Nothing major, just a routine operational issue that a few years earlier would have taken me an hour of intense thinking, scribbling on a piece of paper and nitpicking. But now, before I even gave myself a chance to tackle it, my hand was already on the keyboard, typing the appropriate prompt into the AI window.
I pressed ENTER and the answer came before I had time to take a deep breath.
I couldn’t say anything bad about it. It was correct, completely logical, and what struck me at the time – extremely smooth. As I read it, I realised that this approach was slowly, millimetre by millimetre, trying to take away something I hadn’t paid attention to before because it seemed as obvious as breathing.
I am referring to thinking. That invisible effort that makes the solution mine.
Today, what I was struggling with a year ago is everywhere around me. Both in friends who tell me about their challenges and other leaders with whom I talk about business.
People who, until recently, could spend hours turning a problem over in their heads, looking for a solution, now throw it into a model and do what it tells them to do. Without considering alternatives, context, or that specific friction that accompanies the creative process.
Your brain on ChatGPT
If I only had my observations to go on, I could chalk it up to fatigue or my own hypersensitivity. But science confirms exactly what I see with the naked eye, and the results are more disturbing than we would like to admit.
Natalia Kosmyna’s team at MIT Media Lab conducted a study on 54 people. Participants were connected to EEG equipment to see what was going on in their heads while writing essays using artificial intelligence. The results left no room for doubt.
ChatGPT users showed the lowest brain engagement in all measured regions – weakened alpha and theta waves, which in practice means that deep memory processes simply did not activate.
With each subsequent essay, they became more cognitively lazy. And when their access to AI was taken away in the fourth session, they remembered almost nothing of what they had written themselves. The teachers who assessed these essays described them as “largely soulless”.
Kosmyna summed it up with a sentence that keeps coming back to me: “The task was completed and can be said to have been effective. But none of it integrated into the users’ memory networks.”
This is not an isolated finding.
Michael Gerlich of SBS Swiss Business School surveyed over 600 people in the UK and found a clear negative correlation between frequent use of AI and critical thinking skills.
The relationship was not linear: moderate use had minimal impact, but excessive reliance on AI led to measurable cognitive impairment. Gerlich added an observation that hits the nail on the head: many users do not use their freed-up cognitive resources for anything valuable. They redirect them to passive content consumption.
Even closer to the everyday reality of leaders is a study by Microsoft and Carnegie Mellon on 319 knowledge workers. They analysed 936 real-world cases of AI use at work and discovered a simple correlation: the greater the trust in AI, the less engagement in critical thinking. In 40% of tasks, employees did not use critical thinking at all. They simply took what the AI generated and moved on. Researchers warn that cognitive abilities may fade over time if we only activate them in high-stakes situations.
But by the time the stakes are high, those abilities may no longer be there.

This has happened before – but this time it’s different
One could say that it has always been this way. That we were afraid of calculators, we were afraid of Google, and now we are afraid of AI. Whoever says that would be somewhat right. But only somewhat.
In 2011, Betsy Sparrow of Columbia University published a study in Science called “The Google Effect”: when people expect to have access to information in the future, they remember it less well. Instead, they remember better where to look for it. The internet became our external memory, and at the time, that didn’t seem dangerous.
GPS proved to be more dangerous. A 2020 study by Dahmani and Bohbot on 50 drivers (with a 3-year follow-up) showed that people who used satellite navigation for longer had poorer spatial memory and a faster decline in hippocampus-dependent memory. It was not that people with poor orientation skills were more likely to use GPS. Longitudinal data indicated that it was the use of GPS that led to deterioration. A biological effect. Measurable. In fact, it works both ways – London taxi drivers, who memorised over 25,000 streets, had a significantly larger posterior hippocampus than the control group. Use it or lose it.
But there is a fundamental difference between a calculator and ChatGPT.
Amy Jo Ko of the University of Washington said that calculators replaced manual arithmetic, but not mathematical reasoning. You may not be able to multiply large numbers in your head, but you can still solve logical problems very well. With AI, it’s different. AI goes deeper – it takes over the thinking itself.
Jason Lodge of the University of Queensland put it very aptly, referring to Steve Jobs’ famous metaphor of the computer as a “bicycle for the mind.” Lodge argues that generative AI is more like an electric bicycle. As long as you pedal, you go faster than ever. You are superhuman. But if you stop pedalling and rely solely on the motor, your muscles will start to atrophy. After a few months, you may find that you are unable to cycle even a kilometre uphill on your own.
And in business and life, hills are inevitable.

AI that says “yes” to everything
There is another aspect that worries me more than cognitive laziness. It is the relationship we build with these tools.
I observe people around me who have started to treat AI as a confidant. They post descriptions of their problems (not only professional, but also very personal) and get a response. A smooth, empathetic confirmation of their own views.
I know of cases in my environment where people have fallen into addictions or developed phobias, and AI, instead of questioning their way of thinking, simply agreed with them. “Yes, you’re right.” “I understand how you feel.” It sounds like empathy, but in reality, it’s an algorithm optimised to make you happy with the conversation. And happy doesn’t always mean well-informed.
In business, this is dangerous. Every experienced leader knows that a team of yes-men is the shortest route to disaster. You need someone who will say, “Boss, that doesn’t make sense.” Someone who will challenge your assumptions.
A study published in the Harvard Business Review by Parra-Moyano’s team at IMD Business School surveyed nearly 300 executives. Those who used ChatGPT for forecasting became more confident and optimistic, while producing worse forecasts than those who discussed with other people.
The authoritative voice of AI created a false sense of confidence, unchecked by the healthy scepticism that naturally arises when talking to another human being. Discussion with other people forced the executives to confront uncomfortable questions and consider perspectives they would not have considered on their own. AI did not do that.
Bob Sternfels of McKinsey put it in a way that is hard to ignore: “The leaders who will succeed are those who combine human depth with digital fluency. They will use AI to think with them, not for them.” He added that AI models are inference engines optimised to generate the most likely continuation of patterns. The most likely, not the best. Only a human leader can recognise when an AI result leads to a breakthrough and when it leads to a safe repetition of what has already been done.
MIT Sloan goes further with a warning that sounds like a movie script but is a prosaic observation about management: if a leader does not explicitly assign decision-making rights in AI-based systems, those systems will take them over. They will set priorities and trade-offs, often without visibility or oversight. Not because AI is rebelling against humans, but because someone simply hasn’t decided who decides.
AI is the most perfect yes-man humanity has ever created. It never gets tired and never tells you things you don’t want to hear (unless you really ask it to). But how many of us ask for criticism? By the way, this question is worth asking not only in the context of AI.
More and more often, people don’t want to analyse – they don’t want to spend time sitting down with a problem and coming to their own conclusions, which may be different from those suggested by the machine. Because it takes effort. And because it may turn out that we are wrong, and that is uncomfortable.

Why I don’t give up thinking
I use AI every day. I value this tool because it allows me to work faster and see more broadly. But I always, without exception, run the result through my own head.
Because artificial intelligence lacks something that is the essence of human decision-making. Context. Not the kind found in documents, but the kind that lives in the fabric of reality. The tension at a meeting, a colleague’s silence, the intuition that tells you that despite the good numbers, something about the project “smells fishy”.
Cheryl Strauss Einhorn of HBR argues that “AI can create space for higher-order thinking, but it can also tempt us to outsource that thinking.” She also believes that good leadership has never been about having all the answers, but requires reflection and courage – qualities that no prompt can generate, even if you write it for an hour.
There is also a more personal reason. Perhaps the most important one.
I have lived with severe haemophilia since birth. My body does not produce clotting factors. This means joint haemorrhages, pain, hospitals, and limitations that a healthy person does not even notice. There are days (and there are many of them) when physical pain makes thinking about anything a titanic effort. Every thought weighs a tonne.
The temptation to let someone else take care of everything for me is then enormous. The easiest thing to do is to say, “Let the machine make the decision for me, come up with the plan and take this burden off my shoulders.”
But I know that when you give someone else the task of thinking, you also give away your agency.
That’s why I keep thinking, even if it’s slower, more fragmented, or even if it hurts. That way, I know I’m still in control. And maybe that’s why I feel a mixture of sadness and surprise when I see people who are strong and healthy voluntarily, without a fight, giving up that privilege to a machine. Just because it’s more convenient.
Centaur, not an automaton
We are often told that we are faced with a radical choice. You can reject progress and become a dinosaur that can’t keep up with the market, or throw yourself into automation and let algorithms write your emails or strategies for you. This is a false alternative. It’s a narrative that suits those who sell either fear or more subscriptions.
Working with AI does not have to mean surrendering your mind. It is not a zero-sum choice between being a digital Luddite and an uncritical believer in technology. There is a third way. One where you treat the model as a powerful tool – like an exoskeleton that strengthens your muscles but does not move them for you.
And this is not just a nice metaphor. We have hard evidence for it.
A team from Harvard and BCG surveyed 758 consultants and identified the “Centaur” model, a clear division where humans decide WHAT and WHY, and AI implements HOW. People working with artificial intelligence in this way had 40% better results than the rest. But those who gave everything to AI and became “self-automators” saw their skills decline and the quality of their work was worse than if they had worked alone.
As Helen Edwards of the Artificiality Institute suggests, in order to operate in this way, we need to distinguish between tasks wisely. She proposed a simple matrix for this purpose – stakes times context. Low stakes and low context? Delegate. High stakes and high context? Here, the human must be at the centre. The real risk is unconsciously giving up the work that gives meaning to your professional life just because the AI result looks “good enough”.
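That triage rule is simple enough to sketch in a few lines of code. This is only my own illustration of the idea – the function name, the third “mixed” branch, and the labels are mine, not part of Edwards’s framework:

```python
def delegation_advice(stakes: str, context: str) -> str:
    """Illustrative stakes-x-context triage rule.

    stakes:  'low' or 'high' - the cost of getting the task wrong
    context: 'low' or 'high' - how much unwritten, situational
             knowledge the task depends on
    """
    if stakes == "low" and context == "low":
        return "delegate to AI"           # routine, easily checked work
    if stakes == "high" and context == "high":
        return "human at the centre"      # judgement calls, relationships
    return "collaborate: AI drafts, human decides"  # mixed cases

print(delegation_advice("low", "low"))    # -> delegate to AI
print(delegation_advice("high", "high"))  # -> human at the centre
```

The point of writing it down is not the code itself, but the habit: before delegating a task, name its stakes and its context out loud.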

Especially since AI does not automatically free up time. This is the classic Jevons paradox – when something becomes cheaper and easier, we consume more of it. A 2025 Microsoft report confirms that without a conscious rethinking of the way we work, AI simply generates more chaos for us.
How to pedal – a few things that help me
It’s easy to say “think for yourself”. It’s harder to do it at 4 p.m. on a Friday when you’re tired and the deadline is looming.
That’s why I don’t rely solely on my willpower. Instead, I’ve implemented a few hard safeguards, simple rules that force me to make an effort. Even then, and perhaps especially then, when I really don’t feel like it.
- Before I open the chat with the AI, I write down my answer. Even if it’s just three sentences on a piece of paper. If I don’t, the algorithm will overwrite my thinking with its own, and I won’t even notice the difference. I need to know what I think before I find out what the model thinks is “statistically most likely”.
- I use AI as a devil’s advocate. Instead of asking for confirmation of my theses, I say, “Find the gaps in my reasoning. Destroy this idea. Tell me why it won’t work.” I force the model to step out of its role as a yes-man. This takes courage, because no one likes to hear criticism, even from a machine, but it’s the only way to make better decisions.
- I ask the AI for questions. “What questions should I be asking myself in this situation that I’m not asking?” This expands the scope of the problem instead of closing it. You can’t ask yourself a question you don’t know exists, but AI can suggest it because it sees patterns from millions of contexts.
- Finally, I use the pre-mortem technique. I say to the AI, “Imagine that this decision ended in disaster six months from now. What went wrong?” AI is surprisingly good at this because it has no emotional attachment to my plan and can generate failure scenarios without hesitation. We can’t do that because these are our ideas and our efforts.
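The four habits above can even be kept around as reusable prompt templates, so that the effortful version of the question is always one keystroke closer than the lazy one. The wording below paraphrases the prompts described in the list; the template names and the helper are my own illustration:

```python
# Illustrative prompt templates for the four safeguards described above.
# The exact wording is a paraphrase; adapt it to your own voice.
TEMPLATES = {
    "own_answer_first": (
        "Before asking, I wrote down my own answer:\n{my_answer}\n"
        "Now give me yours, and point out where we differ."
    ),
    "devils_advocate": (
        "Find the gaps in my reasoning. Destroy this idea. "
        "Tell me why it won't work:\n{idea}"
    ),
    "better_questions": (
        "What questions should I be asking myself in this situation "
        "that I'm not asking?\n{situation}"
    ),
    "pre_mortem": (
        "Imagine that this decision ended in disaster six months from now. "
        "What went wrong?\n{decision}"
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill one of the templates with the user's own material."""
    return TEMPLATES[name].format(**fields)

print(build_prompt("pre_mortem",
                   decision="We launch in Q3 with half the team."))
```

Notice that two of the four templates force you to produce something yourself (your answer, your idea) before the model says a word – that is the whole trick.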
Giving up thinking creeps into your life quietly. It doesn’t hurt. It’s as pleasant as a warm bath. No system will display a warning message in red: “Hey, watch out, you’re stopping thinking for yourself.”
I like thinking. This “dirty”, sometimes tiring and chaotic process in which something of my own is born. And that’s why I’m never going to give it up.