I watched forty videos about how to use AI in business. ChatGPT appeared in nearly every one, and n8n and Make showed up in every third one. The main topics were remarkably similar: how to automate invoices, how to do follow-ups, how to transcribe meetings, and how to prepare a proposal in twenty minutes instead of three hours. There was, admittedly, a lot of substance in them and plenty of sensible advice. I use some of these things myself, and I learn from the people who demonstrate them.

The thing is, across those forty videos I heard almost nothing about what AI does to the role of the person who actually runs the company. If you lead a business today and you’re implementing artificial intelligence, this is exactly where the harder part of the conversation begins. The model gives you a market synthesis before you’ve had a chance to review the data yourself, and it very quickly becomes unclear what’s still your own judgment. On top of that, there’s the question of accountability for results that came out of a machine, and the question of your people’s competencies when an increasing share of the thinking is being handed off to a tool.

AI in Business Management

A cat with a question mark on the conference podium; in the audience, identical robots in suits holding badges featuring automation icons

I could go on listing what I found missing in those videos. But the most honest thing to say is simply that the public discourse about AI for business is, right now, overwhelmingly an operational discourse. It’s about how to do faster what you’re already doing. Of course, that’s not nothing — and it often delivers real results.

The easiest explanation would be that the creators of those videos overlooked something. But I don’t think that would be true. They’re simply responding to what people are searching for.

When I first started working with AI, my earliest questions were about tools too. I was mainly interested in which model was better and how to speed up coding. I wondered how many hours we’d reclaim in proposal writing or architecture design. I spent weeks comparing ChatGPT with Claude and testing automation tools. I was pretty pleased with myself.

It was only after I’d gotten somewhat comfortable with the tools that I decided it was worth taking them further and using them to work on strategy. I gathered data, asked the model for a synthesis, and then went back to the conclusions. When I got my first such report, a thought quickly surfaced in my mind: I wasn’t sure where my own reasoning ended and where I was merely repeating an elegantly presented synthesis.

Bellary in the California Management Review argues that the way we talk about AI doesn’t just describe reality — it co-creates it. If we consistently discuss artificial intelligence as an operational tool, leaders start treating it that way too.

They ask about hours saved and it never occurs to them to ask whether their strategic decisions are actually better now than they were before the rollout. Because when a business owner types “how to automate invoices” into a search engine, someone else sees that query in their creator analytics and makes a video designed to capture that attention. When someone types “how to speed up proposals,” they get a proposal tool.

Very few people search for the harder stuff. Not many type: “how is my decision-making role changing now that AI is in my company” or “what happens to a leader’s accountability when part of the work is handed to a model.” So those materials barely exist. And I think that’s precisely why I keep coming back to the idea that digital transformation — and AI in particular — is above all a business change. Technology is just one piece of it. The trouble is, the business part is much harder to show in a video, because it concerns things that break quietly.

If AI really is a business change, then by definition it enters the areas that previously built our judgment. And at that point, the question of how to work faster should stop being the main thing that interests us. We should become far more interested in how to distinguish our own understanding from a well-packaged synthesis. In business, a bad decision rarely surfaces immediately anyway. Artificial intelligence only makes that problem more treacherous, because it lets such a decision look reasonable for longer.

The loop that lies

On the left, a cat misses a shot at the basket on the court; on the right, the same cat calmly looks at a dashboard displaying nothing but green indicators — a contrast between honest failure and false success

A few years ago I was at Web Summit in Lisbon, listening to an NBA player talk about risk, mistakes, and decisions that cost him millions. I don’t remember exactly who it was, but I remember one thought that stuck with me. On the court, the ball either goes through the hoop or bounces off the rim. The result comes instantly, and you know right away whether it’s good or bad. An athlete can’t hide behind a PowerPoint presentation and explain that the shot was strategically sound; it just didn’t happen to go in.

In business, everything plays out far less clearly. You make a hiring decision today, and you only learn its quality six months later. A strategy looks sensible for a while too. People nod along at first, the data lines up, but only after months does it turn out that the flaw was in the assumption itself. During that long stretch, you can live quite comfortably in the belief that everything is going well.

There was a time when, working on the company’s strategy, I sat with data for two days straight. I didn’t just read it — I argued with it. I went back to the numbers several times because something didn’t sit right. I’d leave a comment in the file: “this doesn’t make sense, check it” or “this looks too clean, something’s being hidden.” It was precisely in those moments that my judgment was being built.

When today’s model gives me a finished synthesis in ten minutes, those rough patches disappear. Everything is coherent, logical, and well-written. And that, I think, is what makes it dangerous. Because when the model gets it wrong or oversimplifies reality, it does so in a way that can look convincing for a long time. And real judgment isn’t built on smooth summaries — it’s built in the places where something pushes back.

And I think that’s exactly why artificial intelligence doesn’t so much shorten the feedback loop as make it more treacherous. It lets you be wrong faster and for longer, in a form that looks convincing. And if that mechanism repeats long enough, it starts changing not just the quality of the outcome but also the person evaluating it. And if you lead people, that loop no longer concerns just you.

Competency prosthetics

The cat stands unsteadily in the office corridor after discarding the glowing AI spheres — it realises that without technological support, it can barely keep its balance.

In orthopedics, this mechanism has been known for years. When an orthotic brace stabilizes a damaged joint, the muscles around it start doing less work. The body conserves energy, and the joint may even function better thanks to the brace, but after a few months, when the person removes it, the muscles turn out to be weaker than before.

I think about AI in similar terms. It can make up for our gaps and speed up our work, especially in places where we’d get stuck on our own. At Inwedo, we use these tools every day. If you work with them daily, you probably see how easily they start going deeper than you originally planned. The trouble begins when, out of convenience, we hand over not just part of the work but also part of the competencies we should still be developing ourselves.

Lisa Messeri from Yale and Molly Crockett from Princeton described this mechanism in a paper published in Nature. They called it the “illusion of understanding.” When AI processes information on our behalf and delivers ready-made conclusions, we start overestimating the depth of our own understanding. Worse still, our field of vision narrows, even though it subjectively feels like we’re exploring the topic more broadly.

A similar problem was described by Harvard researchers in collaboration with BCG, who studied over seven hundred consultants. On routine tasks, AI improved the quality of their work. But when a problem required combining hard data with human nuance and making a non-obvious decision, the consultants who used artificial intelligence performed notably worse than those who managed on their own. They trusted the machine precisely where its competencies ended.

I think I understand where this mechanism comes from, because I see it in myself. I remember learning to program on a Commodore 64 as a kid. I didn’t have Stack Overflow, I sometimes had documentation, and the rest was on me. Honestly, I wouldn’t want to go back to that. I prefer programming today. But what was better back then was the thinking. Before I fired up the compiler, I had to mentally walk through the scenarios where the program would break. I didn’t take the first solution that looked correct. First, I looked for the cracks in it.

And I think that’s exactly why, when AI offers me a solution or a recommendation today, I almost automatically generate scenarios in my head where that reasoning might fall apart. That “muscle” exists because I built it over years, in conditions where nothing came easy. Thanks to that, I have my own frame of reference, and it’s that frame that determines whether I accept the suggestion or reject it. Far more than productivity itself, what concerns me is what happens to people who haven’t yet had the chance to build that kind of filter or muscle.

I have no doubt that AI can bring us a range of benefits. The question that occupies me more today isn’t “does it help?” — because of course it helps. What interests me more is where the line is, beyond which it starts replacing a muscle we should be exercising ourselves. If we miss that line, we’ll very quickly stop being able to tell the difference between convenience and competence.

Questions almost nobody asks

A cat peers intently at the person opposite across a table littered with question marks — an intimate scene of an encounter in which questions are asked that nobody usually asks.

For a long time, I thought about this exclusively as my own problem. My judgment, my own decisions, my own accountability. But in a company, these same mechanisms spread across dozens of people and hundreds of daily decisions, which makes them even harder to catch.

That’s why it bothers me that, in those forty YouTube videos, I didn’t hear a single one of the questions that, in my view, should be keeping every CEO implementing AI up at night. These are questions that are more human and more about leadership than they are about technology. And I think that’s exactly why they’re almost absent — because they don’t have simple answers, and simple answers sell best.

Do I even know how my people feel about this?

I think this might be the most uncomfortable question of all. Mainly because you can’t easily measure how people in a company feel, and it’s even harder to draw out honest answers on difficult topics. And a transformation toward AI certainly qualifies as one.

The company Writer, together with Workplace Intelligence, documented the scale of employee resistance to AI strategies. BCG has long repeated the 10-20-70 rule, which says that AI success is 10% algorithms, 20% technology, and 70% people and processes. Yet in most videos, the human shows up only at the implementation stage, usually in the role of someone who needs to be trained — preferably on prompts.

Of course, it’s easy to say that 70% of AI success is about people. It’s harder to sit across from someone who’s been with your company for eight years and hear them say they’re afraid that in six months they might no longer be needed. People don’t resist this kind of change out of spite. More often, they do it because nobody had an honest conversation with them about what’s actually changing for them. Not about the tools — about their work, their place in the company, and whether they’ll still be needed here.

BCG and Columbia Business School show a large perception gap between teams and leaders. And I think that’s the hardest part. From a leader’s perspective, it’s very easy not to notice it at all.

I also have a sense that something else is at play here, though this is my own thesis. AI has become so loud that a lack of enthusiasm has started to sound like a competency problem. It’s hard for a leader today to say: “I’m not sure,” “I have doubts,” “I don’t see the point yet,” because it’s too easy to read that as a lack of awareness or a lack of courage. So when you ask someone directly what they think about AI, you very often get the socially safe answer, not the fully honest one.

If I were to offer you one piece of advice from my own practice, I wouldn’t start with the question: “What do you think about AI?” I’d start with: “What in your work is harder today than it was six months ago?” It’s a different question, and it usually takes you to a completely different place. When I asked about AI, I mostly got opinions, declarations, or enthusiasm. When I started asking about what had gotten harder, much more important things began surfacing — things connected to artificial intelligence too, like blurred lines of accountability, pressure on speed because “if AI does it in ten minutes, why do you need two days,” and plain fatigue from the pace of change. I don’t always have ready answers or solutions for these challenges, but at least I know what we really need to be talking about.

Can my people say “no” to the model?

When implementing AI systems that analyze data and generate reports, technology is rarely the biggest problem. From a technical standpoint, everything usually works perfectly. The trouble starts when people can’t look at a finished result and say: “something’s missing here” or “this looks suspicious, let’s check.” If there’s no room in your company for that kind of doubt, you’ll simply start accepting whatever you’re given. It’s only when you create space to pause and challenge the output that people begin looking for gaps in it, instead of taking it on faith.

Research by Oracle and Seth Stephens-Davidowitz shows that 85% of leaders experience decision fatigue today, and the number of decisions made by managers has increased tenfold over three years. At that pace, the temptation to take a ready-made recommendation and move on becomes very real. I just don’t know if anyone in our companies is still teaching people when they need to resist that temptation.

What is AI doing to my people’s competencies — and can I fix it through hiring?

When a company implements AI without paying attention to its people and how they work and think, sooner or later gaps appear. And there’s nothing specific to artificial intelligence about that. This is what every change looks like when technology starts moving faster than the organization. People get new tools, but nobody checks whether they truly know how to use them in a way that creates value. For a while, everything looks fine. Until the gap comes back to you in the next personnel decision.

I know companies that, in that situation, do what seems simplest. They let go of people who can’t keep up with the new technology and try to hire someone from outside who “gets AI.” The thing is, artificial intelligence in a real business context hasn’t been with us for long. People who truly understand it deeply in a business setting are few and far between. I’d sooner expect to find people who are open to experimenting and learning than ready-made specialists. Because using AI to generate animated cats or chat about recipes is one thing, and working with it inside a company — on projects, operations, and sometimes strategic decisions — is something else entirely.

Hiring itself is no longer straightforward either. When we post a job opening, we increasingly receive profiles that look fine at first glance but start to feel off on closer reading. The CV looks good and the experience sounds credible, yet the details don’t add up, and the profile never quite comes together as a coherent whole. We’re seeing more and more applications with clearly fabricated CVs, and I can’t always tell what intentions sit behind them.

Incidentally, Gartner forecasts that by 2028, one in four candidates will be fake. Already, 39% of applicants admit to using AI in the recruitment process, and 41% of organizations confirm they’ve hired someone who turned out to be entirely different from who they claimed to be.

Vidoc Security Lab even described a case where a candidate passed several rounds of interviews before it turned out, during a video call, that their face was being generated in real time by a deepfake. Fortunately, we haven’t run into anything like that ourselves yet, but other companies are already dealing with it.

Even if you filter out the fake profiles, there’s still another problem. How do you verify what a candidate actually knows? Today’s access to AI tools is so broad that a model can carry people on its back for months. A candidate may have spent the past year “managing a project” with the help of a model without ever independently making a truly difficult decision. They’ll come across confidently in an interview, because their confidence doesn’t necessarily have to be fake. The trouble starts only when they hit a situation too human and too deeply embedded in the company’s context for an algorithm to walk them through it. And if the interview is remote, the candidate might have a well-positioned screen with a teleprompter or a tool feeding them ready-made answers. We simply won’t see it.

That’s why in interviews I conduct today, I’m very careful to watch for whether a candidate thinks independently. And today, I can mostly do that by checking whether they have their own opinion. I’m increasingly noticing that this is a bigger problem now than it used to be. People hedge, try to sense what the interviewer wants to hear, and tailor their answer accordingly. Sure, to some extent it was always like that. But today, when more and more people use artificial intelligence that smooths out the way they speak and write, what I hear in those conversations sounds different than it did even four years ago. Today I’m not looking for polished answers. I’m looking for what’s underneath. What this person actually thinks. Because when the truly hard situations come — the kind where I need another human’s thinking and competence — I want to be sure they can handle it without firing up the model.

Am I treating AI as another IT project, or as a strategic change?

This question also goes back to the very top of the company. If you treat AI as another topic to delegate, you’ll very quickly find it coming back through the side door: in costs and work quality, and shortly after in decisions too. BCG, in its report “The Widening AI Value Gap,” finds that C-level leaders who are deeply engaged in AI are far more likely to end up among the companies that generate real value from it. At the same time, Deloitte shows that in many organizations, boards still don’t discuss this topic seriously enough or often enough.

From these two observations, my conclusion is simply that the problem today isn’t about access to tools — it’s about whether someone at the top is genuinely taking ownership of this topic and approaching it with rigor.

I’ve seen this in companies run by people I know. The CEO said “we’re implementing AI,” signed off on the budget, delegated the topic to the CTO or IT director, and went back to his own affairs. Six months later, the board meeting featured nice charts, purchased tools, and a report on who had or hadn’t completed their training. Everything looked as if the topic was under control. Except that very little had changed in the actual work. The company didn’t start making better decisions, teams didn’t start working smarter, and the only thing that grew was the licensing bill.

But I’ve also seen the opposite. I know a company that implemented AI for demand forecasting of fresh products, and the biggest change turned out not to be better forecasts but the fact that the system forced collaboration between marketing, sales, and the supply chain that hadn’t existed before. Another company went even further — the algorithm analyzes trends, weather, and store traffic, and then effectively decides what the company produces. In both cases, artificial intelligence wasn’t just an IT project. It was a change in how the company plans, and someone at the top had to treat it that way.

In Poland, this is probably even harder. On one hand, people and the market are moving ahead with AI faster than many leaders assume. KPMG reports that as many as 69% of Poles regularly use AI. Polish AI startups attracted 171 million euros in 2024, and the number of companies using the technology grew by 56% in a single year, which according to PIE was the highest growth rate in the entire European Union.

On the other hand, Eurostat still shows a low level of formal AI adoption in Polish companies, PIE describes significant caution among companies with exclusively Polish capital, and PwC in Central and Eastern Europe shows that CEOs in our region spend most of their time on priorities with a horizon of less than a year. Of course, I don’t want to say that short-term thinking is always bad. Sometimes you simply have to put out fires because the company is dealing with a crisis, or the environment is changing so fast that a quarterly horizon is all the space you have. I understand that. I’ve been through it myself and still go through it sometimes. But in those conditions, it’s hard to treat any digital transformation as a change that requires patience, reflection, and leading people through several quarters.

The court and the office

After those forty videos, I don’t feel that what business lacks most today is another set of tools, tutorials, or integrations. There’s already more than enough of that. What’s far more missing is a conversation about what AI does to a leader’s judgment and their teams’ competencies, and ultimately to accountability for decisions that increasingly look smart only because they were well-packaged. That’s essentially why this article exists.

If I had to point to one of the most important tasks of leadership today, it would probably be this: to create working conditions in which people still have to think for themselves and are able to justify a decision without hiding behind the model. A company can look modern and efficient for a long time while slowly losing something that no dashboard will ever show.

That’s where I’d rather start the conversation about AI in business today. Not with the question of what else can be automated. More with the question of what this technology is doing to our judgment. And if you lead people, sooner or later you’ll have to answer that one anyway.

