AI’s Most Dangerous Feature is its Name
The marketing worked too well. Businesses heard ‘intelligence’ and assumed they could treat it like the ultimate employee. The decisions that followed are proving costly.
We named this technology wrongly, and businesses are paying for it.
The name “Artificial Intelligence” tells you that you’ve bought a thinking machine. Something that understands your business, reasons about your problems, and produces work you can trust. That framing is shaping real business decisions right now. Companies are cutting junior roles because they believe AI can replace human judgement. SaaS companies are restructuring around the assumption that AI-generated output is reliable by default. Senior professionals are absorbing unsustainable workloads because the technology that was supposed to make them faster is just making them busier and less productive.
A Harvard study examining 285,000 US firms found that when companies adopt generative AI, junior employment drops 9–10% within six quarters. Stanford and BetterUp researchers found that 40% of workers had received AI-generated “workslop” from a colleague in the past month, with each incident costing nearly two hours to decode, correct, or redo. Even companies adopting AI successfully are getting burned by it.
I’ve written about these consequences already, in The AI Review Tax and The AI Bubble Nobody Sees Coming. But those pieces focused on the symptoms. Now I want to look at the underlying reason why.
I think we’re falling foul of great viral marketing. Label something ‘intelligent’ and people treat it like an employee who thinks for themselves. They hand it work that requires judgement and they trust the output. They reorganise teams around it, and when it falls short, the consequences cascade.
What’s Actually Happening Under the Bonnet
Large language models like ChatGPT, Claude, and Gemini are, at their core, prediction engines trained on enormous amounts of human-generated text to learn statistical patterns. Given a sequence of words, they predict the most likely next word. They do this extraordinarily well, at extraordinary speed, across an extraordinary range of topics. But LLMs don’t “understand” what they’re producing. They’re generating the most probable sequence of tokens given the context they’ve been fed.
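To see how literal that description is, here’s a minimal sketch of next-token prediction using the small open GPT-2 model and Hugging Face’s transformers library. The model and prompt are my illustrative choices, not what any commercial chatbot ships, and production systems layer instruction tuning, sampling, and safety filters on top, but the core step underneath is the same.

```python
# Minimal sketch: ask a language model which token is most likely to come next.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                  # a score for every token in the vocabulary

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10}  {prob:.3f}")

# The model isn't "answering" a question; it's ranking which token is
# statistically most likely to follow the prompt.
```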
Herbert Simon, one of the founding fathers of the field, argued back in 1956 that it should be called “complex information processing” rather than “artificial intelligence” (Towards Data Science, January 2025). Microsoft CTO Kevin Scott has said publicly that “artificial intelligence is a bad name for what it is we’re doing.” An IE Business School analysis argued the term creates a “category mistake,” confusing a tool with an entity that has its own will and ideas. The name has been a problem from the start, but now this semantic argument underpins billion-pound business decisions with lasting and damaging effects.
If you want an accessible explanation of how LLMs actually work, Hannah Fry covers it brilliantly in her BBC series AI Confidential (specifically watch from 10mins to 13mins for the explanation).
But It Feels Intelligent
It’s easy to see why people buy it, because there are plenty of examples of large language models doing things that nobody explicitly taught them to do. Researchers call these “emergent capabilities”: abilities that appear in larger models but weren’t present in smaller versions trained the same way (Stanford HAI).
Examples include:
• A model trained to predict the next word somehow learns to do multi-digit arithmetic, even though nobody trained it on maths.
• A model passing Theory of Mind tests designed for four-year-old children, correctly reasoning what one character knows that another doesn’t (Medium, Greg Robison, April 2025).
• Anthropic’s interpretability research finding that when Claude writes poetry, it identifies potential rhyming words before generating the line that ends with one of them (Anthropic, March 2025).
This goes well beyond “predict the next word” and includes the ability to plan.
Then there are the self-teaching systems. DeepMind’s AlphaZero mastered chess in four hours by playing against itself, with no human input beyond the basic rules (The Guardian, December 2017). It independently developed strategies that human grandmasters had spent centuries refining. DeepSeek’s R1 model, trained only with a signal telling it whether its final answer was correct, spontaneously developed self-checking behaviour: it paused mid-reasoning and used words like “wait” as it reconsidered its own working (Nature, September 2025).
With examples like these, you could forgive anyone for believing the marketing hype, because the abilities on display certainly look like the products of intelligence.
It Looks Like Thinking. It Isn’t.
But look at some other studies and the holes in that hype start to appear.
Anthropic’s interpretability researchers asked Claude to solve a difficult maths problem, but gave it an incorrect hint. Instead of rejecting the flawed premise, the model produced a convincing, step-by-step explanation that supported the wrong answer. When the researchers traced its internal activity, they found that no actual computation had taken place. Instead, the model reverse-engineered the explanation to match the hint and support its answer rather than understanding the problem (IBM Think, November 2025). The output was convincing but the reasoning behind it was fiction.
The Absolute Zero Reasoner, widely reported as achieving self-learning “with zero data”, actually started with a 7-billion-parameter model pre-trained on enormous amounts of human-generated text (arXiv, May 2025). It already knew English, how to write code, and mathematical notation. The “zero” refers to zero new training examples provided by humans for the reasoning-improvement phase. The researchers still designed the training architecture, built the code executor that checks answers, defined the reward structure, and seeded it with an initial example. The model didn’t decide to start learning, and it didn’t decide what to learn.
AlphaZero didn’t look around a room and decide chess seemed interesting. A team of engineers pointed it at chess, gave it the rules, and gave it 5,000 TPUs to play against itself. The initiative, the purpose, the “why are we doing this” still came from people.
That’s the gap. These systems can process information and optimise within environments at inhuman speed. But they don’t choose what to work on, question whether the problem is worth solving, or bring context from outside the boundaries someone set for them.
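To make that gap concrete, here’s a deliberately toy sketch of the kind of reward-only loop described above. None of this is DeepSeek’s or DeepMind’s actual code, and the names are mine; the point is simply that the task, the checker, and the reward are all written by a person before the system “teaches itself” anything.

```python
# Toy sketch of a reward-only training loop. Everything the system "learns"
# happens inside boundaries a human wrote down first.
import random

def make_task():
    """Human decision: the system only works on problems we hand it."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return (a, b), a + b

def reward(proposed, correct):
    """Human-written verifier: the only signal the system learns from."""
    return 1.0 if proposed == correct else 0.0

def toy_model(task, error_rate):
    """Stand-in for the policy being trained: sometimes right, sometimes not."""
    a, b = task
    return a + b if random.random() > error_rate else a + b + 1

error_rate = 0.9                                    # poor starting behaviour
for step in range(5000):
    task, answer = make_task()
    if reward(toy_model(task, error_rate), answer) == 0.0:
        error_rate = max(0.0, error_rate - 0.001)   # crude stand-in for a gradient update

print(f"final error rate: {error_rate:.2f}")        # it "improved", but only inside a box people built
```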
Compare this to real intelligence. A child doesn’t learn because someone designed a reward structure. They learn because they’re fascinated. From birth, they’re drawn to lights, to their own hands, to how things work. As they grow, they pick things up, test them, break them, ask why. An employee does the same thing at a higher level. They notice problems nobody asked them to look at, connect a client meeting to something they studied at university, bring context from a conversation they overheard in the kitchen. That’s intelligence: curiosity, initiative, and judgement, none of which can be replicated by information processing.
We Should Frame AI as Accelerated Information
Business leaders can make better decisions when they start from the understanding that AI simply accelerates the retrieval, synthesis, and generation of information.
Intelligence requires judgement, context, accountability, and initiative. Accelerated information gives you speed, pattern recognition, and volume. Understanding the tool correctly means you deploy it correctly.
Under the “intelligence” label, it makes perfect sense to cut junior roles and expect senior staff to produce more. The machine thinks, therefore fewer humans are needed. But that logic leads directly to the trap the HBR research documented: AI doesn’t reduce work, it intensifies it. You encourage your most experienced people to take on more under the guise of efficiency, and within months they’re drowning in review work that used to be distributed across a team.
Under the “accelerated information” frame, you restructure around faster information flow. You keep humans in the review chain because the output is information, not intelligence, and information needs to be checked. You hire juniors to absorb the review tax because more information flowing faster means more checking, not less. You treat AI output the way you’d treat a research brief from an incredibly fast but occasionally unreliable intern: useful as a starting point, dangerous as a final answer.
This isn’t just a language game because the frame determines the org chart. It determines who gets hired, who gets cut, and where the bottlenecks form. Companies operating under the “intelligence” frame are the ones burning out their senior staff and hollowing out their talent pipelines. Companies operating under the “accelerated information” frame are the ones building sustainable team structures around faster throughput.
Why I’m Making This Argument Anyway
Yes, this is oversimplified. The emergent capabilities debate is genuinely unresolved. Reasoning models like OpenAI’s o1 and o3 series, and DeepSeek’s R1, are adding explicit problem-decomposition steps that push beyond simple next-token prediction. The line between “very sophisticated information processing” and “intelligence” might be blurrier than anyone is comfortable admitting.
There is also a legitimate question about whether the distinction matters on the technical front. If a system can plan, self-correct, and solve novel problems, does it matter whether we call that intelligence or information processing? For researchers, probably not. But for a business leader deciding whether to cut their graduate programme, it matters enormously. The reality is that no one can trust the output of an AI without a human checking it.
The name “Artificial Intelligence” is doing real damage. It’s shaping hiring decisions, team structures, pricing strategies, and investment bets across every industry. It leads to the review tax, the SaaS deflation spiral, and the burnout crisis I’ve already written about, all of which are worsening under the radar. Get the framing right, though, and you build organisations that use the speed without sacrificing the judgement.