AI Is Not the Problem
Why many organisations struggle not with AI itself, but with weak decision systems
"By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it." (Eliezer Yudkowsky)
Many organisations are moving quickly on AI. Far fewer are moving with the same clarity on responsibility, judgement and implementation.
If AI Is Not the Problem, What Is?
The real problem is not whether people use AI. It is whether the organisation knows how decisions around AI are supposed to be made.
AI is often discussed as a technology problem.
Can the tools deliver value? Are the models accurate enough? Will people adopt them? Should companies move faster or more carefully?
And these questions matter, but they often miss the deeper issue.
In many organisations, AI does not fail first because the technology is weak. It fails because the system around it is weak. Roles are unclear. Responsibility is blurred. Review standards are inconsistent. Learning remains informal. What looks like an AI problem is often a decision problem in disguise.
Recent data from Germany points in exactly this direction. Gallup reports that 64% of employees say AI is already being used in their company to improve productivity, efficiency or quality, and 48% say they use it daily or several times a week. At the same time, only 21% of employees in AI-using organisations fully agree that their manager actively supports its use, and only 20% say their company offers good opportunities to develop the skills needed for new technological demands. The adoption problem, in other words, is not simply exposure to AI. It is weak support, weak guidance and weak capability building around it.
That matters because tools do not enter empty space. They enter organisations. And organisations are decision environments.
AI changes faster than organisations do
One reason this tension is growing is that AI has moved from specialised infrastructure to an everyday work tool. Gallup describes this shift clearly. Generative AI is no longer distant or abstract for most employees. It now supports research, writing, analysis and programming directly in the flow of work.
This creates a familiar pattern. The technology moves quickly, but the surrounding structures do not.
Microsoft's 2025 Work Trend Index describes this as the emergence of a new organisational blueprint in which systems become increasingly AI-operated but human-led. In that report, 82% of leaders say this is a pivotal year to rethink strategy and operations, and 81% expect agents to be at least moderately integrated into their AI strategy within the next 12 to 18 months.
That sounds ambitious. The only problem is that organisational adaptation rarely happens at the same speed as technological possibility.
A field experiment by Dillon and colleagues illustrates this particularly well. Across 66 firms and 7,137 knowledge workers, access to an integrated generative AI tool led users to spend around two fewer hours per week on email and reduced work outside regular hours. Yet the study did not find significant shifts in the quantity or composition of workers' tasks. People became faster at parts of their work, but the broader structure of work did not automatically reorganise itself.
That distinction is crucial. Saving time is not the same as redesigning judgement, coordination or accountability.
Most AI friction is not technical friction
When organisations say they struggle with the human side of AI, they often mean things like scepticism, uncertainty, uneven uptake or low trust.
These are real issues, but many of them are secondary effects.
People hesitate when they do not know what AI is for in their context. They rely too heavily on it when review standards are weak. They ignore it when there is no visible leadership support. They use it privately or inconsistently when formal processes lag behind actual work.
Seen from that angle, the core issue is not motivation alone. It is decision architecture.
The NIST AI Risk Management Framework makes this point in a more formal way. Its GOVERN function treats AI risk management as a cross-cutting organisational task, not as a narrow technical add-on. It emphasises policies, processes, transparency, ongoing monitoring, documented roles and responsibilities, accountability structures and training. The accompanying playbook goes even further by stating that human-AI configurations require clearly differentiated roles, explicit oversight responsibilities, proficiency standards and procedures for tracking outcomes and risks.
This is where many organisations are still underdeveloped.
They have the tool. They may even have a policy, but they do not yet have a real system.
What a weak decision system looks like in practice
In practice, weak AI decision systems usually show up in four ways.
First, use without role clarity.
Who is allowed to use AI for what kind of work? Who is responsible for the output? Who verifies critical content? Who decides when human judgement must override the model?
Without clear answers, responsibility becomes performative. Everyone is involved, yet no one is clearly accountable.
Second, use without review logic.
Not every AI output needs the same degree of scrutiny. But many organisations have no stable criteria for what can be used directly, what requires checking and what should never be delegated in the first place. This creates either complacency or overcorrection. In both cases, judgement quality suffers.
Third, use without learning loops.
Errors are corrected locally and then forgotten. Good uses remain individual hacks rather than organisational knowledge. No one systematically asks where the tool worked, where it misled, what conditions improved performance and how future use should change.
Fourth, use without leadership translation.
Gallup's numbers are especially revealing here. AI may already be present in daily work, but visible managerial support remains weak. That gap matters because employees do not just need permission. They need orientation. They need leadership that translates a technology into priorities, norms and practical use.
Without that translation, AI remains either a private productivity hack or a vague strategic slogan.
The evidence increasingly points to organisation, not just adoption
The broader research is moving in the same direction.
McKinsey's 2025 survey on how organisations are rewiring to capture value from AI finds that CEO oversight of AI governance is one of the factors most associated with higher reported bottom-line impact. It also reports that redesigning workflows has the strongest relationship with EBIT impact among the organisational attributes they tested. Yet only 21% of respondents say their organisation has fundamentally redesigned at least some workflows as a result of gen AI deployment.
That is a revealing combination. The value is not most strongly linked to mere access. It is linked to governance and workflow redesign.
Another McKinsey report reaches a similar conclusion from a different angle. Whilst 92% of companies plan to increase their AI investments, only 1% consider themselves mature in deployment. The report explicitly argues that the challenge of AI in the workplace is not primarily a technology challenge, but a business challenge that requires leaders to align teams, address headwinds and rewire the company for change.
Even the more optimistic productivity evidence should be read in this light. Brynjolfsson, Li and Raymond show that access to a generative AI assistant increased productivity among 5,179 customer support agents by 14% on average, with a 34% improvement for novice and lower-skilled workers. That is a strong result. But it does not mean that AI automatically creates organisational value on its own. It shows that under certain conditions, the tool can improve task performance and accelerate learning. The question for leadership is what kind of system allows such gains to translate into decision quality, coordination quality and durable value.
In other words, AI can improve work, but organisations still have to decide how that improvement is governed, checked and absorbed.
What companies actually need
If the real bottleneck is the decision system around AI, then the solution is not just more rollout, but more structure.
At minimum, organisations need four things:
They need clear decision rights. Who decides, who reviews, who escalates and who owns the consequences.
They need explicit use-case boundaries. Where AI is helpful, where it is optional and where it should not be trusted without strong human review.
They need quality thresholds. What counts as good enough for low-risk support work is not good enough for client communication, strategic judgement or sensitive internal decisions.
They need learning loops. AI use should generate feedback that improves future judgement, not just isolated output that disappears into the workflow.
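To make the four elements less abstract, the sketch below writes them down as an explicit, inspectable structure. It is a hypothetical illustration in Python, not a standard, a product or anything proposed by the studies cited here; the role names, review tiers and use cases are invented for the example. The point is only that decision rights, boundaries, thresholds and feedback can be stated in advance rather than left implicit.

```python
# A minimal sketch of the four elements as explicit structure.
# All tiers, roles and use cases below are illustrative assumptions.

from dataclasses import dataclass, field
from enum import Enum


class ReviewLevel(Enum):
    """Quality threshold: how much human scrutiny an output requires."""
    USE_DIRECTLY = "use_directly"        # low-risk support work
    HUMAN_REVIEW = "human_review"        # must be checked before it leaves the team
    DO_NOT_DELEGATE = "do_not_delegate"  # judgement stays fully with a human


@dataclass
class UseCase:
    """A use-case boundary: what AI may touch, and on whose authority."""
    name: str
    owner: str                 # who owns the consequences of the output
    reviewer: str              # who verifies critical content
    review_level: ReviewLevel  # the threshold that applies to this work


@dataclass
class UsageLog:
    """A learning loop: record where the tool worked and where it misled."""
    entries: list = field(default_factory=list)

    def record(self, use_case: UseCase, outcome: str, lesson: str) -> None:
        # Captured centrally, so good uses become organisational knowledge
        # rather than individual hacks that disappear into the workflow.
        self.entries.append(
            {"use_case": use_case.name, "outcome": outcome, "lesson": lesson}
        )


# Illustrative policy: three kinds of work, three different thresholds.
policy = [
    UseCase("internal research summary", owner="analyst", reviewer="analyst",
            review_level=ReviewLevel.USE_DIRECTLY),
    UseCase("client communication draft", owner="account lead", reviewer="team lead",
            review_level=ReviewLevel.HUMAN_REVIEW),
    UseCase("strategic judgement", owner="leadership", reviewer="leadership",
            review_level=ReviewLevel.DO_NOT_DELEGATE),
]

log = UsageLog()
log.record(policy[1], outcome="error caught in review",
           lesson="the model invents figures when the brief lacks source data")
```

Even a toy structure like this forces the questions that matter: who reviews, against which threshold, and where the lessons go.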
This is also where the popular phrase "human in the loop" becomes too vague. A human somewhere in the process is not the same as meaningful human oversight. Oversight only becomes real when responsibilities, intervention points and standards are specified in advance.
Otherwise the human is left with symbolic accountability after the system has already shaped the decision.
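As a minimal sketch of what "specified in advance" could mean, assume a hypothetical gate that simply refuses to release AI output until the reviewer named beforehand has signed off. Again, the class and its names are invented for illustration, not drawn from any framework cited above.

```python
class OversightGate:
    """Hypothetical gate: output only moves on once the reviewer
    named in advance has actually signed off."""

    def __init__(self, required_reviewer: str):
        self.required_reviewer = required_reviewer  # responsibility fixed up front
        self.signed_off_by = None                   # intervention point not yet passed

    def sign_off(self, reviewer: str) -> None:
        # The standard is checked here, before the output shapes anything.
        if reviewer != self.required_reviewer:
            raise PermissionError(f"review must come from {self.required_reviewer}")
        self.signed_off_by = reviewer

    def release(self, output: str) -> str:
        if self.signed_off_by is None:
            raise RuntimeError("no human has reviewed this output; it does not ship")
        return output


# Usage: oversight is a precondition, not an afterthought.
gate = OversightGate(required_reviewer="team lead")
gate.sign_off("team lead")
final = gate.release("Draft client reply ...")
```

The point of the toy example is the ordering: the reviewer, the intervention point and the standard exist before the output moves on, so the human shapes the decision instead of rubber-stamping it.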
The real question is not whether organisations use AI
The real question is whether they have built a system in which AI can be used intelligently.
That means a system in which people know what the tool is for, what kind of judgement still belongs to humans, how outputs are evaluated, how errors are surfaced and how the organisation learns from use over time.
Without that, companies do not scale intelligence. They scale ambiguity.
And this may be the most important shift for leaders to understand now.
AI does not simply add another tool to the workplace. It changes how decisions are prepared, how quickly outputs appear and how easily weak reasoning can hide behind fluent answers. If the surrounding decision system remains vague, the likely result is not transformation but confusion with better interfaces.
AI is not the problem.
The problem is that many organisations are trying to introduce a new complex technology into old structures of unclear judgement, diffuse responsibility and weak feedback.
This is fixable. But it requires something harder than installing software. It requires leaders who are willing to design the system, not just deploy the tool.
L. A.
-
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at Work (NBER Working Paper No. 31161). National Bureau of Economic Research. http://www.nber.org/papers/w31161
Dillon, E. W., Jaffe, S., Immorlica, N., & Stanton, C. T. (2025). Shifting Work Patterns with Generative AI (Version 4). arXiv. https://doi.org/10.48550/ARXIV.2504.11436
Meyer, H., Yee, L., Chui, M., & Roberts, R. (2025). Superagency in the workplace: Empowering people to unlock AI's full potential. McKinsey & Company. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
Microsoft (Ed.). (2025). 2025 Work Trend Index Annual Report: The Year the Frontier Firm Is Born. https://www.microsoft.com/en-us/worklab/work-trend-index/2025-the-year-the-frontier-firm-is-born
National Institute of Standards and Technology (Ed.). (2023). AI RMF Playbook. U.S. Department of Commerce. https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook
Singla, A., Sukharevsky, A., Hall, B., Yee, L., Chui, M., & Balakrishnan, T. (2025). The state of AI in 2025: Agents, innovation, and transformation. QuantumBlack, AI by McKinsey. https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/the%20state%20of%20ai/november%202025/the-state-of-ai-2025-agents-innovation_cmyk-v1.pdf
Singla, A., Sukharevsky, A., Yee, L., Chui, M., & Hall, B. (2025). The state of AI in 2025: How organizations are rewiring to capture value. QuantumBlack, AI by McKinsey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-how-organizations-are-rewiring-to-capture-value
Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). National Institute of Standards and Technology, U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1
Reflection starts with dialogue.
If you’d like to share a thought or question, you can write to me at contact@lucalbrecht.com
Thinking from Scratch
by Luc Albrecht
Exploring how we think, decide and create clarity