AI Problems Are Not AI Problems
The current discussion about AI in companies follows a pattern. Someone discovers a risk — hallucinations, prompt injection, persona drift — and presents it as a novel threat requiring novel solutions. Conferences are organised, frameworks invented, consulting products assembled.
Look more carefully, and you recognise the familiar. Not identical, but structurally related. And that is good news. Because familiar problems have familiar solutions.
Hallucinations Are Not a New Phenomenon
An AI that confidently delivers false information is called a hallucination. That sounds like a fundamental defect. In practice, it is the employee who presents numbers they have not verified with authority in a board presentation.
Every organisation knows this problem. And every organisation has developed mechanisms against it. The four-eyes principle for important documents. Cross-checking numbers against primary sources. Approval processes that prevent a single person from uncontrollably setting facts loose in the world.
None of these mechanisms are perfect. Mistakes happen anyway. But the system works well enough because bad decisions can be recognised and corrected.
With AI, it is no different. An AI that generates facts needs a checking instance. Not because AI is particularly unreliable, but because every information source needs a checking instance — human or machine. Whoever blindly trusts an AI would have blindly trusted the employee too. The problem does not sit in the technology, but in the process.
Prompt Injection is a Manipulated Template
Prompt injection is considered one of the greatest security risks when using AI. An attacker injects instructions into data that the AI processes as input. The AI follows these instructions because it cannot reliably distinguish between legitimate and injected instructions.
That sounds threatening and technically complex. In practice, it is a manipulated decision brief.
Anyone who has worked in an organisation knows the document that is prepared so that only one decision seems possible. The numbers are right, the arguments are coherent, the alternatives are presented so that they appear unattractive. The decision-maker signs because the brief is convincing — not because the decision is right.
The difference from prompt injection: speed and scale. An employee can manipulate one brief. With AI systems that automatically process documents from the company wiki, every writable document becomes a potential attack vector. The permission model designed for human readers does not protect against machine consumers.
But the countermeasure is the same as with human decision-making processes: accept no single source as the sole basis for a decision. Cross-check against independent sources. And for critical decisions, a checking instance that sees only the result, not the manipulated input.
Persona Drift is the Experienced Colleague Going Off-Track
In early 2026, Anthropic published a research paper titled "The Assistant Axis." The core message: AI systems can change their personality over the course of longer conversations. They drift away from the helpful assistant role and take on other identities — mystical, theatrical, in the worst case dangerous.
The paper is scientifically interesting. It shows that there is a measurable geometric direction in neural networks that determines how strongly a model behaves as a helpful assistant. And this direction can shift in a conversation, especially with emotional or philosophical topics.
The headlines from this read: "AI can go mad." That is alarmism.
What actually happens, everyone who leads teams knows. The experienced colleague who, after twenty years in the organisation, pursues their own agenda rather than the client's. The consultant who becomes so immersed in a project that they lose professional distance. The developer who, after six months in the same codebase, starts selling workarounds as architecture.
People drift. Employees drift. AI drifts. The solution is the same in all three cases: clear role expectations, regular feedback, and an escalation instance that intervenes when behaviour deviates from expectation.
Content Poisoning is the Politically Motivated Report
The subtlest risk is not one the AI itself produces. It arises when AI-generated content becomes part of the organisation's knowledge — and is later consumed by other AI systems as a trusted source.
An AI writes a project report. The report contains a slight bias — not wrong, but tendentious. The report is stored in the wiki. Months later, another AI analyses the wiki to prepare a decision. The bias flows into the decision basis. Nobody notices it because the report is formally correct.
This is not a theoretical scenario. It is everyday life in every organisation where reports are written. The politically motivated quarterly report that does not falsify the numbers but presents them so that they support the desired conclusion. The project status report that downplays risks because the author does not want to be the bearer of bad news.
The only difference: speed and volume. An AI produces more content than any employee. The cumulative bias can arise faster. But the countermeasure remains: source validation, cross-checking, and a healthy scepticism towards any single information source.
The Only Honest Difference
The analogies are not perfect. An AI hallucinates with a conviction that no employee achieves. Prompt injection scales in a way that manual manipulation cannot. Persona drift happens faster and more invisibly than with humans.
But the structure of the problems is the same. And therefore the solution approaches are the same — scaled to the new dimensions.
This means concretely: whoever introduces AI into an enterprise environment does not need revolutionary security concepts. They need the consistent application of existing principles. Review and approval. Four-eyes principle. Escalation paths. Source validation. Separation of Concerns.
Not because AI is harmless. But because the problems are not new. Only the tool is.
What This Means for Decision-Makers
If someone wants to sell you an AI governance framework that starts with a blank sheet of paper, be sceptical. Your organisation has spent the last twenty years building governance for human decision-making processes. These structures have not become obsolete. They need to be extended, not replaced.
The question is not whether AI brings new risks. The question is which of your existing control mechanisms are applicable to AI decisions — and where they need to be adapted.
In most cases, the answer is: less adaptation than expected.