Your content management system was designed to publish information in pages. Large language models are defining your narrative by reading everything on your website. You need a publishing platform that puts you back in control.

Large language models are creating their own narrative based on what they find

A CMO told us he was hesitant to build AI-powered search into his organisation’s website. His concern was that a language model pointed at his website might surface contradictory facts, conflicting opinions and outdated recommendations, and that he wouldn’t be able to control what it said.

It’s a reasonable concern. One you probably share. But there’s only one problem with it.

Large language models are already doing exactly that to your content. On platforms you don’t control, to audiences you’ll never know about, without you being in the room.

The question was never whether to let this happen. It’s already happening. The question is whether you have any visibility over it.

Websites were never built to be read all at once

Research organisations, innovation companies and government departments deal with a large body of knowledge as part of their day-to-day work. Their websites reflect this, publishing continuously across programmes, research outputs, policy positions, guidance and expertise, across many topics, over many years.

For communications teams at these organisations, managing the message meant managing the output. What went out, when, in what form. If you were careful about what you published, you were in control of your narrative. That worked because the web was discrete by nature. Visitors read one page at a time. Search engines matched keywords to individual pages. No mechanism existed that would read everything simultaneously and join it all up.

So websites grew without that constraint. Reports were published and moved on from. Positions evolved without earlier statements being updated. Programmes ended but their pages remained. Not because of any failure, but because nothing would ever assemble it into a single picture.

Until now.

The conclusions are being drawn whether you’re ready or not

A language model encountering your website doesn’t know which pages you’re proud of and which you’d quietly prefer people didn’t find. It doesn’t know that the 2019 report reflects a position the organisation has since moved away from. It doesn’t know that two programme pages, published three years apart, make contradictory claims about the same topic.

It knows what’s there. It joins it up. And it presents the result as a representation of your organisation’s knowledge.

This is happening on platforms you don’t control, in AI search tools, in the interfaces your stakeholders and policymakers are already using to research organisations like yours. It’s also happening, or will happen, inside your own systems if you’re building or considering AI-powered search on your own website.

The narrative is being assembled from everything you’ve ever published. You’re not in the room when it happens.

The same technology that creates the problem can fix it

The shift from content management to knowledge management sounds like a significant undertaking. But the same technology that is causing the exposure also makes managing it practical for the first time.

A content pipeline that processes your website using the same technology that large language models use makes it possible to manage your knowledge as a whole body of information rather than page by page.

The system reads everything, understands the relationships between topics, identifies where content conflicts, flags what has become outdated, and surfaces what needs attention. Not as a one-time audit, but continuously, as your website grows and changes.
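To make the first of those steps concrete, here is a minimal, illustrative sketch of how a pipeline might shortlist page pairs that cover the same topic, so an editor (or a downstream LLM check) can review them for conflicting claims. It uses a toy bag-of-words similarity; this is an assumption for illustration only, not Temper Knowledge’s implementation, and a real pipeline would use semantic embeddings rather than word counts.

```python
import math
from collections import Counter
from itertools import combinations

def vectorise(text):
    # Toy bag-of-words vector. A production pipeline would use
    # LLM embeddings here (assumption; not the actual product's method).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def same_topic_pairs(pages, threshold=0.5):
    """Return page pairs similar enough to be covering the same topic,
    i.e. candidates for a contradiction review."""
    vecs = {name: vectorise(body) for name, body in pages.items()}
    return [
        (a, b, round(cosine(vecs[a], vecs[b]), 2))
        for a, b in combinations(pages, 2)
        if cosine(vecs[a], vecs[b]) >= threshold
    ]

# Hypothetical page content for illustration.
pages = {
    "programme-2021": "the programme funds offshore wind research in scotland",
    "programme-2024": "the programme funds offshore wind research across the uk",
    "contact": "email the communications team for press enquiries",
}
print(same_topic_pairs(pages))
```

Here the two programme pages are flagged as a pair worth reviewing, while the unrelated contact page is not. The point of the sketch is the shape of the workflow: the system narrows thousands of pages down to the handful of pairs where a human judgement is actually needed.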

This is what makes narrative control possible at this scale. Not more editorial resource. Not a bigger team. A different kind of system, built around how AI reads content rather than how humans browse pages.

We are building a new type of content management system

This is exactly what we have built natively into Temper Knowledge, a knowledge-driven CMS due for full release in Q2 2026. It processes your website content through an AI pipeline, reading and understanding it semantically, not just indexing it. That means it can do what external AI systems are already doing to your content, but under your control and with your editorial team in the loop.

The monitoring features below were built directly in response to concerns raised by CMOs across the Catapult Network: organisations that could see the opportunity of AI-powered search on their own websites but were acutely aware of the risks.

Contradictory Content Checker

Identifies where content across your website conflicts with itself. Two programme pages making different claims about the same topic. A policy position that has shifted without the earlier statement being updated. The system surfaces the conflict. Your editorial team resolves it.

Fact Checking & Claims Monitor

Flags statistics and factual claims that may have been superseded, both at the point of publishing new content and continuously across your existing website. It doesn’t tell you something is wrong. It tells you something may need attention, before an AI system decides for you.

Recency Checker

Identifies content whose thinking or guidance may no longer reflect where the sector is, even if it remains factually accurate. A 2020 position on a fast-moving topic may contain no errors and still no longer represent your organisation’s current understanding. This surfaces it.

Temper Knowledge launches in Q2 2026.

Sign up to receive updates and be the first to know when Temper Knowledge launches.