
©2026. Mojar. All rights reserved.

Built by Overseek.net

Free Trial with No Credit Card Needed. Some features limited or blocked.


Industry News

The Knowledge Base Didn't Die. Its Interface Changed.

Progress Software's Sitefinity Generative CMS launch is a signal. Enterprise knowledge hasn't disappeared — its consumption layer is being rebuilt for AI-native discovery. Here's what that means.

5 min read • April 1, 2026
knowledge management · RAG · generative CMS · enterprise AI · conversational discovery · content operations · AI readiness

The knowledge base isn't dead. But the way people get to it is being rebuilt from scratch, and most organizations haven't noticed yet.

On March 31, Progress Software shipped an update to Sitefinity Generative CMS framed around AI-powered search, conversational interfaces, personalized content delivery, and what they're calling "agentic RAG." The announcement itself wasn't a surprise, but the language was direct enough to warrant attention: enterprise content and knowledge systems are being rebuilt for AI-native discovery instead of traditional browse-and-search interfaces.

That is the shift. And it is broader than one CMS vendor.

The old interface was always a workaround

For decades, "knowledge management" meant building a hierarchy — folders, categories, menus, search boxes — and hoping employees or customers would navigate to the right answer. Sometimes they did. Often they gave up halfway, called someone, or made a decision without ever checking the documentation.

Keyword search improved things at the margins. Faceted filters helped. But the model stayed the same: create a document, put it somewhere, and hope the person who needs it finds their way to it.

That was always a workaround for what the goal actually was: getting the right information to the right person at the right moment. The browse-and-search paradigm existed because nothing better was available. That is no longer the case.

What vendors are building instead

The Sitefinity release is the clearest recent example of what enterprise content infrastructure now looks like when rebuilt for AI. The update ships real-time AI-driven personalization, conversational interfaces, and "AI-powered discovery" — users ask direct questions and get synthesized answers, rather than navigating page trees.

Kontent.ai is moving in the same direction, adding expert agents to its agentic CMS offering. The pattern in 2026 CMS conversations is consistent: the front end of knowledge is becoming conversational and retrieval-shaped. Search is turning into synthesis. Navigation is collapsing into question-answering. The static page is being replaced by dynamically assembled answers.

One line from the Progress announcement is worth sitting with: "Many enterprises are experimenting with generative AI but haven't made the transition to large-scale use in their digital presence for reasons of maturity and governability." Governability gets buried at the end of that sentence. It is the most important word in it.

Why this changes content operations

The shift from browse-and-search to AI-mediated retrieval changes what "good content" means at an operational level.

Before, structured content was primarily an SEO and reuse concern. Was it tagged correctly? Was it findable? Could it be repurposed across channels? Those questions still matter.

But now structured content is machine-consumable infrastructure. When AI systems become a primary access path to enterprise knowledge, the structure, provenance, freshness, and internal consistency of source documents become operational requirements — not editorial preferences.
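To make "machine-consumable infrastructure" concrete, here is a minimal sketch of the provenance metadata a retrieval-ready document record might carry. All field names and the review window are illustrative assumptions, not any vendor's actual schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative record: the provenance fields an AI retrieval layer needs,
# which a human-navigation CMS could often get away without tracking.
@dataclass
class KnowledgeDoc:
    doc_id: str
    body: str
    owner: str                       # who is accountable for accuracy
    last_reviewed: date              # when a human last verified the content
    supersedes: Optional[str] = None # prior version this doc replaces
    review_interval: timedelta = timedelta(days=180)  # assumed policy

    def is_stale(self, today: date) -> bool:
        """A doc past its review window should be flagged or down-ranked."""
        return today - self.last_reviewed > self.review_interval

doc = KnowledgeDoc("policy-042", "Remote work policy text...", owner="hr",
                   last_reviewed=date(2024, 3, 1))
print(doc.is_stale(date(2026, 4, 1)))  # True: over two years unreviewed
```

The point of fields like `owner` and `supersedes` is that freshness and accountability become queryable properties of the corpus, rather than tribal knowledge.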

A CMS optimized for human navigation could tolerate a degree of inconsistency. A document slightly out of date didn't produce a visible failure — users might get a slightly wrong answer and move on. Retrieval systems amplify this problem. When an AI synthesizes an answer from three documents, two of which have conflicting information, the output isn't slightly wrong. It is confidently wrong, with a natural-language wrapper around the mistake.

This is the part of the transition that product announcements tend to skip past.

The risk that comes with a smoother interface

Progress uses the word "governance" several times in their announcement. That is good. But governance at the delivery layer — who sees what, how content is personalized — is different from governance at the source layer.

Source layer governance means asking harder questions. Is this document current? Does it contradict anything else in the same knowledge base? Who last reviewed it, and when? What happens when a policy changes and the old version is still being retrieved and served?
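Those questions can be turned into automated checks. A toy sketch of a source-layer audit follows; the document shape, the sample data, and the one-year staleness threshold are all assumptions for illustration, not a real product's API:

```python
from datetime import date

# Each doc: (id, topic, claim, last_reviewed). Purely illustrative data.
docs = [
    ("kb-1", "pto-policy", "PTO accrues at 15 days/year", date(2026, 1, 10)),
    ("kb-2", "pto-policy", "PTO accrues at 20 days/year", date(2023, 6, 2)),
    ("kb-3", "onboarding", "Laptops ship within 5 days",  date(2025, 11, 20)),
]

MAX_AGE_DAYS = 365  # assumed review policy

def audit(docs, today):
    # Check 1: docs past their review window.
    stale = [d[0] for d in docs if (today - d[3]).days > MAX_AGE_DAYS]
    # Check 2: topics where multiple docs make differing claims —
    # a retrieval layer will happily synthesize across both.
    by_topic = {}
    for doc_id, topic, claim, _ in docs:
        by_topic.setdefault(topic, set()).add(claim)
    conflicts = [t for t, claims in by_topic.items() if len(claims) > 1]
    return stale, conflicts

stale, conflicts = audit(docs, date(2026, 4, 1))
print(stale)      # ['kb-2']
print(conflicts)  # ['pto-policy']
```

Real contradiction detection needs semantic comparison rather than exact string matching, but even this crude pass surfaces the failure mode: two live documents giving an AI two different answers to the same question.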

A conversational interface does not fix those problems. It scales them.

If a knowledge base has 400 documents and 20% are outdated, that was manageable when users had to navigate to the right section. The accurate 80% got found because people knew where to look. Put a conversational AI interface on top of the same knowledge base and the stale 20% gets equal retrieval weight. The system has no way to distinguish a document updated yesterday from one untouched in three years.
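One partial mitigation is to stop treating query relevance as the only retrieval signal. A sketch of recency-decayed re-ranking is below; the similarity scores are made-up stand-ins for embedding cosine scores, and the one-year half-life is an assumed tuning choice:

```python
from datetime import date

# (doc_id, similarity_to_query, last_reviewed) — similarity values are
# illustrative placeholders for scores from an embedding model.
candidates = [
    ("fresh-doc", 0.80, date(2026, 3, 20)),
    ("stale-doc", 0.82, date(2023, 2, 1)),  # slightly "more relevant", three years old
]

HALF_LIFE_DAYS = 365  # a doc's weight halves for each year unreviewed

def rerank(candidates, today):
    scored = []
    for doc_id, sim, reviewed in candidates:
        age_days = (today - reviewed).days
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential freshness decay
        scored.append((sim * decay, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

print(rerank(candidates, date(2026, 4, 1)))  # ['fresh-doc', 'stale-doc']
```

With decay applied, the three-year-old document loses its narrow relevance edge; without it, the retriever genuinely cannot tell yesterday's update from 2023's.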

That is the risk nobody is talking about clearly enough. A smoother interface can scale bad source truth faster than any folder tree ever could. We covered a version of this in our piece on how agentic CMS surfaces demand governed content operations underneath them — the Sitefinity release makes that argument more concrete.

The real opportunity

The vendors building AI-native discovery surfaces are solving a real problem. Users want to ask questions and get answers, not navigate nested menus. That is the right direction.

But the organizations that will benefit are the ones pairing better interfaces with maintained knowledge underneath. That means treating source documents as living infrastructure — audited, versioned, consistent — not as a static archive that a smarter search engine now queries.

The point Progress's EVP makes about "maturity and governability" being the blockers to large-scale deployment is accurate. What they don't say is that governability at the delivery layer is the easy part. The hard part is keeping the source truth that feeds those delivery systems current and internally consistent.

As we've argued before, the real enterprise AI moat isn't the model or the interface — it's the governed source of truth underneath both. Retrieval accuracy is only as good as the documents being retrieved.

The interface is changing fast. That is genuinely good news for users. Whether the knowledge underneath it keeps pace is the question enterprises should be asking right now, before their shiny new conversational layer starts confidently distributing last year's policies.

Related Resources

  • Agentic CMS and Governed Content Operations
  • The Real Enterprise AI Moat Is a Governed Source of Truth