Five roles. A hundred questions.
A new category always raises the same doubts. We grouped the questions by role — find yours, read what concerns you directly.
Document Authority
CDO · Chief Knowledge Officer
You carry the cross-cutting standard, regulatory compliance (AI Act, GDPR), and alignment across business domains. You own no Document Product — you guarantee that all of them respect the same rules.
Same principles, opposite scope. A Data Catalog governs structured data: tables, columns, schemas. The DKP governs the unstructured estate: documents, subjects, editorial contradictions, freshness of the doctrine. The familiar concepts are all there — ownership, lineage, quality, observability — but quality is measured in meaning, not in format. It is the strategic equivalent of what your team already built on the data side, applied to the other half of the information estate — the half that represents 80 to 90 percent of the volume and that no one has tooled yet.
As a layer, not a replacement. SharePoint, Documentum, Veeva, Confluence remain your sources of editorial truth. Your Data Catalog keeps its hold on structured data. K-AI sits between the two: semantic indexing of the sources, continuous audit, exposure of a clean layer via MCP. That clean layer is what your AI agents — Claude, Copilot, enterprise ChatGPT, in-house agents — should consume, instead of querying the raw base directly. It is the equivalent of a semantic layer for your documents.
Three substantive requirements. One, source traceability: end-to-end lineage, with the exact version cited by every AI answer. Two, control of documentary bias: conflicts, obsolete content and duplicates detected continuously, so the AI no longer answers from a phantom doctrine. Three, right to explanation: an immutable audit log of AI consumption, a confidence score per answer, ACLs mirrored at chunk level. This is what makes an AI deployment defensible in front of a regulator or an internal audit.
Four structuring indicators, readable by any decision-maker. Open-conflict rate, per domain and globally. Freshness rate: share of critical documents whose version is less than twelve months old. Coverage of the business reference: share of expected subjects with at least one published document. Share of AI answers that are cited versus unsourced. One dashboard per Document Owner, consolidated at CDO level, with a quarterly export for the executive committee.
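For illustration, the four indicators reduce to simple ratios. A minimal sketch in Python — the field names and data model here are hypothetical, not K-AI's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    subject: str         # business subject the document covers
    critical: bool       # critical per the Owner's taxonomy
    age_months: int      # months since the current version was published
    open_conflict: bool  # at least one unresolved conflict signal

def open_conflict_rate(docs):
    """Share of documents carrying at least one open conflict."""
    return sum(d.open_conflict for d in docs) / len(docs)

def freshness_rate(docs):
    """Share of critical documents whose version is under twelve months old."""
    crit = [d for d in docs if d.critical]
    return sum(d.age_months < 12 for d in crit) / len(crit)

def coverage(docs, expected_subjects):
    """Share of expected subjects with at least one published document."""
    covered = {d.subject for d in docs}
    return len(covered & expected_subjects) / len(expected_subjects)

def cited_share(answers):
    """Share of AI answers that cite at least one source."""
    return sum(bool(a.get("sources")) for a in answers) / len(answers)
```

Computed per Document Owner and consolidated at CDO level, these four numbers are the whole quarterly export.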
Not mandatory. The CDO can carry the mandate, especially if the data governance team is already mature. But the documentary axis is specific enough — editorial vocabulary, interfaces with business functions, coordination of expert Document Producers — that about half of our customers dedicate one person to it. Sometimes it is a full-time role, sometimes an extended mandate for an existing Head of Data or Knowledge Lead. It is not a new silo: it is the same discipline as yours, applied to an adjacent estate.
Both, in that order. The standard is negotiated upstream with the Document Owners: criticality taxonomy, expected freshness durations, minimal metadata format, audit process. That is where adoption is won. Once adopted, the standard becomes non-negotiable at the instance level: no publication without audit, no AI consumption without citation. This is the same logic as a Data Mesh — local ownership, global standard.
Pricing is per K-AI instance (one domain knowledge base) and per audited volume. The logic is the inverse of classical ECM: you do not pay to store, you pay to make documents actionable. A typical deployment starts on one or two pilot domains, validates the KPIs, then expands in waves. Cost is rarely the bottleneck — Document Owner availability is.
Document Owner
Business function (HSE, Compliance, HR, Legal, Medical…)
You lead a domain. You commit to the editorial quality of your Document Products the way a Data Owner commits to the quality of their tables. One Owner per Product.
You take back control of your documentary estate. Your procedures become a named, versioned, audited Document Product. You see the real cracks in real time: two contradictory intervention deadlines between two notes, a subject expected by the reference that has no published document, or recurring field questions that stay unanswered. You arbitrate those situations — you do not write. The doctrine is still produced by your experts; you decide.
Generally between three and fifteen, depending on the size of your function. The rule is simple: one Product per major subject in your business reference that has its own editorial logic. In HSE, for example: Work at heights, Confined spaces, ATEX, CMR chemical risk, First aid. Too few, and you mix different doctrines inside the same Product, which makes conflicts unmanageable. Too many, and ownership dilutes: your Producers no longer know where to publish.
Yes, and that is exactly why this model exists. Without a governance platform, an AI agent answers from your raw base, with no traceability, no audit, no citation. In case of dispute, you are exposed without being able to demonstrate what the machine had in hand. With K-AI, every answer cites the exact version of the published document, under the mirrored ACL, with a confidence score and a timestamp. Your responsibility stays full, but it becomes informed and defensible.
Three criteria, in this order. Risk: what would cost dearly if the AI answered wrong — litigation, accident, regulatory sanction? Consultation volume: what gets read every day by your teams? Editorial condition: what do you already know is poorly maintained internally? Almost all of our customers start with HSE, Compliance, or Quality, because these three domains tick all three criteria. No big bang — we take one Product, bring it to steady state, then move to the next.
First readable audit: two to four weeks after indexing the base — that snapshot alone already says a lot about the real state of the estate. First wave of concrete resolutions (conflicts closed, obsolete content archived, duplicates merged): six to ten weeks. First reliable AI use case, plugged into your cleaned base via MCP: a target of twelve weeks. The pace is set by Producer availability, not by the platform — that is the honest answer.
The Document Steward coordinates day to day; you arbitrate. The platform makes plainly visible which Producers leave conflicts open on their scope — no more grey area. But adoption is not won by sanction: it is won by showing Producers the value on their side. Fewer recurring questions to handle, fewer clarification meetings, AI answers that no longer drift on their scope. At steady state, the Producers defend K-AI themselves.
Document Steward
Business Knowledge Manager · extended Data Steward
You coordinate Producers day to day, process K-AI Audit signals, and track quality KPIs. You are the main user of the K-AI Audit console — that’s where you spend your days.
In the morning, you open the console: overnight open signals — conflicts, duplicates, obsolete content, missing subjects — sorted by severity and age. You work through each signal: a comparative reading of extracts side by side, then a choice of action (assign to a specific Producer, merge two versions, archive, or mark as accepted if the situation is deliberate). Sixty to ninety minutes in the morning is enough at steady state. The rest of the day, you work with the Producers and the Document Owner: harder arbitrations, editorial conventions, framing of new Products.
On a well-kept domain: five to twenty signals per day, of which only two to five demand a real editorial decision. The rest resolve in one click — an obvious duplicate, a version genuinely out of date, a pattern already accepted. The initial catch-up phase is denser: a few weeks to clear the historical backlog before the base reaches a healthy baseline. After that, the cadence stabilizes.
Complements — and extends it naturally. The discipline is the same as for structured data: you track duplicates, you watch freshness, you make sure every asset has an Owner. The subject changes: editorial meaning instead of technical schema, doctrine contradiction instead of referential integrity violation. At about half of our customers, the Document Steward and the Data Steward are the same person, with an extended mandate and a bit more time allocated.
Three concrete levers. One, severity thresholds are tunable per Product — you decide what deserves a signal and what can wait. Two, similar signals are grouped automatically: the same pattern repeated across ten documents becomes a single signal, and you treat the root cause. Three, accepted patterns (for example a deliberate duplicate between French and English versions) are remembered: the platform stops re-asking the same question. After a few weeks, K-AI Audit knows your tolerance.
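The three levers can be pictured as a small triage step. This is a sketch under assumed names — `pattern` and `accepted_patterns` are illustrative, not K-AI's actual configuration schema:

```python
def triage(signals, accepted_patterns):
    """Group open signals by root-cause pattern, skipping accepted ones.

    Each signal is a dict with a 'pattern' key identifying its root cause
    (e.g. 'fr-en-duplicate'). Accepted patterns are never raised again;
    the rest collapse into one work item per root cause.
    """
    groups = {}
    for s in signals:
        if s["pattern"] in accepted_patterns:
            continue  # tolerance already recorded: do not re-ask
        groups.setdefault(s["pattern"], []).append(s)
    return groups

signals = [
    {"pattern": "fr-en-duplicate", "doc": "Work at heights (EN)"},
    {"pattern": "stale-version", "doc": "ATEX zoning note"},
    {"pattern": "stale-version", "doc": "CMR exposure sheet"},
]
# The FR/EN duplicate is deliberate; any number of stale documents
# sharing one root cause still surface as a single work item.
todo = triage(signals, accepted_patterns={"fr-en-duplicate"})
```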
Domain business knowledge (you must understand what the doctrine says to arbitrate), editorial appetite (you work on text, not on spreadsheets), Data Steward rigor (KPI tracking, facilitation of a Producer community). No need to be a data scientist: the console is editorial, not technical. Many of our Stewards come from existing functions: documentation, quality, knowledge management, compliance. The role is new, the profile is not.
The full history of your decisions stays in the platform: processed signals, accepted patterns, justifications, editorial conventions set. Your successor picks up by reading the trace — they see why a given duplicate was judged acceptable, why a given obsolete version was archived, which areas are tense with which Producers. It is the strict opposite of tacit knowledge that disappears with a person. It is written, transmissible governance.
Document Producer
Subject Matter Expert (SME) · Engineer, lawyer, HR, doctor…
You are the author. You create and maintain the content of a Product under the Owner’s authority. K-AI does not write in your place — it tells you what is missing, what is in conflict, and what is aging.
No — the opposite is true. K-AI produces no doctrine: it does not know how to arbitrate between two interpretations of a standard, how to write a new protocol, or how to decide the company's position on an edge case. Doctrine remains your craft — that is precisely why you are there. What the platform does is free up time: fewer recurring questions to handle, fewer 'has this changed?' emails. AI agents will consume your content as an extension of what you write — they will not write in your place.
At steady state, less than today without K-AI. You stop answering the same field question ten times (the answer is in the document, and the engine finds it). You see your duplicates disappear, so you no longer write the same thing twice. You stop searching SharePoint to check whether someone else already covered the subject. The catch-up phase is more demanding: one to two hours per week for a few months to clear old conflicts. After that, you gain back significantly more time than you put in.
Yes, but it is not a validation committee that slows everything down. You write as usual in Word, Confluence, your normal tool. At publication time, K-AI runs an automatic review: potential conflicts with an existing document, subjects already covered elsewhere, missing metadata (effective date, version, scope). It proposes an action, you decide. It is editorial assistance that feels more like an intelligent proofreader than a gate.
You mark it as 'non-conflict' and explain why in two sentences: two distinct geographic scopes, two deliberately coexisting versions, two levels of detail. The platform learns your arbitration and stops re-asking the same question. If the Document Steward or the Document Owner contests it, a discussion opens in the comment thread — traced, defensible, readable by all. No invisible censorship, no document modified behind your back.
Yes, without any change. K-AI is not a text editor; it does not ask you to move your files. It sits on top of your existing sources: Office, SharePoint, Confluence, Notion, your business EDM. No change to production tools, and no change to hosting either: your documents stay where they are, K-AI indexes them in place.
No, never. The layer that AI agents can consume contains only published and audited documents. Your drafts stay in your editorial workspace and are visible only to you, the Document Steward, and the Document Owner of the Product (for the audit queue). You stay in control of when your work goes into production.
Document Consumer
Employees · AI agents · Third-party applications
You consume the clean, governed, semantically coherent layer. Humans via conversational search, AI agents via MCP — the same quality contract in both cases.
Every answer comes with three signals. The cited sources: document title, version, date. A confidence score: high when the answer rests on a recently audited document, low when sources are scattered or old. An 'audited by [the Owner of the domain] on [date]' badge. If the base has gaps or contradictions, K-AI says so explicitly rather than inventing an answer. You decide whether to trust the answer or to go check at the source.
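A toy version of how such a confidence signal could be derived — the rule and the field names here are assumptions for illustration, not K-AI's actual scoring:

```python
def confidence(sources):
    """Toy confidence rule: 'high' when the answer rests on one or two
    recently audited sources, 'low' when sources are scattered or old,
    'none' when nothing citable was found."""
    if not sources:
        return "none"
    recent = all(s["audit_age_days"] < 90 for s in sources)
    return "high" if recent and len(sources) <= 2 else "low"

# Hypothetical answer payload carrying the three signals.
answer = {
    "text": "Harness inspection is required every 12 months.",
    "sources": [{"title": "Work at heights", "version": "4.2",
                 "date": "2025-01-10", "audit_age_days": 30}],
}
answer["confidence"] = confidence(answer["sources"])  # one recent source: "high"
```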
For one of two reasons. Either no published document covers your question exactly — there is a hole in the doctrine. Or several documents cover the subject but contradict each other, and the conflict has not yet been arbitrated. In both cases, K-AI prefers silence to invention. Your question is escalated to the Document Steward and the Document Owner of the domain concerned — it feeds the editorial work queue directly. That is how the base fills out.
Yes, at paragraph level. The access rights defined in your source (SharePoint, EDM, Confluence) are mirrored into K-AI at fine granularity. You will never see an answer that leans on a document you do not have access to, even partially. When an AI agent is invoked by a user, it strictly inherits that user’s rights — it never has more than you do.
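The mirroring described above amounts to filtering at chunk level before any retrieval happens. A minimal sketch, assuming ACLs are sets of group names — illustrative only, not the actual data model:

```python
def visible_chunks(chunks, user_groups):
    """Keep only the chunks whose mirrored ACL intersects the user's groups.

    An AI agent invoked by this user passes through the same filter,
    so it can never cite a paragraph the user could not open themselves.
    """
    return [c for c in chunks if c["acl"] & user_groups]

chunks = [
    {"text": "Severance terms...", "acl": {"hr", "legal"}},
    {"text": "Board minutes...", "acl": {"exec"}},
]
```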
Every answer has a 'report' button. You describe in one sentence what is wrong (incorrect information, outdated source, misunderstood context). The feedback opens a signal in the console of the Steward of the domain concerned, with your question, the answer received, and the cited sources. It is a structured feedback loop — not an email that gets lost, not a ticket that languishes. You receive a notification when the signal is processed.
One MCP endpoint per K-AI instance, OAuth authentication, scopes mirrored from user rights. Your agent calls kai.search or the other exposed functions and receives the answer, the cited sources, and the confidence score. Compatible with Claude Desktop, Cursor, Copilot Studio, and in-house agents. Full integration documentation lives in the K-AI GitBook — you are up and running within a few hours.
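Under the hood, an MCP tool invocation is a JSON-RPC 2.0 request. A sketch of what a kai.search call looks like on the wire — the tool name and query argument follow the text above; transport, OAuth token handling, and the endpoint URL are left out:

```python
import json

def build_tool_call(tool, arguments, request_id=1):
    """Build an MCP 'tools/call' request (standard JSON-RPC 2.0 shape)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

req = build_tool_call("kai.search", {"query": "ATEX zoning requirements"})
payload = json.dumps(req)  # body sent over the authenticated MCP transport
```

The response carries the answer text, the cited sources, and the confidence score, which your agent can surface as-is.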
No. K-AI trains no model on your data. Requests and answers are kept only for two purposes: audit (knowing what the machine answered, to whom, when, from which source) and the quality feedback loop (unanswered questions, reports). Default hosting in Europe, with Snowflake or AWS options depending on the contract.
