AI-powered RAG-based multi-agent solution for knowledge management automation
  • MarTech & AdTech
  • AI & ML


Project snapshot

This case study tells the story of developing an autonomous, AI-based knowledge base solution for a multinational marketing and advertising holding company: creating a uniform index across fragmented corporate knowledge and building an LLM-based chatbot with 95% response accuracy.

Client

Multinational marketing and advertising holding company with 5,000+ employees operating in more than 25 countries, managing extensive technical documentation, methodological materials, and internal regulations.

Solution

AI-powered RAG-based multi-agent system that autonomously creates, tests, and validates the corporate knowledge base, ensuring real-time accuracy and seamless access through specialized agents for retrieval, generation, quality control, and tone adaptation.

Business function

Knowledge management and employee support

Industry

Marketing & advertising

Challenge

Automate corporate knowledge management with minimal human effort while ensuring accuracy, relevance, and real-time accessibility across fragmented, loosely structured knowledge bases owned by different teams.

Result

  • 95% accuracy in query responses
  • Fully autonomous system with automated annotation
  • Fast information retrieval eliminating manual searches
  • Reduced support team workload through automated responses

What is RAG?

Retrieval-Augmented Generation (RAG) is an AI approach that first searches for relevant documents in a knowledge base and then generates a response using this retrieved information as context.

For example, if a user asks, “What are the benefits of AI-powered search?”, RAG retrieves the relevant documents and prompts the language model along the lines of: “Based on the retrieved data – AI-powered search improves efficiency, ensures up-to-date information, and automates knowledge management – generate a detailed response.” This ensures answers are fact-based, relevant, and up-to-date rather than relying solely on pre-trained knowledge.

RAG solves a key limitation of fine-tuned LLMs: the risk of outdated knowledge. Instead of relying solely on pre-trained data, RAG retrieves relevant documents in real-time and generates responses based on fresh, contextualized information.
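To make the retrieve-then-generate flow concrete, here is a minimal sketch in Python. The vector index, LLM client, and function names are placeholders for whatever stack is in use, not part of the system described in this case study.

```python
# Minimal retrieve-then-generate sketch (illustrative names only; any vector
# store and LLM client can stand in for `search_index` and `llm`).
def answer_with_rag(question: str, search_index, llm, top_k: int = 5) -> str:
    # 1. Retrieval: find the documents most similar to the question.
    docs = search_index.search(question, top_k=top_k)
    context = "\n\n".join(doc.text for doc in docs)

    # 2. Generation: ask the model to answer using only the retrieved context.
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm.generate(prompt)
```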

Client background

The client is a global marketing and advertising holding company with over 15 years of history, employing 5,000+ professionals across more than 25 countries. Their operations encompass comprehensive marketing services, advertising campaigns, and strategic consulting for international brands.

Before implementing the AI-powered knowledge management system, the client faced several critical operational challenges:

  • Time-consuming searches – Employees spent excessive time searching for information, with simple queries like “How to book a vacation” returning over 1,000 results in Confluence
  • Outdated information – Retrieved information was often irrelevant or outdated, reducing operational efficiency
  • Support team overload – Repetitive questions overwhelmed the support team, preventing focus on complex issues
  • Onboarding difficulties – New employees struggled to quickly find necessary information, slowing integration
  • Fragmented knowledge bases – Multiple knowledge sources in different formats owned by different teams created information silos

The company needed an intelligent assistant that could efficiently search, process, and present relevant information from their vast knowledge base while maintaining high accuracy.

Business challenge

The company sought an AI-powered knowledge management solution to automate information retrieval and ensure high accuracy, relevance, and real-time accessibility across fragmented corporate knowledge bases with minimal human effort.

Potential threat: If the solution was not implemented, the client would continue facing declining productivity, employee dissatisfaction, and increasing support costs from manual information searches.

Constraints:

  • Data: Highly fragmented, loosely structured knowledge across multiple formats and ownership boundaries.
  • Accuracy: Need for 95%+ response accuracy to provide reliable information for business operations.
  • Privacy: High data privacy requirements necessitating on-premise deployment capabilities.
  • Integration: Must connect with closed internal systems and existing infrastructure.
  • Scalability: Handle 5,000+ employees across 25+ countries accessing information simultaneously.
  • Adaptability: Maintain up-to-date information without frequent retraining or manual updates.
  • Tone: Preserve corporate communication style – friendly, concise, with appropriate humor.

Problems and solutions

Building the RAG-based multi-agent knowledge management system presented challenges in information retrieval accuracy, automated quality control, and deployment flexibility. The Xenoss team addressed them with intelligent search, autonomous validation, and adaptive response generation.

Initial low accuracy with basic retrieval

At project start, answer accuracy was around 40%, with the model providing outdated data, ignoring query context, and generating responses that were either too long or too vague.

Solution: We implemented optimized data retrieval that excludes irrelevant sources, configured Guardrails filters to verify answer correctness, set up RAGAS-based automated testing, and introduced specialized agents for search, generation, and verification, increasing accuracy to 95%.

Fragmented knowledge base management

The client’s knowledge existed across multiple systems in different formats owned by different teams, making it impractical to search – simple queries returned over 1,000 results without relevance ranking.

Solution: We created a uniform index on top of the fragmented knowledge space, implementing vector-based semantic search with the e5-large embedding model for high-quality text vectorization and intelligent similarity matching.
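As an illustration, the sketch below shows how a uniform semantic index over heterogeneous documents could be queried with e5-large via the sentence-transformers library. The sample passages and query are invented for the example; e5 models expect the “query:” / “passage:” prefixes shown in the code.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative sketch: embed fragmented documents into one vector space and
# rank them by cosine similarity to the user query.
model = SentenceTransformer("intfloat/e5-large")

passages = [
    "passage: Vacation requests are submitted through the HR portal...",
    "passage: Expense reports must be filed within 30 days...",
]
doc_vectors = model.encode(passages, normalize_embeddings=True)

query_vector = model.encode("query: How do I book a vacation?",
                            normalize_embeddings=True)

scores = util.cos_sim(query_vector, doc_vectors)[0]
best = scores.argmax().item()
print(passages[best], float(scores[best]))
```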

Manual annotation bottleneck

Traditional approaches required extensive human effort to annotate documents, create question-answer pairs, and validate knowledge base entries, creating scalability limitations.

Solution: We developed an automated annotation process: documents are uploaded, an auto-annotator identifies key blocks and generates questions and answers, and a human is involved only for validation. This allows a single document to be automatically transformed into an entire knowledge domain with minimal human involvement.

Response quality and hallucination prevention

LLMs risk generating plausible-sounding but incorrect information (hallucinations), particularly problematic for business-critical knowledge where accuracy is essential.

Solution: We implemented a multi-agent architecture with a specialized Quality Control agent (Guardrails) that verifies response correctness before delivery to the user, plus a RAGAS-based automated evaluation system that continuously monitors accuracy, relevance, and error patterns.
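For reference, a minimal evaluation pass with the open-source RAGAS package might look like the sketch below. The API shown reflects a recent ragas release and the sample rows are invented for illustration, not data from the client’s knowledge base.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# Illustrative evaluation set: each row pairs a question with the generated
# answer, the contexts supplied by retrieval, and a reference answer.
eval_data = Dataset.from_dict({
    "question": ["How do I book a vacation?"],
    "answer": ["Submit a request in the HR portal at least two weeks ahead."],
    "contexts": [["Vacation requests are submitted through the HR portal..."]],
    "ground_truth": ["Vacations are booked via the HR portal."],
})

# Per-metric scores can be tracked over time to monitor accuracy and drift.
report = evaluate(eval_data, metrics=[faithfulness, answer_relevancy, context_precision])
print(report)
```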

Corporate tone of voice maintenance

Generic AI responses lacked the company’s communication style; answers needed to match the corporate tone – friendly, concise, with appropriate humor – to keep employees engaged.

Solution: We developed an Adaptation agent (ToV – Tone of Voice) that ensures responses align with corporate communication standards, analyzing and adjusting generated content to maintain consistent brand voice across all interactions.
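Conceptually, the adaptation step can be as simple as a final LLM pass that rewrites a verified answer against a style guide. The sketch below illustrates the idea; the style rules and the `llm` client are placeholders, not the production prompt.

```python
STYLE_GUIDE = (
    "Rewrite the answer in the company voice: friendly, concise, "
    "with light humor where appropriate. Do not change any facts."
)

def adapt_tone(verified_answer: str, llm) -> str:
    # Final pass of the ToV agent: the facts stay fixed, only the wording changes.
    prompt = f"{STYLE_GUIDE}\n\nAnswer:\n{verified_answer}"
    return llm.generate(prompt)
```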

Deployment flexibility and data privacy

The organization required both cloud and on-premise deployment options while maintaining high data privacy standards and integration with closed internal systems.

Solution: We architected the system for flexible deployment using Llama 3.1 8B on-premise for secure processing while supporting cloud deployment, ensuring high availability without external dependencies and meeting strict privacy requirements.

System architecture

The foundation of the system is a multi-agent architecture where different agents perform specialized tasks:

  • The retrieval agent extracts relevant documents from the database.
  • The generation agent formulates meaningful responses based on retrieved information.
  • The quality control agent (Guardrails) verifies the correctness of the response before sending it to the user.
  • The adaptation agent (ToV – Tone of Voice) ensures that responses align with the corporate communication style.
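As a rough sketch, the agents can be chained into a single pipeline: retrieval feeds generation, and an answer only reaches the user after passing quality control and tone adaptation. The names below are illustrative, not the actual implementation.

```python
def handle_query(question: str, retriever, generator, guardrails, tov) -> str:
    # Retrieval agent: pull the most relevant documents from the index.
    docs = retriever.search(question, top_k=5)

    # Generation agent: draft an answer grounded in the retrieved documents.
    draft = generator.answer(question, docs)

    # Quality control agent (Guardrails): regenerate if the draft fails checks.
    if not guardrails.is_valid(question, draft, docs):
        draft = generator.answer(question, docs, strict=True)

    # Adaptation agent (ToV): align the wording with the corporate voice.
    return tov.adapt(draft)
```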

Automated data annotation

We developed an automated annotation process that minimizes human involvement:

  1. A document is uploaded into the annotation system.
  2. The auto-annotator identifies key text blocks and converts them into vector representations.
  3. The system generates questions and answers and sends them to a human for validation.
  4. If validation is approved, testing is initiated, and recommendations are provided.
  5. If needed, the system corrects errors and re-tests the model.

This process allows a single document to be automatically transformed into an entire knowledge domain, which is then used for retrieving relevant information.
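A simplified version of that flow could look like the sketch below; the annotator and validation queue objects are placeholders standing in for the internal tooling.

```python
def annotate_document(doc_text: str, annotator, validation_queue):
    # Step 2: identify key text blocks and convert them into vectors.
    blocks = annotator.extract_key_blocks(doc_text)
    vectors = annotator.embed(blocks)

    # Step 3: generate candidate question-answer pairs for each block.
    qa_pairs = [annotator.generate_qa(block) for block in blocks]

    # Steps 3-5: a human only validates; approved pairs trigger automated
    # testing, and failing entries are corrected and re-tested.
    validation_queue.submit(qa_pairs, on_approve=annotator.run_tests)
    return blocks, vectors, qa_pairs
```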


Deployment and maintenance

To make the system as accessible as possible, we started with a web interface and then added a Telegram bot integration. The bot quickly became integral to daily workflows, allowing employees to access knowledge directly within their work conversations.
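As an example of how such an integration can look, here is a minimal sketch using the python-telegram-bot library (v20+ API). The token and the `rag_answer` stub are placeholders, not the client’s actual bot.

```python
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

def rag_answer(question: str) -> str:
    # Placeholder: call the RAG pipeline described above.
    return f"(answer to: {question})"

async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Forward the employee's message to the knowledge assistant and reply in chat.
    await update.message.reply_text(rag_answer(update.message.text))

app = ApplicationBuilder().token("TELEGRAM_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))
app.run_polling()
```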

On-premise vs. cloud deployment

The system was initially designed to be deployed on-premise; however, it is built in a way that also allows easy cloud deployment.

The on-premise choice was driven by:

  • High data privacy requirements.
  • The need to integrate with closed internal systems.
  • The requirement for high availability without reliance on external cloud services.
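For illustration, an on-premise Llama 3.1 8B instance exposed through an OpenAI-compatible endpoint could be queried as in the sketch below; the endpoint URL, model name, and serving stack are assumptions for the example, not the client’s actual setup.

```python
from openai import OpenAI

# Assumed setup: the on-premise Llama 3.1 8B model is served behind an
# OpenAI-compatible API (e.g. vLLM); endpoint and model name are illustrative.
client = OpenAI(base_url="http://llm.internal:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "How do I book a vacation?"}],
)
print(response.choices[0].message.content)
```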

Results

95% accuracy in responses

Achieved high accuracy through continuous refinement, with optimized retrieval, Guardrails verification, RAGAS automated testing, and a multi-agent architecture ensuring reliable delivery of business-critical information.

Fully autonomous operation

Employees no longer spend time on manual searches thanks to automated annotation, intelligent retrieval, and a self-validating knowledge base that operates without human intervention for routine queries.

Reduced support team workload

Common questions are now automated through the intelligent assistant, allowing support teams to focus on complex issues while maintaining high-quality responses and fast resolution times.

Want to build your own solution?

Contact us