Project snapshot
This case study tells the story of developing an autonomous AI-based knowledge base solution for a multinational marketing and advertising holding company: creating a uniform index across fragmented corporate knowledge and building an LLM-based chatbot with 95% response accuracy.
Client
Multinational marketing and advertising holding company with 5,000+ employees operating in more than 25 countries, managing extensive technical documentation, methodological materials, and internal regulations.
Solution
AI-powered RAG-based multi-agent system that autonomously creates, tests, and validates the corporate knowledge base, ensuring real-time accuracy and seamless access through specialized agents for retrieval, generation, quality control, and tone adaptation.
Business function
Knowledge management and employee support
Industry
Marketing & advertising
Challenge
Automate corporate knowledge management with minimal human effort while ensuring accuracy, relevance, and real-time accessibility across fragmented, loosely structured knowledge bases owned by different teams.
Result
A fully autonomous knowledge base with 95% response accuracy: employees get instant answers, and the support team's workload on routine questions is reduced.
What is RAG?
Retrieval-Augmented Generation (RAG) is an AI approach that first searches for relevant documents in a knowledge base and then generates a response using the retrieved information as context.
For example, if a user asks, “What are the benefits of AI-powered search?”, RAG first retrieves the most relevant documents and then prompts the language model with those passages as context, instructing it to generate a detailed, grounded response. This keeps answers fact-based, relevant, and up to date rather than relying solely on pre-trained knowledge.
RAG solves a key limitation of fine-tuned LLMs: the risk of outdated knowledge. Instead of relying solely on pre-trained data, RAG retrieves relevant documents in real time and generates responses from fresh, contextualized information.
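The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not the production system: `search` is a naive keyword retriever standing in for vector search, and `llm` is a stub standing in for the model call.

```python
import re

# Minimal RAG loop: retrieve context, then generate an answer grounded in it.
# `search` and `llm` are illustrative stubs, not the production components.

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def search(query, documents, top_k=1):
    """Naive keyword retriever: rank documents by query-term overlap.
    The production system uses vector-based semantic search instead."""
    q = tokens(query)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:top_k]

def llm(prompt):
    """Stand-in for the language-model call; echoes the grounded prompt."""
    return f"[generated from context] {prompt}"

def answer(query, documents):
    context = "\n".join(search(query, documents))
    prompt = (f"Context:\n{context}\n\n"
              f"Question: {query}\nAnswer using only the context above.")
    return llm(prompt)

docs = [
    "AI-powered search improves efficiency and keeps answers up to date.",
    "The cafeteria opens at 9 am.",
]
print(answer("What are the benefits of AI-powered search?", docs))
```

The key property is that the model only sees the retrieved passages, so its answer is anchored to the knowledge base rather than to whatever it memorized during pre-training.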
Client background
The client is a global marketing and advertising holding company with over 15 years of history, employing 5,000+ professionals across more than 25 countries. Their operations encompass comprehensive marketing services, advertising campaigns, and strategic consulting for international brands.
Before implementing the AI-powered knowledge management system, the client faced several critical operational challenges: knowledge was fragmented across loosely structured bases owned by different teams, and finding information relied on slow manual searches.
The company needed an intelligent assistant that could efficiently search, process, and present relevant information from its vast knowledge base, ensuring high accuracy, relevance, and real-time accessibility with minimal human effort.
Potential threat: If the solution were not implemented, the client would continue facing declining productivity, employee dissatisfaction, and increasing support costs from manual information searches.
The RAG-based multi-agent knowledge management system presented challenges in information retrieval accuracy, automated quality control, and deployment flexibility. The Xenoss team created solutions for intelligent search, autonomous validation, and adaptive response generation.
At project start, answer accuracy was around 40%, with the model providing outdated data, ignoring query context, and generating responses that were either too long or too vague.
Solution: We implemented optimized data retrieval excluding irrelevant sources, configured Guardrails filters to verify answer correctness, developed the RAGAS system for automated testing, and introduced specialized agents for search, generation, and verification, increasing accuracy to 95%.
The client’s knowledge existed across multiple systems in different formats owned by different teams, making it impractical to search: simple queries returned over 1,000 results without relevance ranking.
Solution: We created a uniform index on top of the fragmented knowledge space, implementing vector-based semantic search with the e5-large embedding model for high-quality text vectorization and intelligent similarity matching.
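The core of vector-based semantic search is ranking documents by similarity between embedding vectors. The sketch below uses tiny hand-made vectors purely for illustration; in the real system the vectors would come from the e5-large embedding model, and the document ids shown are hypothetical.

```python
import math

# Toy semantic-retrieval sketch. In production the vectors come from an
# embedding model (e5-large); here they are small hand-made stand-ins.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, top_k=2):
    """Return the ids of the top_k documents most similar to the query."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:top_k]]

# Hypothetical indexed documents with precomputed embeddings.
index = [
    {"id": "vacation-policy",  "vec": [0.9, 0.1, 0.0]},
    {"id": "brand-guidelines", "vec": [0.1, 0.8, 0.2]},
    {"id": "expense-rules",    "vec": [0.7, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], index))
```

Because similarity is computed in embedding space rather than by keyword matching, a query can surface documents that use entirely different wording, which is what makes the uniform index usable across fragmented sources.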
Traditional approaches required extensive human effort to annotate documents, create question-answer pairs, and validate knowledge base entries, creating scalability limitations.
Solution: We developed an automated annotation process: documents are uploaded, the auto-annotator identifies key blocks and generates question-answer pairs, and only these pairs are sent for human validation. This allows a single document to be automatically transformed into an entire knowledge domain with minimal human involvement.
LLMs risk generating plausible-sounding but incorrect information (hallucinations), particularly problematic for business-critical knowledge where accuracy is essential.
Solution: We implemented a multi-agent architecture with specialized Quality Control agent (Guardrails) that verifies response correctness before user delivery, plus RAGAS automated evaluation system continuously monitoring accuracy, relevance, and error patterns.
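A quality-control gate of this kind can be sketched as a check that a generated answer is actually supported by the retrieved context before it reaches the user. The overlap heuristic below is a deliberately simple stand-in for the Guardrails-style verification; the threshold and sample texts are illustrative.

```python
# Toy stand-in for a Guardrails-style groundedness check: pass an answer
# only if most of its content words appear in the retrieved context.

def grounded(answer, context, threshold=0.6):
    """Crude groundedness score: fraction of the answer's content words
    (length > 3) that occur in the context."""
    words = [w for w in answer.lower().split() if len(w) > 3]
    if not words:
        return False
    hits = sum(1 for w in words if w in context.lower())
    return hits / len(words) >= threshold

def quality_gate(answer, context):
    """Deliver the answer only if it passes the check; otherwise escalate."""
    return answer if grounded(answer, context) else "Escalated for review."

ctx = "The travel policy allows booking economy flights for trips under 6 hours."
print(quality_gate("Economy flights are allowed for trips under 6 hours.", ctx))
print(quality_gate("Business class is always permitted.", ctx))
```

A real verifier uses far stronger signals than word overlap, but the control flow is the same: an unsupported answer is blocked rather than delivered, which is what keeps hallucinations away from business-critical queries.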
Generic AI responses lacked the company’s communication style, requiring responses to match corporate tone – friendly, concise, with appropriate humor for employee engagement.
Solution: We developed an Adaptation agent (ToV – Tone of Voice) that ensures responses align with corporate communication standards, analyzing and adjusting generated content to maintain consistent brand voice across all interactions.
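One common way to implement such an agent is to rewrite the draft answer by prompting the model with the corporate style guide. The sketch below assumes that pattern; `llm` is a stub, and the style rules shown are illustrative, not the client's actual guide.

```python
# Sketch of a Tone-of-Voice adaptation step: the agent rewrites a draft
# answer against a style guide. `llm` is a stub for the model call.

STYLE_GUIDE = (
    "Friendly and concise. Address the reader directly. "
    "Light humor is welcome; jargon is not."
)

def llm(prompt):
    """Stand-in for the language-model call; echoes the last prompt line."""
    return f"[rewritten] {prompt.splitlines()[-1]}"

def adapt_tone(draft):
    prompt = (
        "Rewrite the answer below to match this tone of voice:\n"
        f"{STYLE_GUIDE}\n"
        f"{draft}"
    )
    return llm(prompt)

print(adapt_tone("Per regulation 4.2, vacation requests require approval."))
```

Keeping tone adaptation as a separate, final agent means the retrieval and generation stages can stay focused on factual accuracy, with style applied uniformly at the end.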
Organization required both cloud and on-premise deployment options while maintaining high data privacy standards and integration with closed internal systems.
Solution: We architected the system for flexible deployment using Llama 3.1 8B on-premise for secure processing while supporting cloud deployment, ensuring high availability without external dependencies and meeting strict privacy requirements.
System architecture
The foundation of the system is a multi-agent architecture in which each agent performs a specialized task:
- Retrieval agent: searches the indexed knowledge base for relevant passages
- Generation agent: composes answers from the retrieved context
- Quality Control agent (Guardrails): verifies response correctness before delivery
- Adaptation agent (ToV): adjusts responses to the corporate tone of voice
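Chained together, the agents form a simple pipeline: retrieve, generate, verify, adapt. The sketch below shows that control flow with stub functions; every name and behavior here is illustrative, not the production implementation.

```python
# Illustrative multi-agent pipeline: each stage is a small, replaceable
# function. All agents here are stubs showing the control flow only.

def retrieval_agent(query):
    """Stub: would query the vector index for relevant passages."""
    return "Retrieved policy text relevant to: " + query

def generation_agent(query, context):
    """Stub: would call the LLM with the retrieved context."""
    return f"Answer to '{query}' based on: {context}"

def quality_agent(answer):
    """Stub: would run Guardrails-style correctness checks."""
    return len(answer) > 0

def tone_agent(answer):
    """Stub: would adjust the answer to the corporate tone of voice."""
    return answer + " Hope that helps!"

def handle(query):
    context = retrieval_agent(query)
    draft = generation_agent(query, context)
    if not quality_agent(draft):
        return "Escalated for review."
    return tone_agent(draft)

print(handle("How do I file an expense report?"))
```

The benefit of this decomposition is that each agent can be tested, tuned, or swapped independently; for example, the quality check can be tightened without touching retrieval or generation.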
Automated data annotation
We developed an automated annotation process that minimizes human involvement:
1. Documents are uploaded to the system.
2. The auto-annotator identifies key content blocks.
3. Question-answer pairs are generated for each block.
4. Only the generated pairs are sent for human validation.
This process allows a single document to be automatically transformed into an entire knowledge domain, which is then used for retrieving relevant information.
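The annotation flow can be sketched as a short pipeline: split a document into blocks, generate a question-answer pair per block, and queue the pairs for human sign-off. Here `generate_qa` is a stub standing in for the LLM-based generator, and the blank-line block splitting is an illustrative simplification.

```python
# Sketch of the automated annotation flow described above.
# `generate_qa` stands in for the LLM-based question/answer generator.

def split_blocks(document):
    """Treat blank-line-separated paragraphs as annotation blocks."""
    return [b.strip() for b in document.split("\n\n") if b.strip()]

def generate_qa(block):
    """Stub: would prompt an LLM to write a Q&A pair for the block."""
    return {"question": f"What does this section cover? ({block[:30]}...)",
            "answer": block}

def annotate(document):
    """Turn one document into a queue of Q&A pairs awaiting validation."""
    validation_queue = []
    for block in split_blocks(document):
        pair = generate_qa(block)
        pair["status"] = "pending_validation"  # the only human touchpoint
        validation_queue.append(pair)
    return validation_queue

doc = "Expense reports are due monthly.\n\nRemote work requires manager approval."
queue = annotate(doc)
print(len(queue), queue[0]["status"])
```

Because humans only review the generated pairs rather than writing them, the cost per knowledge-base entry drops sharply, which is what makes the approach scale to large document sets.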
Deployment and maintenance
To make the system as accessible as possible, we started with a web interface and then added a Telegram bot integration. The bot quickly became integral to daily workflows, allowing employees to access knowledge directly within their work conversations.
On-premise vs. cloud deployment
The system was initially designed for on-premise deployment; however, it is built in a way that also allows easy cloud deployment.
The on-premise choice was driven by:
- strict data privacy requirements
- integration with closed internal systems
- high availability without external dependencies
95% accuracy in responses
We achieved this accuracy through continuous refinement: optimized retrieval, Guardrails verification, RAGAS automated testing, and a multi-agent architecture that ensures reliable delivery of business-critical information.
Fully autonomous operation
Employees no longer spend time on manual searches, thanks to automated annotation, intelligent retrieval, and a self-validating knowledge base that operates without human intervention for routine queries.
Reduced support team workload
Common questions are now automated through the intelligent assistant, allowing support teams to focus on complex issues while maintaining high-quality responses and fast resolution times.
Want to build your own solution?
Contact us