This time, IceRock acted as its own customer. The goal of the project was to solve an internal business problem: reduce distractions for the developers and improve access to our knowledge base.
We developed an intelligent chatbot integrated into the corporate Slack. This bot accesses our internal knowledge base in Confluence in real time, finds relevant information, and leverages GPT capabilities to provide employees with accurate and detailed answers to their questions.
The project started as an R&D initiative by one of our top developers, Alexey, aimed at exploring the emerging Retrieval-Augmented Generation (RAG) technology. It has since evolved into a full-fledged internal tool that saves time for dozens of our colleagues.
In any fast-growing IT company, the volume of internal documentation, protocols, and guides grows exponentially. At IceRock, Confluence serves as a centralized repository for this knowledge. However, our experience shows that simply having a knowledge base does not guarantee its usage.
We encountered a classic problem:
Our task was twofold:
We developed the IceRock Assistant GPT system, consisting of a backend service and Slack integration.
The solution is a chatbot that can be summoned in any Slack thread by mentioning it (@IceRock Assistant).
Key user scenario:
Key features of the solution:
The entire process was built around the RAG (Retrieval-Augmented Generation) architecture, which allows LLM queries to be “supplemented” with relevant data from external sources. The process can be divided into two independent pipelines: Indexing and Query Processing.
Before the bot can find anything, the information must first be prepared and “fed” to it. This process runs in the background on an hourly schedule:
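The indexing stage can be sketched roughly as follows. This is a minimal illustration, not the production code: the word-based chunking, the hashing “embedding,” and the in-memory dict are stand-ins for the real embedding model and the Qdrant collection.

```python
import hashlib
import math

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split a Confluence page body into overlapping word chunks."""
    words = text.split()
    chunks, step = [], chunk_size - overlap
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
    return chunks

def toy_embedding(text: str, dim: int = 64) -> list[float]:
    """Stand-in for an embedding model: hash words into a normalized vector."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# In-memory stand-in for the vector store: page_id -> list of (chunk, vector).
index: dict[str, list[tuple[str, list[float]]]] = {}

def index_page(page_id: str, text: str) -> None:
    """Re-embed every chunk of a page and upsert it into the store."""
    index[page_id] = [(c, toy_embedding(c)) for c in chunk_text(text)]
```

In the real pipeline the upsert targets a Qdrant collection, but the shape of the job is the same: fetch pages, chunk, embed, store.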
This process is triggered every time a user mentions the bot in Slack:
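The query side can be sketched in the same spirit: embed the question, rank the indexed chunks by similarity, and assemble a prompt grounded in the best matches. The bag-of-words scoring below is an illustrative stand-in for real embedding similarity search, and the prompt wording is hypothetical.

```python
import math
import re
from collections import Counter

def bow_vector(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; the real system uses an embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank the indexed chunks against the question and keep the best few."""
    q = bow_vector(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, bow_vector(c)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the grounded prompt that is finally sent to the LLM."""
    ctx = "\n---\n".join(context)
    return (f"Answer using only the context below.\n\nContext:\n{ctx}\n\n"
            f"Question: {question}")

chunks = [
    "Vacation requests are filed through the HR portal.",
    "The Android team uses Kotlin Multiplatform for shared code.",
]
top = retrieve("How do I request a vacation?", chunks, top_k=1)
```

The answer generated from such a prompt is what the bot posts back into the Slack thread.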
As with any R&D project, the main challenges lay in the concepts rather than the code.
Challenge 1: Ensuring data relevance. A knowledge base is a living entity, with documents constantly being updated. If the bot responds with outdated information, it could do more harm than good.
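One simple way to keep an index fresh, sketched below under our own assumptions rather than as the exact production mechanism, is to hash each page body during the hourly run and re-embed only the pages whose hash has changed:

```python
import hashlib

# Hash of each page body as last indexed; lets the hourly job skip unchanged pages.
seen_hashes: dict[str, str] = {}

def pages_to_reindex(pages: dict[str, str]) -> list[str]:
    """Return ids of pages whose content changed since the last indexing run."""
    changed = []
    for page_id, body in pages.items():
        digest = hashlib.sha256(body.encode()).hexdigest()
        if seen_hashes.get(page_id) != digest:
            seen_hashes[page_id] = digest
            changed.append(page_id)
    return changed
```

This keeps the hourly job cheap: unchanged pages cost one hash each, and only edited documents go back through the embedding step.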
Challenge 2: Contextual “amnesia”. The first version of the bot handled only single, standalone requests, which is not how users actually communicate. Users ask follow-up questions, such as “What if I'm a team lead?” or “Is it the same for the Android department?” Without the dialog history, the bot could not tell what these questions referred to.
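The usual cure is to feed the model the preceding messages of the Slack thread along with the retrieved context. A minimal sketch of that assembly step, with a hypothetical system-prompt wording, in the chat-message format the OpenAI API expects:

```python
def build_messages(history: list[tuple[str, str]], question: str,
                   context: str) -> list[dict]:
    """Turn the Slack thread history plus retrieved context into a chat payload."""
    messages = [{"role": "system",
                 "content": f"Answer from this knowledge base context:\n{context}"}]
    for role, text in history:  # role is "user" or "assistant"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": question})
    return messages
```

With the history in place, a follow-up like “What if I'm a team lead?” resolves naturally against the earlier turns of the conversation.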
Challenge 3: Architecture visualization. The system turned out to be multi-component (Slack, Backend, Confluence, Qdrant, PostgreSQL, OpenAI), making it challenging to explain how everything fits together.
Creation of a working internal product: “IceRock Assistant GPT” has been successfully implemented and is being used by employees.
Reduced team workload: The bot handles most of the typical questions, allowing developers to stay focused on their main tasks.
“Revitalized” knowledge base: Our documentation in Confluence has evolved from a passive repository into an active tool directly integrated into the communication workflow.
Invaluable R&D experience: The IceRock team has gained first-hand experience of working with RAG, one of today's most sought-after AI technologies. We have learned all the intricacies of working with vector databases (Qdrant), the OpenAI API, and the logic behind building complex AI assistants.
Foundation for future products: This internal project has become the basis for future commercial offerings related to creating custom GPT assistants for our clients, trained on their own corporate data.