Building intelligent Q&A chat systems with RAG, LangChain, and ChromaDB: a step-by-step guide

Learn how to create intelligent Q&A chat systems using RAG, LangChain, and ChromaDB. Step-by-step guide to building a myth-themed chat application.

By Aren Hovsepyan

In our previous article on semantic search, we explored the creation of embeddings to rank documents based on their relevancy to a given query phrase. However, semantic search has broader applications beyond document ranking. It can serve as a mechanism to connect large language models (LLMs) to knowledge bases, enabling them to analyze and leverage external data sources.

By default, LLMs only possess information from their training datasets. However, if we provide them with additional data, they can analyze it and surface connections that are not obvious at first glance. In particular, LLMs can be used to:

  • Analyze data and find hidden connections: When fed external data, LLMs can uncover relationships and patterns between different pieces of information.
  • Summarize large amounts of information: LLMs can condense large volumes of data, extracting the specific and relevant details.

What is RAG?

RAG, short for Retrieval-Augmented Generation, is a powerful framework that combines retrieval models with language generation models. It uses a retrieval step to fetch documents from a given resource based on their relevance to a specific context, and then augments the generation process with those documents to produce better-grounded responses in conversational chat applications.

The key idea behind RAG is to use retrieval models to retrieve relevant documents, and to use language generation models to generate responses based on those documents. By incorporating a retrieval mechanism, RAG can provide more accurate and contextually relevant responses in conversational systems.

Some key use cases of RAG include:

  • Relevant book recommendations for students: Students can leverage RAG to obtain a curated list of relevant books for their case studies, research papers, or learning materials.
  • Q&A-based chat for employees: Employees can use a RAG-powered question-and-answer chat system to retrieve specific documents, policies or company standards.

However, it's essential to understand that LLMs themselves cannot access external resources or perform operations beyond generating text. This is where the LangChain library comes in. LangChain enables interaction between LLMs and external systems, allowing for seamless switching of context between the LLM and our software.

What is LangChain?

LangChain is a communication layer between our software and an LLM. It is a set of tools and functions that allow us to:

  • Quickly bootstrap an LLM: LangChain provides an easy way to set up an LLM and use it as the generative model for conversational chat applications.
  • Create a chain of prompts: LangChain allows us to build a chain of prompts that goes back and forth between the LLM and our software. A chain of prompts is a sequence of prompts that takes some output from the LLM, performs some operations on it, and then passes it back to the LLM with augmented data.
  • Communicate between software and LLM: LangChain lets us send information back to our software, process it automatically, and then pass it to the LLM for subsequent processing.

The chain of prompts helps to alleviate some of the weaknesses of LLMs. By adding prompts that can perform specific operations on the input, we can channel the dialog in a more structured way, while also enriching the data being fed to the LLM.

Now let's discuss certain limitations of large language models (LLMs) and understand the techniques we can utilize to overcome those limitations.

Adding memory to LLM and understanding context windows

LLMs are stateless, which means they do not have the ability to remember previous messages or maintain short-term memory like humans do. To overcome this limitation, we can implement context windows and add memory to LLMs.

Context windows involve providing the LLM with the entire conversation history, including past user inputs and model responses. By feeding this context back, the LLM can generate more accurate and relevant responses in ongoing conversations.

Context memory limitations

While adding memory to LLMs using context windows is a significant step towards improving their performance, it is essential to take into account the limits on the number of tokens allowed in a single request. Token limits exist to ensure the efficient processing of requests by large language models.

For instance, models like gpt-3.5-turbo and gpt-3.5-turbo-16k have token limits of 4,096 and 16,385 tokens, respectively, while gpt-4 has a token limit of 8,192.

As the amount of context increases, the processing becomes slower and more resource-intensive. Therefore, merely adding an infinite amount of text is not feasible within these limitations.

To address this challenge, it is important to summarize the chat history and include only the most relevant messages. For example, if a chat contains 100 messages, it is pragmatic to include only the last 20 messages to ensure that the token count remains within the allowable limits.

As a rule of thumb, one token corresponds to roughly four English characters, so 4,096 tokens are roughly equivalent to 16,384 characters.

Step-by-step guide: building a conversational chat app with RAG and LangChain

In this guide, we will walk through the process of building a chat application where users can ask questions about Greek and Roman myths based on the book "Stories of Old Greece and Rome" by Emilie K. Baker.

Before we can proceed with building the application, there are several steps we need to follow. The first step involves preparing our data. Similar to the approach described in the ChromaDB semantic search article, we need to convert our data into documents and store them in Chroma.

Prerequisites

Before we dive into the code, there are a few packages we need to install. These packages are @langchain/core, @langchain/openai, and @langchain/community.

Here's a brief summary of what each package does:

@langchain/core: This package provides the core building blocks of LangChain, such as prompt templates, runnables, and output parsers.

@langchain/openai: This package bundles classes related to the OpenAI API, which we will use for automatic embedding generation.

@langchain/community: This package contains helper classes related to various things created by the community, including vector store utilities such as the Chroma utility method that we will use in our application.

To install these packages, you can run the following command in your terminal:
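```bash
npm install @langchain/core @langchain/openai @langchain/community
```

Depending on your LangChain version, you may also need the @langchain/textsplitters package for the text-splitting step shown below.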

Once the packages are installed, we can proceed to the next step of our application.

Step 1: Preparing the data

To begin with, we need to split the large plain text file into smaller, more manageable documents. This is necessary because a single document cannot accommodate the entire book. To achieve this, we will use the TextLoader and RecursiveCharacterTextSplitter.

The RecursiveCharacterTextSplitter is a utility class that helps us split plain text files into smaller documents for our chat application. We can adjust two options: chunkSize and chunkOverlap.

To control the size of each document, we use the chunkSize option. By specifying the desired size, we can make the chunks more manageable for further processing.

Preserving context is important when splitting documents. The chunkOverlap option helps with this. It adds a portion from the end and beginning of adjacent chunks to maintain the necessary context within each document.

To find the best configuration, I suggest experimenting with different values for chunkSize and chunkOverlap. Experimenting helps determine the optimal chunk size for your text's formatting and for how strongly adjacent sentences depend on each other for context.
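Here is a minimal sketch of this step. The file path and the chunkSize/chunkOverlap values are placeholders, and the import paths may differ between LangChain versions (in older releases the splitter is exported from the main langchain package):

```js
import { TextLoader } from "@langchain/community/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Load the plain-text book as a single LangChain document.
const loader = new TextLoader("./data/stories-of-old-greece-and-rome.txt");
const rawDocs = await loader.load();

// Split it into overlapping chunks so each one comfortably fits into a prompt.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000, // maximum characters per chunk
  chunkOverlap: 100, // characters shared between adjacent chunks to preserve context
});
const docs = await splitter.splitDocuments(rawDocs);
```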


Step 2: Loading items into ChromaDB

We have successfully prepared our documents, and now it's time to store them in ChromaDB so that our chat application can access and retrieve them efficiently.

First, we need to establish a connection with ChromaDB and load our documents.

We can achieve this easily using LangChain's Chroma class from the @langchain/community package as follows:
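(A minimal sketch; the collection name and ChromaDB URL below are placeholders, and docs is the array of chunks we produced in Step 1.)

```js
import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";

// Embeddings are generated automatically through the OpenAI API (requires OPENAI_API_KEY).
const vectorStore = new Chroma(new OpenAIEmbeddings(), {
  collectionName: "greek-myths", // placeholder collection name
  url: "http://localhost:8000", // local ChromaDB instance
});

// Store the prepared document chunks (and their embeddings) in the collection.
await vectorStore.addDocuments(docs);
```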

We pass the OpenAIEmbeddings instance as the first argument to the Chroma constructor, which allows LangChain to automatically create embedding vectors for our documents. The next argument is the connection configuration for our vector store instance.

If you're interested in learning more about embeddings, I recommend referring to the semantic search article, where we delve deeper into the concept. There we use a plain ChromaDB package without LangChain. In a nutshell, embeddings are vector representations that capture the semantic meaning of text data.

Here is the full code of the script responsible for preparing our data for further operations in our chat application. We will name this file index-docs.js.
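Below is a sketch of what index-docs.js can look like, combining the snippets above; the file path, collection name, and ChromaDB URL are placeholders to adapt, and the script assumes it runs as an ES module on Node 18+:

```js
// index-docs.js - prepares the book and loads it into ChromaDB
import { TextLoader } from "@langchain/community/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";

// 1. Load the plain-text book.
const loader = new TextLoader("./data/stories-of-old-greece-and-rome.txt");
const rawDocs = await loader.load();

// 2. Split it into overlapping chunks.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 100,
});
const docs = await splitter.splitDocuments(rawDocs);

// 3. Embed the chunks and store them in ChromaDB.
const vectorStore = new Chroma(new OpenAIEmbeddings(), {
  collectionName: "greek-myths",
  url: "http://localhost:8000",
});
await vectorStore.addDocuments(docs);

console.log(`Indexed ${docs.length} chunks into ChromaDB`);
```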

Step 3: Create an API route in the Astro application to handle chat requests

To handle user requests in our Astro application, we need to create an API route that supports the POST method. Since the specific implementation details of the Astro framework are beyond the scope of this guide, I recommend referring to the Astro documentation for more detailed information.

Here is the code for the API route:
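(The sketch below assumes the route lives in src/pages/api/chat.js, that the client sends a JSON body with question and history fields, and that getRunnableSequence is exported from a local module; the endpoint syntax may vary slightly between Astro versions.)

```js
// src/pages/api/chat.js - hypothetical location of the chat endpoint
import { getRunnableSequence } from "../../lib/chain"; // implemented in Step 4

export async function POST({ request }) {
  // Expecting a JSON body such as { question: "...", history: [...] }
  const { question, history = [] } = await request.json();

  const chain = await getRunnableSequence();
  const answer = await chain.invoke({ question, chat_history: history });

  return new Response(JSON.stringify({ answer }), {
    headers: { "Content-Type": "application/json" },
  });
}
```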

In the provided code, notice the getRunnableSequence function. We will delve deeper into this function and its implementation details in a later section.

Step 4: RAG implementation and runnable sequences

To implement a conversational chat application using the RAG (Retrieval-Augmented Generation) approach, we need to follow these steps:

Step 1: Condense question

In order to handle natural language questions from users and derive information from the chat history, we need to consider a scenario where users may ask questions that can be deduced from the chat history. For example, instead of explicitly asking "Give me the traits of Zeus!" a user might ask "Who is Zeus?" and then follow up with "What are his traits?"

To handle this scenario, we pass the chat history as context and create an instruction to rephrase the follow-up question into a standalone question. This enables the model to generate more coherent and relevant responses based on the conversation context.

We will refer to this prompt as condenseQuestionTemplate.

Here is the placeholder for the condenseQuestionTemplate prompt:
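(The exact wording below is an assumption, modeled on LangChain's standard condense-question prompt.)

```js
import { PromptTemplate } from "@langchain/core/prompts";

// Rephrase a follow-up question into a standalone question, given the chat history.
const condenseQuestionTemplate = `Given the following conversation and a follow-up question,
rephrase the follow-up question to be a standalone question.

Chat History:
{chat_history}

Follow Up Input: {question}
Standalone question:`;

const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(condenseQuestionTemplate);
```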

The PromptTemplate is a component provided by @langchain/core. LangChain offers multiple methods for creating prompts, and it generates specific objects in the background that seamlessly interact with other LangChain components such as chains and agents.

In simple terms, chains are a combination of prompt and LLM models, while agents are responsible for managing the overall chat flow through the different chains and actions. They serve as the main control center of the chat bot.

Step 2: Creating an answering prompt to analyze and answer the text document

In this step, we will create an answering prompt that analyzes the text documents we provide and generates a response to the specific question we condensed in the previous step.

Note that we use curly braces for {context} and {question}, following LangChain's template replacement convention. This is particularly important because we will be creating a pipeline of chains that automatically fills in the necessary inputs.
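A sketch of what this answering prompt can look like (the wording is again an assumption):

```js
import { PromptTemplate } from "@langchain/core/prompts";

// Answer the standalone question using only the retrieved documents as context.
const answerTemplate = `You are an expert on Greek and Roman mythology.
Answer the question based only on the following context:

{context}

Question: {question}`;

const ANSWER_PROMPT = PromptTemplate.fromTemplate(answerTemplate);
```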

Step 3: Combine all RAG components together and give it a try!

A crucial concept within LangChain is the use of Runnables. Runnables allow us to combine multiple LangChain components, such as LLM models, prompts, and process inputs, as well as multiple chains themselves, in a sequential manner using pipes.

Since this concept is essential to understand and implement successfully, I highly recommend referring to the official LangChain documentation for further details and examples.

So now, let's summarize what we should do.

To achieve the desired results, just follow these simple steps:

  • Pass the input question to the condenseQuestionTemplate.
  • Utilize a retriever to search for the relevant document using semantic search from Chroma.
  • Feed the document and the condensed question into the answerTemplate prompt to obtain the final results.

Now, similar to our index-docs.js script, we need to initialize a vector store, this time using fromExistingCollection, which directly loads documents from our desired collection. In a nutshell, this is the same process as in index-docs.js; LangChain simply provides multiple ways to achieve the same thing. Next, we create a retriever from our vector store and configure it to return the 5 most relevant documents to be included in our context.
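A sketch of this step, assuming the same collection name and ChromaDB URL we used in index-docs.js:

```js
import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";

// Connect to the collection we populated in index-docs.js.
const vectorStore = await Chroma.fromExistingCollection(new OpenAIEmbeddings(), {
  collectionName: "greek-myths",
  url: "http://localhost:8000",
});

// Return the 5 most relevant chunks for each condensed question.
const retriever = vectorStore.asRetriever(5);
```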

It is important to note that it is not recommended to include a large number of documents in the retriever as it may exceed the limits of our context window. It is advised to adjust this based on your specific needs and the nature of your data.

Since we have already created a retriever, let's combine everything into a Runnable sequence.

In the code snippet below, our sequence consists of the following steps:

  1. Format the input variables, specifically transforming the history into a string.
  2. Pass the CONDENSE_QUESTION_PROMPT, which is an instance of PromptTemplate.
  3. Use our ChatOpenAI model, which runs gpt-3.5-turbo, with the verbose flag set to true so we can debug and see what happens behind the scenes.
  4. Lastly, pass StringOutputParser from @langchain/core/output_parsers, which converts the chain's output into a plain string rather than a LangChain message object. Note that it does not post-process the actual LLM text; it simply unwraps the object LangChain returns so that we end up with a string.
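Here is a sketch of that first chain; the formatChatHistory helper and the expected input shape ({ question, chat_history }) are assumptions:

```js
import { ChatOpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", verbose: true });

// Hypothetical helper: turn [{ role, content }, ...] history into a plain string.
const formatChatHistory = (history) =>
  history.map((message) => `${message.role}: ${message.content}`).join("\n");

const standaloneQuestionChain = RunnableSequence.from([
  {
    question: (input) => input.question,
    chat_history: (input) => formatChatHistory(input.chat_history),
  },
  CONDENSE_QUESTION_PROMPT,
  model,
  new StringOutputParser(),
]);
```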

Furthermore, we need to create a Runnable called answerChain, which retrieves the documents based on the condensed question and passes them to the answer prompt. As you can see below, we use the Runnable interface to retrieve our documents and assign them to the context input variable of our prompt.
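A sketch of answerChain, where the retrieved documents are joined into a single context string:

```js
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const answerChain = RunnableSequence.from([
  {
    // The condensed question (a plain string) flows in here from the previous chain.
    context: retriever.pipe((docs) => docs.map((doc) => doc.pageContent).join("\n\n")),
    question: new RunnablePassthrough(),
  },
  ANSWER_PROMPT,
  model,
  new StringOutputParser(),
]);
```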

And lastly, to connect both Runnable sequences, we need LangChain's secret sauce: piping. LangChain provides a piping interface, which is handy when you need to combine multiple chains together.

Here is the full implementation of the getRunnableSequence method:
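Putting it all together, getRunnableSequence might look like the sketch below; it relies on the prompt templates, collection name, and helpers introduced above, which are assumptions rather than the exact original code:

```js
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

// condenseQuestionTemplate and answerTemplate are the prompt strings from Steps 1 and 2.
const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(condenseQuestionTemplate);
const ANSWER_PROMPT = PromptTemplate.fromTemplate(answerTemplate);

// Hypothetical helper: turn [{ role, content }, ...] history into a plain string.
const formatChatHistory = (history) =>
  history.map((message) => `${message.role}: ${message.content}`).join("\n");

export async function getRunnableSequence() {
  const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", verbose: true });

  // Load the existing ChromaDB collection and build a retriever over it.
  const vectorStore = await Chroma.fromExistingCollection(new OpenAIEmbeddings(), {
    collectionName: "greek-myths",
    url: "http://localhost:8000",
  });
  const retriever = vectorStore.asRetriever(5);

  // Chain 1: condense the follow-up question into a standalone question.
  const standaloneQuestionChain = RunnableSequence.from([
    {
      question: (input) => input.question,
      chat_history: (input) => formatChatHistory(input.chat_history),
    },
    CONDENSE_QUESTION_PROMPT,
    model,
    new StringOutputParser(),
  ]);

  // Chain 2: retrieve relevant chunks and answer based on them.
  const answerChain = RunnableSequence.from([
    {
      context: retriever.pipe((docs) => docs.map((doc) => doc.pageContent).join("\n\n")),
      question: new RunnablePassthrough(),
    },
    ANSWER_PROMPT,
    model,
    new StringOutputParser(),
  ]);

  // Pipe the condensed question straight into the answering chain.
  return standaloneQuestionChain.pipe(answerChain);
}
```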

Now, let's run it on the application and see how it looks! I've already prepared a nice UI for us to quickly test our application, so we can get started right away.

Moreover, since we set the verbose flag in our ChatOpenAI model, we can observe in the terminal the steps LangChain takes to produce our response.

You can check out the demo and also dig into the project on GitHub.

Where can RAG be applied?

The RAG approach can be applied in various areas, enabling users to access and retrieve information efficiently. Here are some examples:

  • Employee benefits and policies:
    • Employees can use the chat Q&A feature to get information about their company's benefits, such as healthcare coverage, retirement plans, or vacation policies.
    • They can also seek information on legal matters, such as employment laws, workplace safety regulations, or discrimination policies.
  • Education and learning:
    • Students can leverage the chat Q&A function to quickly analyze their study materials and extract key points. This can help them grasp important concepts more efficiently.
    • They can also ask questions about specific subjects or topics, seeking clarification or additional explanations from their learning resources.
  • Product information and support:
    • Consumers can utilize the chat Q&A feature to access detailed information about products or services. For example, they can inquire about technical specifications, pricing, or availability.
    • They can also seek support and troubleshooting assistance by asking questions related to common product issues or usage scenarios.
  • Health and wellness:
    • Individuals can use chat Q&A to obtain information about their health, including symptoms, diseases, or available treatment options.
    • They can seek advice on maintaining a healthy lifestyle, such as nutrition tips, exercise routines, or stress management techniques.
  • Travel and tourism:
    • Tourists can utilize the chat Q&A function to gather information about travel destinations, attractions, or local customs and traditions.
    • They can ask questions regarding visa requirements, transportation options, accommodation recommendations, or popular tourist spots.

Summary

Let's summarize what we have learned on today's journey:

  • We have gained an understanding of how the RAG approach relates to semantic search and large language models (LLMs).
  • We have explored the limitations of LLMs, such as memory constraints and context window restrictions, and discovered solutions like incorporating chat history and setting message limits.
  • We have introduced LangChain as a platform that enables seamless context switching between AI and our software, enhancing the conversational chat experience.
  • We have created a step-by-step example showcasing how to ask questions using a mythology book, demonstrating the efficiency of RAG.
  • We have identified various domains where the RAG approach can be applied, including employee benefits and education.

In conclusion, I encourage you to delve further into LangChain, as it combines various methods of interaction with LLMs. This understanding will shed light on how to create software based on generative AI (GenAI) principles, opening up new possibilities and advancements in the field of conversational AI.

LINK TO DEMO
LINK TO GITHUB REPOSITORY


Written by

Aren Hovsepyan

I'm passionate about technology and data-intensive applications. I thrive at architecting and building scalable, consumer-focused products that serve hundreds of thousands of users. I'm also keen on learning about new, cutting-edge technology solutions, such as crypto ecosystems and database storage systems for large-scale applications.
