Orchestrate and Track Your LangChain Experiments with Flyte
LangChain is an open-source library that’s designed to simplify working with large language models (LLMs) and empowers developers to build sophisticated applications with ease. When developing LLM-based applications, managing the underlying infrastructure is crucial to optimal performance and scalability. This is where the combination of LangChain and Flyte, a robust ML orchestrator, truly shines.
By integrating LangChain into Flyte, you can use a production-grade orchestrator to automate the entire pipeline: ingesting documents, creating vector stores and running batch inference. Not only can you iterate quickly on your LangChain experiments, but you'll also benefit from many of Flyte's built-in capabilities like fine-grained resource allocation, secrets management and caching. In this article, you will learn how to integrate LangChain seamlessly into Flyte and employ the Flyte callback feature for efficient tracking and monitoring of your LangChain experiments.
LangChain Retrieval QA with Pinecone and Flyte
Let's leverage LangChain, Pinecone and Flyte to develop a retrieval-based question-answering (QA) system. This will allow you to explore how Flyte can effectively orchestrate a LangChain experiment and understand the advantages it offers. Here's what you'll be doing:
- Building a data ingestion pipeline: You will learn how to construct a pipeline that can efficiently ingest data into a vector database in parallel while caching the outputs.
- Querying the vector database: You will perform queries on the vector database using user-provided queries to retrieve the predictions.
- Logging LangChain experiment metrics: You will gain insights into how to log the metrics of the LangChain experiment.
After successfully completing this experiment, you will have the ability to:
- Understand the limitations of LangChain in building production-grade pipelines
- Recognize how Flyte addresses these limitations and enables running scalable and secure experiments
- Utilize the current experiment as a starting point template to develop advanced LangChain-based applications with the help of Flyte
Prerequisites
If you are new to Flyte, we recommend the Getting Started guide. You can also watch recordings of conference talks for further insights: Machine Learning for Production Workloads with Flyte and Run Your Data and Machine Learning Workflows on Kubernetes with Flyte.
If you are new to LangChain, Pinecone's introduction to LangChain is a helpful starting point.
To get started with the experiment, please follow the instructions below:
- Install the Flytekit library by running the command `pip install flytekit`
- Install the Flytekit-Envd plugin by running the command `pip install flytekitplugins-envd`
- Install Docker on your system
- Install flytectl by running either of the following commands:
• `brew install flyteorg/homebrew-tap/flytectl`
• `curl -sL https://ctl.flyte.org/install | sudo bash -s -- -b /usr/local/bin`
- Spin up a Flyte cluster by running the command `flytectl demo start`
- Export the environment variable `FLYTECTL_CONFIG=~/.flyte/config-sandbox.yaml`
You should now have a functional Flyte cluster running on your system. Let's proceed with creating a Flyte workflow that performs the following tasks:
- Transcribes a couple of Flyte YouTube videos in parallel using OpenAI Whisper.
- Splits the transcriptions into chunks to accommodate the context window of the LLM.
- Embeds the transcriptions.
- Stores the generated embeddings in the Pinecone vector database.
- Queries the vector database.
Data Ingestion: Transcribe, Split, Embed and Store
To start, configure a Flyte task that accepts `url` and `index_name` as input arguments. Enable caching by setting `cache` to `True`, which helps avoid re-execution of the data ingestion pipeline for the same URL.
In addition, ensure secure access to the Pinecone API key, environment and OpenAI API key by defining the necessary secrets.
To customize the task further, create a custom image using `ImageSpec` and assign it to the `container_image` parameter of the task decorator. This allows you to incorporate specific dependencies or configurations required for the task's execution.
To give the task adequate resources for execution, increase its memory request so it has enough headroom for transcription and embedding.
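Putting this together, here is a hedged sketch of what the task definition might look like. The secret group/key names, the package list and the registry are placeholders rather than the exact values from the original code:

```python
from flytekit import ImageSpec, Resources, Secret, task

# Hypothetical custom image; the package list and registry are illustrative.
langchain_image = ImageSpec(
    name="langchain-ingestion",
    packages=["langchain", "openai", "pinecone-client", "yt_dlp", "tiktoken"],
    apt_packages=["ffmpeg"],  # used to extract audio from the downloaded videos
    registry="ghcr.io/<your-registry>",
)

SECRET_GROUP = "llm-secrets"  # assumed secret group and key names


@task(
    cache=True,
    cache_version="1.0",
    container_image=langchain_image,
    secret_requests=[
        Secret(group=SECRET_GROUP, key="openai-api-key"),
        Secret(group=SECRET_GROUP, key="pinecone-api-key"),
        Secret(group=SECRET_GROUP, key="pinecone-environment"),
    ],
    requests=Resources(mem="2Gi"),  # extra headroom for transcription and embedding
)
def embed_and_store(url: str, index_name: str) -> None:
    ...  # the body is sketched in the next section
```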
Next, let's proceed with the following operations inside the task (a sketch follows the list):
- Import the required libraries within the task to ensure all the necessary functionality is available.
- Initialize Pinecone to facilitate efficient storage and retrieval of embeddings.
- Leverage OpenAI Whisper to transcribe the YouTube videos.
- Generate embeddings from the transcriptions and store them in the Pinecone vector database. Embeddings are numerical representations of the transcribed text that capture its semantic meaning, enabling efficient similarity searches and information retrieval.
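A minimal sketch of the task body, continuing the `embed_and_store` task defined above. The loader, splitter settings and model choices are assumptions for illustration; the linked pipeline code is the source of truth:

```python
def embed_and_store(url: str, index_name: str) -> None:
    # Import heavyweight dependencies inside the task so they are only required
    # in the task's container image.
    import os

    import flytekit
    import pinecone
    from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
    from langchain.document_loaders.generic import GenericLoader
    from langchain.document_loaders.parsers import OpenAIWhisperParser
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.vectorstores import Pinecone

    ctx = flytekit.current_context()
    os.environ["OPENAI_API_KEY"] = ctx.secrets.get("llm-secrets", "openai-api-key")

    # Initialize Pinecone with the secrets requested in the task decorator.
    pinecone.init(
        api_key=ctx.secrets.get("llm-secrets", "pinecone-api-key"),
        environment=ctx.secrets.get("llm-secrets", "pinecone-environment"),
    )

    # Transcribe the YouTube video with OpenAI Whisper.
    loader = GenericLoader(YoutubeAudioLoader([url], "/tmp/audio"), OpenAIWhisperParser())
    documents = loader.load()

    # Split the transcription into chunks that fit the LLM's context window.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(documents)

    # Embed the chunks and store them in the Pinecone index.
    Pinecone.from_documents(chunks, OpenAIEmbeddings(), index_name=index_name)
```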
Next, define a Flyte workflow that iterates through a set of YouTube URLs and invokes the `embed_and_store` task for each URL. With Flyte's `map_task` functionality, you can effortlessly execute tasks in parallel on a series of inputs, enabling efficient processing of multiple URLs simultaneously.
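Here is one way such a workflow could be wired up, assuming the `embed_and_store` task sketched above. Binding the fixed `index_name` argument with `functools.partial` requires a reasonably recent flytekit release; on older versions you would wrap the task so it takes a single input:

```python
import functools

from flytekit import map_task, workflow


@workflow
def data_ingestion_wf(urls: list[str], index_name: str = "flyte-videos") -> None:
    # Fan embed_and_store out over all URLs; each video is transcribed, chunked,
    # embedded and stored in parallel, with caching applied per URL.
    map_task(functools.partial(embed_and_store, index_name=index_name))(url=urls)
```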
You can find the complete pipeline code here.
Breaking down the ingestion pipeline
If you desire fine-grained control over each operation and the ability to run them independently, you can split the transcribe-split-embed-store pipeline into multiple tasks. This approach allows for greater flexibility and reusability of individual operations.
To begin, define image specifications using `ImageSpec`. This step involves specifying the desired configurations and dependencies required for each task.
Here are the image specifications for the three tasks:
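The original specs aren't reproduced here; as a rough sketch (package lists and the registry are assumptions), they might look like:

```python
from flytekit import ImageSpec

load_data_image = ImageSpec(
    name="load-data",
    packages=["langchain", "openai", "yt_dlp"],
    apt_packages=["ffmpeg"],
    registry="ghcr.io/<your-registry>",
)

split_data_image = ImageSpec(
    name="split-data",
    packages=["langchain"],
    registry="ghcr.io/<your-registry>",
)

store_in_vectordb_image = ImageSpec(
    name="store-in-vectordb",
    packages=["langchain", "openai", "pinecone-client", "tiktoken"],
    registry="ghcr.io/<your-registry>",
)
```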
After defining the image specifications, you can proceed to define the `load_data`, `split_data`, and `store_in_vectordb` tasks separately. For detailed code examples, you can refer to this gist.
Next, create a Flyte workflow to map over all the tasks.
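As a hypothetical skeleton (stub bodies stand in for the gist code, the `container_image` assignments from the specs above are omitted for brevity, and the `functools.partial` binding requires a recent flytekit release):

```python
import functools

from flytekit import Resources, map_task, task, workflow


@task(cache=True, cache_version="1.0")
def load_data(url: str) -> str:
    # Placeholder body; the real task transcribes the video with OpenAI Whisper.
    return f"transcript of {url}"


@task(cache=True, cache_version="1.0")
def split_data(document: str) -> list[str]:
    # Placeholder body; the real task splits the transcription into chunks.
    return [document]


@task(cache=True, cache_version="1.0", requests=Resources(mem="2Gi"))
def store_in_vectordb(chunks: list[str], index_name: str) -> None:
    # Placeholder body; the real task embeds the chunks and upserts them into Pinecone.
    ...


@workflow
def split_ingestion_wf(urls: list[str], index_name: str = "flyte-videos") -> None:
    documents = map_task(load_data)(url=urls)           # transcribe in parallel
    chunks = map_task(split_data)(document=documents)   # chunk in parallel
    map_task(functools.partial(store_in_vectordb, index_name=index_name))(chunks=chunks)
```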
To trigger either workflow on the Flyte backend, execute the following command:
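The exact command depends on your file and workflow names; with the hypothetical names used in the sketches above, a `pyflyte` invocation would look something like this (list inputs are passed as JSON):

```shell
pyflyte run --remote data_ingestion.py data_ingestion_wf \
  --urls '["https://www.youtube.com/watch?v=<video-id-1>", "https://www.youtube.com/watch?v=<video-id-2>"]' \
  --index_name flyte-videos
```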
Note: If you need to increase the memory limits of your demo cluster, you can update the task resource attributes of your cluster by following these steps:
- Create a config file `custom_resources.yaml` specifying the desired resource settings:
- Run the following CLI command:
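For reference, the attribute file and command might look like this (resource values are illustrative; check the flytectl documentation for the exact schema):

```yaml
# custom_resources.yaml
domain: development
project: flytesnacks
defaults:
  cpu: "1"
  memory: 2Gi
limits:
  cpu: "2"
  memory: 5Gi
```

```shell
flytectl update task-resource-attribute --attrFile custom_resources.yaml
```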
Retrieve and Query
This step involves querying the vector database. Let's define a `query_vectordb` task that accepts `index_name` and `query` as inputs. In the task decorator, initialize the necessary secrets, specify a custom image for the task, and increase the memory request as done previously. Additionally, set `disable_deck` to `False` to enable rendering a Flyte Deck that captures the relevant metrics.
Next, let's proceed with the following operations within the task (a sketch of the full task follows the list):
- Import the necessary libraries within the task to ensure all the required functionality is available.
- Initialize Pinecone, which enables efficient storage and retrieval of embeddings.
- Retrieve the vector database and configure the search type to be `similarity`.
- Define a `RetrievalQA` chain and initialize the Flyte callback.
- Execute the chain by passing the user-provided query and return the result.
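Putting those steps together, a hedged sketch of the `query_vectordb` task might look like the following. The secret names, image contents and LLM choice are placeholders, and the callback import path should be verified against the LangChain Flyte integration docs:

```python
from flytekit import ImageSpec, Resources, Secret, task

# Hypothetical image; textstat and spacy support the callback's text analysis.
query_image = ImageSpec(
    name="langchain-query",
    packages=["langchain", "openai", "pinecone-client", "tiktoken", "textstat", "spacy"],
    registry="ghcr.io/<your-registry>",
)


@task(
    container_image=query_image,
    secret_requests=[
        Secret(group="llm-secrets", key="openai-api-key"),
        Secret(group="llm-secrets", key="pinecone-api-key"),
        Secret(group="llm-secrets", key="pinecone-environment"),
    ],
    requests=Resources(mem="2Gi"),
    disable_deck=False,  # render a Flyte Deck with the callback's metrics
)
def query_vectordb(index_name: str, query: str) -> str:
    import os

    import flytekit
    import pinecone
    from langchain.callbacks import FlyteCallbackHandler
    from langchain.chains import RetrievalQA
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import Pinecone

    ctx = flytekit.current_context()
    os.environ["OPENAI_API_KEY"] = ctx.secrets.get("llm-secrets", "openai-api-key")

    # Initialize Pinecone and load the existing index as a LangChain vector store.
    pinecone.init(
        api_key=ctx.secrets.get("llm-secrets", "pinecone-api-key"),
        environment=ctx.secrets.get("llm-secrets", "pinecone-environment"),
    )
    vectordb = Pinecone.from_existing_index(index_name, OpenAIEmbeddings())
    retriever = vectordb.as_retriever(search_type="similarity")

    # Define the RetrievalQA chain and attach the Flyte callback so metrics land in the Deck.
    qa_chain = RetrievalQA.from_chain_type(
        llm=OpenAI(callbacks=[FlyteCallbackHandler()]),
        retriever=retriever,
    )
    return qa_chain.run(query)
```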
To trigger this task on the Flyte backend, execute the following command:
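Assuming the task lives in a file named `langchain_flyte_retrieval_qa.py` (a hypothetical name), the invocation would look something like:

```shell
pyflyte run --remote langchain_flyte_retrieval_qa.py query_vectordb \
  --index_name flyte-videos --query "How does Flyte handle caching?"
```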
The metrics will be displayed in the Flyte UI as a Flyte Deck. They correspond to the metrics captured at the start and end of the LLM run, as well as metrics related to text complexity and the dependency tree.
Monitor LangChain Experiments with the Flyte Callback
Triggering LangChain experiments within Flyte is a straightforward process that allows you to orchestrate your LangChain experiments effortlessly. By utilizing the `FlyteCallback` in a LangChain LLM, chain or agent, you can seamlessly integrate and coordinate your LangChain experiments with Flyte. To delve deeper into this integration, refer to the comprehensive Flyte x LangChain documentation available at: https://python.langchain.com/docs/ecosystem/integrations/flyte.
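As a minimal illustration (the import path below matches the integration docs at the time of writing; verify it against your LangChain version, and note the task assumes `OPENAI_API_KEY` is available to it):

```python
from flytekit import task
from langchain.callbacks import FlyteCallbackHandler
from langchain.llms import OpenAI


@task(disable_deck=False)
def ask_llm(prompt: str) -> str:
    # Attaching the callback logs LLM start/end metrics and generations to a Flyte Deck.
    llm = OpenAI(callbacks=[FlyteCallbackHandler()], temperature=0)
    return llm(prompt)
```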
Advantages of using Flyte for LangChain
- Parallel data ingestion: Generate embeddings and store them in a vector database, leveraging Flyte to ingest the data in parallel
- Declarative resource assignment: Assign resources declaratively to LangChain tasks for efficient utilization
- Monitoring within Flyte UI: Monitor your LangChain experiments seamlessly within the Flyte user interface
- Caching: Avoid redundant ingestion by caching steps for identical data inputs
- Enhanced security: Benefit from increased security for your LangChain experiments within the Flyte cluster
- Scalability for multiple teams: Seamlessly scale LangChain experiments to multiple teams with Flyte
Running the ingestion in parallel took approximately 6 minutes, while the version split into multiple mapped steps took about 11 minutes. By contrast, running the data ingestion sequentially on the Flyte cluster took about 19 minutes.
Next steps
Here are some exciting directions to explore further:
- Orchestrate advanced LangChain experiments: Explore the possibilities of orchestrating advanced LangChain experiments using agents with Flyte
- Productionize your pipeline: Take your pipeline to the next level by deploying Flyte on-premises or in the cloud. This allows you to run your pipeline on a schedule and achieve reliable and scalable production-grade workflows
- Build a full-fledged data ingestion pipeline: If you're dealing with large volumes of data, consider constructing a comprehensive data ingestion pipeline in Flyte. Leverage tools like DuckDB, Spark, Snowflake, and others to efficiently process and manage your data at scale
- Construct a batch inference pipeline: Seamlessly create a batch inference pipeline that processes a set of inputs. Refer to this detailed blog post at https://www.unionai.com/blog-post/parallel-audio-transcription-using-whisper-jax-and-flyte-map-tasks-for-streamlined-batch-inference for insights and guidance on leveraging Flyte's map tasks to streamline batch inference