
Introduction


Welcome to the LLMFlows user guide. In this introductory section, we will review the main abstractions of LLMFlows and cover some basic patterns for building LLM-powered applications. By the end, we will see how the Flow and FlowStep classes make it easy to create explicit and transparent LLM apps.

LLMs

LLMs are one of the main abstractions in LLMFlows. LLM classes are wrappers around LLM APIs such as OpenAI's APIs. They provide methods for configuring and calling these APIs, retrying failed calls, and formatting the responses.

Info

LLM classes can be imported from llmflows.llms
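For instance, the "retrying failed calls" behavior mentioned above can be pictured with a generic helper. This is an illustrative sketch of the pattern, not LLMFlows' actual retry code, and flaky_call is a made-up stand-in for an API call:

```python
import time

def call_with_retries(fn, max_retries=3, backoff=1.0):
    """Retry a flaky call with linear backoff (illustrative pattern only)."""
    attempt = 0
    while True:
        try:
            return fn()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise  # out of retries; surface the original error
            time.sleep(backoff * attempt)

# Demo: a call that fails twice before succeeding
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient API error")
    return "completion text"

response = call_with_retries(flaky_call, backoff=0.0)
```

A wrapper like this lets the rest of the application treat an unreliable API call as a single, reliable function call.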

OpenAI's GPT-3 is one of the most commonly used LLMs and is available through OpenAI's completion API. LLMFlows' OpenAI class is a wrapper around this API and can be configured in the following way:

from llmflows.llms import OpenAI

llm = OpenAI(api_key="<your-api-key>")

Info

When using the OpenAI LLM classes, you must provide an OpenAI API key through the api_key parameter at initialization.

All LLM classes have .generate() and .generate_async() methods for generating text. The only thing we need to provide is a prompt string.

result, call_data, model_config = llm.generate(
   prompt="Generate a cool title for an 80s rock song"
)

The .generate() method returns the text completion, the API call information, and the config that was used to make the call.

print(result)
"Living On The Edge of Time"

Chat LLMs

Chat LLMs gained popularity after ChatGPT was released and the chat completions API from OpenAI became publicly available. LLMFlows provides an OpenAIChat class that is an interface for this API.

Regular LLMs like GPT-3 require just an input prompt to make a completion. On the other hand, chat LLMs require a conversation history. The conversation history is represented as a list of messages between a user and an assistant. This conversation history is sent to the model, and a new message is generated based on it.

LLMFlows provides a MessageHistory class to manage the required conversation history for chat LLMs.

from llmflows.llms import OpenAIChat, MessageHistory

chat_llm = OpenAIChat(api_key="<your-api-key>")
message_history = MessageHistory()

Info

OpenAI's chat completion API supports three message types in its conversation history:

  1. system (system message specifying the behavior of the LLM)
  2. user (message by the user)
  3. assistant (response generated by the LLM as a response to the user message)

Like OpenAI, the OpenAIChat class has generate() and generate_async() methods. However, instead of a prompt string, OpenAIChat requires a MessageHistory object as an argument to its generate methods.

For more information, visit the OpenAIChat section of our API reference.

After we create the OpenAIChat and MessageHistory objects, we can use them to build a simple chatbot assistant with a few lines of code:

while True:
    user_message = input("You:")
    message_history.add_user_message(user_message)

    llm_response, call_data, model_config = chat_llm.generate(message_history)
    message_history.add_ai_message(llm_response)

    print(f"LLM: {llm_response}")
You: hey
LLM: Hello! How can I assist you today?
You: ...

In the snippet above, we read the user input and pass it as a user message to the message_history object. We then pass that object to the generate() method of chat_llm, which returns the string response, API call information, and model configuration.

Finally, we add the LLM response to the message_history with the add_ai_message() method and repeat the while loop.
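Under the hood, the message history is a growing list of role-tagged messages, one per turn, matching the three message types described above. Here is a minimal sketch of what a class like MessageHistory might do internally (hypothetical code, not LLMFlows' actual implementation):

```python
class MiniMessageHistory:
    """Hypothetical sketch of a chat message history (not LLMFlows' class)."""

    def __init__(self, system_prompt: str = "You are a helpful assistant."):
        # the system message sets the assistant's behavior
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user_message(self, content: str) -> None:
        self.messages.append({"role": "user", "content": content})

    def add_ai_message(self, content: str) -> None:
        self.messages.append({"role": "assistant", "content": content})


history = MiniMessageHistory()
history.add_user_message("hey")
history.add_ai_message("Hello! How can I assist you today?")
roles = [message["role"] for message in history.messages]
```

On each loop iteration, the full list (not just the latest message) is sent to the chat completion API, which is what lets the model see the conversation so far.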

Prompt Templates

Prompt templates are the second primary abstraction in LLMFlows. The PromptTemplate class allows us to create strings with variables that we can fill in dynamically later on.

Info

The PromptTemplate class can be imported from llmflows.prompts

We can create prompt templates by passing in a string. The variables within the string are defined with curly brackets.

from llmflows.prompts import PromptTemplate

title_template = PromptTemplate("Write a title for a {style} song about {topic}.")

Once a prompt template object is created, an actual prompt can be generated by providing the required variables. Let's imagine we want to generate a title for a hip-hop song about friendship:

title_prompt = title_template.get_prompt(style="hip-hop", topic="friendship")
print(title_prompt)
"Write a title for a hip-hop song about friendship."

Question

Q: What happens if we don't provide all the variables?

A: The prompt template will raise an exception specifying that there are missing variables.
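To make the substitution and the missing-variable check concrete, here is a minimal sketch of how such a template class could work (hypothetical code, not the actual PromptTemplate implementation):

```python
import re

class MiniPromptTemplate:
    """Hypothetical sketch of PromptTemplate-style substitution."""

    def __init__(self, template: str):
        self.template = template
        # variables are the names inside curly brackets
        self.variables = set(re.findall(r"\{(\w+)\}", template))

    def get_prompt(self, **kwargs) -> str:
        missing = self.variables - kwargs.keys()
        if missing:
            raise ValueError(f"Missing prompt variables: {sorted(missing)}")
        return self.template.format(**kwargs)


template = MiniPromptTemplate("Write a title for a {style} song about {topic}.")
prompt = template.get_prompt(style="hip-hop", topic="friendship")
```

Calling get_prompt with only some of the variables (say, just style) would raise the missing-variables error described above.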

Now that we have the actual prompt we can use it with an LLM.

llm = OpenAI(api_key="<your-api-key>")
song_title, _, _ = llm.generate(title_prompt)
print(song_title)
"True to the Crew"

Combining LLMs

So far, we covered the OpenAI, OpenAIChat, MessageHistory, and the PromptTemplate classes, and we saw how we could build simple LLM applications that generate outputs based on dynamically created prompts.

Another common pattern when building LLM applications is using the output of an LLM as an input to another LLM. Imagine we want to generate a title for a song, then create lyrics based on the title and finally paraphrase the lyrics.

Let's create the prompts for the three steps:

from llmflows.prompts import PromptTemplate

title_template = PromptTemplate("What is a good title of a song about {topic}")
lyrics_template = PromptTemplate("Write the lyrics for a song called {song_title}")
heavy_metal_template = PromptTemplate(
    "paraphrase the following lyrics in a heavy metal style: {lyrics}"
)

Now we can use these prompt templates to generate text based on an initial input, and each generated text can serve as input for the variables in the following prompt template.

from llmflows.llms import OpenAI

title_llm = OpenAI(api_key="<your-api-key>")
writer_llm = OpenAI(api_key="<your-api-key>")
heavy_metal_llm = OpenAI(api_key="<your-api-key>")

title_prompt = title_template.get_prompt(topic="friendship")
song_title, _, _ = title_llm.generate(title_prompt)

lyrics_prompt = lyrics_template.get_prompt(song_title=song_title)
song_lyrics, _, _ = writer_llm.generate(lyrics_prompt)

heavy_metal_prompt = heavy_metal_template.get_prompt(lyrics=song_lyrics)
heavy_metal_lyrics, _, _ = heavy_metal_llm.generate(heavy_metal_prompt)

Let's see what we managed to generate. For the first LLM call, we provided the topic manually and got the following title:

print("Song title:\n", song_title)
Song title:
"Friendship Forever"

The song title was then passed as the {song_title} variable in the next template, and the resulting prompt was used to generate our song lyrics:

print("Song Lyrics:\n", song_lyrics)
Song Lyrics:

Verse 1:
It's been a long road, but we made it here
We've been through tough times, but we stayed strong through the years
We've been through the highs and the lows, but we never gave up
Friendship forever, through the good and the bad

Chorus:
Friendship forever, it will always last
Together we'll stand, no matter what the past
No mountain too high, no river too wide
Friendship forever, side by side

Verse 2:
We've been through the laughter and the tears
We've shared the joys and the fears
But no matter the challenge, we'll never give in
Friendship forever, it's a bond that will never break

Chorus:
Friendship forever, it will always last
Together we'll stand, no matter what the past
No mountain too high, no river too wide
Friendship forever, side by side

Bridge:
We'll be here for each other, through thick and thin
Our friendship will always remain strong within
No matter the distance, our bond will remain
Friendship forever, never fade away

Chorus:
Friendship forever, it will always last
Together we'll stand, no matter what the past
No mountain too high, no river too wide
Friendship forever, side by side

Finally, the generated song lyrics were passed as the {lyrics} variable of the last prompt template, which was used for the final LLM call that produced the heavy metal version of the lyrics:

print("Heavy Metal Lyrics:\n", heavy_metal_lyrics)
Heavy Metal Lyrics:

Verse 1:
The journey was hard, but we made it here
Through the hardships we endured, never wavering in our hearts
We've seen the highs and the lows, but never surrendering
Friendship forever, no matter the odds

Chorus:
Friendship forever, it will never die
Together we'll fight, no matter what we defy
No force too strong, no abyss too deep
Friendship forever, bound in steel we'll keep

LLM Flows

In the previous sections, we reviewed the LLM, MessageHistory, and PromptTemplate abstractions and introduced two common patterns for building LLM-powered apps. The first pattern was using prompt templates to create dynamic prompts, and the second was using the output of one LLM as input to another.

In this section, we will introduce two new main abstractions of LLMFlows: FlowSteps and Flows.

Info

The Flow and FlowStep classes can be imported from llmflows.flows

Flows and FlowSteps are the bread and butter of LLMFlows. They are simple but powerful abstractions that serve as the foundation for constructing Directed Acyclic Graphs (DAGs), where each FlowStep represents a node that calls an LLM. While these abstractions are designed to be simple and intuitive, they offer robust capabilities for managing dependencies, sequencing execution, and handling prompt variables.
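As an illustration of the DAG idea, here is a toy executor (hypothetical code, not LLMFlows internals): each step declares which variables it needs, and a step runs as soon as all of its inputs are available:

```python
# Toy DAG executor (hypothetical; not LLMFlows internals).
# Each step declares the variables it needs and produces one output key.
steps = {
    "song_title": {"needs": ["topic"], "run": lambda v: f"title({v['topic']})"},
    "lyrics": {"needs": ["song_title"], "run": lambda v: f"lyrics({v['song_title']})"},
    "heavy_metal_lyrics": {"needs": ["lyrics"], "run": lambda v: f"metal({v['lyrics']})"},
}

def run_flow(steps, **inputs):
    values = dict(inputs)
    pending = dict(steps)
    while pending:
        # a step is ready once all of its inputs have been produced
        ready = [key for key, step in pending.items()
                 if all(need in values for need in step["needs"])]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        for key in ready:
            values[key] = pending.pop(key)["run"](values)
    return values

outputs = run_flow(steps, topic="love")
```

Because execution order is derived from the dependencies rather than hard-coded, independent steps could also run in parallel, which is one of the capabilities mentioned above.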

Let's try to reproduce the previous example using Flows and FlowSteps. As a start, let's define the same templates we are already familiar with:

from llmflows.prompts import PromptTemplate

title_template = PromptTemplate("What is a good title of a song about {topic}")
lyrics_template = PromptTemplate("Write the lyrics for a song called {song_title}")
heavy_metal_template = PromptTemplate(
    "paraphrase the following lyrics in a heavy metal style: {lyrics}"
)

Once we have the prompt templates, we can start defining the flowsteps:

from llmflows.flows import Flow, FlowStep
from llmflows.llms import OpenAI

title_flowstep = FlowStep(
    name="Title Flowstep",
    llm=OpenAI(api_key="<your-api-key>"),
    prompt_template=title_template,
    output_key="song_title",
)

lyrics_flowstep = FlowStep(
    name="Lyrics Flowstep",
    llm=OpenAI(api_key="<your-api-key>"),
    prompt_template=lyrics_template,
    output_key="lyrics",
)

heavy_metal_flowstep = FlowStep(
    name="Heavy Metal Flowstep",
    llm=OpenAI(api_key="<your-api-key>"),
    prompt_template=heavy_metal_template,
    output_key="heavy_metal_lyrics",
)

To create a flowstep, we have to provide the required parameters for the FlowStep class:

  • name (must be unique)
  • the LLM to be used within the flow
  • the prompt template to be used when calling the LLM
  • output_key (must be unique), which is treated as a prompt variable for other flowsteps

Question

Q: What if I don't want to provide a prompt template? In many cases I can simply use a string instead.

A: Makes sense! In this scenario, feel free to create a prompt template without any variables.

Once we have the FlowStep definitions, we can connect the flowsteps.

title_flowstep.connect(lyrics_flowstep)
lyrics_flowstep.connect(heavy_metal_flowstep)

Now we can create the flow and start it. To create the Flow object, we must provide the first FlowStep. Finally, to start it, we use the start() method and provide any required initial inputs.

songwriting_flow = Flow(title_flowstep)
result = songwriting_flow.start(topic="love", verbose=True)

This is it!

Although this may seem like a lot of extra abstraction to achieve the same functionality as in the previous examples, inspecting the results reveals some of the advantages of using Flows and FlowSteps.

After running all FlowSteps, the Flow will return detailed results for each individual FlowStep:

print(result)
{
    "Title Flowstep": {...},
    "Lyrics Flowstep": {...},
    "Heavy Metal Flowstep": {...}
}

Let's take a look at what happened when running the "Title Flowstep":

print(result["Title Flowstep"])
{
   "start_time":"2023-07-03T15:23:47.490368",
   "prompt_inputs":{
      "topic":"love"
   },
   "generated":"\n\n\"Love Is All Around Us\"",
   "call_data":{
      "raw_outputs":{
         "<OpenAIObject text_completion id=cmpl-7YMFPac1MKUje0jIyk4adkYssk4rQ at 0x107946f90> JSON":{
            "choices":[
               {
                  "finish_reason":"stop",
                  "index":0,
                  "logprobs":null,
                  "text":"\n\n\"Love Is All Around Us\""
               }
            ],
            "created":1688423027,
            "id":"cmpl-7YMFPac1MKUje0jIyk4adkYssk4rQ",
            "model":"text-davinci-003",
            "object":"text_completion",
            "usage":{
               "completion_tokens":9,
               "prompt_tokens":10,
               "total_tokens":19
            }
         }
      },
      "retries":0,
      "prompt_template":"What is a good title of a song about {topic}",
      "prompt":"What is a good title of a song about love"
   },
   "config":{
      "model_name":"text-davinci-003",
      "temperature":0.7,
      "max_tokens":500
   },
   "end_time":"2023-07-03T15:23:48.845386",
   "execution_time":1.355005416,
   "result":{
      "song_title":"\n\n\"Love Is All Around Us\""
   }
}

There is a lot to unpack here, but after finishing the flow, we have complete visibility of what happened at each flowstep. By having this information, we can answer questions such as:

  • When was a particular flowstep run?
  • How much time did it take?
  • What were the input variables?
  • What was the prompt template?
  • What did the prompt look like?
  • What was the exact configuration of the model?
  • How many times did we retry the request?
  • What was the raw data the API returned?
  • How many tokens were used?
  • What was the final result?
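For example, a few of these questions can be answered with a small snippet that reads directly from the result dictionary. The values below are stand-ins mirroring the structure printed above:

```python
# Stand-in flow result mirroring the structure printed above.
result = {
    "Title Flowstep": {
        "prompt_inputs": {"topic": "love"},
        "call_data": {
            "retries": 0,
            "prompt": "What is a good title of a song about love",
        },
        "config": {"model_name": "text-davinci-003", "temperature": 0.7},
        "execution_time": 1.355005416,
        "result": {"song_title": '\n\n"Love Is All Around Us"'},
    }
}

step = result["Title Flowstep"]
summary = (
    f"{step['execution_time']:.2f}s, "
    f"{step['call_data']['retries']} retries, "
    f"model={step['config']['model_name']}"
)
```

Since every FlowStep records the same fields, a loop over result.items() could produce a one-line summary per step for logging or debugging.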

This ties into our "Simple, Explicit, and Transparent LLM apps" philosophy. This information gives developers complete visibility and makes it easy to log, debug, and maintain LLM apps.

This, however, is only one of the many benefits LLMFlows can provide. This simple example is excellent for this guide, but real-life applications are usually more complex. Next, we will go deeper into more complex applications, where Flows and FlowSteps start to shine thanks to features like automatic resolution of variable dependencies and running FlowSteps in parallel.


Next: LLM Flows