Prompts and Chains with Ollama and LangChain

And, of course, it runs on my 🥰 Pi5.

For this new post, I made some updates to the Pi GenAI Stack: essentially the addition of the Streamlit dependency (we won't use it in this blog post, but in the next one) and a reorganization of the "Python dev environment" of the stack.

So, let's update the stack on your Pi (by the way, you can also run it locally on your workstation if it has an Arm architecture) and connect to the Web IDE of the Python dev environment with this URL: http://<ip address of your Pi>:3000.

First prompt

A prompt is a user's way of giving instructions to a language model. It provides context and guides the model to generate an appropriate response, like answering questions, finishing sentences, or conversing. Think of it as a starting point that helps the model understand what you want from it.

LangChain provides helpers to ease the creation of prompts. So, I will create a prompt with two variables (what and language), thanks to the PromptTemplate.from_template function, to ask the model to explain the concept of what in a specific language.

Once the Web IDE is launched, create a directory (01-prompt), and add a new file (app.py) in this directory with the following code:

import os

from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate

# The stack provides the Ollama server URL through an environment variable
ollama_base_url = os.getenv("OLLAMA_BASE_URL")

model = Ollama(
    base_url=ollama_base_url,
    model='tinydolphin',
)

# Prompt template with two variables: {what} and {language}
prompt_template = PromptTemplate.from_template(
    "Explain the programming concept of {what} in {language}."
)
prompt = prompt_template.format(what="loop", language="python")

completion = model.invoke(prompt)

print(completion)

Run the code with the following command:

python3 app.py

And be patient ... Remember, we are running this on a Pi5 ⏳.

About 20 seconds later, you should obtain a text like this one:

 The 'for' statement is a basic programming structure used for iterating through arrays or lists in Python. It is an essential part of Python because it allows us to perform operations on large pieces of data that we cannot do when using for loops in other languages, such as Java and C#. Below is an example of the 'for' loop structure:

```python
for num in array:
    print(num)
```
In this statement, we define a variable `array` with some values, then we use a 'for' loop to iterate over each value in the list or array. Inside the for loop, we have two statements - one to print out the current element of the list and another to perform a computation (in our case, printing out each number) on that element.

Remember, Python uses indentation for defining blocks of code, so it is always best to start all your loops/statements with '    ' or '#!'.
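
Because the template exposes variables, you can reuse it for other questions without rewriting the prompt. Here is a minimal sketch, reusing the prompt_template and model objects from the code above (the variable values are just examples):

```python
# Reuse the same template with other values (example values, pick your own)
prompt = prompt_template.format(what="dictionary", language="python")
print(model.invoke(prompt))
```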

A second prompt, with a "chain"

You can think of LangChain as a Lego set for information processing. Each Lego brick is a component that does something specific, like searching documents, asking questions, or translating languages.

A chain in LangChain is like connecting these Lego bricks together. You link several components in a specific order, where the output of one becomes the input of the next. This lets you build more complex workflows out of simple parts.

Chains can be simple or complex, short or long, depending on your needs. As these are my baby steps with AI, we will keep it simple 😉.
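
To make the Lego metaphor concrete before we build the real chain, here is a tiny self-contained sketch (not part of our application) that pipes two toy components together with LangChain's | operator; RunnableLambda simply wraps a Python function as a chainable component:

```python
from langchain_core.runnables import RunnableLambda

# Two toy "Lego bricks": each one transforms its input and passes it on
upper = RunnableLambda(lambda text: text.upper())
exclaim = RunnableLambda(lambda text: text + "!")

# The output of `upper` becomes the input of `exclaim`
chain = upper | exclaim

print(chain.invoke("hello"))  # -> HELLO!
```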

Create a directory (02-prompt-chain), and add a new file (app.py) in this directory with the following code:

import os

from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

ollama_base_url = os.getenv("OLLAMA_BASE_URL")

model = Ollama(
    base_url=ollama_base_url,
    model='tinydolphin',
)

what = "loop"
language = "python"

# Prompt template
prompt = PromptTemplate.from_template(
    "Explain the programming concept of {what} in {language}."
)

# Chain using model and formatting
chain = prompt | model | StrOutputParser()

response = chain.invoke({"what": what, "language": language})

print(response)

So, the code is similar to the previous one, but we added a new import (from langchain_core.output_parsers import StrOutputParser) to use an "output parser".

Output parsers transform an LLM's output into a more suitable format. In our example, we use the StrOutputParser, a simple output parser that converts the output of an LLM into a string.
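
To see what the parser does on its own, here is a minimal sketch (independent of our application): a plain string passes through unchanged, and a chat-style message is reduced to its text content:

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()

# A plain string completion passes through unchanged
print(parser.invoke("a raw completion"))          # -> a raw completion

# A chat-style message is reduced to its text content
print(parser.invoke(AIMessage(content="hello")))  # -> hello
```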

This is the chain we will use to generate the response:

chain = prompt | model | StrOutputParser()

The chain will "send" the prompt to the model and transform the result into a string.
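
If you are curious about what the | operator hides, invoking the chain is roughly equivalent to running each step by hand. Here is a minimal sketch of that decomposition, reusing the prompt and model objects from the code above:

```python
# Step 1: the prompt template turns the variables into a plain string
formatted_prompt = prompt.format(what="loop", language="python")

# Step 2: the model generates a completion from that string
raw_output = model.invoke(formatted_prompt)

# Step 3: the output parser normalizes the completion into a string
response = StrOutputParser().invoke(raw_output)

print(response)
```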

Let's run the code with the following command:

python3 app.py

And be patient ...⏳.

About 20 to 30 seconds later, you should obtain a text like this one (for the same prompt, the result is not necessarily the same as the previous one):

 In Python, a loop is a simple way to repeat a code block until a condition is met. It's used extensively for iterating over lists, dictionaries or any data structure. Here is an example:

```python
# Using a while loop
while True:
    # Inside the loop, we print out a message
    print("Hello, World!")

    # After printing, we wait 3 seconds and then enter the "while" loop again
    time.sleep(3)
```
The `loop` is an indentation statement inside a `block`. In the block, you can put code that will be executed as long as this condition is true:

- `True`: The loop is running.
- `False`: The loop has finished and is exiting.

You use a while loop to run the same piece of code as long as some condition is True. You enter the loop by using the `loop` statement in a function or by using a command like `print()` or `wait()`.

The "while" statement essentially means, "Ever-so-often, do this until x". This means that while the condition is True, the code block will be executed. If the condition is False, then it doesn't need to run any more and the loop ends.

🎉 This is your first chain. You are ready for the next step: we will develop a web application with Streamlit to add interactivity to our GenAI application.