Connect your LLM to the world with the Ollama functions
With LangChainJS, Ollama, still on a Pi 5 (and propelled by 🐳 Docker Compose)
Ollama functions are similar to the OpenAI functions. Thanks to this feature, the model can understand (from a prompt) that you want to call a program with specific parameters. The model generates a JSON object containing the function's name and the expected parameters extracted from the prompt. This helps you call external services (a Google search, redirection to a specific service with a support bot, etc.).
I use my project, "Pi-GenAI Stack", to run the following examples. You don't need to install all the dependencies; even the Web IDE is provided (you can read the other post in this series for more information). But it's straightforward to reproduce with your own means. To start the JavaScript Web IDE, you can use the following command:
docker compose --profile javascript up
Function specifications
First, we need to specify the details of our functions.
With this first example, when I ask the model: "Hey, say hello to Philippe", I expect that the LLM will return an object like this: { function_call: { name: 'hello', arguments: '{"name":"Philippe"}' } }
const helloSpecifications = {
  name: "hello",
  description: `
    When you want to say hello to a given person,
    generate a greeting message with the name of this person
  `,
  parameters: {
    type: "object",
    properties: {
      name: {
        type: "string",
        description: "name of the person"
      }
    }
  }
}
The description of the function is very important: this is where you define your intention and what you are expecting (it will help to build the final prompt).
Here is another example. With a question like "Please, add 45 and 56", I expect an object like this as a result: { name: 'addNumbers', arguments: '{"numbers":[45,56]}' }
const addNumbersSpecifications = {
  name: "addNumbers",
  description: `
    When you want to add the given figures or numbers,
    create a list with these figures and/or numbers
  `,
  parameters: {
    type: "object",
    properties: {
      numbers: {
        type: "array",
        description: "the list of the numbers and figures"
      }
    }
  }
}
The description of the parameters is very important too. With this example, some results can be different if you use "the list of the numbers" as the description instead of "the list of the numbers and figures".
Initialize the binding between the LLM and the function definitions
I did my experimentation with the tinydolphin LLM (as usual, I want to run all of this on a Raspberry Pi, and tinydolphin is perfect for this).
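If you run Ollama directly instead of through the Pi-GenAI Stack, you would first need to pull the model (assuming a standard Ollama installation):
ollama pull tinydolphin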
To use the Ollama functions, you need to use the OllamaFunctions API like this:
- Create a new instance of OllamaFunctions
- Bind the function definitions to the model
const model = new OllamaFunctions({
  baseUrl: ollama_base_url,
  temperature: 0.1,
  model: "tinydolphin",
}).bind({
  functions: [
    helloSpecifications, addNumbersSpecifications
  ],
})
Then, invoking the model is really simple. Use the following code:
const response = await model.invoke([
  new HumanMessage({
    content: "Hey, say hello to Philippe"
  }),
])
If the LLM recognizes a pattern to call a function, the response object will look like this:
AIMessage {
  lc_serializable: true,
  lc_kwargs: { content: '', additional_kwargs: { function_call: [Object] } },
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: '',
  name: undefined,
  additional_kwargs: {
    function_call: {
      name: 'hello',
      arguments: '{"name":"Philippe","description":"When you want to say hello to a given person, generate a greeting message with the name of this person","parameters":{}}'
    }
  }
}
You can see that you can get the name of the function with response.additional_kwargs.function_call.name and the parameters with JSON.parse(response.additional_kwargs.function_call.arguments).
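For example, with the response above, you can extract both values like this (this is exactly what the complete example below does):
const functionName = response.additional_kwargs.function_call.name
// "hello"
const params = JSON.parse(response.additional_kwargs.function_call.arguments)
// params.name === "Philippe"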
The entire example
This is the final example:
import { OllamaFunctions } from "langchain/experimental/chat_models/ollama_functions";
import { HumanMessage } from "@langchain/core/messages";

let ollama_base_url = process.env.OLLAMA_BASE_URL

// Descriptions/specifications of the functions
const helloSpecifications = {
  name: "hello",
  description: `
    When you want to say hello to a given person,
    generate a greeting message with the name of this person
  `,
  parameters: {
    type: "object",
    properties: {
      name: {
        type: "string",
        description: "name of the person"
      }
    }
  }
}

const addNumbersSpecifications = {
  name: "addNumbers",
  description: `
    When you want to add the given figures or numbers,
    create a list with these figures and/or numbers
  `,
  parameters: {
    type: "object",
    properties: {
      numbers: {
        type: "array",
        description: "the list of the numbers and figures"
      }
    }
  }
}

// Bind the function definitions to the model
const model = new OllamaFunctions({
  baseUrl: ollama_base_url,
  temperature: 0.1,
  model: "tinydolphin",
}).bind({
  functions: [
    helloSpecifications, addNumbersSpecifications
  ],
})

const whichFunction = async (text) => {
  const response = await model.invoke([
    new HumanMessage({
      content: text
    }),
  ])

  // function_call is missing when the model does not detect a function to call
  const functionCall = response.additional_kwargs.function_call
  const functionName = functionCall?.name
  const params = functionCall ? JSON.parse(functionCall.arguments) : {}

  switch (functionName) {
    case "hello":
      console.log("👋 hello", params["name"])
      break
    case "addNumbers":
      console.log("🤖 the numbers are", params)
      break
    default:
      console.log("🤖 I don't know what to do")
      break
  }
}

await whichFunction("Hey, say hello to Philippe")
await whichFunction("please, add 45 and 56")
await whichFunction("This is a nice day, hello Sam")
await whichFunction("please, add 3 and 4")
await whichFunction("please, add 42, 67 and 32")
To run it, save it to index.mjs and type:
node index.mjs
And you should get these results:
👋 hello Philippe
🤖 the numbers are { numbers: [ 45, 56 ] }
👋 hello Sam
🤖 the numbers are { numbers: [ '3', '4' ] }
🤖 the numbers are { numbers: [ 42, 67, 32 ] }
And now, it's up to you to use this data to call functions and services. And, of course, it runs on a Raspberry Pi 🤓
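As a minimal sketch of that last step (the hello and addNumbers implementations below are only illustrations, not part of the Pi-GenAI Stack), you could map the extracted function name to a real implementation:
// Hypothetical implementations, only to illustrate the dispatch step
const implementations = {
  hello: (params) => `👋 Hello, ${params.name}!`,
  // the model sometimes returns the numbers as strings (see '3' and '4' above),
  // so coerce them with Number() before adding them
  addNumbers: (params) => params.numbers.map(Number).reduce((a, b) => a + b, 0),
}

const callFunction = (functionName, params) => {
  const fn = implementations[functionName]
  return fn ? fn(params) : "🤖 I don't know what to do"
}

console.log(callFunction("hello", { name: "Philippe" }))       // 👋 Hello, Philippe!
console.log(callFunction("addNumbers", { numbers: [45, 56] })) // 101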