#llm
Articles with this tag
- Make a Small Language Model smarter · Introduction The topic of this blog post is making an SLM smarter (SLM stands for Small Language Model). Why would you...
- Use Mistral Function Calling support and Ollama to run WASM Rust functions with Parakeet and Extism · Since version 0.0.6, Parakeet has supported "Function...
- With Docker Compose, Ollama, LangChain4J and Vert.x · In this series, "AI Experiments with Ollama on a Pi5," I explained that you could run an LLM locally,...
- With LangChainJS, Ollama, still on a Pi 5 (and propelled by 🐳 Docker Compose) · Ollama functions are similar to the OpenAI functions. Thanks to this...
- With LangChainJS, Ollama and Fastify, still on a Pi 5 (and propelled by 🐳 Docker Compose) · In the previous blog post, "Create a Web UI to use the GenAI...
- With LangChainJS, Ollama and Fastify, still on a Pi 5 (and propelled by 🐳 Docker Compose) · Today, we won't talk about AI; we will develop a SPA (Single Page...