I’m running ollama with llama3.2:1b, smollm, all-minilm, moondream, and more. I’ve been able to integrate it with coder/code-server, vscode, vscodium, page assist, the cli, and I also created a discord AI user.
I’m an infrastructure and automation guy, not so much a developer, although my field is technically devops.
Now, I hear that some LLMs have “tools.” How do I use them? How do I find a list of tools for a model?
I don’t think I can simply prompt “Hi llama3.2, list your tools.” Is this part of prompt engineering?
What, do you take a model and retrain it or something?
Anybody able to point me in the right direction?
What I’m wondering is, is there a standard format for instructing models to give outputs using the tool? They’re specifically trained to be better at doing this, right?
Ah, for training a new model from scratch? Yes, there is a specific format; you can look at the ollama source code, or at any of the big models that accept tool use like llama4, to see the format both to and from a model. However, unless you’re secretly a billionaire, I doubt you could compete with these pretrained models at tool calling.
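If you just want to see the shape of that format, the /api/chat endpoint speaks OpenAI-style JSON schema: function definitions go in with the request, and a tool-capable model answers with structured tool_calls instead of prose. A minimal sketch, assuming a local ollama on the default port (the get_weather tool is made up for illustration):

```python
import json
import requests

payload = {
    "model": "llama3.2:1b",
    "stream": False,  # get one JSON object back instead of a stream
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    # Hypothetical tool definition, OpenAI-style JSON schema.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }],
}

resp = requests.post("http://localhost:11434/api/chat", json=payload).json()
# If the model decided to use a tool, the reply carries message.tool_calls
# (function name plus parsed arguments) rather than plain text content.
print(json.dumps(resp["message"].get("tool_calls", []), indent=2))
```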
Ollama’s model list on their website has a filter for tool-using models. To be honest, all the open source models suck at tool use compared to the big players: OpenAI, Anthropic, Google. To be fair, I don’t have any hardware capable of running DeepSeek’s newest models, so I haven’t tested those for tool use.
No, I meant for prompting tool-supporting models so they’re aware of the functions you’re making available. I’ve tried arbitrary prompts telling them to do this and it sort of works, but yeah, the models I’ve tried don’t seem very good at it. I was mainly wondering if using a specific format in the prompt would improve performance.
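That’s the thing: with tool-capable models you don’t describe the functions in the prompt text at all. You pass JSON-schema definitions through the API’s tools parameter, and ollama’s chat template renders them into whatever format the model was actually trained on, which is why it beats ad-hoc prompting. A rough sketch of the full round trip, assuming a recent ollama-python client (get_weather is a placeholder, not a real API):

```python
import ollama

def get_weather(city: str) -> str:
    """Stand-in implementation; swap in a real weather lookup."""
    return f"Sunny, 18C in {city}"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
response = ollama.chat(model="llama3.2:1b", messages=messages, tools=tools)

# A tool-capable model returns structured tool_calls; run each one and
# feed the result back as a role="tool" message.
for call in response.message.tool_calls or []:
    result = get_weather(**call.function.arguments)
    messages.append(response.message)
    messages.append({"role": "tool", "content": result})

# Second pass: the model composes a final answer from the tool output.
print(ollama.chat(model="llama3.2:1b", messages=messages).message.content)
```

Whether a given model honors the schema well is a separate question (the small ones often don’t), but this is the standard way to register functions rather than describing them in free text.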