I’m running ollama with llama3.2:1b, smollm, all-minilm, moondream, and more. I’ve integrated it with coder/code-server, VS Code, VSCodium, Page Assist, and the CLI, and I’ve also created a Discord AI user.
I’m an infrastructure and automation guy, not so much a developer, although my field is technically DevOps.
Now, I hear that some LLMs have “tools.” How do I use them? How do I find a list of tools for a model?
I don’t think I can simply prompt “Hi llama3.2, list your tools.” Is this part of prompt engineering?
What, do you take a model and retrain it or something?
Anybody able to point me in the right direction?
No, I meant prompting tool-supporting models so they’re aware of the functions you’re making available to them. I’ve tried arbitrary prompts to tell a model about my functions and it sort of works, but the models I’ve tried don’t seem very good at it. I was mainly wondering if using a specific format in the prompt would improve performance.
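For concreteness, here’s roughly the kind of format involved, as I understand it: tool-supporting models expect an OpenAI-style JSON schema passed through a structured `tools` parameter (e.g. in Ollama’s chat API) rather than a free-text description in the prompt. This is only a sketch; the `get_weather` function below is a made-up example, not a real tool shipped with any model.

```python
import json

# One tool definition: name, description, and JSON-schema parameters.
# Models fine-tuned for tool calling are trained to emit a structured
# tool call when a request matches one of these definitions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical example tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# With the ollama Python package and a running server, you would pass this
# list via the tools parameter, roughly:
#   response = ollama.chat(model="llama3.1", messages=[...], tools=tools)
# and then read any tool calls out of the response message. A returned
# call looks something like this:
example_call = {"function": {"name": "get_weather",
                             "arguments": {"city": "Berlin"}}}

# Your own code dispatches the call, then feeds the result back to the
# model as a tool message so it can compose its final answer.
def dispatch(call):
    fn = call["function"]["name"]
    args = call["function"]["arguments"]
    if fn == "get_weather":
        # Stubbed result; a real tool would hit a weather API here.
        return json.dumps({"city": args["city"], "temp_c": 21})
    raise ValueError(f"unknown tool: {fn}")

print(dispatch(example_call))
```

So the “list of tools” isn’t baked into the model; you supply the definitions per request, and the model only decides when to call them.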