How to use Ollama models in Rasa CALM?

Hi Vishni,

The easiest way I’ve found to use Ollama (assuming you’re running the Ollama server locally) is to make use of its OpenAI-compatible API endpoint.

Your config would look similar to:

pipeline:
- name: LLMCommandGenerator
  llm:
    model: "wizardlm2:7b"
    max_tokens: 20
    type: openai
    openai_api_base: http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
    openai_api_key: foobar                       # placeholder; Ollama doesn't check it, but the field can't be empty
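
If Rasa has trouble connecting, a quick sanity check (independent of Rasa) is to hit the endpoint directly with a plain OpenAI-style request. This sketch assumes Ollama is on its default port 11434 and that wizardlm2:7b is already pulled:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "wizardlm2:7b", "messages": [{"role": "user", "content": "Hello"}]}'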

And, of course, you can replace the model value with whichever model you have pulled into your Ollama installation.
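
If you're not sure what you have locally, the Ollama CLI can list and pull models (the model name here is just the example from above):

ollama list                 # show the models already downloaded locally
ollama pull wizardlm2:7b    # download a model if it isn't there yet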

Hope this helps!