Can I use Rasa Pro developer edition without an OpenAI API key?

Hi, I’ve been using Rasa Open Source since the beginning of the year and decided to switch to the Rasa Pro CALM Developer Edition just last week. I’ve got the setup and installation done without any issues.

But I just have this main question I would like to clarify:

  • Does CALM completely depend on the OpenAI API? Can I disable the pipelines/configs that depend on the OpenAI API and just leverage CALM itself? The reason is that I don’t want to depend on an external/third-party service when I am adding/training my bot with sensitive data.

Is there an alternative around this?

Hi Muzzammil,

An LLM is recommended for using CALM as a dialogue system. However, the LLM certainly doesn’t have to be an OpenAI one.

Starting with Rasa Pro 3.10, CALM uses LiteLLM under the hood to integrate with different LLM providers, so any provider that LiteLLM integrates with is also supported with CALM. For a general overview of how to set this up in your project, check out the LLM Configuration page in our docs.
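As a rough illustration of the shape this takes in config.yml (a sketch only; check that page for the exact keys for each provider, and treat the values below as placeholders):

pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      provider: <litellm-provider>   # any provider LiteLLM integrates with, e.g. ollama, azure, ...
      model: <model-name>            # the model identifier the chosen provider expects

Most providers also expect their credentials or endpoint to be supplied via environment variables rather than in config.yml (for a self-hosted Ollama server, for instance, that is OLLAMA_API_BASE), so those need to be set wherever training and inference run.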


Hi Lauren, thank you so much for clarifying this! I’ll explore the resources provided as well :smile:


Hi, I have a question on this point. I’m developing an AI assistant with Rasa Pro too, but I want to use Ollama instead of OpenAI. However, I get an error message saying that the provider must be openai to train the model. How can I change this, please?

Okay I seem to be getting a similar error as well.

I used Ollama and followed the syntax from the Self-Hosted Model Server: Ollama section of the docs. I had the endpoint running (checked using ollama serve).

I am receiving the following error:

INFO     rasa.engine.training.hooks  - Starting to train component 'SingleStepLLMCommandGenerator'.
2024-09-23 17:12:13 INFO     rasa.dialogue_understanding.generator.llm_based_command_generator  - [info     ] llm_based_command_generator.flow_retrieval.enabled
2024-09-23 17:12:13 ERROR    rasa.shared.providers.llm._base_litellm_client  - [error    ] Environment variables: ['OLLAMA_API_BASE'] not set. Required for API calls. event_key=base_litellm_client.validate_environment_variables missing_environment_variables=['OLLAMA_API_BASE']
2024-09-23 17:12:13 ERROR    rasa.shared.utils.llm  - [error    ] llm_based_command_generator.train.llm_instantiation_failed error=ProviderClientValidationError("Environment variables: ['OLLAMA_API_BASE'] not set. Required for API calls.") message=Unable to instantiate LLM client.
Unable to create the LLM client for component - LLMBasedCommandGenerator. Please make sure you specified the required environment variables. Error: Environment variables: ['OLLAMA_API_BASE'] not set. Required for API calls.

Not sure why I need an API key for this process (I’ve set OLLAMA_API_BASE as an environment variable as well). Is there a mistake on our end? @Lauren-Goerz
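For reference, this is roughly what that looks like on my side (the model name is a placeholder, and the variable is exported in the same shell that runs rasa train):

# exported before training: export OLLAMA_API_BASE="http://localhost:11434"
pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      provider: ollama
      model: llama3.1    # placeholder; whichever model `ollama list` shows locally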

Hi, I have an Ollama service running in a Docker container, and I’m trying to set it up as the LLM to use. Config file:

[screenshot of the config file]

But I get this error:

I don’t understand why it’s doing that. When I check the service with curl, it works fine:

[screenshot of the curl output]

And when I set the provider to openai, I get this error instead:

Hello, can you please help me find a solution to my problem, and explain how to use Ollama entirely as the LLM provider instead of OpenAI?
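To make the goal concrete, the setup I’m aiming for is roughly this (a sketch only, assuming a docker-compose-style arrangement; the Rasa image reference and ports are placeholders):

services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"           # publish Ollama’s default port to the host
  rasa:
    image: <rasa-pro-image>     # however the Rasa Pro container is referenced in this setup
    environment:
      # point Rasa at the Ollama container via its service name on the compose network;
      # if `rasa train` runs on the host instead, export OLLAMA_API_BASE=http://localhost:11434 there
      OLLAMA_API_BASE: "http://ollama:11434"

with the llm section in config.yml set to provider: ollama and whatever model that container serves, as in the posts above.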