I’m currently integrating Ollama as the LLM provider in RASA CALM. Although my configuration appears to be correct, RASA still makes calls to OpenAI’s API and fails with the following error:
"ProviderClientAPIException: RateLimitError: OpenAIException - Error code: 429 - {‘error’: {‘message’: ‘You exceeded your current quota, please check your plan and billing details.’}
"
This happens despite specifying Ollama as the LLM provider in my config.yml. Here’s my configuration:
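(I’m sharing a simplified sketch of the shape of my setup below; the component name, model, and endpoint are placeholders, and the exact keys depend on the Rasa Pro version, so treat it as illustrative rather than my literal file.)

```yaml
recipe: default.v1
language: en

pipeline:
  # SingleStepLLMCommandGenerator in newer Rasa Pro releases,
  # LLMCommandGenerator in older ones -- placeholder component name
  - name: SingleStepLLMCommandGenerator
    llm:
      provider: ollama                  # assumed key names; check the docs for your Rasa Pro version
      model: llama3                     # illustrative local model
      api_base: http://localhost:11434  # default local Ollama endpoint

policies:
  - name: FlowPolicy
```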
It seems RASA still relies on OpenAI for some requests, and I want to make sure it uses only Ollama running locally. Has anyone else run into this, and do you know how to fully disconnect from OpenAI while running a local LLM?
But I am getting errors like:
2024-09-12 12:03:21 WARNING langchain.llms.base - Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Invalid response object from API: '404 page not found' (HTTP response code was 404).
It also requires an OPENAI_API_KEY just to start and won’t run without it:
export OPENAI_API_KEY=**************************************************** (I didn’t pay for it; it’s there just to make the RASA CALM project run)