My config file:
```yaml
recipe: default.v1
language: en
pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      model: gemma
      base_url: http://localhost:11434/v1
      # api_type: ollama
    flow_retrieval:
      embeddings:
        provider: "huggingface_local"
        model: "BAAI/bge-small-en-v1.5"
        api_base: "http://localhost:11434/"
        model_kwargs: # used during instantiation
          device: "cpu"
        encode_kwargs: # used during inference
          normalize_embeddings: true
      # active: false
policies:
  - name: FlowPolicy
  - name: IntentlessPolicy
    llm:
      model: gemma
      base_url: http://localhost:11434/v1
    embeddings:
      provider: "huggingface_local"
      model: "BAAI/bge-small-en-v1.5"
      api_base: "http://localhost:11434/"
      model_kwargs: # used during instantiation
        device: "cpu"
      encode_kwargs: # used during inference
        normalize_embeddings: true
    # api_type: ollama
assistant_id: 20240916-212334-mean-developer
```
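Not directly tied to the error, but for completeness: the `huggingface_local` embeddings can be sanity-checked outside Rasa. This is just a sketch, assuming `sentence-transformers` is installed (which is what that provider wraps, as far as I can tell), mirroring the `model_kwargs`/`encode_kwargs` above:

```python
# Quick standalone check of the same local embedding model.
# Mirrors device="cpu" and normalize_embeddings=true from the config.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5", device="cpu")
vecs = model.encode(["hello world"], normalize_embeddings=True)
print(vecs.shape)  # bge-small-en-v1.5 produces 384-dimensional vectors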
My error looks like this:
```
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
2024-09-17 01:04:50 ERROR rasa.utils.log_utils - [error ] nlg.llm.error error=Prov
```
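Since Rasa routes these calls through LiteLLM, a minimal standalone repro might help narrow things down. This is only a sketch under my assumptions: the `ollama/` prefix is LiteLLM's convention for identifying the Ollama provider, and the base URL comes from my config above:

```python
# Hypothetical standalone repro of the LLM call, outside Rasa.
# Assumes litellm is installed and Ollama is serving gemma locally.
import litellm

litellm.set_verbose = True  # verbose debugging, as the log message suggests

response = litellm.completion(
    model="ollama/gemma",               # provider prefix + model name
    api_base="http://localhost:11434",  # Ollama's default endpoint
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

If this direct call raises the same provider error, the problem is likely in how the model string reaches LiteLLM rather than in the rest of the config.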