Issue connecting Rasa Pro CALM with an embedding model on a self-hosted vLLM server: litellm.BadRequestError: LLM Provider NOT provided

Hi, I’m trying to connect Rasa Pro with a self-hosted embedding model served on vLLM.

Below is my config.yml:

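The relevant command generator block looks roughly like this (the LLM model name and the api_base URLs are placeholders rather than my exact values; only the embedding model and provider match the error below):

```yaml
recipe: default.v1
language: en

pipeline:
  - name: SingleStepLLMCommandGenerator
    llm:
      provider: self-hosted
      model: my-llm-model                    # placeholder for the chat model served by vLLM
      api_base: http://my-vllm-host:8000/v1  # placeholder URL
    flow_retrieval:
      embeddings:
        provider: self-hosted
        model: BAAI/bge-base-en-v1.5
        api_base: http://my-vllm-host:8001/v1  # placeholder URL

policies:
  - name: FlowPolicy
```
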
But I’m getting the following errors during training:

ERROR    rasa.dialogue_understanding.generator.flow_retrieval  - [error] 
Failed to populate the FAISS store with the provided flows. 
error_type=ProviderClientAPIException
event_key=flow_retrieval.populate_vector_store.not_populated

ERROR    rasa.dialogue_understanding.generator.llm_based_command_generator  - [error] 
Flow retrieval store is inaccessible. 
event_key=llm_based_command_generator.train.failed

ERROR    rasa.engine.graph  - [error] graph.node.error_running_component
node_name=train_SingleStepLLMCommandGenerator0
ProviderClientAPIException: ProviderClientAPIException:
Failed to embed documents
Original error: litellm.BadRequestError: LLM Provider NOT provided. 
Pass in the LLM provider you are trying to call. 
You passed model=self-hosted/BAAI/bge-base-en-v1.5
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` 
Learn more: https://docs.litellm.ai/docs/providers

`provider: self-hosted` works for the LLM, but it’s not working for the embeddings. What should the provider name be under `embeddings`?