Hello everyone, I tried initializing my first CALM bot today, following the guide. I set the environment variables for using the Azure endpoint (listed below), but when I run rasa shell and try to interact with the bot, I get the following error:
[error ] llm_command_generator.llm.error error=InvalidRequestError(message="Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>", param='engine', code=None, http_status=None, request_id=None)
and this info right after it:
[info ] llm_command_generator.predict_commands.finished commands=[ErrorCommand(error_type='rasa_internal_error_default', info={})]
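For context, the Azure-related environment variables I set follow this shape (placeholder values; the variable names are the ones the OpenAI client reads for Azure, so treat them as an assumption if your Rasa version expects different ones):

```bash
# Placeholder values for my actual Azure OpenAI resource
export OPENAI_API_TYPE="azure"
export OPENAI_API_KEY="<my-azure-openai-key>"
export OPENAI_API_BASE="https://<my-resource-name>.openai.azure.com"
export OPENAI_API_VERSION="2023-05-15"   # the api-version my resource supports
```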
That's my current config: my deployment is called "rasa", for engine I put gpt-4 (the model I chose in Azure), and for the model under embeddings I used a value I found on the forums, because there was nothing in my Azure Studio hinting at what it should be.
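To make that concrete, here is a sketch of what that config describes (not my verbatim file; key names may differ between Rasa versions, and the embeddings value is just a placeholder for the one I copied from the forums):

```yaml
pipeline:
  - name: LLMCommandGenerator
    llm:
      engine: gpt-4                          # what I put for "engine"; my Azure deployment itself is named "rasa"
      model: gpt-4                           # the model I chose in Azure
    embeddings:
      model: <embeddings-model-from-forums>  # a value I found on the forums, not from my Azure Studio
```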
This is the error I get now:
[error ] llm_command_generator.llm.error error=InvalidRequestError(message='The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.', param=None, code='DeploymentNotFound', http_status=404, request_id=None)
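In case it helps with debugging, a direct call against the Azure endpoint (outside of Rasa) should show whether the deployment name resolves at all; the resource name and api-version below are placeholders for my setup, and the deployment segment has to match the name shown in Azure OpenAI Studio:

```bash
# Placeholders: <my-resource-name> and the api-version; "rasa" is the deployment name I created
curl -s "https://<my-resource-name>.openai.azure.com/openai/deployments/rasa/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $OPENAI_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```

If this also returns DeploymentNotFound, the problem is on the Azure side (deployment name or endpoint) rather than in the Rasa config.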