Rasa Pro 3.8 OpenAI Errors

I am getting this error: Retrying langchain.llms.openai.acompletion_with_retry…_completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI. This happens both when I turn flow_retrieval on and after I turn it off.

- name: LLMCommandGenerator
  prompt: prompts/command-generator.jinja2
  llm:
    type: "openai"
    model: "gpt-4"
    temperature: 0.7
  flow_retrieval:
    active: false

Whereas my standalone OpenAI test code works fine:

import openai

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
    ]
)

print(completion.choices[0].message)

Hi @geeta.m.desai, which version of gpt-3.5-turbo are you trying?

I tried with gpt-3.5-turbo and GPT-4


Please read my comments in the original thread. I tried executing an external program to test OpenAI connectivity at the same time, and it works fine.

I have tried with the flow_retrieval flag both on and off.

After trying everything, I reached out to the forum.

I believe we are using the Completion API instead of ChatCompletion, which unfortunately is not supported in some newer versions of gpt-3.5-turbo. We are working towards fixing that.
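For context, the two endpoints expect differently shaped payloads, which is why a client built on the legacy Completion API can fail against chat-only models. A minimal sketch of the difference (the model names and prompt text are illustrative; no request is actually sent):

```python
# Legacy Completion endpoint: takes a single free-form prompt string.
completion_payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say hello.",
}

# ChatCompletion endpoint: takes a list of role-tagged messages.
# Chat-only models such as gpt-3.5-turbo reject the legacy shape.
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
}

# The two payloads differ in their top-level keys.
print(sorted(completion_payload.keys()), sorted(chat_payload.keys()))
```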

If you are facing this with gpt-4, could you post the full error logs and also the version of openai? Do you have a specific version of openai installed separately in the environment?
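One quick way to check which openai version an environment has, using only the standard library (a sketch; the package name is the PyPI distribution name):

```python
import importlib.metadata


def installed_version(package: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        return "not installed"


print(installed_version("openai"))
```

Running this inside the same virtualenv that runs Rasa shows exactly which openai build the bot is using, which can differ from the one used for a standalone test.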

- name: LLMCommandGenerator
  llm:
    model_name: gpt-4
    request_timeout: 7
    max_tokens: 256

What do you mean by the Completion API instead of ChatCompletion? I have just configured the LLM, and Rasa sends those requests to OpenAI. I am not sending any API requests directly. The same config works fine with Rasa Plus 3.7. My bot works absolutely fine with both gpt-4 and gpt-3.5-turbo.

I have stopped migrating to 3.8 for now. I will finish my bot development and then migrate. I am very interested in selectively sending conversation history for dialogue understanding.

Will Rasa 3.8 support PII management? I could not get clarity from the documentation. While the PII documentation has these lines: "FEATURE NOT YET CALM-COMPATIBLE. We are working to ensure future integration and compatibility.", the 3.8 feature list does not clarify whether it is supported or not. Please provide some guidance on this.

I have the same error. I am just trying to run the tutorial. Tried with 3.8.0 and 3.8.1.

2024-05-26 08:20:23 INFO rasa.model_training - [info ] Your Rasa model is trained and saved at 'models/20240526-082021-full-drone.tar.gz'. event_key=model_training.train.finished_training

? Do you want to speak to the trained assistant? :robot: Yes

2024-05-26 08:20:26 INFO root - Connecting to channel 'rasa.core.channels.development_inspector.DevelopmentInspectInput' which was specified by the '--connector' argument. Any other channels will be ignored. To connect to all given channels, omit the '--connector' argument.

2024-05-26 08:20:26 INFO root - Starting Rasa server on http://0.0.0.0:5005

2024-05-26 08:20:26 INFO rasa.core.processor - Loading model models/20240526-082021-full-drone.tar.gz…

2024-05-26 08:20:26 INFO rasa.dialogue_understanding.generator.llm_command_generator - [info ] llm_command_generator.flow_retrieval.enabled

/Users/srangaiah/repos/knowledge_engine/rasa_37/venv/lib/python3.10/site-packages/rasa/core/processor.py:129: UserWarning: The model metadata does not contain a value for the 'assistant_id' attribute. Check that 'config.yml' file contains a value for the 'assistant_id' key and re-train the model. Failure to do so will result in streaming events without a unique assistant identifier.

rasa.shared.utils.io.raise_warning(

2024-05-26 08:20:26 INFO root - Rasa server is up and running.

/Users/srangaiah/repos/knowledge_engine/rasa_37/venv/lib/python3.10/site-packages/sanic/server/websockets/impl.py:521: DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.

done, pending = await asyncio.wait(

2024-05-26 08:20:47 ERROR rasa.dialogue_understanding.generator.command_generator - [error ] command_generator.predict.error error=Error communicating with OpenAI

Here is my config.yml

recipe: default.v1
language: en

pipeline:
- name: LLMCommandGenerator
  llm:
    model_name: gpt-4

policies:
- name: FlowPolicy
- name: EnterpriseSearchPolicy
- name: RulePolicy

Found the issue. It is isolated to macOS SSL certificates. I ran this command and it started working:

bash /Applications/Python*/Install\ Certificates.command
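To confirm whether certificate verification is actually the problem (before and after running that command), here is a small diagnostic sketch. The hostname is just an example; any HTTPS endpoint works. On a python.org macOS install, the handshake fails with SSLCertVerificationError until Install Certificates.command has been run:

```python
import socket
import ssl


def can_verify_tls(host: str = "api.openai.com", port: int = 443) -> bool:
    """Return True if a TLS handshake with certificate verification succeeds."""
    ctx = ssl.create_default_context()  # uses the system/bundled CA store
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except (ssl.SSLCertVerificationError, OSError):
        # Certificate verification failed, or the host was unreachable.
        return False


print(can_verify_tls())
```

If this prints False before the fix and True afterwards, the APIConnectionError was indeed the missing CA certificates rather than anything in the Rasa or OpenAI configuration.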