Bot migration from Rasa Community to Rasa Pro leveraging coexistence

I had a working bot built with Rasa Community, in which a Rasa form was implemented. Now I am trying to migrate that same codebase to Rasa Pro. I have integrated a local LLM [gemma3:12b] via Ollama with this codebase and am trying coexistence [CALM with NLU]. The issues I am facing are:

  1. The form that was implemented with NLU is not working as expected in CALM.
  2. Zero-shot commands are not recognized. [If I want the assistant to collect the name and roll number of a student from a single utterance, it does not work; the assistant collects one parameter at a time.]
  3. The local LLM is significantly slower at intent recognition. Is this expected behavior? If not, how can I improve the performance? Requesting guidance on these areas.

Hi @Jyoti_Prakash_Behera! Thanks for posting, happy to help here.

  1. This could get tricky. What does your form do? Is it easy to reimplement as a flow?
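For reference, here is a minimal sketch of what a slot-collecting form might look like when reimplemented as a CALM flow. The flow name, slot names, and response name below are hypothetical placeholders; the slots and the `utter_*` response would need to be defined in your domain:

```yaml
flows:
  register_student:                        # hypothetical flow name
    description: Collect a student's name and roll number.
    steps:
      - collect: student_name              # hypothetical slot, defined in the domain
      - collect: roll_number               # hypothetical slot, defined in the domain
      - action: utter_confirm_registration # hypothetical response in the domain
```

With `collect` steps like these, CALM can also fill several slots from a single utterance when the LLM emits multiple SetSlot commands, which relates to your second question.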
  2. Can you share a trace? E.g. if you run with `rasa inspect --debug` you can see the commands being produced in response to a user message.
  3. Yeah, depending on the model size this can be a little slow. If you’re just using it for intents, can you use a flow trigger instead? Starting Flows | Rasa Documentation
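As a sketch of that idea: in coexistence mode a flow can be started directly from an existing NLU intent via `nlu_trigger`, so the LLM is bypassed entirely for messages that the NLU pipeline already classifies confidently. The flow, intent, and action names here are hypothetical:

```yaml
flows:
  check_balance:                 # hypothetical flow name
    description: Look up the user's account balance.
    nlu_trigger:
      - intent: check_balance    # hypothetical intent carried over from the Community bot
    steps:
      - action: action_check_balance   # hypothetical custom action
```

This keeps your existing intents doing the fast routing work, while the LLM is only consulted for messages that don't match a trigger.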