I had a working bot built with Rasa Open Source (the community edition) that used a Rasa form. I am now migrating the same codebase to Rasa Pro, and I have integrated a local LLM (gemma3:12b) via Ollama, trying the coexistence setup (CALM alongside NLU). The issues I am facing are:
- The form that was implemented with NLU is not working as expected under CALM (my understanding of the CALM equivalent is sketched below).
- Zero-shot commands are not recognized: if I want the assistant to collect a student's name and roll number from a single statement, it does not work; the assistant collects one parameter at a time.
- The local LLM is significantly slower at intent recognition. Is this expected behavior? If not, how can I improve the performance? (A raw-latency check is sketched below.)

Requesting guidance on these areas. My Ollama/CALM configuration is also included below for reference.
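For context, the Ollama integration in my `config.yml` looks roughly like this. This is a simplified sketch; the exact keys depend on the Rasa Pro version, since newer releases route LLM settings through LiteLLM-style provider strings, so adjust to your release:

```yaml
# config.yml (sketch): CALM command generator pointed at a local Ollama model.
# Key names may differ across Rasa Pro versions; adjust to your release.
recipe: default.v1
language: en
pipeline:
  # Coexistence router (IntentBasedRouter / LLMBasedRouter) and the NLU
  # components are omitted here for brevity.
  - name: SingleStepLLMCommandGenerator
    llm:
      model: ollama/gemma3:12b          # LiteLLM-style provider/model string
      api_base: http://localhost:11434  # local Ollama server
```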
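On the form issue: my understanding is that under CALM the NLU form has to be re-expressed as a flow with `collect` steps rather than reused as-is. A minimal sketch of what I expect the equivalent to look like (the flow and slot names `student_details`, `student_name`, and `roll_number` are illustrative):

```yaml
# flows.yml (sketch): CALM flow replacing the NLU form.
flows:
  student_details:
    description: Collect a student's name and roll number.
    steps:
      - collect: student_name
        description: The student's full name.
      - collect: roll_number
        description: The student's roll number, e.g. "23B1042".
```

with the slots mapped so the LLM command generator can fill them:

```yaml
# domain.yml (sketch): slots filled by the LLM command generator, so in
# principle both can be set from a single utterance like
# "Register Priya Sharma with roll number 23B1042".
slots:
  student_name:
    type: text
    mappings:
      - type: from_llm
  roll_number:
    type: text
    mappings:
      - type: from_llm
```

Even with this shape, only one slot gets filled per turn for me, even when the utterance clearly contains both values.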
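On the performance question: to separate raw model latency from Rasa's prompt and pipeline overhead, the model can be timed directly against Ollama's standard `/api/generate` endpoint. A minimal sketch in Python (using `requests`; the prompt text is just an example):

```python
# Minimal latency probe against a local Ollama server (sketch).
# Measures raw generation time for gemma3:12b, independent of Rasa.
import time

import requests

payload = {
    "model": "gemma3:12b",
    "prompt": "Extract the student name and roll number from: "
              "'Register Priya Sharma with roll number 23B1042'.",
    "stream": False,  # wait for the full response so timing is simple
}

start = time.perf_counter()
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
resp.raise_for_status()
elapsed = time.perf_counter() - start

print(f"latency: {elapsed:.2f}s")
print(resp.json().get("response", "")[:200])
```

If this raw call is already slow, the bottleneck would be the model or hardware rather than Rasa itself, which is what I would like to confirm.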