Rasa CALM - Configuration with Ollama local mistral

Hi @Starneaa, thank you for your message and for sharing your configurations. Could you please also share what error you get?

Hi @m_ashurkina, thank you for your reply. Here's the error I get with "rasa train":

```
2025-06-09 20:16:07 ERROR rasa.main - [error ] Can't load class for name 'LLMIntentClassifier'. Please make sure to provide a valid name or module path and to register it using the '@DefaultV1Recipe.register' decorator. event_key=cli.exception.rasa_exception
```

Hi @Starneaa, thank you for your response.

May I clarify: are you building a hybrid assistant with NLU?

LLMIntentClassifier has been removed from Rasa; see the Rasa Pro Change Log | Rasa Documentation.

You can remove this part from your config file:

```yaml
- name: LLMIntentClassifier
  llm:
    model_group: ollama_llm
```

If you want to use NLU along with LLMs in your assistant, you’d need to use NLU Command Adapter | Rasa Documentation.
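For reference, a hybrid pipeline roughly looks like the sketch below. This is a minimal, hedged example: the tokenizer/featurizer/classifier choices (WhitespaceTokenizer, CountVectorsFeaturizer, DIETClassifier) are assumptions, so adapt them to your setup.

```yaml
# config.yml - minimal sketch of a hybrid CALM + NLU pipeline (assumed components)
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
    epochs: 100
  # Maps predicted intents to flows that declare a matching nlu_trigger.
  - name: NLUCommandAdapter
  # Falls back to the LLM for messages the adapter cannot map to a flow.
  - name: CompactLLMCommandGenerator
    llm:
      model_group: ollama_llm
```

Note that the NLUCommandAdapter only starts flows that define an nlu_trigger for the predicted intent.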

By the way, on our official docs website you can use the built-in AI Assistant to ask questions and check your code quickly: Welcome to the Rasa Docs | Rasa Documentation.

Please let me know how it goes.

Hi @m_ashurkina, thanks for the good advice! I followed your recommendations and those of the AI Assistant. Now I can train my Rasa CALM model and run rasa shell.

However, I think there's still something wrong with my configuration, because Rasa CALM rephrases all my utterances even though I set rephrase_all: false in the endpoints.yml file (see my note after the endpoints.yml below). In other words, when I start a conversation, my utterance works, but Rasa CALM then sends a second response that I didn't ask for. Do you know why this happens?

Here's my configuration now (you'll find my output below):

flows.yml:

```yaml
flows:
  greet_user:
    description: Greet the user when they say hello and introduce Bob
    steps:
      - action: utter_greet_and_introduce
```

domain.yml:

```yaml
version: "3.1"

responses:
  utter_greet_and_introduce:
    - text: "Hello! I'm Bob, your AI assistant created with Rasa. How can I help you today?"
    - text: "Hi there! My name is Bob and I'm here to assist you. What can I do for you?"
    - text: "Hello! I'm Bob, your friendly AI assistant. How may I help you?"
    - text: "Hello! My name is Bob and I am your assistant. How can I help you today?"

session_config:
  session_expiration_time: 60
  carry_over_slots_to_new_session: true
```

config.yml:

```yaml
recipe: default.v1
language: en

pipeline:
  - name: CompactLLMCommandGenerator
    llm:
      model_group: ollama_llm
    flow_retrieval:
      embeddings:
        model_group: embedding_model

policies:
  - name: FlowPolicy

assistant_id: xxxxxxxx-xxxxxx-xxxx-xxx
```

endpoints.yml:

```yaml
action_endpoint:
  actions_module: "actions"

model_groups:
  - id: ollama_llm
    models:
      - provider: ollama
        model: mistral
        api_base: "xxxxxlocalhost:xxxxx"  # your local Ollama instance
        parameters:
          temperature: 0.3
          num_predict: 400
  - id: embedding_model
    models:
      - provider: huggingface_local
        model: BAAI/bge-small-en-v1.5
        model_kwargs:
          device: "cpu"
        encode_kwargs:
          normalize_embeddings: true

nlg:
  type: rephrase
  rephrase_all: false
  llm:
    model_group: ollama_llm
```
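As I understand the docs, with rephrase_all: false the rephraser should only touch responses that explicitly opt in through their metadata, something like the sketch below (the exact metadata key is my reading of the rephraser docs, so double-check it against your Rasa version):

```yaml
# domain.yml - opt a single response variant into rephrasing
# while rephrase_all stays false (metadata key per the rephraser docs)
responses:
  utter_greet_and_introduce:
    - text: "Hello! I'm Bob, your AI assistant created with Rasa. How can I help you today?"
      metadata:
        rephrase: true
```

None of my responses set this metadata, which is why the second reply surprises me.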

Here’s what happens:

rasa train (this was the only warning during training):

```
2025-06-12 15:45:34 WARNING rasa.engine.validation - [warning ] pattern_chitchat has an action step with action_trigger_chitchat, but IntentlessPolicy is not configured. event_key=flow_component_dependencies.pattern_chitchat.intentless_policy_not_configured
```
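(If it helps: from what I understand of the default patterns docs, this warning appears because the built-in pattern_chitchat triggers IntentlessPolicy, which I haven't configured. One way to silence it seems to be overriding the pattern so it answers with a response instead; the response name below is an assumption taken from the docs, so please correct me if I misread them.)

```yaml
# flows.yml - sketch of overriding the default chitchat pattern so it no
# longer depends on IntentlessPolicy (utter_free_chitchat_response is
# assumed from the default-patterns docs; verify for your Rasa version)
flows:
  pattern_chitchat:
    description: handle out-of-scope chitchat without IntentlessPolicy
    steps:
      - action: utter_free_chitchat_response
```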

rasa shell (here's my output):

```
2025-06-12 15:46:58 INFO root - Rasa server is up and running. Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> hi
Hello! I'm Bob, your AI assistant created with Rasa. How can I help you today?
AI: Hello there! How may I be of assistance to you today?
```