Core Model behaviour changes after retraining

Hi,

I just noticed that the way Core predicts changes slightly after retraining with the same hyperparameters, config file, and stories. My model was working perfectly, but I had to make some domain changes. After making the changes I retrained the Core part, and now I notice slight differences in how the same story behaves. Has anyone encountered this, and is there any way to solve it?

@amn41 @akelad @dakshvar22 @MetcalfeTom

Hi @Anand_Menon! I have never come across this before. If you did not make any changes to the stories, then I don’t believe it should have retrained the Core policies. Can you elaborate a bit more on what you mean by “some slight changes in the way the same story behaves”?

Hi @tyd

Okay, by slight changes I mean, for example: I have written a story in which, if a user asks a business-related FAQ in the middle of a slot-filling intent (e.g. a money transfer intent), the bot should answer that question and then resume from where it left off.

  • User : I want to transfer money
  • Bot : Well please provide the account number
  • User : UI1234567890
  • Bot : Well please enter the amount
  • User : Well, I was wondering how to get a credit card. What is the procedure? (This user is not cooperative :sweat_smile:)
  • Bot: Credit card requirements are as follows … It seems you were in between a money transfer. Please enter the amount?
  • User : 2000 $
  • Bot : Amount transferred successfully.
  • User: Procedure to get credit card
  • Bot: Credit card requirements are as follows …
  • User: Bye
  • Bot: bye bye take care

After writing stories for this scenario, which was working perfectly at first, I made some template changes in domain.yml and retrained the model. The new model's behaviour:

  • User: Procedure to get credit card
  • Bot: Credit card requirements are as follows … It seems you were in between a money transfer. Please enter the amount?

The KerasPolicy predicted my form_action as the next action, when it should have been action_listen. This is wrong as per my story: since I am not inside any form action, such behaviour should never occur. The above scenario was just asking a simple business FAQ question, which I have already written stories for. I double-checked the difference between my new model and the previous model; the old model did not show the above behaviour, but the new one does.

I retrained the model again without making any changes, not even in domain.yml. This time the issue did not persist; it was working just fine. I even saved copies of all these models, and each model tends to show some slight differences. I think the severity of this issue will increase as we scale up the bot with new contextual and FAQ intents.

I would love to share my stories.md and other files, but unfortunately they contain client data.

I hope this helps

ML algorithms contain randomness in different parts of the algorithm. Therefore, unless you fix random_seed, you will get slightly different results every time you retrain.


Well, that is correct @Ghostvv.

But I would have guessed that Rasa sets random_seed to a default value, so that the previous model's and the new model's behaviour match, right?

Which policy do you think is causing this randomness, and where is this random_seed parameter so that I can set it to a constant value?

We didn't fix random_seed by default.

Any ML policy. Please take a look at the possible options for Core policies here: Policies
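
For example, a minimal config.yml along these lines sets the seed for KerasPolicy (this is only a sketch; which parameters are available depends on your Rasa version, so check the Policies docs linked above):

```yaml
# config.yml (sketch, assuming a Rasa 1.x setup with KerasPolicy)
policies:
  - name: KerasPolicy
    # Fixing the seed makes retraining reproducible as long as the
    # training data, domain and config stay the same.
    random_seed: 42
  - name: MemoizationPolicy
  - name: MappingPolicy
```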

Well, thanks for that info, but I am not yet convinced that a framework used to build production-ready chatbots does not set the seed to a constant value. The chatbot should be consistent and produce the same behaviour as the older version as long as no changes are made to its config or other files. Correct me if I am wrong.


Well… there is no universally good seed value, and you are using statistical modeling to create the behaviour of a chatbot. Such modeling is unstable: even if you fix the random seed, small changes to the training data can lead to unpredictable changes in prediction; this is the nature of these methods. That's why you can customize all the config parameters to try to get the behaviour you want. But a seed shouldn't be set by default, since that is not what is expected from an ML framework.
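
As a rough sketch of that kind of customization (again assuming Rasa 1.x; the exact parameters available depend on your version, so treat the values below as illustrative, not as recommendations):

```yaml
# config.yml sketch – knobs to experiment with when comparing retrains
policies:
  - name: KerasPolicy
    random_seed: 42   # reproducible retraining on unchanged data
    epochs: 200       # more epochs can make convergence more stable
    max_history: 5    # how many previous turns the policy conditions on
  - name: MemoizationPolicy
    max_history: 5
```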

Okay… thanks for that info. Let me try some parameter tuning and see if the model is stable enough.