Why does Rasa NLU use an SVM rather than an LSTM?

Hi Everyone,

I am new to the group and new to machine learning.

I read in a few blogs that Rasa NLU uses an SVM on word embeddings to classify the intent. But recently my developer said an LSTM would be better than an SVM. Can I use Rasa NLU with an LSTM? Would that be a good approach?

I currently use the spacy_sklearn pipeline to train my model in Rasa NLU.
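
In case it helps, this is roughly how I train it with the rasa_nlu Python API (the file names and paths below are just placeholders for my setup):

```python
from rasa_nlu import config
from rasa_nlu.model import Trainer
from rasa_nlu.training_data import load_data

# nlu_config.yml contains:
#   language: "en"
#   pipeline: "spacy_sklearn"
training_data = load_data("data/nlu.md")          # my intent examples
trainer = Trainer(config.load("nlu_config.yml"))  # spacy_sklearn pipeline
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models/nlu")

print(interpreter.parse("hello there"))  # returns the intent and a confidence score
```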

Thanks.

Hi, there is a lot to unpack regarding SVM vs. LSTM, and the answer depends entirely on the pipeline and your data.

  1. Rasa NLU does not create your word embeddings. If you are using the spaCy pipeline, it uses the pre-trained embeddings that ship with spaCy and then runs an SVM on top, which is a pretty good classifier (see the sketch after this list).
  2. Rasa NLU also contains another pipeline, tensorflow_embedding, where word embeddings are created from scratch from your training data using a neural network; I think it is an RNN, but I am not sure whether it is an LSTM. This gives better contextual similarity between your intents and examples.
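
To make point 1 concrete, here is a minimal hand-rolled sketch of the same idea outside of Rasa: fixed-size sentence vectors from spaCy's pre-trained embeddings fed into a scikit-learn SVM. The tiny training examples and the en_core_web_md model name are only illustrative; this is not Rasa's actual component code.

```python
import spacy
from sklearn.svm import SVC

# A spaCy model that ships with pre-trained word vectors.
nlp = spacy.load("en_core_web_md")

# Tiny illustrative training set: (text, intent) pairs.
examples = [
    ("hello there", "greet"),
    ("hi, how are you", "greet"),
    ("book me a table for two", "book_table"),
    ("i want to reserve a table", "book_table"),
]

# spaCy turns each sentence into one vector (the average of its word vectors).
X = [nlp(text).vector for text, _ in examples]
y = [intent for _, intent in examples]

# The SVM is trained on top of the pre-trained vectors; it never learns embeddings itself.
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([nlp("could you get me a table").vector]))  # predicted intent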

LSTMs, or RNNs in general, are typically used to sustain memory across a user's session so that earlier turns influence the classifier in later turns. Rasa Core does this with an LSTM, where the action the bot takes depends on its past behaviour (see the toy sketch below). I am not sure how much of that context is relevant for intent classification in general; you can also sustain memory using slots.
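
To illustrate what "memory across turns" means, here is a toy Keras sketch of an LSTM dialogue policy: the network reads a featurized history of the last few turns and predicts the next bot action. The shapes and the random placeholder data are made up, and this is only the shape of the idea, not Rasa Core's actual policy code.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

max_history = 5    # how many past turns the policy can "remember"
n_features = 10    # size of one featurized turn (intent + slots + last action)
n_actions = 4      # number of possible bot actions

# Placeholder training data: 32 dialogues, each a sequence of featurized turns,
# labelled with the next action the bot should take.
X = np.random.rand(32, max_history, n_features).astype("float32")
y = np.random.randint(0, n_actions, size=32)

model = Sequential([
    Input(shape=(max_history, n_features)),
    LSTM(32),                                # the LSTM carries state across turns
    Dense(n_actions, activation="softmax"),  # probability over the next actions
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=5, verbose=0)

# At prediction time the whole recent history is fed in, so earlier turns
# influence which action is chosen now.
next_action = model.predict(X[:1]).argmax(axis=-1)
```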

So the question of which one is better depends on the amount of data you have, your context, and how good the pre-trained embeddings already are for your language. I don't think it is a case of one or the other.

Please also see the response from the Articulate team: Why Rasa NLU uses SVM rather than LSTM? - Stack Overflow