There is a lot to unpack regarding SVM vs. LSTM, and the answer depends entirely on the pipeline and your data.
- Rasa NLU does not create your word embeddings. If you are using the spaCy pipeline, it takes the pre-trained embeddings that spaCy provides and runs an SVM on top of them, which is a pretty good classifier.
- Rasa NLU also offers a TensorFlow-based pipeline where word embeddings are trained from scratch on your own training data using a neural network (I think it is an RNN, but I am not sure whether it is an LSTM). This gives better contextual similarity between your intents and examples.
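The choice between the two pipelines above is just a config change. A minimal sketch, assuming a pre-1.0 Rasa NLU where the `spacy_sklearn` and `tensorflow_embedding` pipeline templates exist:

```yaml
# config_spacy.yml — pre-trained spaCy embeddings + sklearn SVM intent classifier
language: "en"
pipeline: "spacy_sklearn"
```

```yaml
# config_tensorflow.yml — embeddings trained from scratch on your own examples
language: "en"
pipeline: "tensorflow_embedding"
```

Roughly: `tensorflow_embedding` needs more training examples because it starts with no prior knowledge of the language, while `spacy_sklearn` can get by with fewer examples but only works well for languages spaCy has good pre-trained models for.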
LSTMs, or RNNs in general, are typically used to sustain memory across a user's session so that earlier turns influence the classifier in later ones. Rasa Core does this with an LSTM, where the action the bot takes depends on its past behaviour. I am not sure how much of that context is relevant for intent classification in general. You can sustain memory using slots as well.
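Sustaining memory with slots can be sketched in a Rasa Core domain file; the slot name `authenticated` here is an illustrative assumption, not something from Rasa itself:

```yaml
# domain.yml (fragment) — a slot persists a value across turns,
# and featurized slot types (e.g. bool, categorical) influence
# which action Rasa Core predicts next
slots:
  authenticated:
    type: bool
    initial_value: false
```

Unlike the LSTM's learned memory, a slot is explicit state you set yourself (from entities or custom actions), so its effect on dialogue flow is predictable and easy to debug.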
So which one is better depends on the amount of data you have, your context, and how good the pre-trained embeddings for your language already are. I don't think it is one or the other.