I asked a question in the comment section of your CBOW and Skip-gram YouTube tutorial about implementing a similar solution in Rasa, and I believe Vincent replied suggesting that I ask the question here on the Rasa Forums, so here goes.
So, I have a general-purpose language model in spaCy (for the Hungarian language) that contains dense word-embedding vectors. I would like to use these word2vec embeddings (`token.vector`) in my Rasa model for better accuracy, but I haven't found much information about how one might do that. I haven't checked the code in great detail yet, so I am not sure whether this is even possible without writing custom code for the Rasa pipeline.
Question: If it is possible to use word2vec word embeddings in Rasa Open Source, how can I do that?
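For context, here is what I have tried so far: a minimal pipeline sketch in `config.yml` that loads a spaCy model and feeds its dense vectors to the intent classifier via `SpacyFeaturizer`. The model name `hu_core_news_lg` is just an assumption for a Hungarian spaCy model; I am not sure this is the right way to wire it up, which is partly why I am asking.

```yaml
# config.yml -- sketch, model name "hu_core_news_lg" is an assumption
language: hu

pipeline:
  # Loads the spaCy language model so later components can use its vectors
  - name: SpacyNLP
    model: "hu_core_news_lg"
  # Tokenizes text with the spaCy model's tokenizer
  - name: SpacyTokenizer
  # Turns each token's dense vector (token.vector) into features
  - name: SpacyFeaturizer
    pooling: mean
  # Intent classifier that consumes the dense features
  - name: DIETClassifier
    epochs: 100
```

Is something along these lines the intended approach, or does using custom word2vec vectors require writing a custom featurizer component?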