I would like to know at which step word embeddings are created. Are they produced by text featurizers (CountVectorsFeaturizer, for example) or by intent classifiers (EmbeddingIntentClassifier, for example)? I know that CountVectorsFeaturizer transforms tokens into vectors, and that EmbeddingIntentClassifier is an ANN with two hidden layers that learns the weights used for text classification. But a word embedding is a dense matrix that represents similarity between terms and, as far as I understand, is used by the classifier. I hope you can give me some insights on this.
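To make the question concrete, here is a rough NumPy sketch of the two steps as I currently understand them. The message texts, vocabulary, and embedding dimension are made up for illustration, and the random matrix only stands in for the dense weights a real classifier would learn during training; this is not how Rasa's components are actually implemented.

```python
import numpy as np

messages = ["book a flight", "book a hotel", "cancel my flight"]

# Step 1: featurizer-style sparse count vectors, one column per vocabulary
# word (my mental model of CountVectorsFeaturizer). There is no notion of
# term similarity at this point.
vocab = sorted({tok for msg in messages for tok in msg.split()})
counts = np.array([[msg.split().count(tok) for tok in vocab]
                   for msg in messages])
print(vocab)    # ['a', 'book', 'cancel', 'flight', 'hotel', 'my']
print(counts)   # shape: (3, 6)

# Step 2: classifier-style dense embedding. The matrix E is random here
# purely to show the shapes involved; in a real embedding classifier these
# weights would be learned so that similar terms/intents end up close
# together in the dense space.
rng = np.random.default_rng(0)
embedding_dim = 4
E = rng.normal(size=(len(vocab), embedding_dim))  # dense embedding matrix
dense = counts @ E                                # shape: (3, embedding_dim)
print(dense.shape)                                # (3, 4)
```

So my question boils down to: does the dense matrix in step 2 live inside the featurizer or inside the classifier?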