Which word2vec model is used when the TensorFlow embedding pipeline is used for multiple intent classification: continuous bag of words or skip-gram?
Neither. It uses a count vectorizer for featurization and learns the corresponding word vectors from the supervised intent classification task.
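Roughly, the idea looks like this. This is a simplified sketch, not the actual Rasa implementation: the softmax head stands in for the real embedding loss, and the texts, dimensions, and layer names are made up for illustration. The point it shows is that the word vectors fall out of the supervised objective, with no CBOW/skip-gram step anywhere:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from tensorflow.keras import layers, models

texts = ["book a flight", "cancel my flight", "play some music"]
intents = np.array([0, 1, 2])  # toy intent ids

# bag-of-words count features, as in the count-vectorizer featurization
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts).toarray().astype("float32")

model = models.Sequential([
    layers.Input(shape=(X.shape[1],)),
    # rows of this weight matrix act as the word vectors, learned only
    # from the supervised intent labels
    layers.Dense(20, use_bias=False, name="word_embeddings"),
    layers.Dense(3, activation="softmax"),  # intent classification head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, intents, epochs=10, verbose=0)

word_vectors = model.get_layer("word_embeddings").get_weights()[0]
print(word_vectors.shape)  # (vocab_size, 20)
```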
Thanks Ghostvv. I have one more question. With the TensorFlow pipeline for multiple intent classification and the tokenization flag set to TRUE, does the model treat the individual intents as dependent or separate? For example, I have data with three intents: intent1, intent2, and intent3. Will training a single model for “intent1_intent2_intent3” be the same as training three different models for “intent1”, “intent2”, and “intent3”? Is there any dependency between the predictions of multiple intents?
The tokenization flag helps learning when you have composite intents, but the model still treats all intents separately.
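To illustrate the concept (the split symbol and helper names below are assumptions for illustration, not Rasa internals): a composite label is split into intent tokens and turned into a multi-hot target, so each intent gets its own output and is learned separately even when it only appears inside composite labels:

```python
# toy training labels, including one composite intent
labels = ["intent1", "intent1_intent2_intent3", "intent2"]
split_symbol = "_"  # assumed separator, e.g. Rasa's intent_split_symbol

# collect the individual intent tokens across all labels
intent_tokens = sorted({tok for label in labels
                        for tok in label.split(split_symbol)})
token_index = {tok: i for i, tok in enumerate(intent_tokens)}

def to_multi_hot(label):
    """Turn a (possibly composite) label into a multi-hot target vector."""
    target = [0] * len(intent_tokens)
    for tok in label.split(split_symbol):
        target[token_index[tok]] = 1
    return target

print(intent_tokens)                            # ['intent1', 'intent2', 'intent3']
print(to_multi_hot("intent1_intent2_intent3"))  # [1, 1, 1]
print(to_multi_hot("intent2"))                  # [0, 1, 0]
```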
What do you mean by three different models for “intent1”, “intent2” & “intent3”?
I need a clarification: does the tensorflow_embedding pipeline use any word2vec algorithm at all?
The algorithm is completely different from word2vec. You can provide word embeddings as features if you want.
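For example, a simplified sketch of combining pretrained embeddings with the count features (the toy vector lookup below stands in for real pretrained vectors such as spaCy's or GloVe's; none of this is the actual pipeline code):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

rng = np.random.default_rng(0)
# toy stand-in for a pretrained word2vec/GloVe-style vector table
pretrained = {"book": rng.normal(size=50),
              "a": rng.normal(size=50),
              "flight": rng.normal(size=50)}

def featurize(text, vectorizer):
    """Concatenate sparse count features with averaged dense embeddings."""
    counts = vectorizer.transform([text]).toarray()[0]
    vecs = [pretrained[w] for w in text.split() if w in pretrained]
    dense = np.mean(vecs, axis=0) if vecs else np.zeros(50)
    return np.concatenate([counts, dense])

vectorizer = CountVectorizer().fit(["book a flight"])
features = featurize("book a flight", vectorizer)
print(features.shape)  # count features plus 50 embedding dimensions
```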