I just read your paper on Dialogue Transformers, and I have now scanned the source code to better understand how the precise architecture works.
In particular, I am interested in how and where (in the code) you compute both the dialogue state embeddings and the system action embeddings. For example, I would like to know how you can compare “apples and oranges”, i.e. system actions with something like a concatenation of the previous system action, intent, and entities!?
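For context, my current (possibly wrong) understanding is that both sides are projected into a shared embedding space and then compared with a similarity score, StarSpace-style. A minimal, untrained NumPy sketch of that idea, with all names and dimensions made up by me and not taken from the Rasa code base:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes -- purely illustrative.
STATE_DIM = 20   # featurized dialogue state (prev. action + intent + entities)
ACTION_DIM = 10  # featurized system action
EMBED_DIM = 8    # shared embedding space

# Two separate (here: random, untrained) projections map the two
# differently-shaped inputs into the same embedding space.
W_state = rng.normal(size=(STATE_DIM, EMBED_DIM))
W_action = rng.normal(size=(ACTION_DIM, EMBED_DIM))

state_features = rng.normal(size=(STATE_DIM,))
action_features = rng.normal(size=(ACTION_DIM,))

state_emb = state_features @ W_state
action_emb = action_features @ W_action

# Once both live in the same space, a dot-product similarity is
# well-defined; training would pull matching pairs together and
# push mismatched pairs apart.
similarity = float(state_emb @ action_emb)
print(similarity)
```

Is this roughly the mechanism, and if so, where do those two projections live in the code?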
Therefore, I would really appreciate some more detailed explanation of the source code. Is there maybe something like “Rasa TEDP explained” (cf. BERT or “Transformers explained”)? Or can anyone at least point me to the relevant spots in the code regarding the embeddings?
Thanks a lot in advance!