Rasa Core - Attention mechanism of Embedding Policy

I read your paper https://arxiv.org/abs/1811.11707 about the Embedding Policy.

I then looked through your source code implementing the Embedding Policy.

I have a question about your implementation of the interpolation gate in the attention mechanism.

As I understand it, the interpolation gate combines the content weighting with the previous *final* weighting, as shown below. The final weighting is the one produced by both content addressing and location addressing, while the content weighting is produced by content addressing alone.

`w_t^g = g_t * w_t^c + (1 - g_t) * w_{t-1}`

However, it looks like your interpolation gate combines the content weighting with the previous *content* weighting instead.

https://github.com/RasaHQ/rasa_core/blob/master/rasa_core/policies/tf_utils.py
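
For concreteness, here is a minimal NumPy sketch of the two variants being compared (the function and variable names are my own placeholders, not Rasa's code):

```python
import numpy as np

def interpolate_ntm(g, w_content, w_prev_final):
    # Standard interpolation gate: blend the current content
    # weighting with the previous *final* (post-location-addressing)
    # weighting.
    return g * w_content + (1.0 - g) * w_prev_final

def interpolate_as_implemented(g, w_content, w_prev_content):
    # What the question describes seeing in tf_utils.py: blend the
    # current content weighting with the previous *content* weighting.
    return g * w_content + (1.0 - g) * w_prev_content
```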

Is this a mistake? Or is there some intention behind it that I'm missing?

Thanks in advance :slight_smile:

Same question here. Can anybody clarify this?

Yes, it is done intentionally: we use time-limited attention, so the softmax probabilities are slightly time dependent.
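
If I read this answer right, the content weights at each step come from a softmax over a time-limited window, so their normalization set shifts from step to step, and interpolating content weights with content weights compares like with like. A toy NumPy sketch of that effect under that assumption (my own illustration, not Rasa's code; `window` is a placeholder parameter):

```python
import numpy as np

def time_limited_content_weights(scores, t, window=5):
    # Softmax over only the last `window` positions up to time t.
    # The normalization set shifts with t, so the probabilities at
    # consecutive steps are computed over different supports.
    start = max(0, t - window + 1)
    s = scores[start:t + 1]
    e = np.exp(s - s.max())
    w = np.zeros_like(scores, dtype=float)
    w[start:t + 1] = e / e.sum()
    return w
```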