I have a question about setting up an NLG server. The docs under Responses only show what the NLG server is sent and what it sends back; what I don't see clearly is how the domain file is set up to work with an NLG server. My first thought is that you don't include the templates section in domain.yml at all and just declare the existence of your utter_ messages in the actions section of the domain. Is this correct?
To restate my question: how is the domain file set up to work with an NLG server, and are there any examples? (I couldn't find any, just the NLG server file in the master repo.)
Secondly, can you mix an NLG server with local templates, or is it an all-or-nothing solution?
example domain file with templates included (NLG mixed with local templates):
```yaml
intents:
- greet
- goodbye

templates:
  utter_greeting:
  utter_goodbye:
  - text: see you later!

actions:
- utter_greeting
- utter_goodbye
```
Hi @andrew.tangowork - yes, currently it's all or nothing. Your initial guess is correct: you can leave the templates out of your domain if you're using a custom NLG server.
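For reference, a minimal sketch of the wiring: you point the `nlg` key of your endpoints.yml at the server (the URL and port here are placeholders) and keep only the action declarations in the domain, with no `templates:` section:

```yaml
# endpoints.yml
nlg:
  url: "http://localhost:5056/nlg"

# domain.yml (fragment) - utterances declared as actions only
actions:
- utter_greeting
- utter_goodbye
```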
Hi @amn41
Is there any reason for this behaviour? It would be great to use Rasa's embedded responses for some intents and an external endpoint for others.
That should be possible. The response selector is actually part of NLU, so the selected response is part of the NLU output, and you can use your NLG server to serve it.
So I am struggling with the same issue, and I think this is what @amn41 meant by 'all or nothing'. If you include the webhook to the NLG server, every response generated from a user utterance is passed (in this format) to the NLG server, rather than going through the normal rasa_core/rasa_actions machinery. If you comment it out, rasa_core/rasa_actions pick responses based on the actions learned from the training stories, domain, and NLU.
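To make the hand-off concrete, here is a minimal sketch of the dispatch logic such an NLG server might run. The payload shape (`template`, `tracker`, etc.) and the response dict mirror what the docs describe, but treat the exact field names as assumptions; `RESPONSES` is a hypothetical lookup table:

```python
# Hypothetical response table replacing the templates: section of domain.yml
RESPONSES = {
    "utter_greeting": "Hello, {name}!",
    "utter_goodbye": "see you later!",
}

def generate_response(payload):
    """Turn one NLG request payload into the reply dict Rasa expects.

    Assumes the request body looks roughly like
    {"template": "...", "tracker": {...}, "arguments": {}, "channel": {...}}.
    """
    template = payload.get("template", "")
    slots = payload.get("tracker", {}).get("slots", {})
    text = RESPONSES.get(template, "Sorry, I don't have a response for that.")
    try:
        text = text.format(**slots)
    except KeyError:
        pass  # leave placeholders unfilled if a slot is missing
    # Rasa expects at least a "text" field in the reply
    return {"text": text, "buttons": [], "image": None,
            "elements": [], "attachments": []}
```

In a real deployment you would wrap this in an HTTP POST handler (Flask, Sanic, etc.) at the URL configured in endpoints.yml.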
I am trying to create a slot-dependent switch that changes the bot's state from using Rasa's embedded responses (based on the training data) to sending requests to the NLG server for bot responses.
Hi @ahson - that's helpful, thanks. I'd like to understand whether it would make sense for Rasa to support this. What's the use case where you want this behaviour?
The second state requires natural language generation from an outside model. The bot keeps passing utterances to the NLG model; the external model creates the appropriate HTTP response and replies until it decides to pass back a slot value (conversation_complete: True) in the response.
endpoints: core, nlg
Once conversation_complete: True comes back, the state in which utterances are passed to the NLG model stops.
endpoints: core, action
A third state would start another form to collect another set of standardized information (which could depend on analytics describing the interactions in state 2).
I considered the idea of having multiple bots to do this…
This type of slot switch for directing endpoints mid-conversation would really support creating hybrid architectures.
Do you think there’s another way to achieve this in the current Rasa implementation? (Maybe using the HTTP API)
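Since the NLG endpoint sees the full tracker on every call, one way to approximate the switch today is to put the routing inside the NLG server itself. A sketch, where `conversation_complete` is the slot from the example above and both backend names are placeholders:

```python
def pick_backend(tracker):
    """Decide which generator should answer this turn.

    Routes to the external model until it sets conversation_complete,
    then falls back to canned (local-style) responses. The tracker is
    the dict Rasa posts with each NLG request.
    """
    slots = tracker.get("slots", {}) if tracker else {}
    if slots.get("conversation_complete"):
        return "local"      # serve the stored template text
    return "external"       # keep proxying to the external NLG model
```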
Thanks! Would a simpler solution be to create another form for this?
i.e. create a 'dummy form' ExternalNLGModel(FormAction), override the required_slots method, and just always call the external model until the 'completed' slot is set?