I have a question about setting up an NLG server. What I see in the docs under Responses only shows what the NLG server receives and what it sends back. What I don’t see clearly is how the domain file should be set up to work with an NLG server. My first thought is that you don’t include the templates section in domain.yml at all, and just declare the existence of your utter_ messages in the actions section of the domain. Is this correct?
example domain file 1 with templates excluded:
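For reference, a minimal sketch of what such a domain file might look like (the intent and action names below are placeholders, not from the original post):

```yaml
# Hedged sketch: responses are declared only under actions, with no
# templates section, so the text must come from the external NLG server.
intents:
  - greet
  - goodbye

actions:
  - utter_greet
  - utter_goodbye
```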
example domain file with templates included (without text):
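A hedged sketch of the alternative, with the templates section present but the response texts left empty (placeholder names again):

```yaml
# Hedged sketch: template keys are listed but carry no text,
# on the assumption the NLG server fills them in at runtime.
intents:
  - greet
  - goodbye

templates:
  utter_greet:
  utter_goodbye:

actions:
  - utter_greet
  - utter_goodbye
```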
To restate my question: how is the domain file set up to work with an NLG server, and are there any examples? (I couldn’t find any, just the NLG server file in the master repo.)
Secondly, can you mix using an NLG server and local templates, or is it an all-or-nothing solution?
example domain file with templates included (NLG mixed with local templates):
- text: see you later!
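A hedged sketch of what that mixed file could look like, keeping the one local template shown above (the other names are placeholders):

```yaml
templates:
  utter_greet:              # no local text: would come from the NLG server
  utter_goodbye:
    - text: see you later!  # local template

actions:
  - utter_greet
  - utter_goodbye
```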
hi @andrew.tangowork - yes, currently it’s ‘all or nothing’. Your initial guess is correct: you can leave out the templates from your domain if you’re using a custom NLG server.
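For completeness, pointing Rasa at a custom NLG server is done in endpoints.yml; a minimal sketch (the URL is an example, not from this thread):

```yaml
nlg:
  url: http://localhost:5055/nlg
```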
Is there any reason for this behaviour? It would be great to use Rasa’s embedded responses for some intents and an external endpoint for others.
that should be possible. The response selector is actually part of NLU, so the selected response should be part of the NLU output; you can use your NLG server to serve it
Thank you for your response
I’m actually on Rasa 1.9.5, and here is an extract of what goes from the NLU to the server:
- The found utter (template field) is correct.
- The found intent (intent.name field) is also correct.
- But the response_selector field is empty, and there is no trace of the text of the concerned utter in the NLU output.
When I comment out endpoints.yml, everything works and the utter text is retrieved by the bot.
It looks like the selected response is not sent to the external server. Am I missing something?
So I am struggling with the same issue. I think this is what @amn41 meant by ‘all or nothing’: if you include the webhook to the NLG server, Rasa passes all generated fields (in this format) from the user utterance to the NLG server, rather than using the normal rasa_core/rasa_actions machinery. If you comment it out, rasa_core/rasa_actions apply actions learned from the training stories/domain/NLU.
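To make the request/response shape concrete, here is a minimal sketch of the server-side logic, assuming the Rasa 1.x payload where the response name arrives in a "template" field. The lookup table and function name are hypothetical, and in practice you would wrap this in an HTTP POST endpoint:

```python
# Hypothetical stand-in for an external generation model: a fixed lookup table.
RESPONSES = {
    "utter_greet": "Hello there!",
    "utter_goodbye": "See you later!",
}

def handle_nlg_request(payload: dict) -> dict:
    """Map an NLG request body to a response body.

    Assumption: Rasa 1.x sends the response name in the "template" field
    (alongside the tracker, channel, and arguments).
    """
    template = payload.get("template", "")
    text = RESPONSES.get(template, "Sorry, I have no response for that.")
    # Rasa expects at least a "text" field in the reply.
    return {"text": text}

print(handle_nlg_request({"template": "utter_greet", "tracker": {}}))
```

The same mapping works regardless of which web framework serves it, since Rasa only cares about the JSON bodies.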
I am trying to create a slot-dependent switch that changes the bot’s state from using Rasa’s embedded responses (based on training data) to sending requests to the NLG server for bot responses.
that doesn’t seem right. Can you please create an issue with your config.yml and this output? thanks
hi @ahson - that’s helpful, thanks. I’d like to understand whether it would make sense for Rasa to support this. What’s the use case where you want this behaviour?
Hi @amn41, thanks for getting back to me!
In my use case, I basically have 3 conversational states.
The first state is “on-boarding”: I want to collect standardized information from the user (so something like FormAction would be ideal).
Once the form is filled, the following slots exist:
- Form_1_complete: True
- NLG_conv_complete: False
- Form_2_complete: False
The second state requires natural language generation from an outside model. The bot keeps passing utterances to the NLG model; the external model creates the appropriate HTTP response and replies until it decides to pass back a slot value (conversation_complete: True) in the response.
endpoints: core, nlg
If conversation_complete: True, the state in which utterances are passed to the NLG model stops.
endpoints: core, action
The third state would start another form to collect another set of standardized information (which could depend on analytics describing the interactions in state 2).
I considered the idea of having multiple bots to do this…
This type of slot switch for directing endpoints mid-conversation would really support building hybrid architectures.
Do you think there’s another way to achieve this in the current Rasa implementation? (Maybe using the HTTP API)
thanks! Would a simpler solution be to create another form for this? i.e. create a ‘dummy form’ ExternalNLGModel(FormAction), override the required_slots method, and just always call the external model until the ‘completed’ slot is set?
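A rough sketch of that idea in plain Python, with a tiny stand-in for the SDK tracker so the slot logic is visible without rasa_sdk installed; in a real action server this would live in a FormAction subclass, and the class and slot names here are illustrative:

```python
class MiniTracker:
    """Hypothetical stand-in for rasa_sdk.Tracker, exposing only get_slot()."""

    def __init__(self, slots):
        self._slots = slots

    def get_slot(self, name):
        return self._slots.get(name)


def required_slots(tracker):
    # While 'conversation_complete' is unset, the form reports one unfilled
    # required slot, so the form stays active and each user message can be
    # forwarded to the external NLG model. Once the external model sets the
    # slot, the form has no required slots left and deactivates.
    if tracker.get_slot("conversation_complete"):
        return []
    return ["conversation_complete"]


print(required_slots(MiniTracker({"conversation_complete": None})))
print(required_slots(MiniTracker({"conversation_complete": True})))
```

The design relies on the form machinery itself as the switch: as long as a slot is missing, the form keeps control of the conversation, which approximates the mid-conversation endpoint switch described above.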