Deploying chatbot on webpage using webchat

I have built my first chatbot using Rasa and it works fine through the command line, but I have been trying to deploy it using webchat to no avail. I apologise in advance if I jumble up some terminology, but here are my main files:

credentials.yml

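(The file contents didn't survive; as a stand-in, a minimal socketio block for a Rasa 1.x + rasa-webchat setup typically looks like this, using the usual defaults rather than the exact original values:)

```yaml
# socketio channel config for the rasa-webchat widget (Rasa 1.x)
socketio:
  user_message_evt: user_uttered   # event the widget emits for user messages
  bot_message_evt: bot_uttered     # event the widget listens on for bot replies
  session_persistence: false
```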

endpoints.yml

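(Again a sketch with placeholder values; the important part is that the URL matches the port ‘rasa run actions’ listens on, 5055 by default:)

```yaml
# tell the Rasa server where the custom action server lives
action_endpoint:
  url: "http://localhost:5055/webhook"
```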

connect.py
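(The script didn't come through either; in Rasa 1.x a script of this shape usually looks something like the sketch below, where the model path, port, and event names are placeholders rather than the original values:)

```python
# bridge a packed Rasa 1.x model to a socket.io channel
from rasa.core.agent import Agent
from rasa.core.channels.socketio import SocketIOInput
from rasa.utils.endpoints import EndpointConfig

# load NLU and Core together from one packed model, so the agent can
# actually handle messages (loading only NLU is a known way to end up
# with "no agent to handle it")
agent = Agent.load(
    "models/20190701-123456.tar.gz",  # placeholder model path
    action_endpoint=EndpointConfig(url="http://localhost:5055/webhook"),
)

# event names must match the widget's configuration
input_channel = SocketIOInput(
    user_message_evt="user_uttered",
    bot_message_evt="bot_uttered",
    namespace=None,
)

# serve the channel on a port that doesn't clash with `rasa run` (5005)
agent.handle_channels([input_channel], http_port=5004)
```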

I use ‘rasa run’ to start the server and ‘rasa run actions’ to start the action server. I also execute connect.py. Altogether I am able to see the widget and send a message, but there is no reply. This is the log from ‘rasa run’:

This goes on similarly.

This is what I get when connect.py is run:

I’m probably messing something up, but I have no clue how to fix it or where to look for the error. Thank you in advance for your help! 🙂

Hey there! The relevant log here is `Ignoring message as there is no agent to handle it`. Sometimes this happens if, for some reason, only your NLU model is loading and not your Core. Have you confirmed that you can talk to that agent without the channel?
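One quick way to check, assuming the rest channel is also listed in credentials.yml, is to post straight to the REST webhook that ‘rasa run’ exposes and see whether any bot responses come back:

```python
# sanity check against a running `rasa run` server (default port 5005);
# requires a `rest:` entry in credentials.yml
import requests

resp = requests.post(
    "http://localhost:5005/webhooks/rest/webhook",
    json={"sender": "test_user", "message": "hello"},
)
print(resp.json())  # list of bot replies; empty if no agent handled the message
```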

What versions are you running btw?

Thanks for your time! I am using Rasa 1.1.4 and Python 3.6.8

I’m not sure how to confirm that, unless you mean using ‘rasa shell’ to converse with the bot, which I have done: it seems to work perfectly and calls custom actions successfully. For that I don’t use the socket.io credentials and don’t run connect.py, only the action server. Everything else remains the same.

Also adding the webchat widget code that I put in the body of the HTML for the page, following the README on the webchat GitHub. Maybe the connection issue stems from the ports being used?
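(The snippet itself got lost, but per the rasa-webchat README it looks roughly like this; the CDN version, titles, and socketUrl below are placeholders, and socketUrl has to point at the host/port where the socketio channel is served, i.e. connect.py, not the action server:)

```html
<div id="webchat"></div>
<script src="https://storage.googleapis.com/mrbot-cdn/webchat-0.5.8.js"></script>
<script>
  WebChat.default.init({
    selector: "#webchat",
    initPayload: "/get_started",        // payload sent when the chat starts
    socketUrl: "http://localhost:5004", // must match connect.py's http_port
    socketPath: "/socket.io/",
    title: "My Bot",
    subtitle: "Rasa webchat demo",
  });
</script>
```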

Hm, but if you’re not using connect.py to test it, then it doesn’t really matter if rasa shell works, right? Because connect.py is attempting to train and load a different model.

I thought that while loading the agent through connect.py, if I give the interpreter the path to the latest model, it loads the same one used by rasa shell?

Well, it depends: are you passing the same path to that model in rasa shell, or just letting shell pick the model? Because if you don’t pass a `-m` path, it will use the most recently trained model.
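For example (the model filename here is just a placeholder):

```
rasa shell -m models/20190701-123456.tar.gz
```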

Every time I train a new model, I change the path given to the interpreter. So it should be using the same model.

@nikitajain18 could you please tell me how you get this response from the rasa run command? I am unable to get more than a “starting rasa-core server at…”, although I also run an equivalent of your connect.py. How do you link these two, please?