How to deploy Rasa on a server?

Hey,

I want to make my bot available to a few people who have access to a server. How do I manage that? I have no experience with servers, so I am interested in the most basic way to do it. So that you know how little I know: how do users then use the bot from their local machines (via a webpage?)? I have a GUI they can use.

Thanks!

Are you using your bot as a server? If yes, you can set up a server through Azure or AWS and clone your bot code there. I used a Windows OS for my server (Azure), but you can also opt for Linux. From there you run your bot on a port, and from your Azure/AWS console you open that port so the bot can be accessed publicly by any app as a URL. This is by far the easiest way to access your bot beyond localhost. You can also use ngrok if you want to host the bot locally on your machine; it will serve the bot from your local repository itself (good for testing, as an alternative to localhost). If you have a GUI, it is better to go for the HTTP server: https://core.rasa.com/http.html#http-server. I am also using my bot with a custom GUI, and it works well as a bot server!
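A small sketch of the "open that port so it can be accessed publicly" step: once the bot is running on the server, you can check from any machine whether the port is actually reachable. This is an illustrative helper, not part of Rasa; it assumes the bot listens on port 5005 (the default in this thread).

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (bot reachable)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After starting the bot on the server and opening the port in the
# Azure/AWS console, this should return True from any machine:
# port_open("<server-public-ip>", 5005)
```

If this returns False from outside the server but True on the server itself, the bot is running but the cloud firewall rule for the port is missing.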


Sorry, I am trying to understand your reply. What do you mean by "bot as a server"? I don't want to run a local server; at the moment I use ngrok for that. But with ngrok you cannot set up a real server for public access, can you?

From there you run your bot on a port, and from your Azure/AWS console you open that port so the bot can be accessed publicly by any app as a URL.

So on the server I effectively do the same as on my local machine with ngrok, i.e. open a port etc.? Does a console then need to stay open the whole time the bot is running?

How can I open a port and choose a specific URL so that other users can access the bot from their local machines?

What requirements are there on the server? Why do I have to use Azure or AWS?

I guess you did not go through the link to the docs I provided; that is why you are not getting the "bot as a server" part! As for Azure and AWS, these are just common choices when we talk about hosting an application; if you know any other service, you can use that. Regarding ngrok: yes, you will need to leave it open all the time. That is not the case with a server; the machine you choose simply should not be shut down. For how to open a port, go through the docs of whichever service you pick. In my case (Azure), this helped: https://stackoverflow.com/questions/21083782/windows-azure-virtual-machine-opening-a-port. As for requirements, you have to set up all the bot's dependencies (Python, pip, rasa-nlu/rasa-core, spaCy, etc.) to run the scripts. If you need any help, just ask!

Thanks! So, if I understand correctly, you just need to open a port, irrespective of the kind of server? How are multiple users handled then? What do I have to change for that?

I have my NLU and Core models, but I am really not familiar with servers, and it just blows my mind. I don't know how these two components have to interact with a server so that things work…

Are you using this as a synonym for "bot as a server"? https://core.rasa.com/http.html#http-server

So, is this already ready for real production?

I have the feeling you described two different ways in your first post: the first part was about the bot as a server, and at the end you linked to the HTTP server docs.

Hi, let's break your query into parts for better understanding, both for you and me.

For Core, multiple user requests are handled via POST /conversations/(str: sender_id)/parse, for example:

curl -XPOST localhost:5005/conversations/default/parse -d '{"query":"hello there"}' | python -mjson.tool

Here, in place of default, you have to put a user id that your GUI assigns to each user who logs in to your bot, to distinguish between multiple users.
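A minimal sketch of this per-user routing, assuming a Rasa Core HTTP server on localhost:5005. The helper names and the sender id are made up for illustration; the endpoint shape is the one shown above.

```python
import json
import urllib.request

def parse_endpoint(base_url, sender_id):
    """Build the per-user /parse endpoint; each GUI user gets their own sender_id."""
    return "{}/conversations/{}/parse".format(base_url, sender_id)

def send_message(base_url, sender_id, text):
    """POST a user message to the bot and return the decoded JSON reply."""
    req = urllib.request.Request(
        parse_endpoint(base_url, sender_id),
        data=json.dumps({"query": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Each user id yields a distinct conversation endpoint:
# parse_endpoint("http://localhost:5005", "alice")
#   -> "http://localhost:5005/conversations/alice/parse"
```

Your GUI would call send_message with whatever id it assigned at login, so two users never share a conversation.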

You have to go through the docs. After training your NLU + dialogue (Core) models, you can run your bot using Rasa's HTTP server command:

python -m rasa_core.server -d examples/babi/models/policy/current -u examples/babi/models/nlu/current_py2 -o out.log

Here the paths for the NLU and Core models are passed in; making your bot work on a server is done the same way as locally.

yes

If you are asking about my bot, then the answer is: not yet.

There is no "first" and "last"; the whole post is about the HTTP server for the bot!

Thanks, I appreciate your help. But I am rather confused about this HTTP server bot. It looks like I have to manually create the dialogue via those endpoints. So it seems you have to do some extra work to handle your bot via this server?

To start a conversation, send a POST request to the /conversations/<sender_id>/parse endpoint. <sender_id> is the conversation id (e.g. default if you just have one user, or the Facebook user id, or any other identifier).

I use this GUI: https://github.com/scalableminds/chatroom

How do I know the user ID?

I am confused about those curl commands. This seems like manual work; it looks like I have to type curl ... every time manually… Sorry, but maybe I am too stupid. :grin:

Is there maybe a toy example? I am a little lost when I read this:

If started as a HTTP server, Rasa Core will not handle output or input channels for you. That means you need to retrieve messages from the input channel (e.g. facebook messenger) and send messages to the user on your end.

Maybe you can give me some instructions on what I have to do for a test on a server, given my Core model (in Python) with trained dialogues and my GUI. :grinning:

You can run the URL in the browser; curl is basically just for that purpose.

My GUI used socket.io; from there I got a user id, which I passed on to my Rasa bot.

Handling input and output channels means handling the input and output for the bot. Here I passed the user query to Rasa like this: http://localhost:5005/conversations/rasa-itsm/parse?q=hello

For the bot's reply, I got a JSON response from which I mapped the intent associated with the user query, and for that particular intent block I sent a response from my Node app itself. You can do this in your actions.py file instead, but I had a predefined architecture for my native app, which also works with Dialogflow. I hope this gives you some clarity; if you have any more questions, just ask.
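The intent-to-reply mapping described here can be sketched roughly as below. This is an assumption-heavy illustration: the JSON shape mimics an old Rasa Core /parse response, and the canned replies are invented for the example.

```python
import json

# Hypothetical per-intent replies your app would send back to the user.
CANNED_REPLIES = {
    "greet": "Hello! How can I help you?",
    "goodbye": "Bye, talk to you later.",
}

def reply_for(parse_response):
    """Extract the intent name from the bot's JSON response and map it to a reply."""
    intent = (parse_response.get("tracker", {})
                            .get("latest_message", {})
                            .get("intent", {})
                            .get("name"))
    return CANNED_REPLIES.get(intent, "Sorry, I did not understand that.")

# Example response as it might come back from the /parse endpoint:
sample = json.loads('{"tracker": {"latest_message": {"intent": {"name": "greet"}}}}')
print(reply_for(sample))  # -> Hello! How can I help you?
```

Doing this mapping in actions.py keeps the dialogue logic inside Rasa; doing it in the front-end app (as described above) is a design choice that made sense for an app already wired to Dialogflow.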


I do roughly understand the points, but not the mechanisms by which they are connected! From your last post I gather there is a lot of stuff I have to do besides having my bot, but no one can tell me how to do it?

Handling input and output channels means handling the input and output for the bot. Here I passed the user query to Rasa like this: http://localhost:5005/conversations/rasa-itsm/parse?q=hello

Like this one. OK, yes. And how does this help me? What do I have to write, where, and in what kind of code?

For the bot's reply, I got a JSON response from which I mapped the intent associated with the user query, and for that particular intent block I sent a response from my Node app itself. You can do this in your actions.py file instead, but I had a predefined architecture for my native app, which also works with Dialogflow. I hope this gives you some clarity; if you have any more questions, just ask.

As I said, I am not even familiar with Node apps.

My GUI used socket.io; from there I got a user id, which I passed on to my Rasa bot.

But surely not automatically? What exactly did you do?

It shouldn't be so difficult to set up my bot on a server? Is there any instruction on how to do that, with an example? I am desperate; this should be in the docs… So, am I right that I have to write yet another dialogue management system for the server, on top of the one I already have with Core, to handle the responses for intents etc.? A guide on how to do that would be great.

Thanks, but I am a server newbie.


@datistiquo I'm running the bot on a server and getting this URL: http://localhost:5005… I want to change localhost to the server IP. How can I do that?

I also have this doubt.

@soundaraj we need to use the public IP of the server. If you don't know it, just go to the server and search "what is my IP"; you will find the public IP. Then open the particular port by setting inbound and outbound rules. If your server is in a corporate network, ask them to open the port from their end as well.

@mohan bro, please give me more details or some demo.

@soundaraj get the public IP of your server and open the port on the server (this can be done by system admins). Then, if you are using Rasa 1.x, use "rasa run"; if it's earlier than Rasa 1, use "python -m rasa_core.run --cors '*' -d models/dialogue -u models/nlu/current --port 5002 --credentials credentials.yml". You will get a URL; replace localhost with your public IP and it will work.
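The localhost-to-public-IP swap can be sketched like this. The helper and the example IP (203.0.113.10) are made up for illustration; the endpoint path is the one used earlier in the thread.

```python
from urllib.parse import urlsplit, urlunsplit

def with_public_host(url, public_ip):
    """Replace the host part of a URL (e.g. localhost) with the server's public IP."""
    parts = urlsplit(url)
    netloc = "{}:{}".format(public_ip, parts.port) if parts.port else public_ip
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

print(with_public_host("http://localhost:5005/conversations/default/parse",
                       "203.0.113.10"))
# -> http://203.0.113.10:5005/conversations/default/parse
```

Nothing changes on the server side; the bot still binds to the same port, and clients just use the public IP (with the port opened) instead of localhost.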