REST API implementation

Hi All, I worked with Rasa a while back, and the docs have since changed. I'm trying to implement a REST API (POST) for accessing the chatbot through my website. For this I want to write a custom channel. Can anyone point me to a good tutorial or documentation for doing this?


Hi @chinnusujitha,

I am trying to do the same. I found a bunch of articles using different techniques, but haven't yet found the right solution. I will let you know if I succeed.

Hey @chinnusujitha, just follow the steps below to connect your bot to a custom channel using the REST API.

Step 1: ensure your credentials.yml has the following content:
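A minimal sketch of the relevant entry, assuming Rasa 1.x's built-in rest channel:

```yaml
# Enables Rasa's built-in REST input channel.
# No options are needed; the key just has to be present.
rest:
```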

Step 2: once you have trained your bot, start the bot server by running the command below:

rasa run -m models --enable-api --cors "*" --debug

Once you run this command, you will see the server start up in the terminal.

Step 3: once your server is up and running, you can access your bot through the REST API endpoint given below:
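Assuming the host and port from the rasa run command above, the rest channel's endpoint is:

```
POST http://localhost:5005/webhooks/rest/webhook
```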


Step 4: now you can test the API:
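A sketch of such a test with curl. The sender id and message are arbitrary; the stub server at the top only stands in for a running Rasa server so that the snippet is self-contained, since the REST channel answers with the same shape of JSON (a list of bot messages). With a real bot, skip the stub and run the curl against your rasa server.

```shell
# Stand-in for a running Rasa server, so this sketch is self-contained.
# It mimics the REST channel's response shape: a JSON list of bot messages.
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Stub(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = [{"recipient_id": body["sender"], "text": "Hey! How are you?"}]
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass

HTTPServer(("127.0.0.1", 5005), Stub).serve_forever()
EOF
server_pid=$!
sleep 1

# The actual test call: POST a user message to the REST webhook.
reply=$(curl -s -X POST http://localhost:5005/webhooks/rest/webhook \
             -H "Content-Type: application/json" \
             -d '{"sender": "test_user", "message": "Hi"}')
echo "$reply"

kill "$server_pid"
```

The response is a JSON list; each element holds the recipient id plus the bot's text (or image/buttons) for one reply message.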

You can try out my custom UI:

I hope this will help you :slight_smile:


Hi @JiteshGaikwad

Is it normal that the Two-Stage Fallback Policy doesn't work when using the REST API? It works when I'm using websockets.

Thank you

No, it should work

Okay, actually the problem is bigger than I thought:

Each time I have a question with buttons, clicking a button doesn't trigger the intent it should; the fallback policy fires instead.

When I try to send the HTTP request manually I get the same problem. Do you have any clue what the problem could be?

Otherwise, would you maybe have a working example so I can try to debug?

Thank you in advance

Hey @EvanD, I think the problem lies in the payload of the buttons. Whenever you click a button, make sure its payload is actually what gets sent.

Well, I've tried changing the format of the payload in the domain file, but in vain.

I’m not sure what you mean by triggered properly, but I’d say yes.

For example, I have an intent called UploadTracksFTP and a button whose title is "More than 1000" and whose payload is /UploadTracksFTP.

When I click on it, it sends this as the request payload:

{"message":"More than 1000","sender":"username"}

And I receive an empty list as the response payload.

Would you have any further explanation ?

Thank you for your time

Well, actually I realize the message in the request payload should be the intent trigger "/UploadTracksFTP", not the title of the button. Though I don't get why there was such an error; is that linked to a Rasa update or something?
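In other words (sender id arbitrary), the request body for that button click should carry the button's payload rather than its title:

```json
{"sender": "username", "message": "/UploadTracksFTP"}
```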

Anyway, I'm now able to do what I want. Thank you for your help.

Hey @EvanD, I am glad it worked out. It's not related to a Rasa update; it was due to the message you were sending as the payload. :slight_smile:

Actually I was referring to the payload as described here:

Sorry that I didn't mention it clearly previously. :relieved:

Hi @shreyask92,

Thanks for the reply. I would be thankful if you shared the solution with me once you succeed.

Hi @JiteshGaikwad,

Thanks for the reply. Yes, it is helpful for me.

I trained the NLU model using:

python3 -m rasa train nlu -c config.yml --fixed-model-name current -u data/data2Insights.json --out models -v -vv

And the Core model using:

python3 -m rasa train core -d domain.yml -s --debug --out models/dialogue -v

Then I ran the bot using this command:

rasa run -m models --enable-api --cors "*" --debug

Then I got a warning:

2019-07-11 12:58:23 WARNING rasa.core.interpreter - No local NLU model '/tmp/tmp26x19_ee/nlu' found. Using RegexInterpreter instead.

My bot behaved weirdly: if I say hi to it, it says bye. Can you please help me with this?

Also, if I want to run on a different host than localhost and a different port than the default (5005), where do I specify those settings?

You are seeing the above warning because no NLU model was found. Make sure you have the model tar file in the models folder.

If you want to change the default port of the Rasa server, pass the port number with the -p argument:

rasa run -m models --enable-api --cors "*" --debug -p 5004

Hi @JiteshGaikwad, I do have my NLU model in the models folder. Please see this screenshot.


And if I want to change localhost to some IP address, where do I change it?

I want something like this:


After the webhook I don't need rest/webhook again.

Thanks in Advance.

Hi @chinnusujitha,

When you run the bot you specify the location of the model using "-m models", and since you trained only Rasa Core, your latest model does not contain an NLU model. Instead use "rasa train --force <optional_parm>"; this creates the latest version of both Core and NLU in one single tar.gz file, and you will not see this error again when you run your bot.

Hi @shreyask92, thank you, that worked for me. But if I have the NLU model and Core model in separate archives (as shown in my screenshot above), how do I tell that to rasa run?

Hi All,

Can anyone help me with this?

@chinnusujitha the easiest way would be to manually unzip the files, move all the files from one into the other, and re-zip (then remove the core-). At the moment, I don't believe there is a way to load separate Core and NLU models; this was the point of merging the train method, to make combined Core and NLU models out of the box (of course you can still also use NLU-only or Core-only).
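A sketch of that merge with tar. The archive names are hypothetical; to keep the snippet runnable it first fabricates two dummy archives in place of the real nlu-*.tar.gz and core-*.tar.gz from your models folder:

```shell
set -e

# Fabricate two dummy model archives; in practice use the real
# nlu-*.tar.gz and core-*.tar.gz from your models folder.
mkdir -p demo/nlu-src/nlu demo/core-src/core
echo '{}' > demo/nlu-src/nlu/metadata.json
echo '{}' > demo/core-src/core/metadata.json
tar -czf demo/nlu-current.tar.gz -C demo/nlu-src .
tar -czf demo/core-current.tar.gz -C demo/core-src .

# The merge itself: unpack both archives into one directory,
# then re-pack that directory as a single combined model.
mkdir -p demo/merged
tar -xzf demo/nlu-current.tar.gz -C demo/merged
tar -xzf demo/core-current.tar.gz -C demo/merged
tar -czf demo/combined.tar.gz -C demo/merged .

ls demo/merged
```

The combined archive can then be passed to rasa run the same way as one produced by rasa train.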

The train method only re-trains whichever parts have been modified, so NLU will not retrain if you just add stories; therefore there's really no downside to creating combined models if you're using Core and NLU.

Is this API documented in the Rasa docs? I couldn't find it…