Run rasa server directly, not from a source environment?

I have successfully installed and run the Rasa server (Debian/AWS) from within a virtual environment by running:

source ./venv/bin/activate
cd ~/htdocs/<my folder for model>
rasa run -m models --enable-api --cors "*"

But the server quits whenever I close the terminal.

I suspect it has something to do with running within the environment, rather than directly.

How do I get it to run directly, or if that’s not the problem, how do I get the server to keep running when the terminal closes?
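
For what it's worth, one common shell-level approach on a Linux server is to detach the process with nohup (screen or tmux are alternatives); a minimal sketch reusing the same paths as above:

source ./venv/bin/activate
cd ~/htdocs/<my folder for model>
nohup rasa run -m models --enable-api --cors "*" > rasa.log 2>&1 &

Output then goes to rasa.log, and the process keeps running after the terminal closes, until it is killed or the machine reboots.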


@bferster you can create a Docker Compose environment.
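
A rough sketch of what that could look like for Rasa Open Source, assuming the trained model lives in ./models and using the public rasa/rasa image (pin the tag to your Rasa version):

cat > docker-compose.yml <<'EOF'
version: "3.4"
services:
  rasa:
    image: rasa/rasa:latest-full    # better: pin to the tag matching your local Rasa version
    ports:
      - "5005:5005"
    volumes:
      - ./:/app                     # project folder containing models/
    command: run -m models --enable-api --cors "*"
    restart: unless-stopped         # container comes back after crashes and reboots
EOF
docker-compose up -d                # start in the background, detached from the terminal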

is that persistent?

@bferster yes, I think so. It is container based, and it keeps running until you update or retrain the model.

For example, I have my WordPress website and my Rasa Open Source instance running 24x7 in a Docker-based environment, without any terminal open. If you are using Windows or Mac, you can easily start and stop the containers from Docker Desktop.
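
Assuming a compose setup like the one sketched above, the same start/stop control is also available from the command line:

docker-compose ps             # see whether the rasa container is up
docker-compose stop           # stop it without removing anything
docker-compose start          # bring it back up
docker-compose logs -f rasa   # follow the server logs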

is there any way to make the source environment I’m using now persistent?

@bferster for that you need to create a conda environment on the server; since the server runs 24x7, so will your chatbot. GitHub Actions might also help here; I have not implemented that personally, but you can explore it.

@bferster what is your current use case, and what resources do you have for your deployment and for the chatbot?

Keep in mind that a Helm Chart Installation or a Docker Compose Installation is better at handling multiple users than running rasa run, which is meant for Local Mode only.

To quickly test deployment, you can use the Quick Installation which is a quick way to do a Helm Chart Installation with a single command.

To understand how deployment works in detail, you can look at the Rasa Advanced Deployment Workshop. It explains how to deploy and manage a Helm Chart Installation, the architecture and role of each pod/container, etc.
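
For reference, the Helm route boils down to a handful of commands against the public rasa-x Helm chart; a rough sketch, where the namespace, release name, and values.yml are placeholders you would adapt:

kubectl create namespace rasa
helm repo add rasa-x https://rasahq.github.io/rasa-x-helm
helm repo update
helm install --namespace rasa --values values.yml my-release rasa-x/rasa-x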

It’s not really a chatbot. I’m working for a university to develop a teacher-training tool where teachers can simulate the classroom environment: www.lizasim.com

They talk to Rasa, and we come up with appropriate responses based on the intents and entities it returns.
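
With the server started as above (--enable-api), that round trip is essentially a single HTTP call; for example, the /model/parse endpoint returns the intent and entities for a message (the example text here is just an illustration):

curl -s -X POST http://localhost:5005/model/parse \
  -H "Content-Type: application/json" \
  -d '{"text": "Can anyone tell me what a fraction is?"}'

The response is JSON containing the top intent with a confidence score plus any extracted entities, which the front end can use to pick a reply.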


I’ve never worked with Python before. Other than Helm, Docker, or Conda, is there no way I can just execute the server directly and have it stay up?

@bferster your use case is very interesting. It means you want to give teachers a live classroom environment with different sets of questions (asked by students), but how do the teachers reply, or is it the other way around? Have you already built this use case, or is it still at the planning stage?


We have a training set of 43,000 pairs of teacher-student interactions that have been coded into 34 intents reflecting different ways to teach kids. The teachers are real people, and the students are avatars.
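
In Rasa's NLU training-data format, each of those coded intents would become a block of labelled examples, roughly like this (the intent names and example lines below are made up for illustration):

cat > data/nlu.yml <<'EOF'
version: "2.0"
nlu:
- intent: open_ended_question   # hypothetical intent name
  examples: |
    - What do you think would happen if we tried it another way?
- intent: praise                # hypothetical intent name
  examples: |
    - Great job working that out!
EOF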

@bferster So the teacher (a real person) provides the input, and the student (an avatar) replies based on the training dataset. Correct?

Yes. One of the responses is chosen based on the intent found in the live teacher’s remark, with the help of some contextual information. I can’t believe there isn’t a simpler way to make the server persistent than having to learn yet another technology!

@bferster can you answer a few questions of mine?

  1. What is your website platform (Wix, WordPress, etc.)?
  2. What is your front end for the chatbot?
  3. What is your server machine?

Thanks.

Sorry Mr Ferster, you will either need to deploy on a server using the above methods or keep your computer and terminal running at all times.

Again, the Quick Installation is a very simple command to deploy in one go. No need to learn Docker or Helm in depth! You will need a Linux server as described in the Requirements though.
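
If it helps, the Quick Installation at the time was literally a one-line script run on the Linux server (double-check the current docs for the exact URL before piping anything into sudo):

curl -s get-rasa-x.rasa.com | sudo bash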

Nonetheless, interesting project and good luck 🙂 We are always happy to help on the Forum!

I’m running a Debian/NodeJS server on Amazon AWS. The front end is primarily vanilla JavaScript with jQuery.

Chris, the link you shared is for Rasa X, but I’m using Rasa Open Source. Is there a tutorial for that?

It’s for Rasa X as well as Rasa Open Source. The Quick Installation will install both.

If you want Rasa Open Source only, you will have to go with the more complicated Docker Compose Installation, I believe.

Rasa X is just a nice interface to manage the chatbot 🙂

You can watch this tutorial which shows Quick Installation on GCP. Even though you use AWS, the process is similar.


@bferster Do check this as well; maybe it will help you: https://youtu.be/ko9-zPDuhQo

@bferster Also check this one: https://youtu.be/tasoWTGM1hA

@bferster Docker will be easy peasy for you, I believe.
