How to run Rasa X on a Mac?

Hello @tyd, thanks for your reply :slight_smile:. I used Rasa X locally to investigate and play with the tool (using baseline data that I also have locally), just so that, while doing so, I’m not messing with the production deployment or data. The intention was also to have a fully end-to-end system locally and, based on my observations with the tool, write test cases for my QA team (who would be using it to test bots in dev and staging environments).

If you have any alternatives that would help me with this, that would also be appreciated and I can use Rasa X solely to collect more conversations.

@ganeshv Investigating and playing with it is why we support it. You can continue to use it locally if you find it valuable :slight_smile:

I would check out setting up a Rasa X server, though. If you just forward the conversations from your production deployment and connect Integrated Version Control to the Git branch you are using for dev or staging, you will always have the latest state of your assistant + real conversations; rather than writing tests, you can just save them from conversations; and there is no worry about messing with the production deployment.
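As a rough sketch of the forwarding part (the host, credentials, and queue name below are placeholders for whatever your Rasa X deployment actually uses - please double-check them against the docs):

# Add a Pika event broker entry to the production deployment's endpoints.yml
# so conversation events get forwarded to Rasa X. All values are placeholders.
cat >> endpoints.yml <<'EOF'
event_broker:
  type: pika
  url: rabbit.example.com         # your Rasa X RabbitMQ host
  username: user                  # placeholder credentials
  password: password
  queue: rasa_production_events
EOF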

Thanks @tyd. Will explore the server deployment later this week.

For the local deployment however, I’m struggling to get rasa x started. When I type in rasa x, I get the following error and my Python also aborts suddenly -

zsh: abort      rasa x

I’m using Rasa 1.9.5 and Rasa X 0.27.4. I had initially thought that the two topics were related (Rasa X not being meant to run locally, and rasa x not actually running locally on my machine), but it looks like something else is at work here.

@ganeshv Does rasa shell work? Does it give you the same error if you create a clean virtual environment?
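Something along these lines, for example (the version pins just mirror what you mentioned, and the extra index URL is the one the install docs use; adjust as needed):

# Create a fresh virtual environment, reinstall matching versions, and retry
# the command that aborted.
python3 -m venv clean-venv
source clean-venv/bin/activate
pip install --upgrade pip
pip install rasa==1.9.5
pip install rasa-x==0.27.4 --extra-index-url https://pypi.rasa.com/simple
rasa init --no-prompt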

Good point @tyd. I just tried rasa train before running rasa shell to make sure that my trained model is from the same Rasa version, and I got a very similar result for rasa train -

Training Core model...
zsh: abort      rasa train

I uninstalled and re-installed rasa and rasa X as well. Same result.

In fact, if I start from scratch, using rasa init on my command line, it fails again.

Welcome to Rasa! 🤖

To get started quickly, an initial project will be created.
If you need some help, check out the documentation at https://rasa.com/docs/rasa.
Now let's start! 👇🏽

? Please enter a path where the project will be created [default: current directory] .
Created project directory at '/Users/ganesh/rasa-errors-test'.
Finished creating project structure.
? Do you want to train an initial model? 💪🏽  Yes
Training an initial model...
Training Core model...
zsh: abort      rasa init
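In case it helps narrow this down, here is what I plan to check next - just a guess that the abort comes from one of the heavy ML dependencies rather than from Rasa itself:

# Check whether importing the dependencies aborts on its own, outside of Rasa.
# If one of these crashes the same way, the problem is the library build on
# this machine rather than Rasa.
python -c "import tensorflow; print(tensorflow.__version__)"
python -c "import sklearn; print(sklearn.__version__)"
# And run training with verbose logging to see how far it gets:
rasa train --debug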

@ganeshv Just a guess - are you running inside a Docker container? If so, you might need to give it more memory/resources. I’ve seen things just abort like that when running PyTorch or TF.
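For example, something like this if you run the official image with plain docker run (the memory limits and image tag below are just examples):

# Raise the container's memory limit so TensorFlow has room to initialize;
# 4g is an arbitrary example, and the image tag is whatever version you use.
docker run -it -v "$(pwd):/app" \
  --memory=4g --memory-swap=4g \
  rasa/rasa:1.9.5-full train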

Also, for those who haven’t seen it, there’s an area in the Rasa docs on how to run a cut-down version of Rasa X. I can’t find the link now, but that may be another starting point.

Also, this chap shared a repo with Docker configs for a Rasa X run: https://github.com/rgstephens/jokebot/tree/9f442a9fa88b241d1157e65cc911dff2c8c1379f

From my own point of view, I really just want to try out Rasa X before spinning up servers and doing all the DevOps work. It would be nice if there were a public sandbox version, even if it reset every 30 minutes! @tyd

@tyd - Dug into this more during the day and realized the issue is more around my zsh/bash or Python setup. Until I get more clarity on this, I’ll stop hijacking the thread :see_no_evil:.

I agree @dcsan - a simplified version of Rasa X, even with a limited workflow or for a limited duration, is what I was looking for.

@tyd Can we connect multiple instances of a bot that use the same codebase/repo to a single instance of Rasa X? I think it works if we specify the same Rasa X parameters in the endpoints.yml file for the codebase, but I just wanted to be sure.

@ganeshv Can you describe what you mean by multiple instances of the same bot?

Hello @tyd, that’s a good question. To clarify, I was thinking of a setup where there is a single repo of Rasa, but multiple environments (or tenants) within which conversations happen. These conversations are responded to using the same kind of logic represented by the model/repo.

I saw each bot used in each tenant as an instance of a master bot (repo).

@ganeshv Rasa X currently only supports the standard Rasa project layout. In Rasa Open Source, there is the MultiProjectImporter, but we are still thinking through how to best set up Rasa X to handle multiple projects, multiple languages, etc.

@tyd Now I’m confused. What do you mean by the standard Rasa project layout?

In case you’re talking about the file/folder structure, that will still be identical for each bot deployed to each tenant. They are still answering questions based on the same model stored in the repo (hence, covering the same set of use cases and language). So these aren’t multiple projects per se, but the same project reused once per tenant.

What I was hoping this does is collect conversations from all of these environments and track them in a single instance of Rasa X. For example, if environment A contains m conversations with bot R and environment B contains n conversations with the same bot R, then the same instance of Rasa X would track a total of m + n conversations.

@ganeshv From the Rasa X docs:

For Rasa X to correctly visualize and modify your AI assistant’s data, your project needs to follow the default Rasa Open Source project layout created by rasa init

Okay. I think I understand what you are after now. Why are there different environments with the same bot though?

That is a great question as well and I realized I didn’t clarify this earlier. :slight_smile:

I’m looking to deploy, as an agency, the same bot to multiple test environments. These environments have nearly identical use cases, so the same bot would work. There are a couple of local configurations (like differences in timezone, etc.) that differ between those environments, which I’m handling through a slight modification in the build process (but the NER and dialogue management portions will remain consistent). The agency will fully own these test environments.

The other important reason is that it spreads the collection of usable conversations across multiple test environments, so conversations happening in environment A will improve the bot for both environments A and B.

@ganeshv Gotcha. You can definitely use rasa export to add all of the conversations to one Rasa X instance. I have not verified this, but you also might be able to connect multiple existing Rasa deployments to one Rasa X instance :thinking:
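For the export route, roughly (untested on my end, and endpoints.yml needs its event broker pointing at the one Rasa X instance that should collect everything):

# Run once per environment/tenant against that environment's tracker store.
# It reads the conversations from the tracker store and publishes them to
# the event broker configured in endpoints.yml.
rasa export --endpoints endpoints.yml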

Thanks @tyd! I’ll try to verify if this setup is stable and confirm.

@tyd a couple more questions -

  1. In your webinar, you mentioned that right now Integrated Version Control does not support any other file setup; the only supported setup involves a single nlu.md and a single stories.md file. Is there anything in the works that would allow us to use Integrated Version Control with a consistent file structure that breaks the nlu and stories files into modules?

  2. Within Rasa X, is it possible to compare evaluation metrics for different pipelines?

Can’t thank you enough for your help here, Ty!

Hey @ganeshv. Happy to help :slight_smile:

  1. We have added this capability. Split nlu.md and stories.md files now work with Integrated Version Control (see the example layout right after this list).

  2. Not within Rasa X. However, I would recommend setting up a CI pipeline that runs automated evaluations and posts the artifacts it produces somewhere accessible (e.g. a comment on a GitHub PR with Rasa NLU cross-validation results). Then, when you push changes from Rasa X or anywhere else, you can trigger this CI/CD pipeline (a sketch of those steps follows this list).
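For 1, an example of the kind of split layout that works now (the file names are made up):

# Illustrative only - nlu and stories split into several files under data/,
# which rasa train and Integrated Version Control pick up together:
#
#   data/
#   ├── nlu/
#   │   ├── chitchat.md
#   │   └── faq.md
#   └── stories/
#       ├── happy_paths.md
#       └── fallbacks.md
#
rasa train --data data/

And for 2, a minimal sketch of the evaluation steps such a pipeline could run; wiring the resulting reports into a PR comment is up to whichever CI system you use:

# Train and evaluate with the current config/pipeline; reports end up in the
# results/ directory by default. The tests/ path is just an example.
rasa train
rasa test nlu --cross-validation
rasa test core --stories tests/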
