How to run Rasa X on a Mac?

So it seems Rasa X is now only supported as a dockerized deployment, not the old pip-installable version.

However, the installer script is Ubuntu-dependent (it uses an Ansible playbook).

Is there a Docker image or some other way to run Rasa X locally on a Mac to test and train a bot I’m developing?

Moving this to the Rasa X category instead. Is anyone else using this tool, or at least previewing it locally?

If I were to set up some kind of VirtualBox image for Ubuntu on the Mac, would that enable the Rasa X installer to run? Sounds a bit inception-y though >.<

Hey @dcsan

Unfortunately, Rasa X is not meant to be run “locally”.

You’re right about the dockerized version though. Any platform that can run Docker can run Rasa X, so you can easily use VirtualBox (or any other virtualization tool) with Ubuntu inside and use our deployment script.
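As a rough sketch (the exact steps may differ from the official install docs), running the deployment script inside such an Ubuntu VM looks like this; the install.sh URL is the one linked later in this thread -

# download and run the Rasa X installer inside the Ubuntu VM (sketch only)
curl -sSL -o install.sh https://storage.googleapis.com/rasa-x-releases/0.25.2/install.sh
sudo bash ./install.sh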

But this is not a normal Docker container; there’s another build step with Ansible etc.: https://storage.googleapis.com/rasa-x-releases/0.25.2/install.sh

I’m not sure why you didn’t just provide a set of Docker images for RabbitMQ etc. (or whatever services you need) and a docker-compose script. Using Ansible to build a machine image that can only run on Ubuntu (or some virtual image) seems like an extra layer of virtualization just to run a virtual container…

It seems like this is really raising the barrier for people to check out Rasa X. Wondering what the benefit of this method is over pure Docker containers.
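To sketch the hypothetical workflow being asked for here (file and service names are assumptions, not something Rasa ships in this form): if the required services came as a plain docker-compose.yml, trying Rasa X on a Mac would reduce to something like -

docker-compose pull     # fetch the rasa-x, rasa, rabbitmq, database, ... images
docker-compose up -d    # start the whole stack locally on Docker for Mac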

Is the source code for Rasa X available somewhere?

@degiz, @dcsan Also curious why running Rasa X locally, or even inside a Docker container, was not deemed a use case.

Hi @ganeshv. Rasa X is intended to help you turn conversations into training data. The assumption was that it is neither easy nor scalable to collect conversations from users if Rasa X runs locally on a computer that is not always on or whose IP address is not reachable by other users (whether it was installed via pip or Docker). That setup caused many of our users to struggle to understand how to use Rasa X successfully.

Do you use Rasa X locally for something else or have you found a workflow for collecting conversations from users locally?

Hello @tyd, thanks for your reply :slight_smile:. I used Rasa X locally to investigate and play with the tool (using baseline data that I also have locally), so that while doing so I’m not messing with the production deployment or data. The intention was also to have a fully end-to-end system locally and, based on my observations with the tool, write test cases for my QA team (who would be using it to test bots in dev and staging environments).

If you have any alternatives that would help me with this, that would also be appreciated and I can use Rasa X solely to collect more conversations.

@ganeshv Investigating and playing with it is why we support it. You can continue to use it locally if you find it valuable :slight_smile:

I would check out setting up a Rasa X server. If you forward the conversations from your production deployment and connect Integrated Version Control to the Git branch you are using for dev or staging, you will always have the latest state of your assistant plus real conversations; rather than writing tests, you can just save them from conversations; and there is no worry about messing with the production deployment.
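As a minimal sketch (not the official Rasa X setup docs): in Rasa 1.x the assistant forwards conversation events through an event broker configured in endpoints.yml, which a Rasa X server can then consume; the host, credentials, and queue name below are placeholders -

# append a pika event broker section to the assistant's endpoints.yml (sketch)
cat <<'EOF' >> endpoints.yml
event_broker:
  type: pika
  url: rabbitmq.example.internal   # broker used by the Rasa X deployment (placeholder)
  username: user                   # placeholder
  password: password               # placeholder
  queue: rasa_production_events    # placeholder queue name
EOF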

Thanks @tyd. Will explore the server deployment later this week.

For the local deployment, however, I’m struggling to get Rasa X started. When I type in rasa x, I get the following error and Python aborts suddenly -

zsh: abort      rasa x

I’m using Rasa version 1.9.5 and Rasa X 0.27.4. I had initially thought that the two topics were related (Rasa X not being meant to run locally, and rasa x not running locally on my machine), but it looks like something else is at work here.

@ganeshv Does rasa shell work? Does it give you the same error if you create a clean virtual environment?
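A minimal sketch for testing in a clean environment, using the versions mentioned in this thread (adjust the paths and versions as needed) -

# create a fresh virtual environment and reinstall the same versions
python3 -m venv ./rasa-clean-venv
source ./rasa-clean-venv/bin/activate
pip install rasa==1.9.5
pip install rasa-x==0.27.4 --extra-index-url https://pypi.rasa.com/simple
rasa init --no-prompt   # creates and trains a fresh starter project
rasa shell              # check whether the abort still happens here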

Good point @tyd. I just tried rasa train before running rasa shell, to make sure that my trained model is from the same Rasa version, and I got a very similar result for rasa train -

Training Core model...
zsh: abort      rasa train

I uninstalled and reinstalled Rasa and Rasa X as well. Same result.

In fact, if I start from scratch, using rasa init on my command line, it fails again.

Welcome to Rasa! 🤖

To get started quickly, an initial project will be created.
If you need some help, check out the documentation at https://rasa.com/docs/rasa.
Now let's start! 👇🏽

? Please enter a path where the project will be created [default: current directory] .
Created project directory at '/Users/ganesh/rasa-errors-test'.
Finished creating project structure.
? Do you want to train an initial model? 💪🏽  Yes
Training an initial model...
Training Core model...
zsh: abort      rasa init

@ganeshv just a guess: are you running inside a Docker container? Then you might need to give it more memory/resources. I’ve seen things just abort when running PyTorch or TF without enough memory.
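If it is a Docker memory issue, one thing to try is raising the container’s memory limit (the values and image tag below are just examples; Docker Desktop for Mac also has a global limit under Preferences > Resources) -

# run training with a higher container memory limit (example values)
docker run -it --memory=4g --memory-swap=4g \
  -v "$(pwd):/app" rasa/rasa:1.9.5 train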

Also, for those that haven’t seen it, there’s an area in the Rasa docs on how to run a cut-down version of Rasa X. I can’t find the link now, but that may be another starting point.

Also, this chappie shared a repo with Docker configs for a Rasa X run: https://github.com/rgstephens/jokebot/tree/9f442a9fa88b241d1157e65cc911dff2c8c1379f

From my own point of view, I really just want to try out Rasa X before spinning up servers and doing all the DevOps work. It would be nice if there were a public sandbox version, even if it reset every 30 minutes! @tyd

@tyd - Dug into this issue more during the day and realized the issue is more around my zsh/bash or Python. Until I get more clarity on this, I’ll stop hijacking the thread :see_no_evil:.

I agree @dcsan - a simplified version of Rasa X, even for a limited workflow or duration, is what I was looking for.

@tyd Can we connect multiple instances of a bot that use the same repo as the codebase to a single instance of Rasa X? I think it works if we specify the same Rasa X parameters in the endpoints.yml file for the codebase, but I just wanted to be sure.

@ganeshv Can you describe what you mean by multiple instances of the same bot?

Hello @tyd, that’s a good question. To clarify, I was thinking of a setup where there is a single Rasa repo but multiple environments (or tenants) within which conversations happen. These conversations are all handled by the same logic represented by the model/repo.

I saw each bot used in each tenant as an instance of a master bot (repo).

@ganeshv Rasa X currently only supports the standard Rasa project layout. In Rasa Open Source, there is the MultiProjectImporter, but we are still thinking through how to best set up Rasa X to handle multiple projects, multiple languages, etc.

@tyd Now I’m confused. What do you mean by the standard Rasa project layout?

If you’re talking about the file/folder structure, that will still be identical for each bot deployed to each tenant. They all answer questions based on the same model stored in the repo (hence covering the same set of use cases and the same language). So these aren’t multiple projects per se, but the same project reused once per tenant.

What I was hoping this would do is collect conversations from all of these environments and track them in a single instance of Rasa X. For example, if environment A contains m conversations with bot R and environment B contains n conversations with the same bot R, then the single instance of Rasa X would track a total of m + n conversations.
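Purely to illustrate the setup being described (the thread does not confirm how Rasa X handles this): every tenant runs the same project with the same endpoints.yml, so each deployment’s events flow to the same place. The image tag, container names, and paths below are placeholders -

# start one Rasa server per tenant from the same project directory (sketch)
for tenant in tenant-a tenant-b; do
  docker run -d --name "bot-$tenant" \
    -v "$(pwd):/app" \
    rasa/rasa:1.9.5 run --enable-api --endpoints endpoints.yml
done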