Hello @tyd, thanks for your reply. I used Rasa X locally to investigate and play with the tool (on baseline data that I also keep locally), so that I'm not touching the production deployment or data. The intention was also to have a fully end-to-end system locally and, based on my observations with the tool, write test cases for my QA team (who would be using it to test bots in the dev and staging environments).
If you have any alternatives that would help me with this, that would also be appreciated and I can use Rasa X solely to collect more conversations.
Thanks @tyd. Will explore the server deployment later this week.
For the local deployment, however, I'm struggling to get Rasa X started. When I run `rasa x`, I get the following error and the Python process aborts:
zsh: abort rasa x
I’m using Rasa 1.9.5 and Rasa X 0.27.4. I had initially thought the two topics were related (Rasa X not being meant to run locally, and `rasa x` failing to run locally on my machine), but it looks like something else is at work here.
In fact, if I start from scratch with `rasa init` on my command line, it fails again:
Welcome to Rasa! 🤖
To get started quickly, an initial project will be created.
If you need some help, check out the documentation at https://rasa.com/docs/rasa.
Now let's start! 👇🏽
? Please enter a path where the project will be created [default: current directory] .
Created project directory at '/Users/ganesh/rasa-errors-test'.
Finished creating project structure.
? Do you want to train an initial model? 💪🏽 Yes
Training an initial model...
Training Core model...
zsh: abort rasa init
From my own point of view, I really just want to try out Rasa X before spinning up servers and doing all the DevOps work. It would be nice if there were a public sandbox version, even if it reset every 30 minutes! @tyd
@tyd Can we connect multiple instances of a bot, all built from the same repo, to a single instance of Rasa X? I think it works if we specify the same Rasa X parameters in the endpoints.yml file in the codebase, but I just wanted to be sure.
Hello @tyd, that’s a good question. To clarify, I was thinking of a setup with a single Rasa repo but multiple environments (or tenants) within which conversations happen. These conversations are all handled by the same logic, represented by the model/repo.
I see the bot deployed in each tenant as an instance of one master bot (repo).
@ganeshv Rasa X currently only supports the standard Rasa project layout. In Rasa Open Source, there is the MultiProjectImporter, but we are still thinking through how to best set up Rasa X to handle multiple projects, multiple languages, etc.
@tyd Now I’m confused. What do you mean by the standard Rasa project layout?
If you’re talking about the file/folder structure, that will still be identical for each bot deployed to each tenant. They all answer questions from the same model stored in the repo (and hence cover the same set of use cases and the same language). So these aren’t multiple projects per se, but the same project reused once per tenant.
What I was hoping for is that Rasa X would collect conversations from all of these environments and track them in a single instance. For example, if environment A contains m conversations with bot R and environment B contains n conversations with the same bot R, then a single Rasa X instance would track a total of m + n conversations.
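For what it's worth, my understanding (an assumption on my part, not something confirmed in this thread) is that in a server deployment each Rasa server forwards its conversation events to Rasa X through an event broker configured in endpoints.yml. If that's right, every tenant's deployment could point at the same broker, roughly like this sketch (host, credentials, and queue name are all placeholders):

```yaml
# endpoints.yml in each tenant's deployment (sketch; all values are placeholders)
event_broker:
  type: pika                      # RabbitMQ-backed event broker
  url: rabbitmq.example.com       # the same broker host for every tenant
  username: rasa_user
  password: rasa_password
  queue: rasa_production_events   # one queue consumed by the shared Rasa X instance
```

With all tenants publishing to the same queue, the single Rasa X instance would see the combined m + n conversations, which is the behavior I'm after.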
That is a great question as well and I realized I didn’t clarify this earlier.
As an agency, I’m looking to deploy the same bot to multiple test environments. These environments have nearly identical use cases, so the same bot works for all of them. A few environment-specific configurations (timezone differences, etc.) are handled through a slight modification in the build process, but the NER and dialogue management portions remain consistent. The agency fully owns these test environments.
The other important reason is that it pools usable conversations from multiple test environments, so conversations happening in environment A improve the bot for both environments A and B.
In your webinar, you mentioned that the Integrated Version Control does not currently support any other file setup: the only supported setup involves a single nlu.md file and a single stories.md file. Is there anything in the works that would let us use Integrated Version Control with a consistent file structure that breaks the NLU and stories files into modules?
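In the meantime, for training with Rasa Open Source itself (outside of Integrated Version Control), my understanding is that `rasa train` merges all training files it finds under the data directory, so a modular layout along these lines should work for local training (the file names below are just examples, not anything prescribed by Rasa):

```
data/
├── nlu/
│   ├── greetings.md
│   └── faq.md
└── stories/
    ├── happy_path.md
    └── fallback.md
```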
Within Rasa X, is it possible to compare evaluation metrics for different pipelines?
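While waiting for an answer on the Rasa X side: Rasa Open Source has a comparison mode in `rasa test nlu` that trains and evaluates several pipeline configs against the same NLU data, which covers part of this. A sketch of how I'd invoke it (the config file names are placeholders):

```shell
# Compare two pipeline configs on the same NLU data, repeated over several
# runs and training-data fractions to get comparable evaluation curves
rasa test nlu --config config_light.yml config_heavy.yml \
  --nlu data/nlu.md --runs 3 --percentages 0 25 50 75 90
```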