Are RedisTrackerStore and RedisLockStore helpful?

Hello! I have been running experiments to check how a Rasa model (with some simple intents) performs with 10, 20, 50, etc. users messaging the bot concurrently in the background. Initially, I tested it with the InMemoryTrackerStore and InMemoryLockStore, measuring the time it takes to get a response back from the bot and recording it for each run.

It seems that as the users keep pinging the bot, the response delay gets longer. For example, with 10 users pinging the bot in the background, it takes 0.5 seconds for a user to receive a response at the beginning of the experiment; as those 10 users keep messaging the bot, the response time creeps up to 0.6 seconds, then 0.7, and so on as time goes by.

I then tested with the RedisTrackerStore and RedisLockStore, individually and together. However, the response time still gradually increases over time as the users keep pinging the bot. I thought the Redis tracker would fix this issue, so I am confused as to why this is happening. Have I done something wrong?

In-memory access is faster than Redis for the same data.

That’s not strange.

Did you expect the RedisTrackerStore to be faster than the InMemoryTrackerStore?

Hi, I updated my question so it's clearer. Do you mind telling me what you think about it now?

I am looking to prevent this increase in response time as the users keep messaging the bot over time.

Rasa predicts the next action based on the user's past messages.

The more messages you have, the more data Rasa needs to process, so responses get slower.

How much slower is hard to say; you can use a performance testing tool to measure it.
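
For instance, a minimal timing loop against Rasa's REST channel could look like the sketch below. This is not the original poster's harness, just an illustration: it assumes the `rest` connector is enabled in credentials.yml, the server runs on localhost:5005, and the sender id and message text are made up.

```python
# time_bot.py - rough latency-measurement sketch, not a real test harness.
import time

import requests

URL = "http://localhost:5005/webhooks/rest/webhook"  # default REST channel endpoint

def time_one_message(sender: str, text: str) -> float:
    """Send one message as `sender` and return the round-trip time in seconds."""
    start = time.perf_counter()
    requests.post(URL, json={"sender": sender, "message": text}, timeout=30)
    return time.perf_counter() - start

if __name__ == "__main__":
    # One user sending repeated messages: watch whether latency creeps up
    # as the tracker grows over the course of the conversation.
    for i in range(100):
        latency = time_one_message("user-1", "hello")
        print(f"message {i}: {latency:.3f}s")
```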

You can also see this issue.

I know this is an older thread but figured I would add my experience of this issue for future readers.

InMemoryTrackerStore and InMemoryLockStore are for situations where you have a single Rasa Core instance. I.e. if you have one instance for a chatbot, it can store trackers and perform conversation locks in its own memory without issue.

If you want to scale up to multiple Rasa instances for a single chatbot (i.e. load balancing), you will need a shared/external place to store trackers and lock conversations. In the case of the TrackerStore, all Rasa Core instances need access to the same trackers, allowing any instance to respond to any conversation (yay, no need for sticky sessions). In the case of the LockStore, each Rasa instance needs to be able to lock a conversation to ensure other instances don't respond to it and that messages are all processed in the correct order.

Either way you are storing and retrieving the same data. I would imagine InMemoryTrackerStore and InMemoryLockStore would generally be faster, but as they don't allow us to scale, we sometimes need an external tracker store/lock store like Redis/SQL/Mongo/etc.
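
For reference, wiring both stores to Redis in endpoints.yml looks roughly like this. The key names follow the Rasa docs; the url/port/db values are placeholders for your own Redis deployment:

```yaml
# endpoints.yml - example only; point these at your actual Redis instance
tracker_store:
  type: redis
  url: localhost
  port: 6379
  db: 0

lock_store:
  type: redis
  url: localhost
  port: 6379
  db: 1
```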

We had a similar issue with response time increasing due to the growing size of trackers throughout a conversation. Each tracker is essentially a stack/list of previous conversation events that lets the chatbot keep the conversation context, so trackers are expected to get bigger (and each user input actually equates to multiple events).

To tackle this we decided to create a custom tracker store using Firestore (GCP's NoSQL DB). In the custom tracker store code we trim the conversation history so that we only store the most recent n (e.g. 20) events. This seems to be working for us so far, but we will see when we scale up.

You should be able to do the same for Redis/Memory/whichever - have a look at the Rasa code for the tracker stores, create your own version that trims the tracker, then refer to the custom tracker's module and class name in your endpoints.yml (in "module.ClassName" form):

```yaml
tracker_store:
  type: custom_tracker.MyCustomTrackerStore
```

Hope the above is helpful and accurate. If anyone notices any issues please advise - my understanding needs some fine tuning.

The only thing I was unsure about was whether trimming would remove SlotSet events and result in slots no longer being populated, so in the end I avoided trimming SlotSet events (except for older SlotSet events for slots that have since been set again). I guess this means we will still get a slight increase in memory usage, but hopefully nothing too noticeable.
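
For anyone who wants a starting point, here is a rough, untested sketch of that trimming idea, using the same custom_tracker.MyCustomTrackerStore name as the endpoints.yml snippet above. It subclasses InMemoryTrackerStore for simplicity (swap in RedisTrackerStore or your own base as needed); the import paths assume Rasa 2.x (in 3.x, save() is async), and MAX_EVENTS is an arbitrary cutoff picked for illustration:

```python
# custom_tracker.py - rough sketch, not production code.
from collections import deque

from rasa.core.tracker_store import InMemoryTrackerStore
from rasa.shared.core.events import SlotSet

MAX_EVENTS = 20  # arbitrary cutoff; tune to your conversation lengths


class MyCustomTrackerStore(InMemoryTrackerStore):
    def save(self, tracker) -> None:
        events = list(tracker.events)
        recent = events[-MAX_EVENTS:]
        older = events[:-MAX_EVENTS]

        # From the older events, keep only the latest SlotSet per slot,
        # and drop even that if the same slot is set again in the recent
        # window - slots stay populated without keeping stale duplicates.
        recent_slot_keys = {e.key for e in recent if isinstance(e, SlotSet)}
        kept_slot_sets = {}
        for event in older:
            if isinstance(event, SlotSet) and event.key not in recent_slot_keys:
                kept_slot_sets[event.key] = event  # later events overwrite earlier ones

        tracker.events = deque(list(kept_slot_sets.values()) + recent)
        super().save(tracker)
```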