Error Custom Tracker with Postgres

Okay, I understand now. I will take a look at the underlying implementation of max_event_history, but if you no longer receive events beyond 40, perhaps you need to flush the tracker store once it reaches 40 events to see whether new events are coming in?

Because otherwise the predictions for Rasa Core won’t be correct.

@anne576 - okay, max_event_history sets a hard limit on the number of events stored in the tracker store, but your event broker will continue to pile up incoming events.
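To picture what that hard limit means: Rasa’s DialogueStateTracker keeps its events in a `collections.deque` created with `maxlen=max_event_history`, so once the limit is reached, appending a new event silently drops the oldest one rather than rejecting the new one. A minimal standalone sketch of that behaviour (illustrative only, not Rasa’s code):

```python
from collections import deque

MAX_EVENT_HISTORY = 5  # illustrative; the thread uses 40

events = deque(maxlen=MAX_EVENT_HISTORY)
for i in range(8):
    events.append(f"event_{i}")

# Only the 5 most recent events remain; the oldest were dropped.
print(list(events))  # ['event_3', 'event_4', 'event_5', 'event_6', 'event_7']
```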

You would need to flush out the tracker store once 40 events have been reached so that it has space for new ones. Could you try deleting all events from the tracker store and check whether new events now arrive, while also checking that the event broker keeps accumulating events (the original 40 plus whatever arrived since you deleted them from the tracker store)?
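One quick way to try the flush idea is to delete the conversation’s rows from the tracker store’s events table. Rasa’s SQLTrackerStore stores events in a table named `events` keyed by a `sender_id` column; the sketch below mimics that with an in-memory SQLite database (the `DELETE` statement is the same against Postgres), with the sender name and column subset chosen purely for illustration:

```python
import sqlite3

# Stand-in for the tracker store: an "events" table with a "sender_id"
# column, as in Rasa's SQLTrackerStore schema (simplified here).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, sender_id TEXT, type_name TEXT)"
)
conn.executemany(
    "INSERT INTO events (sender_id, type_name) VALUES (?, ?)",
    [("anne", f"event_{i}") for i in range(40)],
)

# "Flush" the tracker store: drop all stored events for this conversation
# so newly arriving events have room again.
conn.execute("DELETE FROM events WHERE sender_id = ?", ("anne",))
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(remaining)  # 0
```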

I am sorry, I don’t have a ready implementation of this to test, so this will be a bit of back and forth.


This solution worked. Thank you very much!

Sorry to ask about this again, but I realised there is an issue. You mentioned not passing event_broker at all in the super() call. That got rid of the error. However, even without the max_event_history line, no events are being sent to the Postgres tracker store at all. I am not sure what I can do to get a working customised Postgres tracker. Sorry for the inconvenience; I would be grateful for any other ideas on how to fix this.

I don’t think the two are related. An event broker is only used when you want to move events from the tracker store elsewhere; it shouldn’t prevent events from being stored in the tracker store itself.

Can you share the file with your custom tracker store implementation? I’m curious. You should implement all the methods like save, retrieve, etc.


myTracker.py (13.1 KB) This is the chunkier version of the Postgres tracker, which is a copy of SQLTrackerStore from the Rasa code. I took the max_event_history line out and will add it back once I get a working version of the tracker. The endpoints config is the one I posted at the beginning of this thread. Thank you for the help!

I actually managed to get the database connected to my tracker store, which is good. The super() call needed to be moved to just after the start of __init__, and I also put the config values for Postgres (username, password, etc.) directly in the Python file. Here is how it looks:

# SQL
class MyTrackerStore(SQLTrackerStore):
    """Store which can save and retrieve trackers from an SQL database."""
    def __init__(
        self,
        domain: Optional[Domain] = None,
        dialect: Text = "postgresql",
        host: Optional[Text] = None,
        port: Optional[int] = 5432,
        db: Text = "mydb",
        username: Text = "postgres",
        password: Text = "postgres",
        event_broker: Optional[EventBroker] = None,
        login_db: Optional[Text] = None,
        query: Optional[Dict] = None,
        **kwargs: Dict[Text, Any],
    ) -> None:
        super().__init__(domain, **kwargs)
        self.max_event_history = 20

        import sqlalchemy.exc

        engine_url = self.get_db_url(
            dialect, host, port, db, username, password, login_db, query
        )

        self.engine = sa.create_engine(engine_url, **create_engine_kwargs(engine_url))

        logger.debug(
            f"Attempting to connect to database via '{repr(self.engine.url)}'."
        )

        # Database might take a while to come up
        while True:
            try:
                # if `login_db` has been provided, use current channel with
                # that database to create working database `db`
                if login_db:
                    self._create_database_and_update_engine(db, engine_url)

                try:
                    self.Base.metadata.create_all(self.engine)
                except (
                    sqlalchemy.exc.OperationalError,
                    sqlalchemy.exc.ProgrammingError,
                ) as e:
                    # Several Rasa services started in parallel may attempt to
                    # create tables at the same time. That is okay so long as
                    # the first services finishes the table creation.
                    logger.error(f"Could not create tables: {e}")

                self.sessionmaker = sa.orm.session.sessionmaker(bind=self.engine)
                break
            except (
                sqlalchemy.exc.OperationalError,
                sqlalchemy.exc.IntegrityError,
            ) as error:
                logger.warning(error)
                sleep(5)

        logger.debug(f"Connection to SQL database '{db}' successful.")

However, I still ran into issues somehow. If I add the max_event_history line anywhere, for some reason it doesn’t limit the events. So max_event_history is taken in as a variable and is recognised in the tracker class, but it doesn’t cap the stored events at all.


Is it okay if I ask for your expertise, hopefully one last time on this :sweat_smile:? What do you think the issue might be?

In your code, it seems the super().__init__(domain, **kwargs) call runs before max_event_history is set; I suppose it should run after.

All the other self.method() calls are fine to run after __init__ because they are separate methods of the class, but arguments should be passed in during initialisation, so you can put max_event_history in the signature like this:

def __init__(
    self,
    domain: Optional[Domain] = None,
    dialect: Text = "postgresql",
    host: Optional[Text] = None,
    port: Optional[int] = 5432,
    db: Text = "mydb",
    username: Text = "postgres",
    password: Text = "postgres",
    event_broker: Optional[EventBroker] = None,
    login_db: Optional[Text] = None,
    query: Optional[Dict] = None,
    max_event_history: Optional[int] = 20,
    **kwargs: Dict[Text, Any],
) -> None:
    # Set the limit first, then hand off to the parent initialiser.
    self.max_event_history = max_event_history
    super().__init__(domain, **kwargs)
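To see why the order can matter, here is a toy sketch with stand-in classes (not Rasa’s real implementation): if the parent initialiser reads the attribute while it runs, the attribute must already be set before super().__init__() is called.

```python
class ToyTrackerStore:
    """Toy stand-in for the parent class; NOT Rasa's implementation."""

    def __init__(self):
        # Imagine the parent reads the attribute while initialising,
        # e.g. to size an internal event cache.
        self.cache_size = getattr(self, "max_event_history", None)


class BrokenStore(ToyTrackerStore):
    def __init__(self, max_event_history=20):
        super().__init__()  # parent runs first and sees no limit yet
        self.max_event_history = max_event_history


class FixedStore(ToyTrackerStore):
    def __init__(self, max_event_history=20):
        self.max_event_history = max_event_history
        super().__init__()  # parent runs after the limit is set

print(BrokenStore().cache_size)  # None
print(FixedStore().cache_size)   # 20
```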