Problems understanding the NLU components

I am starting to build the dataset for my NLU pipeline, and I have a few doubts. The bot I am working on will be used in production at the organization I work for.

  1. My dataset is going to contain a lot of dates and times, so how do I make the intent classifier aware of dates as entities? The queries will include many examples like:
  • book flight from Delhi to Mumbai on 18 august.
  • show flight options for 20-7-2020.
  • 25 dec flight options for Bangalore.

So, if I prepare training data as given below, will it work? Or is there a better way to define date entities? Would adding the “DucklingHTTPExtractor” to the pipeline help (since it is just an entity extractor)?

  • book flight from Delhi to Mumbai on [18 august](date).
  • show flight options for [20-7-2020](date).
  • [25 dec](date) flight options for Bangalore.
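For reference, this is the kind of pipeline I have in mind when I mention DucklingHTTPExtractor. It is only a sketch based on the Rasa 1.x docs; the URL assumes a Duckling server running locally, and `time` is Duckling’s dimension for dates/times:

```yaml
# config.yml -- sketch only, component names from the Rasa 1.x docs
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: CRFEntityExtractor
  - name: CountVectorsFeaturizer
  - name: EmbeddingIntentClassifier
  - name: DucklingHTTPExtractor
    url: http://localhost:8000     # assumes a locally running Duckling server
    dimensions: ["time"]           # Duckling's dimension for dates/times
```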

I have already raised this query twice on this forum. Please help.

  2. I read the documentation and looked at the example Rasa bots on GitHub, but I still don’t understand the purpose of lookup tables. How and where can I use lookup tables? It would be really helpful if someone could explain this.
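For context, the lookup-table format from the Rasa 1.x docs (as I understand it) is just a named list in the NLU training file; the entity name and city values below are my own examples:

```md
<!-- nlu.md -->
## lookup:city
- Delhi
- Mumbai
- Bangalore
```

What I don’t get is what the pipeline actually does with such a list once it is defined.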

  3. My last doubt is about regex. I used the regex example from the docs, which matches ZIP codes. I need a little clarity about its functionality.


  • [0-9]{5}

Let’s say my queries look like:

  • the location is 11007
  • my postal code is 11003

So, does the regex take the input, match the ZIP code, replace it with some token, and then pass the text further down the pipeline to make the pipeline aware of ZIP codes? Or is there some other purpose behind it?
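As a quick sanity check on the pattern itself, outside of Rasa, plain Python shows what `[0-9]{5}` matches in my example queries (this snippet is just an illustration of the pattern, not a claim about how Rasa uses it internally):

```python
import re

# The pattern from the docs example: exactly five consecutive digits
zip_pattern = re.compile(r"[0-9]{5}")

queries = [
    "the location is 11007",
    "my postal code is 11003",
]

for q in queries:
    match = zip_pattern.search(q)
    print(match.group() if match else None)
# prints 11007, then 11003
```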

For the second question, I found this blog post: Entity extraction with the new lookup table feature in Rasa NLU.