Object of type 'MaxHistoryTrackerFeaturizer' is not JSON serializable

While training Rasa Core, I am getting the following error.

Here is my config.yml file:

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en
pipeline: supervised_embeddings

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
  - name: "KerasPolicy"
    featurizer:
    - name: MaxHistoryTrackerFeaturizer
      max_history: 5
      state_featurizer:
        - name: BinarySingleStateFeaturizer
  - name: "MemoizationPolicy"
    max_history: 5
  - name: "FallbackPolicy"
    nlu_threshold: 0.4
    core_threshold: 0.3
    fallback_action_name: "utter_default"

1409/1409 [==============================] - 1s 356us/sample - loss: 0.0441 - acc: 0.9908
Epoch 100/100
1409/1409 [==============================] - 0s 268us/sample - loss: 0.0462 - acc: 0.9886
2019-08-16 11:19:33 INFO     rasa.core.policies.keras_policy  - Done fitting keras policy model
Processed trackers: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 657.52it/s, # actions=105]
Processed actions: 105it [00:00, 5027.53it/s, # examples=105]
2019-08-16 11:19:34 INFO     rasa.core.agent  - Persisted model to '/tmp/tmp5vuuy737/core'
Core model training completed.
Traceback (most recent call last):
  File "/home/vinbox/Documents/environments/my_env/bin/rasa", line 11, in <module>
    sys.exit(main())
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/__main__.py", line 76, in main
    cmdline_arguments.func(cmdline_arguments)
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/cli/train.py", line 112, in train_core
    kwargs=extract_additional_arguments(args),
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/train.py", line 243, in train_core
    kwargs=kwargs,
  File "uvloop/loop.pyx", line 1417, in uvloop.loop.Loop.run_until_complete
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/train.py", line 299, in train_core_async
    kwargs=kwargs,
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/train.py", line 337, in _train_core_with_validated_data
    new_fingerprint = await model.model_fingerprint(file_importer)
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/model.py", line 216, in model_fingerprint
    config, include_keys=CONFIG_MANDATORY_KEYS_CORE
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/model.py", line 241, in _get_hash_of_config
    return get_dict_hash(sub_config)
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/core/utils.py", line 357, in get_dict_hash
    return md5(json.dumps(data, sort_keys=True).encode(encoding)).hexdigest()
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'MaxHistoryTrackerFeaturizer' is not JSON serializable

Later on, when I run Rasa Core, I get “No policy ensemble or domain set. Skipping action prediction and execution.” and the bot does not respond.

Please help.

I couldn’t reproduce this issue yet; my model trained fine using the above configuration. The error suggests that the configuration dict contains an object as part of the policy configuration, when it should really contain a string.
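In plain Python terms, the failure looks like the following sketch (the featurizer class here is a hypothetical stand-in, not the real class from rasa.core.featurizers):

import json

class MaxHistoryTrackerFeaturizer:  # hypothetical stand-in
    pass

config = {
    "policies": [
        {"name": "KerasPolicy", "featurizer": [MaxHistoryTrackerFeaturizer()]}
    ]
}

# json.dumps handles strings, numbers, lists and dicts, but not arbitrary
# objects, so this raises:
# TypeError: Object of type 'MaxHistoryTrackerFeaturizer' is not JSON serializable
json.dumps(config, sort_keys=True)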

Which version of Rasa do you have installed in that python env (my_env)?

Thanks for your reply.

(my_env) (base) vinbox@vinbox-virtual-machine:~/rasa-master$ pip freeze | grep rasa
rasa==1.2.3
rasa-core==0.14.5
rasa-core-sdk==0.14.0
rasa-nlu==0.15.1
rasa-sdk==1.2.0

Can you please tell me where I should make a change to overcome this problem?

One more query: what is the actual purpose of using ‘MaxHistoryTrackerFeaturizer’ in general?

Do you mind uninstalling rasa-core, rasa-nlu and rasa-core-sdk? They might provide conflicting modules that carry the same names as modules in rasa.
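For reference, something like this should remove them (pip uninstall accepts several packages at once, and -y skips the confirmation prompts):

pip uninstall -y rasa-core rasa-nlu rasa-core-sdk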

Which command do you use for training?

I have the same question. Have you solved the problem yet?

I have uninstalled rasa-core, rasa-nlu and rasa-core-sdk. I am still getting the same error while training Rasa Core. I am using the rasa train nlu and rasa train core commands for training.

(my_env) (base) vinbox@vinbox-virtual-machine:~/rasa-master$ rasa train core
Training Core model…
2019-08-30 11:26:05 INFO     root  - Generating grammar tables from /usr/lib/python3.6/lib2to3/Grammar.txt
2019-08-30 11:26:05 INFO     root  - Generating grammar tables from /usr/lib/python3.6/lib2to3/PatternGrammar.txt
Processed Story Blocks: 100%|████████| 25/25 [00:00<00:00, 1171.04it/s, # trackers=1]
Processed Story Blocks: 100%|████████| 25/25 [00:00<00:00, 75.79it/s, # trackers=20]
Processed Story Blocks: 100%|████████| 25/25 [00:00<00:00, 33.92it/s, # trackers=35]
Processed Story Blocks: 100%|████████| 25/25 [00:00<00:00, 32.85it/s, # trackers=39]
Processed trackers: 100%|████████| 523/523 [00:02<00:00, 232.66it/s, # actions=1615]


Layer (type)                 Output Shape              Param #
=================================================================
masking (Masking)            (None, 5, 49)             0
_________________________________________________________________
lstm (LSTM)                  (None, 32)                10496
_________________________________________________________________
dense (Dense)                (None, 26)                858
_________________________________________________________________
activation (Activation)      (None, 26)                0
=================================================================
Total params: 11,354
Trainable params: 11,354
Non-trainable params: 0


2019-08-30 11:26:12 INFO     rasa.core.policies.keras_policy  - Fitting model with 1615 total samples and a validation split of 0.1
Epoch 1/100
1615/1615 [==============================] - 2s 1ms/sample - loss: 2.8321 - acc: 0.3585
Epoch 2/100
1615/1615 [==============================] - 0s 301us/sample - loss: 2.3385 - acc: 0.4167
[... epochs 3-99 elided ...]
Epoch 100/100
1615/1615 [==============================] - 0s 293us/sample - loss: 0.0484 - acc: 0.9851
2019-08-30 11:27:01 INFO     rasa.core.policies.keras_policy  - Done fitting keras policy model
Processed trackers: 100%|████████| 23/23 [00:00<00:00, 453.11it/s, # actions=153]
Processed actions: 153it [00:00, 3542.31it/s, # examples=153]
2019-08-30 11:27:03 INFO     rasa.core.agent  - Persisted model to '/tmp/tmplq4j9prj/core'
Core model training completed.
Traceback (most recent call last):
  File "/home/vinbox/Documents/environments/my_env/bin/rasa", line 11, in <module>
    sys.exit(main())
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/__main__.py", line 76, in main
    cmdline_arguments.func(cmdline_arguments)
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/cli/train.py", line 112, in train_core
    kwargs=extract_additional_arguments(args),
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/train.py", line 243, in train_core
    kwargs=kwargs,
  File "uvloop/loop.pyx", line 1417, in uvloop.loop.Loop.run_until_complete
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/train.py", line 299, in train_core_async
    kwargs=kwargs,
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/train.py", line 337, in _train_core_with_validated_data
    new_fingerprint = await model.model_fingerprint(file_importer)
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/model.py", line 216, in model_fingerprint
    config, include_keys=CONFIG_MANDATORY_KEYS_CORE
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/model.py", line 241, in _get_hash_of_config
    return get_dict_hash(sub_config)
  File "/home/vinbox/Documents/environments/my_env/lib/python3.6/site-packages/rasa/core/utils.py", line 357, in get_dict_hash
    return md5(json.dumps(data, sort_keys=True).encode(encoding)).hexdigest()
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'MaxHistoryTrackerFeaturizer' is not JSON serializable
(my_env) (base) vinbox@vinbox-virtual-machine:~/rasa-master$

Sorry, no solution yet.

@Ghostvv has been looking into this and can probably give an update.

OK, thank you.

Currently there is a bug. Do you mind creating an issue in the Rasa GitHub repository?

For now, you can directly pass max_history:

policies:
  - name: "KerasPolicy"
    max_history: 5
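
Applied to the config from the top of this thread, the workaround would presumably look like this (when only max_history is given, KerasPolicy should fall back to its default MaxHistoryTrackerFeaturizer with a BinarySingleStateFeaturizer):

policies:
  - name: "KerasPolicy"
    max_history: 5
  - name: "MemoizationPolicy"
    max_history: 5
  - name: "FallbackPolicy"
    nlu_threshold: 0.4
    core_threshold: 0.3
    fallback_action_name: "utter_default"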

I solved this problem like this:
Add a class to the rasa/core/utils.py file. The class looks like this:

class MyEncoder(json.JSONEncoder):
    """Fall back to hashing any object that json cannot serialize itself."""

    def default(self, obj):
        # json.dumps() only calls default() for values it cannot serialize
        # on its own, e.g. an instantiated MaxHistoryTrackerFeaturizer.
        # `Dict` is already imported from typing at the top of utils.py.
        if not isinstance(obj, Dict):
            # return obj.__class__.__name__
            return hash(obj)
        else:
            return super(MyEncoder, self).default(obj)

Then modify get_dict_hash() to:

def get_dict_hash(data: Dict, encoding: Text = "utf-8") -> Text:
    """Calculate the md5 hash of a dictionary.

    The custom MyEncoder class is passed in here via cls=MyEncoder.
    """
    return md5(
        json.dumps(data, cls=MyEncoder, sort_keys=True).encode(encoding)
    ).hexdigest()
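
If you want to sanity-check the patched hashing outside of Rasa, here is a minimal sketch; the MaxHistoryTrackerFeaturizer below is a hypothetical stand-in, not the real class from rasa:

import json
from hashlib import md5

class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        # Reduce anything json cannot serialize itself to its Python hash.
        if not isinstance(obj, dict):
            return hash(obj)
        return super().default(obj)

class MaxHistoryTrackerFeaturizer:  # hypothetical stand-in
    pass

data = {"policies": [{"name": "KerasPolicy",
                      "featurizer": MaxHistoryTrackerFeaturizer()}]}

# Without cls=MyEncoder this raises the TypeError from the traceback;
# with it, the featurizer object is reduced to an int before encoding.
print(md5(json.dumps(data, cls=MyEncoder, sort_keys=True).encode("utf-8")).hexdigest())

One caveat: the default hash() of a plain object is derived from its id(), so the fingerprint changes on every run. Training completes, but fingerprint-based caching of unchanged Core models will not work reliably with this patch.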

The whole rasa/core/utils.py file then looks like this:

# -*- coding: utf-8 -*-
import argparse
import asyncio
import json
import logging
import re
import sys
from pathlib import Path
from typing import Union
from asyncio import Future
from hashlib import md5, sha1
from io import StringIO
from typing import Any, Dict, List, Optional, Set, TYPE_CHECKING, Text, Tuple, Callable

import aiohttp
from aiohttp import InvalidURL
from sanic import Sanic
from sanic.views import CompositionView

import rasa.utils.io as io_utils
from rasa.utils.endpoints import read_endpoint_config


# backwards compatibility 1.0.x
# noinspection PyUnresolvedReferences
from rasa.utils.endpoints import concat_url

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
   from random import Random


def configure_file_logging(logger_obj: logging.Logger, log_file: Optional[Text]):
   if not log_file:
       return

   formatter = logging.Formatter("%(asctime)s [%(levelname)-5.5s]  %(message)s")
   file_handler = logging.FileHandler(log_file)
   file_handler.setLevel(logger_obj.level)
   file_handler.setFormatter(formatter)
   logger_obj.addHandler(file_handler)


def module_path_from_instance(inst: Any) -> Text:
   """Return the module path of an instance's class."""
   return inst.__module__ + "." + inst.__class__.__name__


def dump_obj_as_json_to_file(filename: Text, obj: Any) -> None:
   """Dump an object as a json string to a file."""

   dump_obj_as_str_to_file(filename, json.dumps(obj, indent=2))


def dump_obj_as_str_to_file(filename: Text, text: Text) -> None:
   """Dump a text to a file."""

   with open(filename, "w", encoding="utf-8") as f:
       # noinspection PyTypeChecker
       f.write(str(text))


def subsample_array(
   arr: List[Any],
   max_values: int,
   can_modify_incoming_array: bool = True,
   rand: Optional["Random"] = None,
) -> List[Any]:
   """Shuffles the array and returns `max_values` number of elements."""
   import random

   if not can_modify_incoming_array:
       arr = arr[:]
   if rand is not None:
       rand.shuffle(arr)
   else:
       random.shuffle(arr)
   return arr[:max_values]


def is_int(value: Any) -> bool:
   """Checks if a value is an integer.

   The type of the value is not important, it might be an int or a float."""

   # noinspection PyBroadException
   try:
       return value == int(value)
   except Exception:
       return False


def lazyproperty(fn):
   """Allows to avoid recomputing a property over and over.

   Instead the result gets stored in a local var. Computation of the property
   will happen once, on the first call of the property. All succeeding calls
   will use the value stored in the private property."""

   attr_name = "_lazy_" + fn.__name__

   @property
   def _lazyprop(self):
       if not hasattr(self, attr_name):
           setattr(self, attr_name, fn(self))
       return getattr(self, attr_name)

   return _lazyprop


def one_hot(hot_idx, length, dtype=None):
   import numpy

   if hot_idx >= length:
       raise ValueError(
           "Can't create one hot. Index '{}' is out "
           "of range (length '{}')".format(hot_idx, length)
       )
   r = numpy.zeros(length, dtype)
   r[hot_idx] = 1
   return r


def str_range_list(start, end):
   return [str(e) for e in range(start, end)]


def generate_id(prefix="", max_chars=None):
   import uuid

   gid = uuid.uuid4().hex
   if max_chars:
       gid = gid[:max_chars]

   return "{}{}".format(prefix, gid)


def request_input(valid_values=None, prompt=None, max_suggested=3):
   def wrong_input_message():
       print (
           "Invalid answer, only {}{} allowed\n".format(
               ", ".join(valid_values[:max_suggested]),
               ",..." if len(valid_values) > max_suggested else "",
           )
       )

   while True:
       try:
           input_value = input(prompt) if prompt else input()
           if valid_values is not None and input_value not in valid_values:
               wrong_input_message()
               continue
       except ValueError:
           wrong_input_message()
           continue
       return input_value


class MyEncoder(json.JSONEncoder):
   """模型指纹化时,检查字典类型"""

   def default(self, obj):
       if not isinstance(obj, Dict):
           # return obj.__class__.__name__
           return hash(obj)
       else:
           return super(MyEncoder, self).default(obj)


# noinspection PyPep8Naming
class HashableNDArray(object):
   """Hashable wrapper for ndarray objects.

   Instances of ndarray are not hashable, meaning they cannot be added to
   sets, nor used as keys in dictionaries. This is by design - ndarray
   objects are mutable, and therefore cannot reliably implement the
   __hash__() method.

   The hashable class allows a way around this limitation. It implements
   the required methods for hashable objects in terms of an encapsulated
   ndarray object. This can be either a copied instance (which is safer)
   or the original object (which requires the user to be careful enough
   not to modify it)."""

   def __init__(self, wrapped, tight=False):
       """Creates a new hashable object encapsulating an ndarray.

       wrapped
           The wrapped ndarray.

       tight
           Optional. If True, a copy of the input ndarray is created.
           Defaults to False.
       """
       from numpy import array

       self.__tight = tight
       self.__wrapped = array(wrapped) if tight else wrapped
       self.__hash = int(sha1(wrapped.view()).hexdigest(), 16)

   def __eq__(self, other):
       from numpy import all

       return all(self.__wrapped == other.__wrapped)

   def __hash__(self):
       return self.__hash

   def unwrap(self):
       """Returns the encapsulated ndarray.

       If the wrapper is "tight", a copy of the encapsulated ndarray is
       returned. Otherwise, the encapsulated ndarray itself is returned."""
       from numpy import array

       if self.__tight:
           return array(self.__wrapped)

       return self.__wrapped


def _dump_yaml(obj, output):
   import ruamel.yaml

   yaml_writer = ruamel.yaml.YAML(pure=True, typ="safe")
   yaml_writer.unicode_supplementary = True
   yaml_writer.default_flow_style = False
   yaml_writer.version = "1.1"

   yaml_writer.dump(obj, output)


def dump_obj_as_yaml_to_file(filename: Union[Text, Path], obj: Dict) -> None:
   """Writes data (python dict) to the filename in yaml repr."""
   with open(str(filename), "w", encoding="utf-8") as output:
       _dump_yaml(obj, output)


def dump_obj_as_yaml_to_string(obj: Dict) -> Text:
   """Writes data (python dict) to a yaml string."""
   str_io = StringIO()
   _dump_yaml(obj, str_io)
   return str_io.getvalue()


def list_routes(app: Sanic):
   """List all the routes of a sanic application.

   Mainly used for debugging."""
   from urllib.parse import unquote

   output = {}

   def find_route(suffix, path):
       for name, (uri, _) in app.router.routes_names.items():
           if name.split(".")[-1] == suffix and uri == path:
               return name
       return None

   for endpoint, route in app.router.routes_all.items():
       if endpoint[:-1] in app.router.routes_all and endpoint[-1] == "/":
           continue

       options = {}
       for arg in route.parameters:
           options[arg] = "[{0}]".format(arg)

       if not isinstance(route.handler, CompositionView):
           handlers = [(list(route.methods)[0], route.name)]
       else:
           handlers = [
               (method, find_route(v.__name__, endpoint) or v.__name__)
               for method, v in route.handler.handlers.items()
           ]

       for method, name in handlers:
           line = unquote("{:50s} {:30s} {}".format(endpoint, method, name))
           output[name] = line

   url_table = "\n".join(output[url] for url in sorted(output))
   logger.debug("Available web server routes: \n{}".format(url_table))

   return output


def cap_length(s, char_limit=20, append_ellipsis=True):
   """Makes sure the string doesn't exceed the passed char limit.

   Appends an ellipsis if the string is too long."""

   if len(s) > char_limit:
       if append_ellipsis:
           return s[: char_limit - 3] + "..."
       else:
           return s[:char_limit]
   else:
       return s


def extract_args(
   kwargs: Dict[Text, Any], keys_to_extract: Set[Text]
) -> Tuple[Dict[Text, Any], Dict[Text, Any]]:
   """Go through the kwargs and filter out the specified keys.

   Return both, the filtered kwargs as well as the remaining kwargs."""

   remaining = {}
   extracted = {}
   for k, v in kwargs.items():
       if k in keys_to_extract:
           extracted[k] = v
       else:
           remaining[k] = v

   return extracted, remaining


def all_subclasses(cls: Any) -> List[Any]:
   """Returns all known (imported) subclasses of a class."""

   return cls.__subclasses__() + [
       g for s in cls.__subclasses__() for g in all_subclasses(s)
   ]


def is_limit_reached(num_messages, limit):
   return limit is not None and num_messages >= limit


def read_lines(filename, max_line_limit=None, line_pattern=".*"):
   """Read messages from the command line and print bot responses."""

   line_filter = re.compile(line_pattern)

   with open(filename, "r", encoding="utf-8") as f:
       num_messages = 0
       for line in f:
           m = line_filter.match(line)
           if m is not None:
               yield m.group(1 if m.lastindex else 0)
               num_messages += 1

           if is_limit_reached(num_messages, max_line_limit):
               break


def file_as_bytes(path: Text) -> bytes:
   """Read in a file as a byte array."""
   with open(path, "rb") as f:
       return f.read()


def get_file_hash(path: Text) -> Text:
   """Calculate the md5 hash of a file."""
   return md5(file_as_bytes(path)).hexdigest()


def get_text_hash(text: Text, encoding: Text = "utf-8") -> Text:
   """Calculate the md5 hash for a text."""
   return md5(text.encode(encoding)).hexdigest()


def get_dict_hash(data: Dict, encoding: Text = "utf-8") -> Text:
   """Calculate the md5 hash of a dictionary.
   The custom MyEncoder class is passed in here via cls=MyEncoder.
   """
   return md5(json.dumps(data, cls=MyEncoder, sort_keys=True).encode(encoding)).hexdigest()


async def download_file_from_url(url: Text) -> Text:
   """Download a story file from a url and persists it into a temp file.

   Returns the file path of the temp file that contains the
   downloaded content."""
   from rasa.nlu import utils as nlu_utils

   if not nlu_utils.is_url(url):
       raise InvalidURL(url)

   async with aiohttp.ClientSession() as session:
       async with session.get(url, raise_for_status=True) as resp:
           filename = io_utils.create_temporary_file(await resp.read(), mode="w+b")

   return filename


def remove_none_values(obj: Dict[Text, Any]) -> Dict[Text, Any]:
   """Remove all keys that store a `None` value."""
   return {k: v for k, v in obj.items() if v is not None}


def pad_list_to_size(_list, size, padding_value=None):
   """Pads _list with padding_value up to size"""
   return _list + [padding_value] * (size - len(_list))


class AvailableEndpoints(object):
   """Collection of configured endpoints."""

   @classmethod
   def read_endpoints(cls, endpoint_file):
       nlg = read_endpoint_config(endpoint_file, endpoint_type="nlg")
       nlu = read_endpoint_config(endpoint_file, endpoint_type="nlu")
       action = read_endpoint_config(endpoint_file, endpoint_type="action_endpoint")
       model = read_endpoint_config(endpoint_file, endpoint_type="models")
       tracker_store = read_endpoint_config(
           endpoint_file, endpoint_type="tracker_store"
       )
       event_broker = read_endpoint_config(endpoint_file, endpoint_type="event_broker")

       return cls(nlg, nlu, action, model, tracker_store, event_broker)

   def __init__(
       self,
       nlg=None,
       nlu=None,
       action=None,
       model=None,
       tracker_store=None,
       event_broker=None,
   ):
       self.model = model
       self.action = action
       self.nlu = nlu
       self.nlg = nlg
       self.tracker_store = tracker_store
       self.event_broker = event_broker


# noinspection PyProtectedMember
def set_default_subparser(parser, default_subparser):
   """default subparser selection. Call after setup, just before parse_args()

   parser: the name of the parser you're making changes to
   default_subparser: the name of the subparser to call by default"""
   subparser_found = False
   for arg in sys.argv[1:]:
       if arg in ["-h", "--help"]:  # global help if no subparser
           break
   else:
       for x in parser._subparsers._actions:
           if not isinstance(x, argparse._SubParsersAction):
               continue
           for sp_name in x._name_parser_map.keys():
               if sp_name in sys.argv[1:]:
                   subparser_found = True
       if not subparser_found:
           # insert default in first position before all other arguments
           sys.argv.insert(1, default_subparser)


def create_task_error_logger(error_message: Text = "") -> Callable[[Future], None]:
   """Error logger to be attached to a task.

   This will ensure exceptions are properly logged and won't get lost."""

   def handler(fut: Future) -> None:
       # noinspection PyBroadException
       try:
           fut.result()
       except Exception:
           logger.exception(
               "An exception was raised while running task. "
               "{}".format(error_message)
           )

   return handler


class LockCounter(asyncio.Lock):
   """Decorated asyncio lock that counts how many coroutines are waiting.

   The counter can be used to discard the lock when there is no coroutine
   waiting for it. For this to work, there should not be any execution yield
   between retrieving the lock and acquiring it, otherwise there might be
   race conditions."""

   def __init__(self) -> None:
       super().__init__()
       self.wait_counter = 0

   async def acquire(self) -> bool:
       """Acquire the lock, makes sure only one coroutine can retrieve it."""

       self.wait_counter += 1
       try:
           return await super(LockCounter, self).acquire()  # type: ignore
       finally:
           self.wait_counter -= 1

   def is_someone_waiting(self) -> bool:
       """Check if a coroutine is waiting for this lock to be freed."""
       return self.wait_counter != 0


What version of Rasa are you using? There was a bug, but we fixed it in the latest version.