Tokenizer_jieba dictionary_path does not work

For example, the word ABC is sometimes tokenized as AB, C and sometimes as BC, so I added ABC to the dictionary referenced by dictionary_path. But when I run train-nlu I still get "entities must span whole tokens", "Wrong entity start", and "Wrong entity end" errors. Many of my training examples report this error, not just this one case.
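For reference, a minimal sketch of the setup being described (the paths and frequency value are examples, not taken from the thread): in the Rasa pipeline config, `dictionary_path` points at a directory containing jieba user-dictionary files:

```yaml
# config.yml (pipeline excerpt) -- directory path is an example
pipeline:
  - name: "JiebaTokenizer"
    dictionary_path: "data/jieba_userdict"
```

Each file in that directory uses jieba's user-dictionary format, one entry per line: the word, an optional frequency, and an optional part-of-speech tag:

```
ABC 100000 n
```

If a custom word still gets split, a higher frequency value usually makes jieba keep it as a single token; the entity annotations in the training data must then line up exactly with those token boundaries.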

Please submit an issue on GitHub; that's the better place for such things.

@howlanderson thanks for sending them to GitHub :slight_smile: @weilingfeng1996 yes, please open a GitHub issue; if there's an actual bug in our code, it's always best to report it there.