I have a few very long training examples in `nlu.md`. When I train my NLU pipeline, it exhausts all the system resources (I guess the model size depends on the longest sequence in `nlu.md`) and crashes my system. How can I fix the input length in the NLU pipeline without worrying about the training data examples?
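For context, the only workaround I've found so far is truncating the examples themselves, which is exactly what I'd like to avoid. This is just a throwaway preprocessing sketch; `MAX_TOKENS` and the output filename are my own placeholders, not Rasa settings:

```python
# Stopgap: cap the token length of each training example before training.
# Assumes the markdown NLU format where examples are "- " bullets under
# "## intent:..." headings; MAX_TOKENS is an arbitrary cap I picked.
MAX_TOKENS = 256

with open("nlu.md") as src, open("nlu_truncated.md", "w") as dst:
    for line in src:
        stripped = line.lstrip()
        if stripped.startswith("- "):  # a training example bullet
            indent = line[: len(line) - len(stripped)]
            tokens = stripped[2:].split()
            dst.write(f"{indent}- {' '.join(tokens[:MAX_TOKENS])}\n")
        else:  # pass headings and everything else through unchanged
            dst.write(line)
```

Note that naive whitespace truncation can cut through `[value](entity)` annotations, so long annotated examples would need a smarter split. That's why a proper max-length parameter in the pipeline would be much nicer.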
hi @sainimohit23 - we currently don't have a parameter to configure this, but it would make a lot of sense!
Would you be up for creating a PR to add this? It would be a super nice contribution.
@amn41 OK, I will go through the code and try to implement the functionality if I can.