Is there any tool within the platform that helps select proper training data (e.g. for NER_CRF) and remove near-duplicate sentences, so the model doesn't overfit on them?
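To illustrate the kind of near-duplicate filtering I mean (this is just a hypothetical sketch of my own, not something from the platform): drop any sentence whose token overlap with an already-kept sentence exceeds a threshold.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def dedupe(sentences: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a sentence only if it is sufficiently different from all kept ones."""
    kept: list[str] = []
    for s in sentences:
        if all(jaccard(s, k) < threshold for k in kept):
            kept.append(s)
    return kept
```

Something like this (or an embedding-based variant) is what I'd hope the platform could do automatically when curating examples.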
As I understand from the page, the general pipeline is to add examples to the data and evaluate overall model performance in a graphical user interface. Is that correct? And do you already have a tool that picks out the examples that contribute most to model performance, so to speak (both for NER and intent classification)?