Is it possible to record the incorrectly predicted samples during cross-validation?
Not at the moment. It’s a little trickier to implement because cross-validation creates a large number of models, and at that point it gets messy to collect and print all the wrong predictions. It may be something we’ll look into in the future, though.
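In the meantime, a possible workaround (this is a sketch assuming a scikit-learn-style workflow; the library discussed above may work differently) is to use `cross_val_predict`, which yields one out-of-fold prediction per sample, and then log the rows where that prediction disagrees with the true label:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Each sample is predicted by the fold model that did not see it in training,
# so these are genuine cross-validation predictions.
preds = cross_val_predict(model, X, y, cv=5)

# Indices of the incorrectly predicted samples, with true vs predicted labels.
wrong = np.flatnonzero(preds != y)
for i in wrong:
    print(f"sample {i}: true={y[i]} predicted={preds[i]}")
```

This avoids having to hook into every fold's model separately: you get a single array of predictions aligned with the original data, so logging the mistakes is one boolean comparison.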