Training always aborts with “Killed”

Hello Community,

I’m new to Rasa and trying to build my first NLU project for my bachelor’s thesis. Now I have a problem where I need your help.

I want to build an application that classifies mailings and extracts some data. I produced training examples: 100,000 samples with 6 distinct intents and 3 distinct entities.

Now I’m trying to train the NLU model. I use 80% of my data (I later tried 20% with the same result). After some time the training aborts with the message “Killed”. On the web I read that this can happen when there is not enough memory.

I have 50 GB RAM and some space for swapping (on my local machine I have 8 GB and it returns a MemoryError).

I tried to configure a smaller batch size, but I’m not sure which policy will be used.

I use “rasa train --augmentation 0 nlu” to train.

My config file looks like this:

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: de
pipeline:
    -   name: "WhitespaceTokenizer"
    -   name: "RegexFeaturizer"
    -   name: "CRFEntityExtractor"
    -   name: "EntitySynonymMapper"
    -   name: "CountVectorsFeaturizer"
    -   name: "EmbeddingIntentClassifier"
# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
#    -   name: MemoizationPolicy
#        max_history: 1
    -   name: KerasPolicy
        max_history: 1
    -   name: MappingPolicy
    -   name: EmbeddingPolicy
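
For example, I tried setting a smaller batch size directly on the EmbeddingIntentClassifier in the pipeline (the values here are just what I experimented with, and I’m not sure whether this is the right place):

pipeline:
    -   name: "EmbeddingIntentClassifier"
        batch_size: [32, 64]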

I also tried to configure the batch size for the KerasPolicy and the EmbeddingPolicy in the config file. That didn’t work either, so I tried editing keras_policy.py and embedding_policy.py directly. I’m not sure which policy is used first.

Why does my training abort with “Killed”, and how do I configure Rasa so that it runs? I’m using the current Rasa (1.1.4).

I look forward to hearing from you.

Julian

Hi @Julian57

as Julians need to stick together, here is a friendly hint:

The tagline of Rasa, cited from their website, is:

Create assistants that go beyond basic FAQs

Machine learning tools for developers to build, improve, and deploy contextual chatbots and assistants. Powered by open source.

I’d suggest not using Rasa for a “simple” document classification task, since it is meant as a conversational AI framework.

What you really want to use is, for example, a convolutional neural network built with Keras, maybe even with some good word embeddings for your case. If you need advice on building that, feel free to ask.
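
To give you an idea, here is a minimal sketch of such a network (vocabulary size, sequence length, and the preprocessing are assumptions you would adapt to your data):

from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed maximum number of tokens per mail
NUM_INTENTS = 6      # six distinct intents, as in your dataset

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, input_length=MAX_LEN),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x: integer-encoded token ids, shape (n_samples, MAX_LEN)
# y: intent ids in [0, NUM_INTENTS), shape (n_samples,)
# model.fit(x, y, batch_size=64, epochs=5, validation_split=0.2)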

Regards Julian


Hey Julian :slight_smile:

I know that the main reason for using Rasa is building chatbots. But I think the only difference is that my messages are longer and I don’t need any action after the intent classification. I chose Rasa because I hoped to get some benefit from pretrained language models. But I learned that that doesn’t work for big datasets.

Does that mean I can’t use Rasa for this task?

Hi @Julian57

of course you are able to use Rasa for such a task. Let’s try something. I have attached a custom pipeline element that you can embed via config.yml. Please remove anything unnecessary for your purposes, then train only NLU and use the parse endpoint to check whether it works and suits your needs.

keras_nn.py (6.5 KB)

You can use it with:

- name: "keras_nn.KerasNN"

in your pipeline. See if this fixes your memory problems.

Regards

Hi @JulianGerhard

thank you for your great answer.

I tried to use your custom pipeline. At the moment I get an exception:

Traceback (most recent call last):
  File "/anaconda3/envs/KI/bin/rasa", line 11, in <module>
    load_entry_point('rasa', 'console_scripts', 'rasa')()
  File "/root/rasa/rasa/__main__.py", line 70, in main
    cmdline_arguments.func(cmdline_arguments)
  File "/root/rasa/rasa/cli/train.py", line 147, in train_nlu
    fixed_model_name=args.fixed_model_name,
  File "/root/rasa/rasa/train.py", line 363, in train_nlu
    fixed_model_name=fixed_model_name,
  File "/root/rasa/rasa/train.py", line 382, in _train_nlu_with_validated_data
    config, nlu_data_directory, _train_path, fixed_model_name="nlu"
  File "/root/rasa/rasa/nlu/train.py", line 89, in train
    interpreter = trainer.train(training_data, **kwargs)
  File "/root/rasa/rasa/nlu/model.py", line 192, in train
    updates = component.train(working_data, self.config, **context)
  File "/root/rasa/rasa/nlu/custom/keras_nn.py", line 95, in train
    self.model = self._create_model(X.shape[1:])
  File "/root/rasa/rasa/nlu/custom/keras_nn.py", line 59, in _create_model
    x = layers.Dense(units, activation='relu')(x)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/tensorflow/python/keras/e                                                ngine/base_layer.py", line 538, in __call__
    self._maybe_build(inputs)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/tensorflow/python/keras/e                                                ngine/base_layer.py", line 1591, in _maybe_build
    self.input_spec, inputs, self.name)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/tensorflow/python/keras/e                                                ngine/input_spec.py", line 139, in assert_input_compatibility
    str(x.shape.as_list()))
ValueError: Input 0 of layer dense is incompatible with the layer: : expected mi                                                n_ndim=2, found ndim=1. Full shape received: [None]

I registered the pipeline in the component_classes list of registry.py. My config looks like:

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: de
pipeline:
    -   name: "WhitespaceTokenizer"
    -   name: "RegexFeaturizer"
    -   name: "KerasNN"
# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:

Something I forgot to mention in my first post is that the abort happens while training the CRF entity extractor. I’m not sure, but I think your pipeline does intent classification, while I want to use custom entity extraction too.

Hi @Julian57,

please try to add:

  - name: CountVectorsFeaturizer
    analyzer: char_wb
    max_featurizes: 10000
    max_ngram: 15
    min_ngram: 2

to your config and see if it works then.

Regards

@JulianGerhard Thank you for your quick answer.

It looks like it works. At the moment I get a MemoryError. I reduced the batch size from 128 to 64 and hope it trains now.

What about entity extraction? Can I implement it in the custom pipeline too?

Hi @Julian57

of course you can add entity extraction. Did you specify those entities in your training data in the corresponding Rasa format? If so, I’d recommend adding:

- name: CRFEntityExtractor

to your pipeline. Or are those entities really “custom”? Then maybe a regex feature would help.
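
For illustration, a regex feature in the Markdown training data would look like this (the name and pattern here are only placeholders):

## regex:invoice_number
- RE-[0-9]{6}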

Regards

@JulianGerhard I specified it in my data. I will try it out.

Thank you very much for your help. :slight_smile:

@JulianGerhard I’m still getting a memory error.

 rasa.nlu.model  - Starting to train component CountVectorsFeaturizer
Traceback (most recent call last):
  File "/anaconda3/envs/KI/bin/rasa", line 11, in <module>
    load_entry_point('rasa', 'console_scripts', 'rasa')()
  File "/root/rasa/rasa/__main__.py", line 70, in main
    cmdline_arguments.func(cmdline_arguments)
  File "/root/rasa/rasa/cli/train.py", line 147, in train_nlu
    fixed_model_name=args.fixed_model_name,
  File "/root/rasa/rasa/train.py", line 363, in train_nlu
    fixed_model_name=fixed_model_name,
  File "/root/rasa/rasa/train.py", line 382, in _train_nlu_with_validated_data
    config, nlu_data_directory, _train_path, fixed_model_name="nlu"
  File "/root/rasa/rasa/nlu/train.py", line 89, in train
    interpreter = trainer.train(training_data, **kwargs)
  File "/root/rasa/rasa/nlu/model.py", line 192, in train
    updates = component.train(working_data, self.config, **context)
  File "/root/rasa/rasa/nlu/featurizers/count_vectors_featurizer.py", line 242, in train
    X = self.vectorizer.fit_transform(lem_exs).toarray()
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/scipy/sparse/compressed.py", line 962, in toarray
    out = self._process_toarray_args(order, out)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/scipy/sparse/base.py", line 1187, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

I set the batch size in KerasNN down to 32. Is there any other setting I need to look at?

Hi,

which version of Rasa are you using?

Regards

@JulianGerhard I use the current Rasa (1.1.4)

The problem seems to be in scikit-learn. Look here:

Or: https://de.switch-case.com/58876872

The CountVectorizer tries to convert its sparse result to a dense array. Is there a way to skip the array conversion or to use a HashingVectorizer instead?
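
To illustrate what I mean: fit_transform returns a sparse matrix, and only the densification explodes (a rough sketch; the corpus here is a stand-in for my 100,000 mails):

from sklearn.feature_extraction.text import CountVectorizer

texts = ["ein beispiel text", "noch ein beispiel"]  # stand-in corpus
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(2, 15),
                             max_features=10000)

X_sparse = vectorizer.fit_transform(texts)  # scipy CSR matrix, memory-friendly
X_dense = X_sparse.toarray()                # dense counts: 100,000 samples x
                                            # 10,000 features in int64 is ~8 GB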

Hi @Julian57

interesting… thanks for those links. I’ll figure out another way and get back to you!

Regards

Hello @JulianGerhard

I tried modifying the count vectors featurizer. If I use a HashingVectorizer instead of the CountVectorizer, I get a MemoryError too.

After that I tried to remove the array conversion. The scikit-learn CountVectorizer returns a CSR/CSC sparse matrix, and I tried to pick up the matrix rows directly instead of using array rows.

The CountVectorsFeaturizer works now and finishes training. But I get an error from KerasNN:

rasa.nlu.model  - Starting to train component KerasNN
Traceback (most recent call last):
  File "/anaconda3/envs/KI/bin/rasa", line 11, in <module>
    load_entry_point('rasa', 'console_scripts', 'rasa')()
  File "/root/rasa/rasa/__main__.py", line 70, in main
    cmdline_arguments.func(cmdline_arguments)
  File "/root/rasa/rasa/cli/train.py", line 147, in train_nlu
    fixed_model_name=args.fixed_model_name,
  File "/root/rasa/rasa/train.py", line 363, in train_nlu
    fixed_model_name=fixed_model_name,
  File "/root/rasa/rasa/train.py", line 382, in _train_nlu_with_validated_data
    config, nlu_data_directory, _train_path, fixed_model_name="nlu"
  File "/root/rasa/rasa/nlu/train.py", line 89, in train
    interpreter = trainer.train(training_data, **kwargs)
  File "/root/rasa/rasa/nlu/model.py", line 192, in train
    updates = component.train(working_data, self.config, **context)
  File "/root/rasa/rasa/nlu/custom/keras_nn.py", line 95, in train
    self.model = self._create_model(X.shape[1:])
  File "/root/rasa/rasa/nlu/custom/keras_nn.py", line 59, in _create_model
    x = layers.Dense(units, activation='relu')(x)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 538, in __call__
    self._maybe_build(inputs)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1591, in _maybe_build
    self.input_spec, inputs, self.name)
  File "/anaconda3/envs/KI/lib/python3.6/site-packages/tensorflow/python/keras/engine/input_spec.py", line 139, in assert_input_compatibility
    str(x.shape.as_list()))
ValueError: Input 0 of layer dense is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [None]

I’m not sure whether I can use the matrix rows directly, because the matrix is compressed.
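
What I mean by picking up the rows directly is roughly this (a sketch, not my exact patch):

import numpy as np
import scipy.sparse as sp

X = sp.csr_matrix(np.array([[0, 1, 2],
                            [3, 0, 0]]))

row = X.getrow(0)             # still sparse, shape (1, n_features)
dense_row = row.toarray()[0]  # densifying a single row at a time stays cheap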

Hi @Julian57

first of all, I made a mistake:

 - name: CountVectorsFeaturizer
    analyzer: char_wb
    max_featurizes: 10000
    max_ngram: 15
    min_ngram: 2

please change that to “max_features”.

Since the output of the CountVectorizer is a sparse matrix and the classifiers downstream may not support sparse matrices, I’d leave the toarray() statements as they are, i.e. don’t change the class itself.

It seems that all features were extracted due to my copy/paste mistake. Depending on how big your matrix becomes in the end, this might have caused the error (though the matrix might still become big even after fixing max_features).

Please try that and post the result here!

Regards

@JulianGerhard It works. I can train the CountVectorsFeaturizer and the intent classifier.

Now I’m trying to add entity extraction. It aborts with “Killed” again.

What should I do?

> # Configuration for Rasa NLU.
> # https://rasa.com/docs/rasa/nlu/components/
> language: de
> pipeline:
>     -   name: "WhitespaceTokenizer"
> #    -   name: "RegexFeaturizer"
>     -   name: CountVectorsFeaturizer
>         analyzer: char_wb
>         max_features: 10000
>         max_ngram: 15
>         min_ngram: 2
>     -   name: "CRFEntityExtractor"
>     -   name: "KerasNN"
> # Configuration for Rasa Core.
> # https://rasa.com/docs/rasa/core/policies/
> policies:

Hi @Julian57

according to this post by @akelad here, the branch with the sparse arrays is not ready yet, so I’d recommend either splitting up your training data or waiting for the fix.
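
If you split, something like the following might work, assuming the standard CLI of your Rasa version (check rasa data split nlu --help first):

rasa data split nlu --training-fraction 0.5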

Regards Julian

Hey Julian Gerhard, thank you for your replies.

It works with a smaller dataset. I will wait for the patch to train on more data.

Thank you for your great support. :slight_smile: You helped me a lot.

I am still looking for someone with deep learning know-how who is interested in proofreading my bachelor’s thesis and checking it for technical correctness. I think you’d be the perfect person for the job. I know this is a big ask, but I’d appreciate it if you were interested. I will submit the thesis on 19.9.

I sent you a contact request on Xing. If you are interested, you can add me and send me a message.

I’d be happy to hear from you. Regards Julian