Unable to train an HFT-based NLU model

I’m currently on Rasa 2.3.4 and trying to compare two NLU models based on HFT (HFTransformersNLP): one using BERT (LaBSE weights) and one using GPT-2. Training fails every time and I get back an empty set of results, so I’m not sure what I’m doing wrong.

Here’s my pipeline:

language: "en"  # your two-letter language code

pipeline:
  - name: HFTransformersNLP
    model_name: "bert"
    model_weights: "rasa/LaBSE"
    cache_dir: null
  - name: LanguageModelTokenizer
  - name: LanguageModelFeaturizer
    model_name: "bert"
    model_weights: "rasa/LaBSE"
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: "char_wb"
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
  - name: EntitySynonymMapper

The other pipeline is identical except that it sets model_name: gpt2 and model_weights: gpt2 on both the initializer (HFTransformersNLP) and the featurizer (LanguageModelFeaturizer), as sketched below.
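For reference, the relevant part of available_config/hft_gpt2.yml (the remaining components are the same as in the BERT pipeline above):

pipeline:
  - name: HFTransformersNLP
    model_name: "gpt2"
    model_weights: "gpt2"
    cache_dir: null
  - name: LanguageModelTokenizer
  - name: LanguageModelFeaturizer
    model_name: "gpt2"
    model_weights: "gpt2"
  # ... RegexFeaturizer, CountVectorsFeaturizer, DIETClassifier etc. as above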

When I run this command -

rasa test nlu --nlu data/nlu.yml --config available_config/hft_gpt2.yml available_config/hft_beRT.yml

I get this error for each attempt -

Training model 'hft_beRT' failed. Error: in user code:

    /<project-path>/rasa/env/lib/python3.7/site-packages/rasa/utils/tensorflow/models.py:295 train_on_batch  *
        prediction_loss = self.batch_loss(batch_in)
    /<project-path>/rasa/env/lib/python3.7/site-packages/rasa/nlu/classifiers/diet_classifier.py:1442 batch_loss  *
        sequence_lengths = self._get_sequence_lengths(
    /<project-path>/rasa/env/lib/python3.7/site-packages/rasa/utils/tensorflow/models.py:1122 _get_sequence_lengths  *
        sequence_lengths = tf.ones([batch_dim], dtype=tf.int32)
    /<project-path>/rasa/env/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper  **
        return target(*args, **kwargs)
    /<project-path>/rasa/env/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:3041 ones
        output = _constant_if_small(one, shape, dtype, name)
    /<project-path>/rasa/env/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2732 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:6 prod

    /<project-path>/rasa/env/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3031 prod
        keepdims=keepdims, initial=initial, where=where)
    /<project-path>/rasa/env/lib/python3.7/site-packages/numpy/core/fromnumeric.py:87 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /<project-path>/rasa/env/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:848 __array__
        " a NumPy call, which is not supported".format(self.name))

    NotImplementedError: Cannot convert a symbolic Tensor (strided_slice_6:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

I get a similar error for the GPT-2 pipeline too. What am I doing wrong? :sob:

I suspect it’s some kind of version conflict between TensorFlow, transformers and numpy. I’ll report back in a bit.
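To see what’s actually installed in the virtualenv, a quick one-liner like this helps (it just prints the package versions, nothing Rasa-specific):

python -c "import tensorflow, transformers, numpy; print(tensorflow.__version__, transformers.__version__, numpy.__version__)"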

The fix for this is -

pip install transformers==3.5.1 && pip install numpy==1.16.6
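
To keep the environment from drifting back onto incompatible versions, I’m also recording the pins in a constraints file (the file name and layout are just my own convention, and this assumes your other dependencies live in a requirements.txt):

# constraints.txt - record the known-good pins
transformers==3.5.1
numpy==1.16.6

and then installing with: pip install -r requirements.txt -c constraints.txt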