Model/parse response is the same as the payload

So, when I use the spaCy or BERT configurations, Rasa intent classification works perfectly fine on the command line (i.e. rasa shell).

However, when I deploy the model with the Rasa server to a URL and call the model/parse endpoint, the response just echoes back the text I sent as the payload.

What is crazy is that when I use the default config.yml (i.e. neither spaCy nor BERT), the URL endpoint returns the expected response.

Here are my configs:

SPACY… works on the command line, but not via the URL:

language: en
pipeline:
  - name: SpacyNLP
  - name: "SpacyTokenizer"
  - name: "SpacyFeaturizer"
    model: "en_core_web_md"
  - name: DIETClassifier
    epochs: 50

BERT… works on the command line, but not via the URL:

language: en
pipeline:
  - name: HFTransformersNLP
  - name: LanguageModelTokenizer
  - name: LanguageModelFeaturizer    
    model_name: "bert"
    model_weights: "rasa/LaBSE"
  - name: DIETClassifier
    epochs: 50

DEFAULT… works on the command line and via the URL:

language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
    case_sensitive: False
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: word
  - name: DIETClassifier
    epochs: 50

Is the response code still 200 with spacy and bert?

Yes

What version are you on? I tried to reproduce this on version 2.1.3, but the parse endpoint returns parse info as expected for both the spaCy and default models (using the configs you posted).

I would be happy to send you a zip file containing the Rasa web endpoints site with my trained model so you can check it out.

Or… if you want to test it yourself, try this endpoint. I just published it with a brand-new model trained using spaCy: https://rasa-ella-50-spacy-bpslyfsgba-uc.a.run.app/model/parse

I send this payload: { "text": "test" }

I receive this response:

{
  "text": "\/test",
  "intent": {
    "name": "test",
    "confidence": 1.0
  },
  "intent_ranking": [
    {
      "name": "test",
      "confidence": 1.0
    }
  ],
  "entities": []
}
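For reference, the request above can be reproduced from Python. This is a minimal sketch: the helper names are mine, and only the base URL and the /model/parse path come from this thread.

```python
import json
import urllib.request


def build_parse_request(base_url: str, text: str) -> urllib.request.Request:
    """Build the POST request Rasa's HTTP API expects for /model/parse."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/model/parse",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def parse_text(base_url: str, text: str) -> dict:
    """Send the request and decode the JSON response (makes a network call)."""
    with urllib.request.urlopen(build_parse_request(base_url, text)) as resp:
        # The thread reports HTTP 200 even for the bad replies,
        # so the status alone does not reveal the problem.
        assert resp.status == 200
        return json.loads(resp.read().decode("utf-8"))
```

Comparing the "text" and "intent" fields of the returned dict against what rasa shell prints for the same message makes the discrepancy easy to see.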

Here are the config file contents:

language: en
pipeline:
  - name: SpacyNLP
  - name: "SpacyTokenizer"
  - name: "SpacyFeaturizer"
    model: "en_core_web_lg"
  - name: DIETClassifier
    epochs: 50

No domain.yml contents; this is a pure NLU/intent-classification exercise.

The BERT configuration does the same thing. Your default settings work just fine.

I tried to test this without the endpoints, using your Python API… that opened a whole new can of worms and issues with your product. I'll be opening separate topics and GitHub issues dedicated to that alone. WHAT A MESS
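For what it's worth, parsing a message through the Python API directly can be sketched roughly like this. This assumes Rasa 2.x (the version range discussed in this thread); the model path and the helper function name are hypothetical.

```python
import asyncio


def parse_locally(model_path: str, text: str) -> dict:
    """Load a packaged model and run NLU parsing on one message.

    Sketch only: requires rasa (2.x) to be installed, hence the lazy import.
    """
    from rasa.core.agent import Agent

    agent = Agent.load(model_path)
    return asyncio.run(agent.parse_message_using_nlu_interpreter(text))


# Usage (hypothetical model path):
# result = parse_locally("models/nlu-spacy.tar.gz", "test")
# print(result["intent"])
```

If the intent comes back correctly here but not over HTTP, that would point at the server/deployment layer rather than the trained model itself.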

I'd be happy to provide the same model trained with BERT, plus a URL, so you can experience it yourself.