Rasa Open Source Deployment

Hi,

@Arjaan

Thanks for your advanced deployment workshop!

In our organisation, we use OpenShift and a Jenkins pipeline for deployment. I have my pipeline created, and I have a Dockerfile and docker-compose.yml in my Bitbucket repository.

We are not using Rasa X; it's Rasa Open Source with a customized UI.

Currently, I am stuck at pytest.ini and getting the error below. Can you please help resolve the issue? What should the contents of pytest.ini be?

```
Successfully installed coverage-5.3 iniconfig-1.1.1 pluggy-0.13.1 py-1.9.0 pytest-6.1.2 pytest-cov-2.10.1 toml-0.10.2
+ '[' -d tests ']'
+ python -m pytest
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /pathof rootdir, configfile: pytest.ini
plugins: cov-2.10.1
collected 0 items
============================ no tests ran in 0.04s =============================
script returned exit code 5
```

@poojadeshmukh,

The pytest.ini can potentially just be empty. The pytest documentation explains the configuration options here.

It appears you have no actual tests defined, because pytest is not finding any tests to run.
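For illustration only (the file and directory names here are assumptions, not taken from your repository), a minimal setup that gives pytest something to collect might look like the sketch below; pytest exits with code 5 precisely when it collects zero tests.

```bash
# Minimal sketch: an empty pytest.ini plus one trivial test file, so that
# `python -m pytest` collects at least one test instead of exiting with
# code 5 ("no tests collected"). File names are placeholders.
touch pytest.ini

mkdir -p tests
cat > tests/test_smoke.py <<'EOF'
def test_smoke():
    # Placeholder test so the pipeline's pytest step has something to run.
    assert True
EOF

python -m pytest
```

With at least one test collected, pytest returns exit code 0 and the Jenkins stage should no longer fail on exit code 5.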

Thanks @Arjaan, yes, this worked. But now I am stuck at the step below in the pipeline:

Uploading directory "." as binary input for the build… …Sending interrupt signal to the process… script.sh: line 2: 43376 Terminated…

Script returned exit code 143

In the Dockerfile I am using our Artifactory path, which has rasa-core in it. Can you please help resolve this?

I am using rasa core 0.11.3 for deployment.

In the Dockerfile, in FROM, I have given the

Also, when I use pip install rasa, I get the usual tensorflow-addons session error in the pipeline. When I use rasa_core, it works. How do I get rid of the tensorflow-addons error?
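For reference, this is roughly what the install step does (the index URL below is a placeholder, not our real Artifactory path): pip is pointed at the internal index and pinned to the versions mentioned above.

```bash
# Sketch of the install step; the --index-url value is a hypothetical placeholder.
pip install --upgrade pip
pip install --index-url https://artifactory.example.com/api/pypi/pypi-local/simple \
    "rasa_nlu[tensorflow]" rasa_core==0.11.3
```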

@poojadeshmukh, could you provide a more detailed description, in a way that I would be able to reproduce your steps?

Hi @Arjaan,

Would it be possible to provide the presentation slides associated with the advanced Rasa deployment workshop on Udemy?

Thanks

Hey @Arjaan, I am still stuck on deployment using the Dockerfile. We use Red Hat OpenShift for deployment. With the Dockerfile below, the build is successful, but after deploying the package it finally gives the error "rasa is not recognized as a command". This is how my Dockerfile looks:

```dockerfile
FROM docker-enterprise-prod-local.artifactrepository.XXXX/XXXX-python-ai/redhat-python3.6/rhel7/redhat-python-rhel7:latest

# mandatory metadata
LABEL app-description="XXXX" \
      app-name="XXXX" \
      id="XXXX" \
      image-maintainer="XXXX"

# set up environment
ENV APP_HOME=/app \
    PORT=8080

RUN mkdir /opt/rh/rh-python36 -p && ln -s /opt/middleware/redhat_python/3.6.3 /opt/rh/rh-python36/root && \
    mkdir -p $APP_HOME

RUN pip install --upgrade pip
RUN pip install rasa_nlu[tensorflow]
RUN pip install rasa_core==0.11.3

# Add application
ADD src/ $APP_HOME/

# fix permissions for openshift
RUN chgrp -R 0 $APP_HOME && chmod -R g+rwx $APP_HOME

# expose port and run application
EXPOSE 5005

# Setup User specifics, and run
WORKDIR $APP_HOME
ENTRYPOINT ["rasa"]
CMD ["--help"]
```

And this is my docker-compose.yml:

```yaml
version: '3.4'
services:
  rasa_core:
    image: RASARepoTest/rasa_core==0.11.3
    ports:
      - 5005:5005
    volumes:
      - ./models/rasa_core:/app/models
    command:
      - start
      - --core
      - models
      - -c
      - rest

  rasa_nlu:
    image: RASARepoTest/rasa_nlu:latest-full
    volumes:
      - ./models/rasa_nlu:/app/models
    command:
      - start
      - --path
      - models

  app:
    image: localhost:32000/RASARepoTest-action-server:0.0.1
    expose:
      - 5055
    ENTRYPOINT: ["src/engine_start.sh"]
```
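For reference, a way to check whether a `rasa` executable actually ends up inside the built image would be something like the following (the image name is a placeholder): pip install rasa_core==0.11.3 might not put a `rasa` command on the PATH at all, in which case the ENTRYPOINT ["rasa"] above cannot work and the bot would instead be started with `python -m rasa_core.run`.

```bash
# Placeholder image name; build the image and look for a `rasa` executable inside it.
docker build -t my-rasa-image .

# Prints the path of `rasa` if it exists in the image; prints nothing otherwise.
docker run --rm --entrypoint /bin/sh my-rasa-image -c 'command -v rasa || true'
```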

@Arjaan, would you be able to help here?