How to talk to the chatbot in a Helm deployment

At the end of the installation, it should tell you to open localhost to talk to your bot, as well as the password to access Rasa X.

If you missed your password, you'll need to reinstall; this time, read the terminal output after running the command.

It's without the installation of Rasa X. I'm trying to set up Rasa Open Source and Rasa X separately.

So how do I talk to the bot in a shell from the deployment?

I think you can’t.

The Helm deployment is meant for production environments. Testing should be done on a local installation, then use GitHub to update the Helm deployment on the server.

You can also try communicating with the /webhooks/rest/webhook endpoint.
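A minimal Python sketch of talking to that endpoint from a shell, assuming the server is reachable on localhost:5005 (e.g. via a port-forward); the sender ID and message text are illustrative:

```python
import json
import urllib.request

# Assumed URL: Rasa Open Source reachable on localhost:5005
REST_WEBHOOK = "http://localhost:5005/webhooks/rest/webhook"

def build_payload(sender, message):
    """Build the JSON body the REST channel expects."""
    return {"sender": sender, "message": message}

def send_message(sender, message, url=REST_WEBHOOK):
    """POST a message to the REST webhook and return the bot's replies."""
    data = json.dumps(build_payload(sender, message)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The REST channel responds with a list of reply objects,
        # each typically containing "recipient_id" and "text".
        return json.load(resp)
```

For example, `send_message("tester", "hello")` should return the bot's responses to "hello" as a list of dicts.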


I think the Docker Compose installation method would be more suitable for the things you want to do.

This is what I'm trying to do: separate Rasa X and Rasa Open Source deployments, then link them together. Right now I want to test out the Open Source deployment.

I solved it by implementing a chat client and linking it to the socket.io channel. But I did have to use kubectl port-forward --namespace rasa svc/rasa 5005:5005 to access it. Is there any way I can access it directly without port forwarding?
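For reference, the socket.io channel mentioned above is enabled in the bot's credentials.yml; a minimal sketch using the default event names:

```yaml
# credentials.yml — socket.io channel (event names shown are the Rasa defaults)
socketio:
  user_message_evt: user_uttered
  bot_message_evt: bot_uttered
  session_persistence: false
```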

I tried the URL of the cluster IP, i.e. 10.0.73.221:5005, but that didn't work.


I found how to do it.

By default the rasa service is available only within the Kubernetes cluster. In order to make it accessible outside the cluster via a load balancer, update your rasa-values.yaml file with the following configuration:

service:
    type: LoadBalancer
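After changing the service type, the release has to be upgraded and the external address read from the service. A sketch, assuming the release and namespace are both named "rasa" (as in the port-forward command above) and the chart comes from the official Rasa Helm repository:

```shell
# Re-apply the values file to the existing release.
helm upgrade rasa rasa/rasa --namespace rasa --values rasa-values.yaml

# Look up the address assigned by the load balancer.
kubectl get service rasa --namespace rasa
# Use the EXTERNAL-IP column, e.g. http://<EXTERNAL-IP>:5005
```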

Enabling TLS for NGINX (self-signed)

To use a self-signed TLS certificate for NGINX, update your rasa-values.yaml with the following configuration:

nginx:
  tls:
    enabled: true
    generateSelfSignedCert: true

Hello, can you help me? What IP did you use in order to access it, and can I see your values.yml file?