This is the error that's showing up. It previously worked on the 0.39.3 and 2.8.2 combination, but I took the cluster down for some infra changes and it has stopped working since.
Rasa X: 0.39.2/0.42.1/0.42.2
Rasa: 2.8.3
The Helm installation. I had a functioning setup that used an nginx deployment as a reverse proxy, but I took it down to add SSL support through cert-manager. When I restarted the cluster, I didn't use nginx and instead gave production and Rasa X their own ingresses.
Sure. Dev, thanks for the information. Pinging @ChrisRahme for help and suggestions on the error.
Hello!
Did you use the Quick or Helm installation? Which pod is giving this error? And does this error come up with all three types of training?
I used the Helm charts directly for installation. The error is in the rasa-x pod. I have tried both the curl request to upload and the upload from files, and both have failed.
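For reference, the curl upload I'm running looks roughly like this; the host, token, and model filename below are placeholders, so substitute your own:

```shell
# Placeholders: substitute your actual host, API token, and model archive.
RASA_X_HOST="https://rasawulu.ddns.net"
API_TOKEN="<api-token>"
UPLOAD_URL="${RASA_X_HOST}/api/projects/default/models?api_token=${API_TOKEN}"

# The actual upload is commented out here since it needs a reachable Rasa X server:
# curl -k -F "model=@models/model.tar.gz" "${UPLOAD_URL}"
echo "${UPLOAD_URL}"
```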
I also had many issues with Helm; Docker seems to be more stable (can you confirm, @nik202?).
I always found myself having to delete the namespace and install again.
I used the Quick Installation, then did helm upgrade against values.yml whenever I wanted some changes, and even wrote a bash script to do it all in a single command since I had to repeat the process multiple times.
So first try to just completely delete the namespace and reinstall. If it still doesn't work, try a Quick Installation instead and try to upload a model. If that works, you can proceed to make any changes you want, since the Quick and Helm Installations are equivalent.
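The full reinstall I mean is roughly the sequence below; the namespace rasa, release name rasa-x, and chart name rasa-x/rasa-x are assumptions, so match them to your setup:

```shell
# Assumed namespace and release name; substitute your own.
NS="rasa"
RELEASE="rasa-x"

# Guarded so this is a no-op on a machine without cluster access.
if command -v helm >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl delete namespace "${NS}"
  kubectl create namespace "${NS}"
  helm repo update
  helm install "${RELEASE}" rasa-x/rasa-x --namespace "${NS}" --values values.yml
fi
echo "reinstalling release ${RELEASE} into namespace ${NS}"
```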
Vincent from the Rasa Team is working on a Deployments playlist on YouTube (here), so if you decide you want to switch to a Docker Installation instead, watching this playlist is the easiest way to learn.
Right let me try that and see.
Dev, personally I haven't implemented your use case using a Helm install, so please follow the suggestions of @ChrisRahme. If you need a Docker or docker-compose installation instead, please let me know. Thanks.
We require the k8s installation because of scaling requirements, so while we did test things out in docker-compose, we've moved to k8s now. Also, @ChrisRahme, I tried deleting the namespace and reinstalling, but that has failed too. I think there's something wrong with the URLs.
If you look, there's a double // before "projects". How do these URLs get configured?
That looks really weird indeed. I don't have access to a server anymore to test out Rasa X on Helm and see what the link looks like, but there was no // if I remember correctly.
Does it work if you just use a single slash?
I don’t know why that’s happening to be honest…
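One common cause, for what it's worth: naively joining a base URL that ends in a slash with a path that starts with one. A quick illustration (the URL here is just an example):

```python
base = "https://rasawulu.ddns.net/"      # base URL with a trailing slash
path = "/api/projects/default/models"    # path with a leading slash

naive = base + path                      # naive join keeps both slashes
fixed = base.rstrip("/") + path          # strip one side before joining

print(naive)  # https://rasawulu.ddns.net//api/projects/default/models
print(fixed)  # https://rasawulu.ddns.net/api/projects/default/models
```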
Can you show me your values.yml?
# Default values for rasa-x.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# rasax specific settings
rasax:
# override the default command to run in the container
command: []
# override the default arguments to run in the container
args: []
# name of the Rasa X image to use
name: "rasa/rasa-x" # gcr.io/rasa-platform/rasa-x-ee
# tag refers to the Rasa X image tag (uses `appVersion` by default)
tag: ""
# port on which Rasa X runs
port: 5002
# scheme by which Rasa X is accessible
scheme: http
# passwordSalt Rasa X uses to salt the user passwords
passwordSalt: ""
# token Rasa X accepts as authentication token from other Rasa services
token: ""
# jwtSecret which is used to sign the jwtTokens of the users
jwtSecret: ""
# databaseName Rasa X uses to store data
# (uses the value of global.postgresql.postgresqlDatabase by default)
databaseName: "rasa"
# disableTelemetry permanently disables telemetry
disableTelemetry: false
# Jaeger Sidecar
jaegerSidecar: "false"
# initialUser is the user which is created upon the initial start of Rasa X
initialUser:
# username specifies a name of this user
username: "me"
# password for this user (leave it empty to skip the user creation)
password: "test_password"
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
# access Modes of the pvc
accessModes:
- ReadWriteOnce
# size of the Rasa X volume claim
size: 10Gi
# annotations for the Rasa X pvc
annotations: {}
# finalizers for the pvc
finalizers:
- kubernetes.io/pvc-protection
# existingClaim which should be used instead of a new one
existingClaim: ""
# livenessProbe checks whether rasa x needs to be restarted
livenessProbe:
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
# scheme to be used by the `livenessProbe`
scheme: "HTTP"
# readinessProbe checks whether rasa x can receive traffic
readinessProbe:
# initialProbeDelay for the `readinessProbe`
initialProbeDelay: 10
# scheme to be used by the `readinessProbe`
scheme: "HTTP"
# resources which Rasa X is required / allowed to use
resources: {}
# extraEnvs are environment variables which can be added to the Rasa X deployment
extraEnvs: []
# - name: SOME_CUSTOM_ENV_VAR
# value: "custom value"
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: false
# service specifies settings for exposing rasa x to other services
service:
# annotations for the service
annotations: {}
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}
# hostNetwork controls whether the pod may use the node network namespace
hostNetwork: false
# dnsPolicy specifies Pod's DNS policy
# ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
dnsPolicy: ""
# rasa: Settings common for all Rasa containers
rasa:
# version is the Rasa Open Source version which should be used.
# Used to ensure backward compatibility with older Rasa Open Source versions.
version: "2.8.1" # Please update the default value in the Readme when updating this
# disableTelemetry permanently disables telemetry
disableTelemetry: false
# override the default command to run in the container
command: []
# override the default arguments to run in the container
args: []
# add extra arguments to the command in the container
extraArgs: []
# name of the Rasa image to use
name: "rasa/rasa"
# tag refers to the Rasa image tag. If empty `.Values.rasa.version-full` is used.
tag: ""
# port on which Rasa runs
port: 5005
# scheme by which Rasa services are accessible
scheme: http
# token Rasa accepts as authentication token from other Rasa services
token: ""
# rabbitQueue it should use to dispatch events to Rasa X
rabbitQueue: "rasa_production_events"
# Optional additional rabbit queues for e.g. connecting to an analytics stack
additionalRabbitQueues: []
# additionalChannelCredentials which should be used by Rasa to connect to various
# input channels
additionalChannelCredentials:
rest:
socketio:
user_message_evt: user_uttered
bot_message_evt: bot_uttered
# facebook:
# verify: "rasa-bot"
# secret: "3e34709d01ea89032asdebfe5a74518"
# page-access-token: "EAAbHPa7H9rEBAAuFk4Q3gPKbDedQnx4djJJ1JmQ7CAqO4iJKrQcNT0wtD"
# input channels
additionalEndpoints: {}
# telemetry:
# type: jaeger
# service_name: rasa
# Jaeger Sidecar
jaegerSidecar: "false"
livenessProbe:
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
# scheme to be used by the `livenessProbe`
scheme: "HTTP"
# useLoginDatabase will use the Rasa X database to log in and create the database
# for the tracker store. If `false` the tracker store database must have been created
# previously.
useLoginDatabase: true
# lockStoreDatabase is the database in redis which Rasa uses to store the conversation locks
lockStoreDatabase: "1"
# cacheDatabase is the database in redis which Rasa X uses to store cached values
cacheDatabase: "2"
# extraEnvs are environment variables which can be added to the Rasa deployment
extraEnvs: []
# example which sets env variables in each Rasa Open Source service from a separate k8s secret
# - name: "TWILIO_ACCOUNT_SID"
# valueFrom:
# secretKeyRef:
# name: twilio-auth
# key: twilio_account_sid
# - name: TWILIO_AUTH_TOKEN
# valueFrom:
# secretKeyRef:
# name: twilio-auth
# key: twilio_auth_token
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: false
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels:
compute: fargate
# versions of the Rasa container which are running
versions:
# rasaProduction is the container which serves the production environment
rasaProduction:
# enable the rasa-production deployment
# You can disable the rasa-production deployment in order to use external Rasa OSS deployment instead.
enabled: true
# Define if external Rasa OSS should be used.
external:
# enable external Rasa OSS
enabled: false
# host of external Rasa OSS deployment
host: "http://rasa-bot"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# replicaCount of the Rasa Production container
replicaCount: 1
# serviceName with which the Rasa production deployment is exposed to other containers
serviceName: "rasa-production"
# service specifies settings for exposing rasa production to other services
service:
# annotations for the service
annotations: {}
      # modelTag of the model Rasa should pull from the model server
modelTag: "production"
      # trackerDatabase it should use to store conversation trackers
trackerDatabase: "rasa"
      # rasaEnvironment is used to indicate the origin of events published to RabbitMQ (App ID message property)
rasaEnvironment: "production"
# resources which rasaProduction is required / allowed to use
resources:
requests:
cpu: 2
memory: 4G
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# rasaWorker is the container which does computational heavy tasks such as training
rasaWorker:
# enable the rasa-worker deployment
enabled: false
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# replicaCount of the Rasa worker container
replicaCount: 1
# serviceName with which the Rasa worker deployment is exposed to other containers
serviceName: "rasa-worker"
# service specifies settings for exposing rasa worker to other services
service:
# annotations for the service
annotations: {}
      # modelTag of the model Rasa should pull from the model server
modelTag: "production"
      # trackerDatabase it should use to store conversation trackers
trackerDatabase: "worker_tracker"
      # rasaEnvironment is used to indicate the origin of events published to RabbitMQ (App ID message property)
rasaEnvironment: "worker"
# resources which rasaWorker is required / allowed to use
resources: {}
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# dbMigrationService specifies settings for the database migration service
# The database migration service requires Rasa X >= 0.33.0
dbMigrationService:
# initContainer describes settings related to the init-db container used as a init container for deployments
initContainer:
# image is the Docker image which is used by the init container
image: alpine:3.12.3
# command overrides the default command to run in the container
command: []
# args overrides the default arguments to run in the container
args: []
# name is the Docker image name which is used by the migration service (uses `rasax.name` by default)
name: "" # gcr.io/rasa-platform/rasa-x-ee
# tag refers to the Rasa X image tag (uses `appVersion` by default)
tag: ""
# ignoreVersionCheck defines if check required minimum Rasa X version that is required to run the service
ignoreVersionCheck: false
  # port on which to run the readiness endpoint
port: 8000
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# resources which the event service is required / allowed to use
resources: {}
# extraEnvs are environment variables which can be added to the dbMigrationService deployment
extraEnvs: []
# - name: SOME_CUSTOM_ENV_VAR
# value: "custom value"
# extraVolumeMounts defines additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# extraVolumes defines additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: false
# service specifies settings for exposing the db migration service to other services
service:
# annotations for the service
annotations: {}
livenessProbe:
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
# scheme to be used by the `livenessProbe`
scheme: "HTTP"
# readinessProbe checks whether rasa x can receive traffic
readinessProbe:
# initialProbeDelay for the `readinessProbe`
initialProbeDelay: 10
# scheme to be used by the `readinessProbe`
scheme: "HTTP"
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}
# event-service specific settings
eventService:
# override the default command to run in the container
command: []
# override the default arguments to run in the container
args: []
# event service just uses the Rasa X image
name: "rasa/rasa-x" # gcr.io/rasa-platform/rasa-x-ee
# tag refers to the Rasa X image tag (uses `appVersion` by default)
tag: ""
  # port on which to run the readiness endpoint
port: 5673
# replicaCount of the event-service container
replicaCount: 1
# databaseName the event service uses to store data
databaseName: "rasa"
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# resources which the event service is required / allowed to use
resources: {}
# extraEnvs are environment variables which can be added to the eventService deployment
extraEnvs: []
# - name: SOME_CUSTOM_ENV_VAR
# value: "custom value"
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# livenessProbe checks whether the event service needs to be restarted
livenessProbe:
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
scheme: "HTTP"
# readinessProbe checks whether the event service can receive traffic
readinessProbe:
# initialProbeDelay for the `readinessProbe`
initialProbeDelay: 10
scheme: "HTTP"
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: false
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}
# app (custom action server) specific settings
app:
# default is to install action server from image.
install: true
# if install is set to false, the url to the existing action server can be configured by setting existingUrl.
  # existingUrl: http://myactionserver:5055/webhook
#
# override the default command to run in the container
command: []
# override the default arguments to run in the container
args: []
# name of the custom action server image to use
name: "rasa/rasa-x-demo"
# tag refers to the custom action server image tag
tag: "0.38.0"
# replicaCount of the custom action server container
replicaCount: 1
# port on which the custom action server runs
port: 5055
# scheme by which custom action server is accessible
scheme: http
# resources which app is required / allowed to use
resources: {}
# Jaeger Sidecar
jaegerSidecar: "false"
# extraEnvs are environment variables which can be added to the app deployment
extraEnvs: []
# - name: DATABASE_URL
# valueFrom:
# secretKeyRef:
# name: app-secret
# key: database_url
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# endpoints specifies the webhook and health check url paths of the action server app
endpoints:
# actionEndpointUrl is the URL which Rasa Open Source calls to execute custom actions
actionEndpointUrl: /webhook
# healthCheckURL is the URL which is used to check the pod health status
healthCheckUrl: /health
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: true
# service specifies settings for exposing app to other services
service:
# annotations for the service
annotations: {}
# livenessProbe checks whether app needs to be restarted
livenessProbe:
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
# scheme to be used by the `livenessProbe`
scheme: "HTTP"
# readinessProbe checks whether app can receive traffic
readinessProbe:
# initialProbeDelay for the `readinessProbe`
initialProbeDelay: 10
# scheme to be used by the `readinessProbe`
scheme: "HTTP"
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels:
compute: fargate
# nginx specific settings
nginx:
# enabled should be `true` if you want to use nginx
# if you set false, you will need to set up some other method of routing (VirtualService/Ingress controller)
enabled: false
# override the default command to run in the container
command: []
# override the default arguments to run in the container
args: []
# name of the nginx image to use
name: "nginx"
# tag refers to the nginx image tag (uses `appVersion` by default)
tag: "1.19"
# custom config map containing nginx.conf, ssl.conf.template, rasax.nginx.template
customConfConfigMap: ""
# replicaCount of nginx containers to run
replicaCount: 1
# certificateSecret which nginx uses to mount the certificate files
certificateSecret: ""
# service which is to expose nginx
service:
# annotations for the service
annotations: {}
# type of the service (https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
type: LoadBalancer
# loadBalancerSourceRange for AWS deployments (https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support)
loadBalancerSourceRanges: []
# port is the port which the nginx service exposes for HTTP connections
port: 8000
# nodePort can be used with a service of type `NodePort` to expose the service on a certain port of the node (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)
nodePort: ""
# externalIPs can be used to expose the service to certain IPs (https://kubernetes.io/docs/concepts/services-networking/service/#external-ips)
externalIPs: []
livenessProbe:
# command for the `livenessProbe`
command: []
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
# readinessProbe checks whether rasa x can receive traffic
readinessProbe:
# command for the `readinessProbe`
command: []
# initialProbeDelay for the `readinessProbe`
initialProbeDelay: 10
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# resources which nginx is required / allowed to use
resources: {}
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: false
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}
# Duckling specific settings
duckling:
# override the default command to run in the container
command: []
# override the default arguments to run in the container
args: []
# Enable or disable duckling
enabled: true
# name of the Duckling image to use
name: "rasa/duckling"
# tag refers to the duckling image tag
tag: "0.1.6.3"
# replicaCount of duckling containers to run
replicaCount: 1
# port on which duckling should run
port: 8000
# scheme by which duckling is accessible
scheme: http
extraEnvs: []
# tolerations can be used to control the pod to node assignment
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# - key: "nvidia.com/gpu"
# operator: "Exists"
# nodeSelector to specify which node the pods should run on
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
# "beta.kubernetes.io/instance-type": "g3.8xlarge"
# resources which duckling is required / allowed to use
resources: {}
readinessProbe:
# initialProbeDelay for the `readinessProbe`
initialProbeDelay: 10
# scheme to be used by the `readinessProbe`
scheme: "HTTP"
livenessProbe:
# initialProbeDelay for the `livenessProbe`
initialProbeDelay: 10
# scheme to be used by the `livenessProbe`
scheme: "HTTP"
# additional volumeMounts to the main container
extraVolumeMounts: []
# - name: tmpdir
# mountPath: /var/lib/mypath
# additional volumes to the pod
extraVolumes: []
# - name: tmpdir
# emptyDir: {}
# automountServiceAccountToken specifies whether the Kubernetes service account
# credentials should be automatically mounted into the pods. See more about it in
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
automountServiceAccountToken: false
# service specifies settings for exposing duckling to other services
service:
# annotations for the service
annotations: {}
# podLabels adds additional pod labels
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels:
compute: fargate
# rasaSecret object which supplies passwords, tokens, etc. See
# https://rasa.com/docs/rasa-x/openshift-kubernetes/#providing-access-credentials-using-an-external-secret
# to see which values are required in the secret in case you want to provide your own.
# If no secret is provided, a secret will be generated.
rasaSecret: ""
# debugMode enables / disables the debug mode for Rasa and Rasa X
debugMode: false
# separateEventService value determines whether the eventService will be run as a separate service.
# If set to 'false', Rasa X will run an event service as a subprocess (not recommended
# for high-load setups).
separateEventService: "true"
# separateDBMigrationService value determines whether the dbMigrationService will be run as a separate service.
# If set to 'false', Rasa X will run a database migration service as a subprocess.
separateDBMigrationService: true
# postgresql specific settings (https://hub.helm.sh/charts/bitnami/postgresql/8.6.13)
postgresql:
# Install should be `true` if the postgres subchart should be used
install: false
# postgresqlPostgresPassword is the password when .Values.global.postgresql.postgresqlUsername does not equal "postgres"
postgresqlPostgresPassword: "test_password"
# existingHost is the host which is used when an external postgresql instance is provided (`install: false`)
existingHost: "terraform-.casjhc6sexdo.ap-south-1.rds.amazonaws.com"
# existingSecretKey is the key to get the password when an external postgresql instance is provided (`install: false`)
existingSecretKey: "rasa-postgresql"
# Configure security context for the postgresql init container
# volumePermissions:
## Init container Security Context
# securityContext:
# runAsUser: 0
## Configure security context for the rabbitmq container
# securityContext:
# enabled: true
# fsGroup: 1001
# runAsUser: 1001
# RabbitMQ specific settings (https://hub.helm.sh/charts/bitnami/rabbitmq/6.19.2)
rabbitmq:
# Install should be `true` if the rabbitmq subchart should be used
install: true
# Enabled should be `true` if any version of rabbit is used
enabled: true
# rabbitmq settings of the subchart
rabbitmq:
# username which is used for the authentication
username: "rabbit"
# password which is used for the authentication
password: "rabbitmq"
# existingPasswordSecret which should be used for the password instead of putting it in the values file
existingPasswordSecret: ""
# service specifies settings for exposing rabbit to other services
service:
# port on which rabbitmq is exposed to Rasa
port: 5672
# existingHost is the host which is used when an external rabbitmq instance is provided (`install: false`)
existingHost: ""
# existingPasswordSecretKey is the key to get the password when an external rabbitmq instance is provided (`install: false`)
existingPasswordSecretKey: ""
# # security context for the rabbitmq container (please see the documentation of the subchart)
# securityContext:
# enabled: true
# fsGroup: 1001
# runAsUser: 1001
# redis specific settings (https://hub.helm.sh/charts/bitnami/redis/10.5.14)
redis:
# Install should be `true` if the redis subchart should be used
install: false
  # cluster settings for redis (Rasa does not currently support Redis sentinels)
cluster:
# set up a single Redis instance, as `redis-py` does not support clusters (https://github.com/andymccurdy/redis-py#cluster-mode)
enabled: false
# redisPort: port which should be used to expose redis to the other components
redisPort: 6379
# existingSecret which should be used for the password instead of putting it in the values file
existingSecret: ""
# existingSecretPasswordKey is the key to get the password when an external redis instance is provided
existingSecretPasswordKey: ""
# existingHost is the host which is used when an external redis instance is provided (`install: false`)
existingHost: "redis.amazonaws.com"
# # security context for the redis container (please see the documentation of the subchart)
# securityContext:
# enabled: true
# fsGroup: 1001
# runAsUser: 1001
# ingress settings
ingress:
# enabled should be `true` if you want to use this ingress.
# Note that if `nginx.enabled` is `true` the `nginx` image is used as reverse proxy.
# In order to use nginx ingress you have to set `nginx.enabled=false`.
enabled: true
# annotations for the ingress - annotations are applied for the rasa and rasax ingresses
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "letsencrypt-prod"
kubernetes.io/tls-acme: "true"
# annotationsRasa is extra annotations for the rasa nginx ingress
annotationsRasa: {}
# annotationsRasaX is extra annotations for the rasa x nginx ingress
annotationsRasaX:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
# hosts for this ingress
hosts:
- host: rasawulu.ddns.net
paths:
- /
tls:
- secretName: rasa-prod
hosts:
- rasawulu.ddns.net
networkPolicy:
# Enable creation of NetworkPolicy resources. When set to true, explicit ingress & egress
# network policies will be generated for the required inter-pod connections
enabled: false
  # Allow for traffic from a given CIDR - it's required in order to make kubelet able to run liveness and readiness probes
nodeCIDR: {}
# - ipBlock:
# cidr: 0.0.0.0/0
# images: Settings for the images
images:
# pullPolicy to use when deploying images
pullPolicy: "IfNotPresent"
# imagePullSecrets which are required to pull images for private registries
imagePullSecrets: []
# - name:
# securityContext to use
securityContext:
# runAsUser: 1000
fsGroup: 1000
# nameOverride replaces the Chart's name
nameOverride: ""
# fullnameOverride replaces the Chart's fullname
fullnameOverride: ""
# global settings of the used subcharts
global:
# specifies the number of seconds you want to wait for your Deployment to progress before
# the system reports back that the Deployment has failed progressing - surfaced as a condition
# with Type=Progressing, Status=False. and Reason=ProgressDeadlineExceeded in the status of the resource
# source: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds
progressDeadlineSeconds: 600
# storageClass the volume claims should use
storageClass: "ebs-sc"
# postgresql: global settings of the postgresql subchart
postgresql:
# postgresqlUsername which should be used by Rasa to connect to Postgres
postgresqlUsername: "postgres"
# postgresqlPassword is the password which is used when the postgresqlUsername equals "postgres"
postgresqlPassword: "testpass"
# existingSecret which should be used for the password instead of putting it in the values file
existingSecret: "rasa-postgresql"
    # postgresqlDatabase which should be used by Rasa X
postgresqlDatabase: "rasa"
# servicePort which is used to expose postgres to the other components
servicePort: 5432
# host: postgresql.hostedsomewhere.else
  # redis: global settings of the redis subchart
redis:
# password to use in case there no external secret was provided
password: "redis-password"
# additionalDeploymentLabels can be used to map organizational structures onto system objects
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
additionalDeploymentLabels: {}
Hoping I didn’t accidentally reveal any passwords. Please let me know xD
Not sure at all (worst case, just revert the changes), but towards the end of the file, under ingress, try changing
hosts:
- host: rasawulu.ddns.net
paths:
- /
to just
hosts:
- host: rasawulu.ddns.net
and then upgrade the deployment with the modified values:
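The upgrade command is along these lines; the release name rasa-x, chart rasa-x/rasa-x, and namespace rasa are assumptions, so use whatever your install uses:

```shell
# Assumed release name and namespace; substitute your own.
NS="rasa"
RELEASE="rasa-x"

# Guarded so this is a no-op without helm and cluster access.
if command -v helm >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  helm upgrade "${RELEASE}" rasa-x/rasa-x --namespace "${NS}" --values values.yml
fi
echo "upgrading release ${RELEASE} in namespace ${NS} with values.yml"
```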
@thundersparkf Is this a similar issue, Dev? Are you now able to train the model manually as suggested, and is the cluster pod working, or is this a different issue?
No no, it's solved. Adding the worker pod seemed to solve it. Thanks a lot, guys! But this is abnormal behaviour worth looking into more.
Ah glad to know!
Why didn’t you have a worker pod? Especially even after reinstalling…
May I ask you to explain in more detail what you did (discovering and fixing the error)? Just in case someone runs into the same problem. After doing that, you can mark it as the solution.
Yes, so we were using GitLab runners as part of our CI/CD to build the model and action Docker images and upload them. Since we can run it on GitLab CI for free, we decided that separately running a worker pod might lead to extra expenses. And even while reinstalling, I had disabled only the worker, because the worker pod being the reason for this didn't seem logical.
So @nik202 told me to try separately adding the worker and training. And seemingly it worked. So I went ahead and removed the worker pod to see if it would still work, and it did.