Script started on 2019-11-02 15:35:00-04:00 [TERM="xterm-256color" TTY="/dev/pts/2" COLUMNS="211" LINES="46"]
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> export RASA_X_VERSION=0.22.1
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> wget -qO rasa-x-helm.tgz https://storage.googleapis.com/rasa-x-releases/${RASA_X_VERSION}/rasa-x-${RASA_X_VERSION}.tgz
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> helm install rasa-x-helm.tgz
NAME:   odd-zebra
LAST DEPLOYED: Sat Nov  2 15:35:54 2019
NAMESPACE: rasa-x
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                        DATA  AGE
agreement                   1     1s
database-config             4     1s
rabbitmq-config             2     1s
rasa-config                 3     1s
rasa-configuration-files    3     1s
rasa-x-config               1     1s
rasa-x-configuration-files  1     1s

==> v1/Deployment
NAME                       READY  UP-TO-DATE  AVAILABLE  AGE
odd-zebra-app              0/1    0           0          1s
odd-zebra-database         0/1    0           0          1s
odd-zebra-duckling         0/1    0           0          1s
odd-zebra-nginx            0/1    0           0          1s
odd-zebra-rabbit           0/1    0           0          1s
odd-zebra-rasa-production  0/1    0           0          1s
odd-zebra-rasa-worker      0/1    0           0          1s
odd-zebra-rasa-x           0/1    0           0          1s
odd-zebra-redis            0/1    0           0          1s

==> v1/PersistentVolumeClaim
NAME                      STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
odd-zebra-database-claim  Bound   pvc-017e7860-7359-4284-afad-4eee72ccb2e5  1Gi       RWO           nfs-client    1s
odd-zebra-rasa-x-claim    Bound   pvc-9fd9cde8-2656-4683-9e01-5783169ba01b  1Gi       RWO           nfs-client    1s

==> v1/Secret
NAME         TYPE    DATA  AGE
rasa-secret  Opaque  7     1s

==> v1/Service
NAME             TYPE          CLUSTER-IP     EXTERNAL-IP     PORT(S)                        AGE
app              ClusterIP     10.233.31.185  <none>          5055/TCP,80/TCP                1s
db               ClusterIP     10.233.59.236  <none>          5432/TCP                       1s
duckling         ClusterIP     10.233.36.132  <none>          8000/TCP                       1s
nginx            LoadBalancer  10.233.39.151  192.168.192.11  8000:30949/TCP,8443:31375/TCP  1s
rabbit           ClusterIP     10.233.7.109   <none>          5672/TCP                       1s
rasa-production  ClusterIP     10.233.21.218  <none>          5005/TCP                       1s
rasa-worker      ClusterIP     10.233.19.65   <none>          5005/TCP                       1s
rasa-x           ClusterIP     10.233.58.67   <none>          5002/TCP                       1s
redis            ClusterIP     10.233.26.230  <none>          6379/TCP                       1s

NOTES:
Thanks for installing Rasa X!
Creating a Rasa X user:
- Rasa X CE: go to the terminal of the `rasa-x` pod and then execute `python scripts/manage_users.py create --update me admin` to set your password
- Rasa X EE: go to the terminal of the `rasa-x` pod and then execute `python scripts/manage_users.py create `. You can then log in using these credentials.

Also check out the Rasa X docs here for more help: https://rasa.com/docs/rasa-x/
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS              RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running             0          2m14s
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running             0          2m14s
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running             0          2m14s
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running             0          2m14s
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running             0          2m14s
odd-zebra-rasa-production-645c7b97c5-5k62p   0/1     ContainerCreating   0          2m13s
odd-zebra-rasa-worker-7f798fc558-vf6fq       0/1     ContainerCreating   0          2m13s
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     ContainerCreating   0          2m12s
odd-zebra-redis-7787bd4ddc-nnq85             0/1     ContainerCreating   0          2m12s
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS              RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running             0          5m49s
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running             0          5m49s
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running             0          5m49s
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running             0          5m49s
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running             0          5m49s
odd-zebra-rasa-production-645c7b97c5-5k62p   1/1     Running             0          5m48s
odd-zebra-rasa-worker-7f798fc558-vf6fq       1/1     Running             0          5m48s
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     ContainerCreating   0          5m47s
odd-zebra-redis-7787bd4ddc-nnq85             0/1     ContainerCreating   0          5m47s
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS             RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running            0          8m49s
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running            1          8m49s
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running            0          8m49s
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running            0          8m49s
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running            0          8m49s
odd-zebra-rasa-production-645c7b97c5-5k62p   1/1     Running            1          8m48s
odd-zebra-rasa-worker-7f798fc558-vf6fq       1/1     Running            1          8m48s
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     CrashLoopBackOff   3          8m47s
odd-zebra-redis-7787bd4ddc-nnq85             1/1     Running            0          8m47s
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl describe odd-zebra-database-5ddff6c7bf-msb58
error: the server doesn't have a resource type "odd-zebra-database-5ddff6c7bf-msb58"
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl describe pod odd-zebra-database-5ddff6c7bf-msb58
Name:           odd-zebra-database-5ddff6c7bf-msb58
Namespace:      rasa-x
Priority:       0
Node:           clinario-worker3/192.168.192.57
Start Time:     Sat, 02 Nov 2019 15:35:55 -0400
Labels:         app.kubernetes.io/instance=odd-zebra
                app.kubernetes.io/managed-by=Tiller
                app.kubernetes.io/name=rasa-x
                app.kubernetes.io/version=0.22.1
                app.name=database-deployment.yaml
                helm.sh/chart=rasa-x-0.22.1
                pod-template-hash=5ddff6c7bf
Annotations:    <none>
Status:         Running
IP:             10.233.68.198
IPs:
  IP:           10.233.68.198
Controlled By:  ReplicaSet/odd-zebra-database-5ddff6c7bf
Containers:
  rasa-x:
    Container ID:   docker://1440a7379881dbafdfe85ed4b7d5e1f625e8e8861fc7f678e6945ad0d71140d7
    Image:          bitnami/postgresql:11.2.0
    Image ID:       docker-pullable://bitnami/postgresql@sha256:0031128f9451ab3243e115ecb7ade6ba9386b0df061ae2e704ad8e22aed681f8
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 02 Nov 2019 15:43:23 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sat, 02 Nov 2019 15:36:00 -0400
      Finished:     Sat, 02 Nov 2019 15:36:51 -0400
    Ready:          True
    Restart Count:  1
    Liveness:       exec [pg_isready -U admin -d rasa] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POSTGRESQL_USERNAME:  Optional: false
      POSTGRESQL_PASSWORD:  Optional: false
      POSTGRESQL_DATABASE:  Optional: false
    Mounts:
      /bitnami/postgresql from database-claim (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2msc7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  database-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  odd-zebra-database-claim
    ReadOnly:   false
  default-token-2msc7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2msc7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                       Message
  ----     ------     ----                   ----                       -------
  Normal   Scheduled                         default-scheduler          Successfully assigned rasa-x/odd-zebra-database-5ddff6c7bf-msb58 to clinario-worker3
  Warning  Unhealthy  8m47s (x3 over 9m7s)   kubelet, clinario-worker3  Liveness probe failed: /tmp:5432 - no response
  Normal   Killing    8m47s                  kubelet, clinario-worker3  Container rasa-x failed liveness probe, will be restarted
  Normal   Pulling    8m17s (x2 over 9m10s)  kubelet, clinario-worker3  Pulling image "bitnami/postgresql:11.2.0"
  Normal   Pulled     108s (x2 over 9m8s)    kubelet, clinario-worker3  Successfully pulled image "bitnami/postgresql:11.2.0"
  Normal   Created    106s (x2 over 9m8s)    kubelet, clinario-worker3  Created container rasa-x
  Normal   Started    104s (x2 over 9m8s)    kubelet, clinario-worker3  Started container rasa-x
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl describe pod odd-zebra-rasa-x-68bc94bc49-cshn2
Name:           odd-zebra-rasa-x-68bc94bc49-cshn2
Namespace:      rasa-x
Priority:       0
Node:           clinario-worker3/192.168.192.57
Start Time:     Sat, 02 Nov 2019 15:35:57 -0400
Labels:         app.kubernetes.io/instance=odd-zebra
                app.kubernetes.io/managed-by=Tiller
                app.kubernetes.io/name=rasa-x
                app.kubernetes.io/version=0.22.1
                app.name=rasa-x-deployment.yaml
                helm.sh/chart=rasa-x-0.22.1
                pod-template-hash=68bc94bc49
Annotations:    <none>
Status:         Running
IP:             10.233.68.111
IPs:
  IP:           10.233.68.111
Controlled By:  ReplicaSet/odd-zebra-rasa-x-68bc94bc49
Containers:
  rasa-x:
    Container ID:   docker://32cce638acac522a6c6a20447d1b47354dde56c714292f9634f2a0d98c147ff8
    Image:          rasa/rasa-x:0.22.1
    Image ID:       docker-pullable://rasa/rasa-x@sha256:a48538cb19d4e0e830dd14c82f767b004bcb3902aeb9d96532d2dccddec82c20
    Port:           5002/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 02 Nov 2019 15:45:02 -0400
      Finished:     Sat, 02 Nov 2019 15:45:04 -0400
    Ready:          False
    Restart Count:  4
    Liveness:       http-get http://:http/ delay=10s timeout=1s period=10s #success=1 #failure=10
    Environment Variables from:
      rasa-x-config    ConfigMap  Optional: false
      rabbitmq-config  ConfigMap  Optional: false
      rasa-config      ConfigMap  Optional: false
      database-config  ConfigMap  Optional: false
    Environment:
      RASA_MODEL_DIR:          /app/models
      RABBITMQ_QUEUE:          rasa_production_events
      RABBITMQ_PASSWORD:       Optional: false
      PASSWORD_SALT:           Optional: false
      RASA_X_USER_ANALYTICS:   0
      SANIC_RESPONSE_TIMEOUT:  3600
      LOCAL_MODE:              false
      JWT_SECRET:              Optional: false
      RASA_TOKEN:              Optional: false
      RASA_WORKER_TOKEN:       Optional: false
      RASA_X_TOKEN:            Optional: false
      DB_PASSWORD:             Optional: false
      RASA_WORKER_HOST:        http://rasa-worker:5005
    Mounts:
      /app/auth from rasa-x-claim (rw,path="auth")
      /app/credentials.yml from rasa-configuration (rw,path="credentials.yml")
      /app/endpoints.yml from rasa-configuration (rw,path="endpoints.yml")
      /app/environments.yml from environments (rw,path="environments.yml")
      /app/logs from rasa-x-claim (rw,path="logs")
      /app/models from rasa-x-claim (rw,path="models")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2msc7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  environments:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rasa-x-configuration-files
    Optional:  false
  rasa-configuration:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rasa-configuration-files
    Optional:  false
  rasa-x-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  odd-zebra-rasa-x-claim
    ReadOnly:   false
  default-token-2msc7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2msc7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                       Message
  ----     ------     ----                 ----                       -------
  Normal   Scheduled                       default-scheduler          Successfully assigned rasa-x/odd-zebra-rasa-x-68bc94bc49-cshn2 to clinario-worker3
  Normal   Pulled     92s (x4 over 3m20s)  kubelet, clinario-worker3  Successfully pulled image "rasa/rasa-x:0.22.1"
  Normal   Created    92s (x4 over 2m41s)  kubelet, clinario-worker3  Created container rasa-x
  Normal   Started    92s (x4 over 2m40s)  kubelet, clinario-worker3  Started container rasa-x
  Warning  BackOff    57s (x9 over 2m18s)  kubelet, clinario-worker3  Back-off restarting failed container
  Normal   Pulling    46s (x5 over 9m47s)  kubelet, clinario-worker3  Pulling image "rasa/rasa-x:0.22.1"
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl logs odd-zebra-database-5ddff6c7bf-msb58

Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at containers@bitnami.com

INFO  ==> ** Starting PostgreSQL setup **
INFO  ==> Validating settings in POSTGRESQL_* env vars..
INFO  ==> Initializing PostgreSQL database...
INFO  ==> postgresql.conf file not detected. Generating it...
INFO  ==> pg_hba.conf file not detected. Generating it...
INFO  ==> Deploying PostgreSQL with persisted data...
INFO  ==> Cleaning stale postmaster.pid file
INFO  ==> Configuring replication parameters
INFO  ==> Loading custom scripts...
INFO  ==> Enabling remote connections
INFO  ==> Stopping PostgreSQL...
INFO  ==> ** PostgreSQL setup finished! **

INFO  ==> ** Starting PostgreSQL **
2019-11-02 19:43:24.078 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-11-02 19:43:24.078 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2019-11-02 19:43:24.247 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-11-02 19:43:24.522 GMT [142] LOG:  database system was interrupted; last known up at 2019-11-02 19:36:36 GMT
2019-11-02 19:43:26.501 GMT [142] LOG:  database system was not properly shut down; automatic recovery in progress
2019-11-02 19:43:26.596 GMT [142] LOG:  redo starts at 0/15D7228
2019-11-02 19:43:26.596 GMT [142] LOG:  invalid record length at 0/15D72D0: wanted 24, got 0
2019-11-02 19:43:26.596 GMT [142] LOG:  redo done at 0/15D7260
2019-11-02 19:43:26.817 GMT [1] LOG:  database system is ready to accept connections
2019-11-02 19:43:27.461 GMT [149] FATAL:  password authentication failed for user "admin"
2019-11-02 19:43:27.461 GMT [149] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2019-11-02 19:43:31.498 GMT [155] FATAL:  password authentication failed for user "admin"
2019-11-02 19:43:31.498 GMT [155] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:43:31.498 GMT [155] LOG:  could not send data to client: Broken pipe
2019-11-02 19:43:41.529 GMT [161] FATAL:  password authentication failed for user "admin"
2019-11-02 19:43:41.529 GMT [161] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:43:41.530 GMT [161] LOG:  could not send data to client: Broken pipe
2019-11-02 19:43:46.118 GMT [162] FATAL:  password authentication failed for user "admin"
2019-11-02 19:43:46.118 GMT [162] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2019-11-02 19:43:51.491 GMT [168] FATAL:  password authentication failed for user "admin"
2019-11-02 19:43:51.491 GMT [168] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:43:51.491 GMT [168] LOG:  could not send data to client: Broken pipe
2019-11-02 19:44:01.514 GMT [175] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:01.514 GMT [175] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:44:01.514 GMT [175] LOG:  could not send data to client: Broken pipe
2019-11-02 19:44:11.513 GMT [182] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:11.513 GMT [182] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:44:11.514 GMT [182] LOG:  could not send data to client: Broken pipe
2019-11-02 19:44:17.544 GMT [183] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:17.544 GMT [183] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2019-11-02 19:44:21.494 GMT [190] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:21.494 GMT [190] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:44:21.494 GMT [190] LOG:  could not send data to client: Broken pipe
2019-11-02 19:44:31.489 GMT [196] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:31.489 GMT [196] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:44:31.489 GMT [196] LOG:  could not send data to client: Broken pipe
2019-11-02 19:44:41.488 GMT [202] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:41.488 GMT [202] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:44:41.488 GMT [202] LOG:  could not send data to client: Broken pipe
2019-11-02 19:44:51.506 GMT [209] FATAL:  password authentication failed for user "admin"
2019-11-02 19:44:51.506 GMT [209] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:44:51.506 GMT [209] LOG:  could not send data to client: Broken pipe
2019-11-02 19:45:01.498 GMT [216] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:01.498 GMT [216] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:45:01.498 GMT [216] LOG:  could not send data to client: Broken pipe
2019-11-02 19:45:04.546 GMT [217] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:04.546 GMT [217] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2019-11-02 19:45:11.496 GMT [224] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:11.496 GMT [224] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:45:11.496 GMT [224] LOG:  could not send data to client: Broken pipe
2019-11-02 19:45:21.506 GMT [232] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:21.506 GMT [232] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:45:21.506 GMT [232] LOG:  could not send data to client: Broken pipe
2019-11-02 19:45:31.501 GMT [238] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:31.501 GMT [238] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:45:31.501 GMT [238] LOG:  could not send data to client: Broken pipe
2019-11-02 19:45:41.528 GMT [244] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:41.528 GMT [244] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:45:41.529 GMT [244] LOG:  could not send data to client: Broken pipe
2019-11-02 19:45:51.506 GMT [250] FATAL:  password authentication failed for user "admin"
2019-11-02 19:45:51.506 GMT [250] DETAIL:  Role "admin" does not exist.
        Connection matched pg_hba.conf line 3: "local all all md5"
2019-11-02 19:45:51.506 GMT [250] LOG:  could not send data to client: Broken pipe
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl logs odd-zebra-rasa-x-68bc94bc49-cshn2
INFO:rasax.community.services.event_service:Start consuming pika host 'rabbit'.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2262, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 303, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 760, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 129, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 437, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 639, in __connect
    connection = pool._invoke_creator(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 453, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL:  password authentication failed for user "admin"

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/site-packages/rasax/community/server.py", line 88, in <module>
    main()
  File "/usr/local/lib/python3.6/site-packages/rasax/community/server.py", line 27, in main
    sql_migrations.run_migrations(session)
  File "/usr/local/lib/python3.6/site-packages/rasax/community/sql_migrations.py", line 25, in run_migrations
    _run_schema_migrations(session)
  File "/usr/local/lib/python3.6/site-packages/rasax/community/sql_migrations.py", line 50, in _run_schema_migrations
    command.upgrade(alembic_config, "head")
  File "/usr/local/lib/python3.6/site-packages/alembic/command.py", line 276, in upgrade
    script.run_env()
  File "/usr/local/lib/python3.6/site-packages/alembic/script/base.py", line 475, in run_env
    util.load_python_file(self.dir, "env.py")
  File "/usr/local/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 90, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python3.6/site-packages/alembic/util/compat.py", line 156, in load_module_py
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/usr/local/lib/python3.6/site-packages/rasax/community/database/schema_migrations/alembic/env.py", line 89, in <module>
    run_migrations_online()
  File "/usr/local/lib/python3.6/site-packages/rasax/community/database/schema_migrations/alembic/env.py", line 67, in run_migrations_online
    with connectable.connect() as connection:
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2193, in connect
    return self._connection_cls(self, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 103, in __init__
    else engine.raw_connection()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2293, in raw_connection
    self.pool.unique_connection, _connection
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2266, in _wrap_pool_connect
    e, dialect, self
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1536, in _handle_dbapi_exception_noconnection
    util.raise_from_cause(sqlalchemy_exception, exc_info)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 383, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 128, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2262, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 303, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 760, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 129, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 437, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 639, in __connect
    connection = pool._invoke_creator(self)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 453, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL:  password authentication failed for user "admin"
(Background on this error at: http://sqlalche.me/e/e3q8)
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-database-5ddff6c7bf-msb58 -- /bin/bash
I have no name!@odd-zebra-database-5ddff6c7bf-msb58:/$
I have no name!@odd-zebra-database-5ddff6c7bf-msb58:/$ export
declare -x APP_PORT="tcp://10.233.31.185:5055"
declare -x APP_PORT_5055_TCP="tcp://10.233.31.185:5055"
declare -x APP_PORT_5055_TCP_ADDR="10.233.31.185"
declare -x APP_PORT_5055_TCP_PORT="5055"
declare -x APP_PORT_5055_TCP_PROTO="tcp"
declare -x APP_PORT_80_TCP="tcp://10.233.31.185:80"
declare -x APP_PORT_80_TCP_ADDR="10.233.31.185"
declare -x APP_PORT_80_TCP_PORT="80"
declare -x APP_PORT_80_TCP_PROTO="tcp"
declare -x APP_SERVICE_HOST="10.233.31.185"
declare -x APP_SERVICE_PORT="5055"
declare -x APP_SERVICE_PORT_HTTP="5055"
declare -x APP_SERVICE_PORT_WORKAROUND="80"
declare -x BITNAMI_APP_NAME="postgresql"
declare -x BITNAMI_IMAGE_VERSION="11.2.0-debian-9-r74"
declare -x BITNAMI_PKG_CHMOD="-R g+rwX"
declare -x DB_PORT="tcp://10.233.59.236:5432"
declare -x DB_PORT_5432_TCP="tcp://10.233.59.236:5432"
declare -x DB_PORT_5432_TCP_ADDR="10.233.59.236"
declare -x DB_PORT_5432_TCP_PORT="5432"
declare -x DB_PORT_5432_TCP_PROTO="tcp"
declare -x DB_SERVICE_HOST="10.233.59.236"
declare -x DB_SERVICE_PORT="5432"
declare -x DUCKLING_PORT="tcp://10.233.36.132:8000"
declare -x DUCKLING_PORT_8000_TCP="tcp://10.233.36.132:8000"
declare -x DUCKLING_PORT_8000_TCP_ADDR="10.233.36.132"
declare -x DUCKLING_PORT_8000_TCP_PORT="8000"
declare -x DUCKLING_PORT_8000_TCP_PROTO="tcp"
declare -x DUCKLING_SERVICE_HOST="10.233.36.132"
declare -x DUCKLING_SERVICE_PORT="8000"
declare -x HOME="/"
declare -x HOSTNAME="odd-zebra-database-5ddff6c7bf-msb58"
declare -x IMAGE_OS="debian-9"
declare -x KUBERNETES_PORT="tcp://10.233.0.1:443"
declare -x KUBERNETES_PORT_443_TCP="tcp://10.233.0.1:443"
declare -x KUBERNETES_PORT_443_TCP_ADDR="10.233.0.1"
declare -x KUBERNETES_PORT_443_TCP_PORT="443"
declare -x KUBERNETES_PORT_443_TCP_PROTO="tcp"
declare -x KUBERNETES_SERVICE_HOST="10.233.0.1"
declare -x KUBERNETES_SERVICE_PORT="443"
declare -x KUBERNETES_SERVICE_PORT_HTTPS="443"
declare -x LANG="en_US.UTF-8"
declare -x LANGUAGE="en_US:en"
declare -x NAMI_PREFIX="/.nami"
declare -x NGINX_PORT="tcp://10.233.39.151:8000"
declare -x NGINX_PORT_8000_TCP="tcp://10.233.39.151:8000"
declare -x NGINX_PORT_8000_TCP_ADDR="10.233.39.151"
declare -x NGINX_PORT_8000_TCP_PORT="8000"
declare -x NGINX_PORT_8000_TCP_PROTO="tcp"
declare -x NGINX_PORT_8443_TCP="tcp://10.233.39.151:8443"
declare -x NGINX_PORT_8443_TCP_ADDR="10.233.39.151"
declare -x NGINX_PORT_8443_TCP_PORT="8443"
declare -x NGINX_PORT_8443_TCP_PROTO="tcp"
declare -x NGINX_SERVICE_HOST="10.233.39.151"
declare -x NGINX_SERVICE_PORT="8000"
declare -x NGINX_SERVICE_PORT_HTTP="8000"
declare -x NGINX_SERVICE_PORT_HTTPS="8443"
declare -x NSS_WRAPPER_LIB="/usr/lib/libnss_wrapper.so"
declare -x OLDPWD
declare -x OS_ARCH="amd64"
declare -x OS_FLAVOUR="debian-9"
declare -x OS_NAME="linux"
declare -x PATH="/opt/bitnami/postgresql/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x POSTGRESQL_DATABASE="rasa"
declare -x POSTGRESQL_PASSWORD="rJO7we2SKwCexD08C6PN20TytRq17qVL"
declare -x POSTGRESQL_USERNAME="admin"
declare -x PWD="/"
declare -x RABBIT_PORT="tcp://10.233.7.109:5672"
declare -x RABBIT_PORT_5672_TCP="tcp://10.233.7.109:5672"
declare -x RABBIT_PORT_5672_TCP_ADDR="10.233.7.109"
declare -x RABBIT_PORT_5672_TCP_PORT="5672"
declare -x RABBIT_PORT_5672_TCP_PROTO="tcp"
declare -x RABBIT_SERVICE_HOST="10.233.7.109"
declare -x RABBIT_SERVICE_PORT="5672"
declare -x RASA_PRODUCTION_PORT="tcp://10.233.21.218:5005"
declare -x RASA_PRODUCTION_PORT_5005_TCP="tcp://10.233.21.218:5005"
declare -x RASA_PRODUCTION_PORT_5005_TCP_ADDR="10.233.21.218"
declare -x RASA_PRODUCTION_PORT_5005_TCP_PORT="5005"
declare -x RASA_PRODUCTION_PORT_5005_TCP_PROTO="tcp"
declare -x RASA_PRODUCTION_SERVICE_HOST="10.233.21.218"
declare -x RASA_PRODUCTION_SERVICE_PORT="5005"
declare -x RASA_WORKER_PORT="tcp://10.233.19.65:5005"
declare -x RASA_WORKER_PORT_5005_TCP="tcp://10.233.19.65:5005"
declare -x RASA_WORKER_PORT_5005_TCP_ADDR="10.233.19.65"
declare -x RASA_WORKER_PORT_5005_TCP_PORT="5005"
declare -x RASA_WORKER_PORT_5005_TCP_PROTO="tcp"
declare -x RASA_WORKER_SERVICE_HOST="10.233.19.65"
declare -x RASA_WORKER_SERVICE_PORT="5005"
declare -x RASA_X_PORT="tcp://10.233.58.67:5002"
declare -x RASA_X_PORT_5002_TCP="tcp://10.233.58.67:5002"
declare -x RASA_X_PORT_5002_TCP_ADDR="10.233.58.67"
declare -x RASA_X_PORT_5002_TCP_PORT="5002"
declare -x RASA_X_PORT_5002_TCP_PROTO="tcp"
declare -x RASA_X_SERVICE_HOST="10.233.58.67"
declare -x RASA_X_SERVICE_PORT="5002"
declare -x RASA_X_SERVICE_PORT_HTTP="5002"
declare -x REDIS_PORT="tcp://10.233.26.230:6379"
declare -x REDIS_PORT_6379_TCP="tcp://10.233.26.230:6379"
declare -x REDIS_PORT_6379_TCP_ADDR="10.233.26.230"
declare -x REDIS_PORT_6379_TCP_PORT="6379"
declare -x REDIS_PORT_6379_TCP_PROTO="tcp"
declare -x REDIS_SERVICE_HOST="10.233.26.230"
declare -x REDIS_SERVICE_PORT="6379"
declare -x SHLVL="1"
declare -x TERM="xterm"
I have no name!@odd-zebra-database-5ddff6c7bf-msb58:/$ exit
exit
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-rasa-x-68bc94bc49-cshn2 -- /bin/bash
^C
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-rasa-x-68bc94bc49-cshn2 -- /bin/bash
error: unable to upgrade connection: container not found ("rasa-x")
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-rasa-x-68bc94bc49-cshn2 -- /bin/bash
error: unable to upgrade connection: container not found ("rasa-x")
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS             RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running            0          15m
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running            1          15m
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running            0          15m
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running            0          15m
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running            0          15m
odd-zebra-rasa-production-645c7b97c5-5k62p   1/1     Running            4          15m
odd-zebra-rasa-worker-7f798fc558-vf6fq       1/1     Running            4          15m
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     CrashLoopBackOff   6          15m
odd-zebra-redis-7787bd4ddc-nnq85             1/1     Running            0          15m
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-rasa-x-68bc94bc49-cshn2 -- /bin/bash
error: unable to upgrade connection: container not found ("rasa-x")
[... the same exec attempt fails with the same error several more times ...]
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS             RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running            0          19m
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running            1          19m
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running            0          19m
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running            0          19m
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running            0          19m
odd-zebra-rasa-production-645c7b97c5-5k62p   1/1     Running            6          19m
odd-zebra-rasa-worker-7f798fc558-vf6fq       1/1     Running            6          19m
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     CrashLoopBackOff   7          19m
odd-zebra-redis-7787bd4ddc-nnq85             1/1     Running            0          19m
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-rasa-x-68bc94bc49-cshn2 -- /bin/bash
error: unable to upgrade connection: container not found ("rasa-x")
[... several more identical attempts, all failing with the same error ...]
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS             RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running            0          25m
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running            1          25m
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running            0          25m
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running            0          25m
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running            0          25m
odd-zebra-rasa-production-645c7b97c5-5k62p   1/1     Running            7          25m
odd-zebra-rasa-worker-7f798fc558-vf6fq       1/1     Running            7          25m
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     CrashLoopBackOff   8          25m
odd-zebra-redis-7787bd4ddc-nnq85             1/1     Running            0          25m
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl exec -it odd-zebra-rasa-x-68bc94bc49-cshn2 -- /bin/bash
error: unable to upgrade connection: container not found ("rasa-x")
(base) nkane@XPS15-Kane:~/Software-Projects/Clinario/rasa-helm-chart> kubectl get pods
NAME                                         READY   STATUS             RESTARTS   AGE
odd-zebra-app-77f8bff987-72kbc               1/1     Running            0          31m
odd-zebra-database-5ddff6c7bf-msb58          1/1     Running            1          31m
odd-zebra-duckling-d59964cc9-7wbwr           1/1     Running            0          31m
odd-zebra-nginx-77f4cb5ff7-gsrsc             1/1     Running            0          31m
odd-zebra-rabbit-6896557b6-scgpx             1/1     Running            0          31m
odd-zebra-rasa-production-645c7b97c5-5k62p   0/1     CrashLoopBackOff   8          31m
odd-zebra-rasa-worker-7f798fc558-vf6fq       0/1     CrashLoopBackOff   8          31m
odd-zebra-rasa-x-68bc94bc49-cshn2            0/1     CrashLoopBackOff   9          31m
odd-zebra-redis-7787bd4ddc-nnq85             1/1     Running            0          31m

Script done on 2019-11-02 16:13:45-04:00 [COMMAND_EXIT_CODE="0"]
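Editor's note on the `declare -x` dump earlier in the session: those `*_SERVICE_HOST` / `*_SERVICE_PORT` variables are Kubernetes' built-in service-discovery environment variables, injected into the container for every Service that existed when it started. A minimal sketch of how a script inside a pod could use them instead of hard-coding cluster IPs; the values here are copied from this session's dump, and the `redis://` URL shape is just an illustrative assumption:

```shell
# In a real pod these are injected by the kubelet; set here from the dump above
# so the sketch runs standalone.
REDIS_SERVICE_HOST="10.233.26.230"
REDIS_SERVICE_PORT="6379"

# Build a connection URL from the injected service-discovery variables.
redis_url="redis://${REDIS_SERVICE_HOST}:${REDIS_SERVICE_PORT}"
echo "${redis_url}"   # prints redis://10.233.26.230:6379
```

Note that these variables are a snapshot from container start; DNS names such as `redis.rasa-x.svc` stay correct even if a Service is recreated.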
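Editor's note on the repeated failures above: `kubectl exec` needs a running container to attach to, and a pod in CrashLoopBackOff has none, which is exactly why every attempt ends in `unable to upgrade connection: container not found ("rasa-x")`. A sketch of the usual way to inspect the crash instead; the pod name is taken from this session, and the kubectl calls are commented out so the sketch runs without a cluster:

```shell
# Pod name from this session -- substitute your own.
POD="odd-zebra-rasa-x-68bc94bc49-cshn2"

# Uncomment against the live deployment:
# kubectl logs "${POD}" --previous    # stdout/stderr of the last crashed run
# kubectl describe pod "${POD}"       # Events section: exit code, backoff, image pulls
echo "debug target: ${POD}"
```

`kubectl logs --previous` is usually the fastest route to the actual crash reason (for Rasa X, commonly a database or RabbitMQ connection failure), and `kubectl describe pod` shows the restart backoff and container exit codes.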