Blank page after installing Rasa X 0.25.1 on Ubuntu 18.04 on AWS

Hi there,

The installation starts from a fresh Ubuntu 18.04 instance on AWS. Before installing Rasa X, I installed Rasa in a separate virtual environment “venv-rasa” as follows:

sudo apt update

sudo apt install python3-dev python3-pip python3-venv

python3 -m venv --system-site-packages ./venv-rasa

source ./venv-rasa/bin/activate

pip3 install -U pip

pip3 install rasa

pip3 install jieba mitie

Then I started installing Rasa X as follows:

python3 -m venv --system-site-packages ./venv-rasax

source ./venv-rasax/bin/activate

mkdir i-rasax

cd i-rasax

curl -sSL -o install.sh https://storage.googleapis.com/rasa-x-releases/0.25.1/install.sh

sudo -H bash ./install.sh

cd /etc/rasa

sudo docker-compose up -d

sudo python3 rasa_x_commands.py create --update admin me
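To confirm the result of these steps before opening the browser, a quick check like the one below could verify that every container actually came up (a sketch, not an official Rasa X tool; the `/etc/rasa` path and the two header lines of `docker-compose ps` output are assumptions from the default install):

```shell
# all_up reads `docker-compose ps`-style output on stdin and succeeds only
# when every container row (after the two header lines) reports "Up".
all_up() {
  awk 'NR > 2 && NF && $0 !~ /Up/ { bad = 1 } END { exit bad }'
}

# Intended usage on the server (assumes the install script's defaults):
#   cd /etc/rasa
#   sudo docker-compose ps | all_up && echo "all services running"
#   curl -s -o /dev/null -w '%{http_code}\n' http://localhost/  # expect 200
```

If any service shows `Exit` instead of `Up`, its logs are the place to look next.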

Everything seemed fine during installation. However, when I open the AWS instance’s IP in a browser afterwards, I get a blank page. I have actually tried many times with the steps in a different order or with minor changes; sometimes it works, sometimes it doesn’t. Following exactly the steps above, it does not work.

I need help fixing this problem.

By the way, ports 22, 80, and 443 are open, as mentioned in the Rasa X docs.

Thanks.

If you can post the output of docker-compose logs (primarily any errors or info you see from the nginx container), I can help you figure out what might be wrong.

Also, FYI: you technically don’t need a venv or any of that when using the install script, since it uses Docker anyway, but it doesn’t hurt anything.
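For example, something along these lines would pull the relevant lines out of the logs (a sketch; the service names `nginx` and `rasa-x` come from the default Rasa X docker-compose file, and the error pattern is just a starting point, not an official list):

```shell
# filter_errors keeps only log lines that look like problems.
filter_errors() {
  grep -iE 'error|emerg|crit|refused'
}

# Intended usage on the server:
#   cd /etc/rasa
#   sudo docker-compose logs --tail=500 nginx rasa-x | filter_errors
```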

db_1               |
db_1               | Welcome to the Bitnami postgresql container
db_1               | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
db_1               | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
db_1               | Send us your feedback at containers@bitnami.com
db_1               |
db_1               | INFO  ==> ** Starting PostgreSQL setup **
db_1               | INFO  ==> Validating settings in POSTGRESQL_* env vars..
db_1               | INFO  ==> Initializing PostgreSQL database...
db_1               | INFO  ==> postgresql.conf file not detected. Generating it...
db_1               | INFO  ==> pg_hba.conf file not detected. Generating it...
db_1               | INFO  ==> Deploying PostgreSQL with persisted data...
db_1               | INFO  ==> Configuring replication parameters
db_1               | INFO  ==> Configuring fsync
db_1               | INFO  ==> Loading custom scripts...
db_1               | INFO  ==> Enabling remote connections
db_1               | INFO  ==> Stopping PostgreSQL...
db_1               | INFO  ==> ** PostgreSQL setup finished! **
db_1               |
db_1               | INFO  ==> ** Starting PostgreSQL **
db_1               | 2020-02-10 17:13:30.138 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1               | 2020-02-10 17:13:30.138 GMT [1] LOG:  listening on IPv6 address "::", port 5432
db_1               | 2020-02-10 17:13:30.149 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
db_1               | 2020-02-10 17:13:30.211 GMT [150] LOG:  database system was shut down at 2020-02-08 17:16:03 GMT
db_1               | 2020-02-10 17:13:30.244 GMT [1] LOG:  database system is ready to accept connections
db_1               | 2020-02-10 17:14:06.964 GMT [165] WARNING:  there is no transaction in progress
db_1               | 2020-02-10 17:14:06.964 GMT [164] WARNING:  there is no transaction in progress
nginx_1            | SSL encryption is not used since no certificates were provided.
nginx_1            | curl: (7) Failed to connect to app port 80: Connection refused
nginx_1            | >> exec docker CMD
nginx_1            | nginx
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: using the "epoll" event method
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: nginx/1.14.2
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: OS: Linux 4.15.0-1058-aws
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker processes
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 13
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 14
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 15
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 16
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 17
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 18
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 19
nginx_1            | 2020/02/10 17:13:43 [notice] 1#0: start worker process 20
nginx_1            | 113.246.182.62 - - [10/Feb/2020:17:14:14 +0000] "GET / HTTP/1.1" 200 1578 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
nginx_1            | 2020/02/10 17:14:14 [info] 13#0: *1 client 113.246.182.62 closed keepalive connection
nginx_1            | 113.246.182.62 - - [10/Feb/2020:17:14:14 +0000] "GET /static/css/2.20a0f44c.chunk.css HTTP/1.1" 200 3033 "http://18.224.31.117/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
nginx_1            | 2020/02/10 17:14:14 [warn] 13#0: *4 an upstream response is buffered to a temporary file /opt/bitnami/nginx/proxy_temp/1/00/0000000001 while reading upstream, client: 113.246.182.62, server: , request: "GET /static/js/2.8eaefa71.chunk.js HTTP/1.1", upstream: "http://172.18.0.6:5002/static/js/2.8eaefa71.chunk.js", host: "18.224.31.117", referrer: "http://18.224.31.117/"
nginx_1            | 2020/02/10 17:14:14 [warn] 13#0: *5 an upstream response is buffered to a temporary file /opt/bitnami/nginx/proxy_temp/2/00/0000000002 while reading upstream, client: 113.246.182.62, server: , request: "GET /static/js/main.c89e414c.chunk.js HTTP/1.1", upstream: "http://172.18.0.6:5002/static/js/main.c89e414c.chunk.js", host: "18.224.31.117", referrer: "http://18.224.31.117/"
nginx_1            | 2020/02/10 17:14:16 [info] 13#0: *4 client timed out (110: Connection timed out) while sending to client, client: 113.246.182.62, server: , request: "GET /static/js/2.8eaefa71.chunk.js HTTP/1.1", upstream: "http://172.18.0.6:5002/static/js/2.8eaefa71.chunk.js", host: "18.224.31.117", referrer: "http://18.224.31.117/"
nginx_1            | 113.246.182.62 - - [10/Feb/2020:17:14:16 +0000] "GET /static/js/2.8eaefa71.chunk.js HTTP/1.1" 200 76461 "http://18.224.31.117/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
nginx_1            | 2020/02/10 17:14:16 [info] 13#0: *5 client timed out (110: Connection timed out) while sending to client, client: 113.246.182.62, server: , request: "GET /static/js/main.c89e414c.chunk.js HTTP/1.1", upstream: "http://172.18.0.6:5002/static/js/main.c89e414c.chunk.js", host: "18.224.31.117", referrer: "http://18.224.31.117/"
nginx_1            | 113.246.182.62 - - [10/Feb/2020:17:14:16 +0000] "GET /static/js/main.c89e414c.chunk.js HTTP/1.1" 200 76461 "http://18.224.31.117/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
nginx_1            | 2020/02/10 17:14:20 [info] 13#0: *3 client 113.246.182.62 closed keepalive connection
nginx_1            | 113.246.182.62 - - [10/Feb/2020:17:14:20 +0000] "GET /icons/favicon.ico HTTP/1.1" 200 6518 "http://18.224.31.117/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
nginx_1            | 2020/02/10 17:14:22 [info] 13#0: *9 client 113.246.182.62 closed keepalive connection
nginx_1            | 2020/02/10 17:14:43 [info] 13#0: *11 recv() failed (104: Connection reset by peer) while waiting for request, client: 92.38.224.242, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:14:44 [info] 13#0: *12 recv() failed (104: Connection reset by peer) while waiting for request, client: 89.222.164.163, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:14:50 [info] 13#0: *13 recv() failed (104: Connection reset by peer) while waiting for request, client: 85.93.145.217, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:14:50 [info] 13#0: *14 recv() failed (104: Connection reset by peer) while waiting for request, client: 176.121.62.234, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:14:56 [info] 13#0: *15 recv() failed (104: Connection reset by peer) while waiting for request, client: 91.228.97.85, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:15:04 [info] 13#0: *16 recv() failed (104: Connection reset by peer) while waiting for request, client: 91.226.223.79, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:15:38 [info] 13#0: *17 recv() failed (104: Connection reset by peer) while waiting for request, client: 217.197.255.242, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:16:17 [info] 13#0: *18 recv() failed (104: Connection reset by peer) while waiting for request, client: 188.191.0.128, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:16:33 [info] 13#0: *19 recv() failed (104: Connection reset by peer) while waiting for request, client: 188.35.20.144, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:16:52 [info] 13#0: *21 recv() failed (104: Connection reset by peer) while waiting for request, client: 81.200.82.125, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:17:05 [info] 13#0: *22 recv() failed (104: Connection reset by peer) while waiting for request, client: 91.210.179.5, server: 0.0.0.0:8080
nginx_1            | 188.170.160.102 - - [10/Feb/2020:17:17:06 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:17:29 [info] 13#0: *25 recv() failed (104: Connection reset by peer) while waiting for request, client: 81.95.135.10, server: 0.0.0.0:8080
nginx_1            | 217.15.130.200 - - [10/Feb/2020:17:17:33 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:17:37 [info] 13#0: *28 recv() failed (104: Connection reset by peer) while waiting for request, client: 37.18.17.18, server: 0.0.0.0:8080
nginx_1            | 188.170.160.100 - - [10/Feb/2020:17:17:44 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:17:48 [info] 13#0: *20 client timed out (110: Connection timed out) while waiting for request, client: 185.32.185.100, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:17:55 [info] 13#0: *31 recv() failed (104: Connection reset by peer) while waiting for request, client: 178.217.26.59, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:18:11 [info] 13#0: *32 recv() failed (104: Connection reset by peer) while waiting for request, client: 109.197.112.1, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:18:55 [info] 13#0: *33 recv() failed (104: Connection reset by peer) while waiting for request, client: 176.115.104.12, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:19:22 [info] 13#0: *34 recv() failed (104: Connection reset by peer) while waiting for request, client: 80.245.242.138, server: 0.0.0.0:8080
nginx_1            | 91.195.210.20 - - [10/Feb/2020:17:19:25 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:19:31 [info] 13#0: *37 recv() failed (104: Connection reset by peer) while waiting for request, client: 91.135.148.122, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:19:49 [info] 13#0: *38 recv() failed (104: Connection reset by peer) while waiting for request, client: 91.204.233.50, server: 0.0.0.0:8080
nginx_1            | 2020/02/10 17:20:31 [info] 13#0: *39 recv() failed (104: Connection reset by peer) while waiting for request, client: 89.222.132.66, server: 0.0.0.0:8080
nginx_1            | 178.22.52.10 - - [10/Feb/2020:17:20:34 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 185.83.240.30 - - [10/Feb/2020:17:20:46 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:21:23 [info] 13#0: *44 recv() failed (104: Connection reset by peer) while waiting for request, client: 178.217.26.58, server: 0.0.0.0:8080
nginx_1            | 178.22.52.11 - - [10/Feb/2020:17:21:55 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 5.61.239.42 - - [10/Feb/2020:17:21:59 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:22:00 [info] 13#0: *49 recv() failed (104: Connection reset by peer) while waiting for request, client: 89.222.181.100, server: 0.0.0.0:8080
nginx_1            | 188.112.252.18 - - [10/Feb/2020:17:22:10 +0000] "GET / HTTP/1.1" 200 3173 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
nginx_1            | 2020/02/10 17:22:53 [info] 13#0: *52 recv() failed (104: Connection reset by peer) while waiting for request, client: 185.16.28.125, server: 0.0.0.0:8080
duckling_1         | no port specified, defaulting to port 8000
duckling_1         | Listening on http://0.0.0.0:8000
app_1              | INFO:rasa_sdk.endpoint:Starting action endpoint server...
app_1              | DEBUG:sanic.root:CORS: Configuring CORS with resources: {'/*': {'origins': ['.*'], 'methods': 'DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT', 'allow_headers': ['.*'], 'expose_headers': None, 'supports_credentials': False, 'max_age': None, 'send_wildcard': False, 'automatic_options': True, 'vary_header': True, 'resources': {'/*': {'origins': '*'}}, 'intercept_exceptions': True, 'always_send': True}}
app_1              | DEBUG:rasa_sdk.utils:Using the default number of Sanic workers (1).
app_1              | DEBUG:sanic.root:
app_1              |
app_1              |                  Sanic
app_1              |          Build Fast. Run Fast.
app_1              |
app_1              |
app_1              | INFO:sanic.root:Goin' Fast @ http://0.0.0.0:5055
app_1              | INFO:sanic.root:Starting worker [1]
rabbit_1           |  17:13:27.53
rabbit_1           |  17:13:27.54 Welcome to the Bitnami rabbitmq container
rabbit_1           |  17:13:27.54 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq
rabbit_1           |  17:13:27.54 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
rabbit_1           |  17:13:27.54 Send us your feedback at containers@bitnami.com
rabbit_1           |  17:13:27.54
rabbit_1           |  17:13:27.54 INFO  ==> ** Starting RabbitMQ setup **
rabbit_1           |  17:13:27.55 INFO  ==> Validating settings in RABBITMQ_* env vars..
rabbit_1           |  17:13:27.57 INFO  ==> Initializing RabbitMQ...
rabbit_1           |  17:13:27.58 INFO  ==> Generating random cookie
rabbit_1           |  17:13:27.60 INFO  ==> Starting RabbitMQ in background...
rabbit_1           | Waiting for erlang distribution on node 'rabbit@localhost' while OS process '43' is running
rabbit_1           | Waiting for applications 'rabbit_and_plugins' to start on node 'rabbit@localhost'
rabbit_1           | Applications 'rabbit_and_plugins' are running on node 'rabbit@localhost'
rabbit_1           |  17:13:41.31 INFO  ==> Stopping RabbitMQ...
rabbit_1           |  17:13:45.59 INFO  ==> ** RabbitMQ setup finished! **
rabbit_1           |
rabbit_1           |  17:13:45.61 INFO  ==> ** Starting RabbitMQ **
rabbit_1           | 2020-02-10 17:13:50.300 [info] <0.8.0> Feature flags: list of feature flags found:
rabbit_1           | 2020-02-10 17:13:50.300 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
rabbit_1           | 2020-02-10 17:13:50.589 [info] <0.278.0>
rabbit_1           |  Starting RabbitMQ 3.7.17 on Erlang 22.0
rabbit_1           |  Copyright (C) 2007-2019 Pivotal Software, Inc.
rabbit_1           |  Licensed under the MPL.  See https://www.rabbitmq.com/
rabbit_1           |
rabbit_1           |   ##  ##
rabbit_1           |   ##  ##      RabbitMQ 3.7.17. Copyright (C) 2007-2019 Pivotal Software, Inc.
rabbit_1           |   ##########  Licensed under the MPL.  See https://www.rabbitmq.com/
rabbit_1           |   ######  ##
rabbit_1           |   ##########  Logs: <stdout>
rabbit_1           |
rabbit_1           |               Starting broker...
rabbit_1           | 2020-02-10 17:13:50.590 [info] <0.278.0>
rabbit_1           |  node           : rabbit@localhost
rabbit_1           |  home dir       : /opt/bitnami/rabbitmq/.rabbitmq
rabbit_1           |  config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.config
rabbit_1           |  cookie hash    : jXa39m9HNvpTEExTc89mGA==
rabbit_1           |  log(s)         : <stdout>
rabbit_1           |  database dir   : /bitnami/rabbitmq/mnesia/rabbit@localhost
rabbit_1           | 2020-02-10 17:13:50.607 [info] <0.278.0> Running boot step pre_boot defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.607 [info] <0.278.0> Running boot step rabbit_core_metrics defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.608 [info] <0.278.0> Running boot step rabbit_alarm defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.611 [info] <0.304.0> Memory high watermark set to 12866 MiB (13491947110 bytes) of 32167 MiB (33729867776 bytes) total
rabbit_1           | 2020-02-10 17:13:50.614 [info] <0.306.0> Enabling free disk space monitoring
rabbit_1           | 2020-02-10 17:13:50.614 [info] <0.306.0> Disk free limit set to 3372MB
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.278.0> Running boot step code_server_cache defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.278.0> Running boot step file_handle_cache defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.309.0> Limiting to approx 1048476 file handles (943626 sockets)
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.310.0> FHC read buffering:  OFF
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.310.0> FHC write buffering: ON
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.278.0> Running boot step worker_pool defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.281.0> Will use 8 processes for default worker pool
rabbit_1           | 2020-02-10 17:13:50.617 [info] <0.281.0> Starting worker pool 'worker_pool' with 8 processes in it
rabbit_1           | 2020-02-10 17:13:50.618 [info] <0.278.0> Running boot step database defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.619 [info] <0.278.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbit_1           | 2020-02-10 17:13:50.751 [info] <0.278.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbit_1           | 2020-02-10 17:13:50.751 [info] <0.278.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
rabbit_1           | 2020-02-10 17:13:50.751 [info] <0.278.0> Running boot step database_sync defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.751 [info] <0.278.0> Running boot step feature_flags defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step codec_correctness_check defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step external_infrastructure defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_registry defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_queue_location_random defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_event defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.752 [info] <0.278.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_priority_queue defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Priority queues enabled, real BQ is rabbit_variable_queue
rabbit_1           | 2020-02-10 17:13:50.753 [info] <0.278.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.754 [info] <0.278.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.754 [info] <0.278.0> Running boot step kernel_ready defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.754 [info] <0.278.0> Running boot step rabbit_sysmon_minder defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.754 [info] <0.278.0> Running boot step rabbit_epmd_monitor defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.756 [info] <0.278.0> Running boot step guid_generator defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.768 [info] <0.278.0> Running boot step rabbit_node_monitor defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.768 [info] <0.377.0> Starting rabbit_node_monitor
rabbit_1           | 2020-02-10 17:13:50.768 [info] <0.278.0> Running boot step delegate_sup defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.769 [info] <0.278.0> Running boot step rabbit_memory_monitor defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.769 [info] <0.278.0> Running boot step core_initialized defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.769 [info] <0.278.0> Running boot step upgrade_queues defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.792 [info] <0.278.0> Running boot step rabbit_connection_tracking defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.792 [info] <0.278.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.793 [info] <0.278.0> Running boot step rabbit_exchange_parameters defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.793 [info] <0.278.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.793 [info] <0.278.0> Running boot step rabbit_policies defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Running boot step rabbit_policy defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Running boot step rabbit_queue_location_validator defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Running boot step rabbit_vhost_limit defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Management plugin: using rates mode 'basic'
rabbit_1           | 2020-02-10 17:13:50.794 [info] <0.278.0> Running boot step recovery defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.796 [info] <0.411.0> Making sure data directory '/bitnami/rabbitmq/mnesia/rabbit@localhost/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
rabbit_1           | 2020-02-10 17:13:50.799 [info] <0.411.0> Starting message stores for vhost '/'
rabbit_1           | 2020-02-10 17:13:50.799 [info] <0.415.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
rabbit_1           | 2020-02-10 17:13:50.801 [info] <0.411.0> Started message store of type transient for vhost '/'
rabbit_1           | 2020-02-10 17:13:50.801 [info] <0.418.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
rabbit_1           | 2020-02-10 17:13:50.802 [info] <0.411.0> Started message store of type persistent for vhost '/'
rabbit_1           | 2020-02-10 17:13:50.821 [info] <0.278.0> Running boot step load_definitions defined by app rabbitmq_management
rabbit_1           | 2020-02-10 17:13:50.821 [info] <0.278.0> Running boot step empty_db_check defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.821 [info] <0.278.0> Running boot step rabbit_looking_glass defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.821 [info] <0.278.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.821 [info] <0.278.0> Running boot step background_gc defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.821 [info] <0.278.0> Running boot step connection_tracking defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.822 [info] <0.278.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@localhost
rabbit_1           | 2020-02-10 17:13:50.822 [info] <0.278.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@localhost
rabbit_1           | 2020-02-10 17:13:50.822 [info] <0.278.0> Running boot step routing_ready defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.822 [info] <0.278.0> Running boot step pre_flight defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.822 [info] <0.278.0> Running boot step notify_cluster defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.822 [info] <0.278.0> Running boot step networking defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.823 [warning] <0.445.0> Setting Ranch options together with socket options is deprecated. Please use the new map syntax that allows specifying socket options separately from other options.
rabbit_1           | 2020-02-10 17:13:50.824 [info] <0.459.0> started TCP listener on [::]:5672
rabbit_1           | 2020-02-10 17:13:50.824 [info] <0.278.0> Running boot step direct_client defined by app rabbit
rabbit_1           | 2020-02-10 17:13:50.853 [info] <0.509.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbit_1           | 2020-02-10 17:13:50.853 [info] <0.615.0> Statistics database started.
rabbit_1           | 2020-02-10 17:13:50.853 [info] <0.614.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbit_1           |  completed with 3 plugins.
rabbit_1           | 2020-02-10 17:13:50.950 [info] <0.8.0> Server startup complete; 3 plugins started.
rabbit_1           |  * rabbitmq_management
rabbit_1           |  * rabbitmq_management_agent
rabbit_1           |  * rabbitmq_web_dispatch
rabbit_1           | 2020-02-10 17:13:52.347 [info] <0.625.0> accepting AMQP connection <0.625.0> (172.18.0.6:47938 -> 172.18.0.3:5672)
rabbit_1           | 2020-02-10 17:13:52.349 [info] <0.625.0> connection <0.625.0> (172.18.0.6:47938 -> 172.18.0.3:5672): user 'user' authenticated and granted access to vhost '/'
rabbit_1           | 2020-02-10 17:14:06.385 [info] <0.636.0> accepting AMQP connection <0.636.0> (172.18.0.7:56602 -> 172.18.0.3:5672)
rabbit_1           | 2020-02-10 17:14:06.386 [info] <0.639.0> accepting AMQP connection <0.639.0> (172.18.0.8:47972 -> 172.18.0.3:5672)
rabbit_1           | 2020-02-10 17:14:06.388 [info] <0.636.0> connection <0.636.0> (172.18.0.7:56602 -> 172.18.0.3:5672): user 'user' authenticated and granted access to vhost '/'
rabbit_1           | 2020-02-10 17:14:06.389 [info] <0.639.0> connection <0.639.0> (172.18.0.8:47972 -> 172.18.0.3:5672): user 'user' authenticated and granted access to vhost '/'
redis_1            |  17:13:28.79
redis_1            |  17:13:28.80 Welcome to the Bitnami redis container
redis_1            |  17:13:28.80 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis
redis_1            |  17:13:28.80 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues
redis_1            |  17:13:28.80 Send us your feedback at containers@bitnami.com
redis_1            |  17:13:28.80
redis_1            |  17:13:28.80 INFO  ==> ** Starting Redis setup **
redis_1            |  17:13:28.84 INFO  ==> Initializing Redis...
redis_1            |  17:13:28.91 INFO  ==> ** Redis setup finished! **
redis_1            |
redis_1            |  17:13:28.92 INFO  ==> ** Starting Redis **
redis_1            | 1:C 10 Feb 2020 17:13:28.952 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1            | 1:C 10 Feb 2020 17:13:28.952 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1            | 1:C 10 Feb 2020 17:13:28.952 # Configuration loaded
redis_1            |                 _._
redis_1            |            _.-``__ ''-._
redis_1            |       _.-``    `.  `_.  ''-._           Redis 5.0.5 (00000000/0) 64 bit
redis_1            |   .-`` .-```.  ```\/    _.,_ ''-._
redis_1            |  (    '      ,       .-`  | `,    )     Running in standalone mode
redis_1            |  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
redis_1            |  |    `-._   `._    /     _.-'    |     PID: 1
redis_1            |   `-._    `-._  `-./  _.-'    _.-'
redis_1            |  |`-._`-._    `-.__.-'    _.-'_.-'|
redis_1            |  |    `-._`-._        _.-'_.-'    |           http://redis.io
redis_1            |   `-._    `-._`-.__.-'_.-'    _.-'
redis_1            |  |`-._`-._    `-.__.-'    _.-'_.-'|
redis_1            |  |    `-._`-._        _.-'_.-'    |
redis_1            |   `-._    `-._`-.__.-'_.-'    _.-'
redis_1            |       `-._    `-.__.-'    _.-'
redis_1            |           `-._        _.-'
redis_1            |               `-.__.-'
redis_1            |
redis_1            | 1:M 10 Feb 2020 17:13:28.955 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1            | 1:M 10 Feb 2020 17:13:28.955 # Server initialized
redis_1            | 1:M 10 Feb 2020 17:13:28.955 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1            | 1:M 10 Feb 2020 17:13:28.955 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1            | 1:M 10 Feb 2020 17:13:28.955 * Ready to accept connections
rasa-x_1           | INFO:rasax.community.services.event_service:Waiting until database migrations have been executed...
rasa-x_1           | INFO:alembic.runtime.migration:Context impl PostgresqlImpl.
rasa-x_1           | INFO:alembic.runtime.migration:Will assume transactional DDL.
rasa-x_1           | INFO:rasax.community.services.event_service:Check for database migrations completed.
rasa-x_1           | INFO:alembic.runtime.migration:Context impl PostgresqlImpl.
rasa-x_1           | INFO:alembic.runtime.migration:Will assume transactional DDL.
rasa-x_1           | INFO:rasax.community.services.event_consumers.pika_consumer:Start consuming queue 'rasa_production_events' on pika host 'rabbit'.
rasa-x_1           | Starting Rasa X server... 🚀
rasa-x_1           | ERROR:pika.adapters.blocking_connection:Unexpected connection close detected: ConnectionClosedByBroker: (320) "CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'"
rasa-x_1           | ERROR:rasax.community.services.event_service:Caught an exception while consuming events. Will retry in 5 s.
rasa-x_1           | Traceback (most recent call last):
rasa-x_1           |   File "/usr/local/lib/python3.6/site-packages/rasax/community/services/event_service.py", line 1080, in continuously_consume
rasa-x_1           |     consumer.consume()
rasa-x_1           |   File "/usr/local/lib/python3.6/site-packages/rasax/community/services/event_consumers/pika_consumer.py", line 102, in consume
rasa-x_1           |     self.channel.start_consuming()
rasa-x_1           |   File "/usr/local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 1857, in start_consuming
rasa-x_1           |     self._process_data_events(time_limit=None)
rasa-x_1           |   File "/usr/local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 2018, in _process_data_events
rasa-x_1           |     self.connection.process_data_events(time_limit=time_limit)
rasa-x_1           |   File "/usr/local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 826, in process_data_events
rasa-x_1           |     self._flush_output(common_terminator)
rasa-x_1           |   File "/usr/local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 523, in _flush_output
rasa-x_1           |     raise self._closed_result.value.error
rasa-x_1           | pika.exceptions.ConnectionClosedByBroker: (320, "CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'")
rasa-x_1           | ERROR:pika.adapters.utils.io_services_utils:Socket failed to connect: <socket.socket fd=18, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('172.18.0.6', 47936)>; error=111 (Connection refused)
rasa-x_1           | ERROR:pika.adapters.utils.connection_workflow:TCP Connection attempt failed: ConnectionRefusedError(111, 'Connection refused'); dest=(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('172.18.0.3', 5672))
rasa-x_1           | ERROR:pika.adapters.utils.connection_workflow:AMQPConnector - reporting failure: AMQPConnectorSocketConnectError: ConnectionRefusedError(111, 'Connection refused')
rasa-x_1           | INFO:rasax.community.services.event_consumers.pika_consumer:Start consuming queue 'rasa_production_events' on pika host 'rabbit'.
ubuntu@ip-172-31-5-88:~$
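The pika errors at the end of the log show Rasa X repeatedly failing to reach RabbitMQ (`rabbit:5672`) while the broker shuts down and restarts; the consumer retries every 5 s, so this is often transient. One way to check whether the broker port is accepting connections is a small stdlib probe like the sketch below (the host/port values are the defaults from the compose file; run it from a container on the `rasa_default` network, since the port is not published on the host):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False

# e.g. port_open("rabbit", 5672) from inside the rasa-x container
```

If this keeps returning False long after startup, the rabbit container itself is worth inspecting with `docker-compose logs rabbit`.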

Sorry, I don’t know how to make it more readable. @btotharye

sudo netstat -nlp

ubuntu@ip-172-31-5-88:~$ sudo netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      820/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1120/sshd
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      2885/sshd: ubuntu@p
tcp6       0      0 :::80                   :::*                    LISTEN      4986/docker-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      1120/sshd
tcp6       0      0 ::1:6010                :::*                    LISTEN      2885/sshd: ubuntu@p
tcp6       0      0 :::443                  :::*                    LISTEN      4974/docker-proxy
udp        0      0 127.0.0.53:53           0.0.0.0:*                           820/systemd-resolve
udp        0      0 172.31.5.88:68          0.0.0.0:*                           798/systemd-network
raw6       0      0 :::58                   :::*                    7           798/systemd-network
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ACC ]     SEQPACKET  LISTENING     18700    1/init               /run/udev/control
unix  2      [ ACC ]     STREAM     LISTENING     19978    2658/systemd         /run/user/1000/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     19982    2658/systemd         /run/user/1000/gnupg/S.gpg-agent.browser
unix  2      [ ACC ]     STREAM     LISTENING     19983    2658/systemd         /run/user/1000/gnupg/S.gpg-agent
unix  2      [ ACC ]     STREAM     LISTENING     19984    2658/systemd         /run/user/1000/gnupg/S.gpg-agent.extra
unix  2      [ ACC ]     STREAM     LISTENING     19985    2658/systemd         /run/user/1000/gnupg/S.dirmngr
unix  2      [ ACC ]     STREAM     LISTENING     19986    2658/systemd         /run/user/1000/gnupg/S.gpg-agent.ssh
unix  2      [ ACC ]     STREAM     LISTENING     19465    1035/irqbalance      @irqbalance1035.sock
unix  2      [ ACC ]     STREAM     LISTENING     1626     1/init               /var/lib/lxd/unix.socket
unix  2      [ ACC ]     STREAM     LISTENING     31826    3115/containerd-shi  @/containerd-shim/moby/a9ee3a62fccddfe1c3f572809cd373f5f15cf3395bd72e5f56d5415e1eb9a018/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1622     1/init               /run/snapd.socket
unix  2      [ ACC ]     STREAM     LISTENING     1624     1/init               /run/snapd-snap.socket
unix  2      [ ACC ]     STREAM     LISTENING     1629     1/init               /run/acpid.socket
unix  2      [ ACC ]     STREAM     LISTENING     1631     1/init               /run/uuidd/request
unix  2      [ ACC ]     STREAM     LISTENING     1633     1/init               /var/run/dbus/system_bus_socket
unix  2      [ ACC ]     STREAM     LISTENING     40560    4585/containerd-shi  @/containerd-shim/moby/e2196901a4310d4492665db36ec4f4857f0084978f2dfed624b9f26f6a3e2c2e/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1635     1/init               /var/run/docker.sock
unix  2      [ ACC ]     STREAM     LISTENING     30625    4997/containerd-shi  @/containerd-shim/moby/a7850a80947cacb874e8affdf98db5165028d526a885a2fed26357159286ebf2/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     37311    4359/containerd-shi  @/containerd-shim/moby/654ec13d9cb4d665b315eaac4926bfa016b7dc28ade959c3832b9dd464802c31/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     25535    3064/containerd-shi  @/containerd-shim/moby/315f1792b377139d0b267d092d3174ca705a1de9825c189be3643c8378565e11/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1628     1/init               @ISCSIADM_ABSTRACT_NAMESPACE
unix  2      [ ACC ]     STREAM     LISTENING     19593    1125/containerd      /run/containerd/containerd.sock
unix  2      [ ACC ]     STREAM     LISTENING     34981    4157/containerd-shi  @/containerd-shim/moby/20c5fc80e83fc76be01455804e6359d7a3d094c71a3b0d6892e4854457ee9df2/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     19613    1156/dockerd         /var/run/docker/metrics.sock
unix  2      [ ACC ]     STREAM     LISTENING     19686    1156/dockerd         /var/run/docker/libnetwork/e31b6cf099d0abb08fbbc64a3201c8df98efa80c07c4163b029d5d359b8f9084.sock
unix  2      [ ACC ]     STREAM     LISTENING     18691    1/init               /run/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     25566    3125/containerd-shi  @/containerd-shim/moby/2e51d2ed70e33d4fda0a3eddbd7de6bbad5b9c806060e75c1e97e1f1f0bd2443/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     18698    1/init               /run/lvm/lvmetad.socket
unix  2      [ ACC ]     STREAM     LISTENING     20445    3030/containerd-shi  @/containerd-shim/moby/73fd6bf4af62d121a9eb1a5af7ccd9a5b1d4b5074a0041f8565d23ad26e627c0/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     18703    1/init               /run/lvm/lvmpolld.socket
unix  2      [ ACC ]     STREAM     LISTENING     18713    1/init               /run/systemd/journal/stdout
unix  2      [ ACC ]     STREAM     LISTENING     35880    4345/containerd-shi  @/containerd-shim/moby/fe03e019c8a383b87e6d39844cfed0e777abc09da7ff4e0fe3d90074358b7384/shim.sock@
ubuntu@ip-172-31-5-88:~$

I fixed the formatting for you; you can just wrap your text in ``` and it will clean up the formatting for script output, code, etc. Let me review your logs and get back to you.

Thanks

@yiouyou On the first set of logs, is that where the logs end, or does the pika consumer ever connect?

Yes, that’s where it ends. I’ve posted the entire output.

Is it possible that the ports for the AWS host are not open?

From inside the host, can you try a curl against Rasa and Rasa X:

curl 'http://localhost:5005/'
curl 'http://localhost:5005/status'
curl 'http://localhost:5002/api/version'
curl 'http://localhost:5002/api/health'
curl 'http://localhost:80/api/version'
curl 'http://localhost:80/api/health'

Can you also run docker-compose ps to confirm the ports and status.
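Those health checks can also be scripted. The sketch below uses only Python's standard library, with the endpoint paths from the checklist above; note that in the default docker-compose setup 5005 and 5002 are container-internal, so from the host only the `:80` endpoints (proxied by nginx) are expected to respond:

```python
from urllib.request import urlopen
from urllib.error import URLError

# Endpoints from the checklist above.
ENDPOINTS = [
    "http://localhost:5005/status",
    "http://localhost:5002/api/health",
    "http://localhost:80/api/version",
    "http://localhost:80/api/health",
]

def check(url, timeout=5):
    """Return the HTTP status code, or the error text if unreachable."""
    try:
        return urlopen(url, timeout=timeout).status
    except (URLError, OSError) as exc:
        return str(exc)
```

Looping over `ENDPOINTS` and printing `check(url)` for each reproduces the curl checks in one go.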

Here is the output of running those curl commands:

ubuntu@ip-172-31-9-98:/etc/rasa$ sudo docker-compose up -d
Creating network "rasa_default" with the default driver
Creating rasa_redis_1    ... done
Creating rasa_duckling_1 ... done
Creating rasa_db_1       ... done
Creating rasa_rabbit_1   ... done
Creating rasa_rasa-x_1   ... done
Creating rasa_rasa-worker_1     ... done
Creating rasa_rasa-production_1 ... done
Creating rasa_app_1             ... done
Creating rasa_nginx_1           ... done
ubuntu@ip-172-31-9-98:/etc/rasa$ curl 'http://localhost:5005/'
curl: (7) Failed to connect to localhost port 5005: Connection refused
ubuntu@ip-172-31-9-98:/etc/rasa$ curl 'http://localhost:5005/status'
curl: (7) Failed to connect to localhost port 5005: Connection refused
ubuntu@ip-172-31-9-98:/etc/rasa$ curl 'http://localhost:5002/api/version'
curl: (7) Failed to connect to localhost port 5002: Connection refused
ubuntu@ip-172-31-9-98:/etc/rasa$ curl 'http://localhost:5002/api/health'
curl: (7) Failed to connect to localhost port 5002: Connection refused
ubuntu@ip-172-31-9-98:/etc/rasa$ curl 'http://localhost:80/api/version'
{"rasa":{"production":"1.7.0","worker":"1.7.0"},"rasa-x":"0.25.1","keys":[{"alg":"RS256","key":"-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA40t7bpLgwzsL4sT4LMqr\nPmgYqLgmgXxv\/2B2uOaQbbkxEtMI+ZnxaMkqmkEiOdQePVaa\/NdoRsYYUQKOeDg7\nKXX815JC\/UxZSxFkxte4gVSQJPnzxK3R8FXfy+Z1l9UDeyY36nV\/lDDZtt2kykEN\nGHjW6zbtm7I5YdHihYgc8zZy+esUhVHC16P\/\/tf1DKo1p0oUIGnHiAtxnUyyPcDN\nNhJjTAe1kPMO02yR4W3MmQ6xhuYroPXsD7irsIbD+Kxp2aTixuwSBqh9ewcB7bkS\n38iWGoQBKfWpRvnu7GRHyrA+QSAXGwPFumXy9LKzC1G+w7HL\/yCZH\/q\/dv\/wbiCR\nOQIDAQAB\n-----END PUBLIC KEY-----\n"}]}
ubuntu@ip-172-31-9-98:/etc/rasa$ curl 'http://localhost:80/api/health'
{"production":{"version":"1.7.0","minimum_compatible_version":"1.6.0a2","status":200},"worker":{"version":"1.7.0","minimum_compatible_version":"1.6.0a2","status
ubuntu@ip-172-31-9-98:/etc/rasa$
ubuntu@ip-172-31-9-98:/etc/rasa$ sudo docker-compose ps
         Name                       Command               State                      Ports
-------------------------------------------------------------------------------------------------------------
rasa_app_1               ./entrypoint.sh run python ...   Up      5055/tcp
rasa_db_1                /entrypoint.sh /run.sh           Up      5432/tcp
rasa_duckling_1          duckling-example-exe --no- ...   Up      8000/tcp
rasa_nginx_1             /opt/bitnami/entrypoint.sh ...   Up      0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp
rasa_rabbit_1            /entrypoint.sh /run.sh           Up      15672/tcp, 25672/tcp, 4369/tcp, 5672/tcp
rasa_rasa-production_1   rasa x --no-prompt --produ ...   Up      5005/tcp
rasa_rasa-worker_1       rasa x --no-prompt --produ ...   Up      5005/tcp
rasa_rasa-x_1            sh -c user_id=$(id -u) &&  ...   Up      5002/tcp
rasa_redis_1             /entrypoint.sh /run.sh           Up      6379/tcp

That all looks good. From the host, I assume curl http://localhost:80 returns some HTML?

From your own desktop or another system, does curl http://<server>:80 fail?

Hi @stephens

From the local OS of the EC2 instance, the output of

curl 'http://localhost:80'

is:

<!doctype html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="shortcut icon" href="/icons/favicon.ico"><link rel="apple-touch-icon" href="/icons/apple-touch-icon-57x57.png"/><link rel="apple-touch-icon" sizes="72x72" href="/icons/apple-touch-icon-72x72.png"/><link rel="apple-touch-icon" sizes="114x114" href="/icons/apple-touch-icon-114x114.png"/><link rel="apple-touch-icon" sizes="144x144" href="/icons/apple-touch-icon-144x144.png"/><script>try{window.SERVER_DATA=__SERVER_DATA__}catch(_){}</script><title>Rasa X</title><link href="/static/css/2.20a0f44c.chunk.css" rel="stylesheet"></head><body><div id="root"></div><script>!function(c){function e(e){for(var r,t,n=e[0],o=e[1],a=e[2],u=0,i=[];u<n.length;u++)t=n[u],Object.prototype.hasOwnProperty.call(l,t)&&l[t]&&i.push(l[t][0]),l[t]=0;for(r in o)Object.prototype.hasOwnProperty.call(o,r)&&(c[r]=o[r]);for(d&&d(e);i.length;)i.shift()();return s.push.apply(s,a||[]),f()}function f(){for(var e,r=0;r<s.length;r++){for(var t=s[r],n=!0,o=1;o<t.length;o++){var a=t[o];0!==l[a]&&(n=!1)}n&&(s.splice(r--,1),e=p(p.s=t[0]))}return e}var t={},l={1:0},s=[];function p(e){if(t[e])return t[e].exports;var r=t[e]={i:e,l:!1,exports:{}};return c[e].call(r.exports,r,r.exports,p),r.l=!0,r.exports}p.e=function(o){var e=[],t=l[o];if(0!==t)if(t)e.push(t[2]);else{var r=new Promise(function(e,r){t=l[o]=[e,r]});e.push(t[2]=r);var n,a=document.createElement("script");a.charset="utf-8",a.timeout=120,p.nc&&a.setAttribute("nonce",p.nc),a.src=p.p+"static/js/"+({}[o]||o)+"."+{3:"701f2b8b",4:"8c1fb4e7",5:"31b5504a",6:"36fc3f8a"}[o]+".chunk.js";var u=new Error;n=function(e){a.onerror=a.onload=null,clearTimeout(i);var r=l[o];if(0!==r){if(r){var t=e&&("load"===e.type?"missing":e.type),n=e&&e.target&&e.target.src;u.message="Loading chunk "+o+" failed.\n("+t+": "+n+")",u.name="ChunkLoadError",u.type=t,u.request=n,r[1](u)}l[o]=void 0}};var 
i=setTimeout(function(){n({type:"timeout",target:a})},12e4);a.onerror=a.onload=n,document.head.appendChild(a)}return Promise.all(e)},p.m=c,p.c=t,p.d=function(e,r,t){p.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},p.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},p.t=function(r,e){if(1&e&&(r=p(r)),8&e)return r;if(4&e&&"object"==typeof r&&r&&r.__esModule)return r;var t=Object.create(null);if(p.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:r}),2&e&&"string"!=typeof r)for(var n in r)p.d(t,n,function(e){return r[e]}.bind(null,n));return t},p.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return p.d(r,"a",r),r},p.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},p.p="/",p.oe=function(e){throw console.error(e),e};var r=this["webpackJsonprasa-interface"]=this["webpackJsonprasa-interface"]||[],n=r.push.bind(r);r.push=e,r=r.slice();for(var o=0;o<r.length;o++)e(r[o]);var d=n;f()}([])</script><script src="/static/js/2.363fbd7a.chunk.js"></script><script src="/static/js/main.b9e1cc2d.chunk.js"></script></body></html>
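Since the index HTML loads but the page renders blank, a likely culprit is that the static JS chunks it references fail to load in the browser (the webpack bootstrap above even constructs a ChunkLoadError for exactly that case). A quick way to list the script URLs so each can be tested with curl is to parse them out of the page; this is a sketch using Python's stdlib HTML parser:

```python
from html.parser import HTMLParser

class ScriptSrcParser(HTMLParser):
    """Collect the src attribute of every <script> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def script_srcs(html):
    """Return the list of external script URLs referenced in html."""
    parser = ScriptSrcParser()
    parser.feed(html)
    return parser.srcs
```

Feeding in the index HTML (e.g. the output of curl http://localhost:80) and then curling each "/static/js/..." path should show whether any chunk returns an error instead of 200.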

And yes, it seems to work with curl and for other people, but not for me. I’ve had to give up on EC2 for Rasa X and use another cloud service (not AWS) to install it; there, the login page shows up for me.

It’s so weird.

Thanks

Regarding the BLANK page: I tried different browsers on different laptops around me; the errors they show are in this post:

Can you right-click in the browser and select “Inspect Element”, then check the Console and Network tabs for any errors? Also, could you try hosting in the AWS Hong Kong region?

Tried AWS Hong Kong, still no luck. It’s NOT blank any more when I use another cloud service.

So is it working now? Which cloud service?

Hi, Aliyun works for me.

For anyone who might need an example of setting up the Rasa X Docker containers, please check the following blog:

How to set up the docker cluster of Rasa X and Grakn

Hope it’s useful.

By the way, the previous link was wrong; I’ve just corrected it.