Run Rasa X on HTTPS


I enabled HTTPS for Rasa Core with the command below, and Rasa Core is now running on HTTPS.

rasa run --ssl-certificate <.crt> --ssl-keyfile <.key> --enable-api -m models/


But when I try to do the same with Rasa X, it still shows it running on http://localhost:5002.

rasa x --ssl-certificate <.crt> --ssl-keyfile <.key> --rasa-x-port 5010 -p 5011

How can I run Rasa X on HTTPS?

Thanks and Regards


@Anand_Menon: Have you worked on this earlier?

Hi @ricwo,

How can I run Rasa X on HTTPS without Docker?

Thanks, Harsh

For people new to this discussion, please check out this GitHub issue: Run Rasa X on HTTPS · Issue #4703 · RasaHQ/rasa

In summary:

  • Running rasa x on HTTPS is currently only supported in Rasa X EE
  • Check out step 9 of the Deploy to a Server guide for how to put everything behind HTTPS - you just need to provide some certs
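If you just want to try an HTTPS setup before obtaining real certificates, a self-signed pair can be generated with openssl; this is only a sketch for local testing, and the filenames and CN below are placeholders, not anything from the Rasa docs:

```shell
# Generate a self-signed certificate and key for local testing only.
# CN=localhost is an assumption -- use your server's hostname.
# Browsers will warn about self-signed certificates.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=localhost"
```

The resulting server.crt and server.key can then be passed to --ssl-certificate and --ssl-keyfile, or referenced from an nginx config.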

Thanks for the reply.

But I don't want to run Rasa X on Docker.

Just as Rasa Core runs on HTTPS when I start it with the cert files, I want to run Rasa X on HTTPS the same way.

I used below command:

rasa x --ssl-certificate <.crt> --ssl-keyfile <.key>

As far as I know, this is an Enterprise feature.

@kapoorh Why not put nginx in front of Rasa X to provide HTTPS?

Yes, I did it with nginx, but while training it throws a 500 Internal Server Error on api/projects/default/models/jobs.

Sounds like you're missing a proxy rule for the API endpoint. Here's a fragment from my nginx.conf.

server {
  listen 443 ssl;
  # ssl_certificate / ssl_certificate_key directives omitted in this fragment

  # upstream ports are assumptions: 5005 = Rasa server default, 5002 = Rasa X default
  location /jokebot/webhooks {
    proxy_pass http://localhost:5005;
  }

  location / {
    proxy_pass http://localhost:5002;
  }
}

For docker-compose deployments of Rasa X, you can override the default nginx configuration by adding - ./nginx.conf:/opt/bitnami/nginx/conf/nginx.conf to the nginx service's volumes in docker-compose.yml:


  nginx:
    restart: always
    image: "rasa/nginx:${RASA_X_VERSION}"
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - ./certs:/opt/bitnami/certs
      - ./nginx.conf:/opt/bitnami/nginx/conf/nginx.conf
      - ./terms:/opt/bitnami/nginx/conf/bitnami/terms
    depends_on:
      - rasa-x
      - rasa-production
      - app


worker_processes  auto;
worker_rlimit_nofile 10000;
error_log /dev/stdout info;
pid "/opt/bitnami/nginx/tmp/";

events {
    worker_connections 4096;
}

http {

    server {

        listen 8080 default_server;

        server_name _;

        return 301 https://$host$request_uri;
    }


    include       /opt/bitnami/nginx/conf/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout;

    client_body_temp_path  "/opt/bitnami/nginx/tmp/client_body" 1 2;
    proxy_temp_path        "/opt/bitnami/nginx/tmp/proxy" 1 2;
    fastcgi_temp_path      "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
    scgi_temp_path         "/opt/bitnami/nginx/tmp/scgi" 1 2;
    uwsgi_temp_path        "/opt/bitnami/nginx/tmp/uwsgi" 1 2;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip on;
    gzip_vary on;
    gzip_min_length 1400;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/javascript application/json application/x-javascript application/xml;

    include /opt/bitnami/nginx/conf/conf.d/*.nginx;

    # allow the server to close connection on non responding client, this will free up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 10;

    # if client stop responding, free up memory -- default 60
    send_timeout 2;

    # server will close connection after this time -- default 75
    proxy_read_timeout 3600;

    # number of requests client can make over keep-alive -- for testing environment
    keepalive_requests 100000;

    # whether the connection with a proxied server should be closed
    # when a client closes the connection without waiting for a response
    # default is off
    proxy_ignore_client_abort on;
}