Custom Action Server - SSL: CERTIFICATE_VERIFY_FAILED

For three weeks I have been facing an issue with our self-hosted custom action server. The action code works fine on localhost, and on our server both GET /actions and GET /health respond with status 200, so the server itself seems to be running. But every call to /webhook from the Rasa chatbot fails, no matter whether it comes from a local machine or from the hosted Rasa Open Source server.

Do you have any idea how to solve this? The custom action server is deployed with the Rasa Helm chart, version 1.0.3.

Failure:

2022-09-30 14:12:02 ERROR    rasa.core.actions.action  - Failed to run custom action 'action_tell_time'. Couldn't connect to the server at 'https://alfaca.se-labor.de/webhook'. Is the server running? Error: Cannot connect to host alfaca.se-labor.de:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
2022-09-30 14:12:02 ERROR    rasa.core.processor  - Encountered an exception while running action 'action_tell_time'.Bot will continue, but the actions events are lost. Please check the logs of your action server for more information.
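The key part of the traceback is `unable to get local issuer certificate`: the TLS client (the Rasa server) receives the leaf certificate but cannot build a chain to a trusted root, which typically happens when the ingress serves only the leaf without its intermediate certificate(s). As a quick offline check (a minimal sketch; the helper name `count_pem_certificates` is my own), you can count how many certificates the `tls.crt` in the ingress TLS secret actually contains:

```python
import re

# A full chain should contain the leaf plus at least one intermediate,
# i.e. two or more PEM blocks. A single block is the classic cause of
# "unable to get local issuer certificate" on the client side.
PEM_BLOCK = re.compile(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    re.DOTALL,
)

def count_pem_certificates(pem_text: str) -> int:
    """Return the number of certificates in a PEM bundle."""
    return len(PEM_BLOCK.findall(pem_text))

# Example with dummy bundles (placeholder contents, not real certificates):
leaf_only = "-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n"
full_chain = leaf_only * 2

print(count_pem_certificates(leaf_only))   # 1 -> leaf only, likely the problem
print(count_pem_certificates(full_chain))  # 2 -> leaf + intermediate present
```

If the count is 1, replace the secret's `tls.crt` with a bundle containing the leaf followed by the intermediate certificate(s). You can dump the current bundle with something like `kubectl get secret XXXXXXX-tls -o jsonpath='{.data.tls\.crt}' | base64 -d`.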

endpoints.yml:

# This file contains the different endpoints your bot can use.

# Server where the models are pulled from.
# https://rasa.com/docs/rasa/model-storage#fetching-models-from-a-server

#models:
#  url: http://my-server.com/models/default_core@latest
#  wait_time_between_pulls:  10   # [optional](default: 100)

# Server which runs your custom actions.
# https://rasa.com/docs/rasa/custom-actions

action_endpoint:
  url: "https://alfaca.se-labor.de/webhook"
#  url: "http://localhost:5055/webhook"

# Tracker store which is used to store the conversations.
# By default the conversations are stored in memory.
# https://rasa.com/docs/rasa/tracker-stores

#tracker_store:
#    type: redis
#    url: <host of the redis instance, e.g. localhost>
#    port: <port of your redis instance, usually 6379>
#    db: <number of your database within redis, e.g. 0>
#    password: <password used for authentication>
#    use_ssl: <whether or not the communication is encrypted, default false>

#tracker_store:
#    type: mongod
#    url: <url to your mongo instance, e.g. mongodb://localhost:27017>
#    db: <name of the db within your mongo instance, e.g. rasa>
#    username: <username used for authentication>
#    password: <password used for authentication>

# Event broker which all conversation events should be streamed to.
# https://rasa.com/docs/rasa/event-brokers

#event_broker:
#  url: localhost
#  username: username
#  password: password
#  queue: queue
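As a workaround (and to rule the certificate in or out): since the bot and the action server run in the same cluster, the endpoint can bypass the ingress, and therefore TLS, entirely by using the in-cluster service. The service name and namespace below are assumptions; check `kubectl get svc` for the name your Helm release actually created:

```yaml
action_endpoint:
  # Hypothetical in-cluster service name; the chart exposes the service on port 80.
  url: "http://<release-name>-rasa-action-server.<namespace>.svc.cluster.local/webhook"
```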

Dockerfile for the custom action server image:

# Extend the official Rasa SDK image
FROM rasa/rasa-sdk:3.2.2

# Use subdirectory as working directory
WORKDIR /app

# Copy the custom requirements for the actions code
COPY actions/requirements.txt ./

# Change back to root user to install dependencies
USER root

# Install the extra requirements for the actions code
RUN pip install -r requirements.txt

# Copy actions folder to working directory
COPY ./actions /app/actions

# By best practices, don't run the code with root user
RUN chgrp -R 0 /app && chmod -R g=u /app && chmod o+wr /app
USER 1001
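Note that the verification error is raised by the Rasa server (the TLS client), not by the action server, so any trust-store fix belongs in the Rasa server image, not in this Dockerfile. If the certificate is issued by a public CA (e.g. Let's Encrypt), no trust-store change should be needed and a missing intermediate in the served chain is the more likely cause. Only if the certificate comes from a private/internal CA would a sketch like the following apply; the base tag and file name are placeholders:

```dockerfile
# Extend the Rasa *server* image (the client that fails verification)
FROM rasa/rasa:<your-rasa-version>

USER root
# my-internal-ca.crt is a placeholder for the issuing CA certificate
COPY my-internal-ca.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
USER 1001
```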

values.yaml for the Helm installation of the custom action server:

affinity: {}
applicationSettings:
  port: 5055
  scheme: http
args: []
autoscaling:
  enabled: false
  maxReplicas: 20
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
command: []
deploymentAnnotations: {}
deploymentLabels: {}
extraEnv: []
fullnameOverride: ''
image:
  name: CUSTOM-IMAGE-NAME
  pullPolicy: IfNotPresent
  pullSecrets: []
  repository: ''
  tag: IMAGE-TAG
ingress:
  annotations: {}
  enabled: true
  extraPaths: {}
  hostname: alfaca.se-labor.de
  labels: {}
  path: /
  pathType: ImplementationSpecific
  tls:
    - hosts:
        - alfaca.se-labor.de
      secretName: XXXXXXX-tls
initContainers: []
livenessProbe:
  failureThreshold: 6
  httpGet:
    path: /health
    port: http
    scheme: HTTP
  initialDelaySeconds: 15
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 5
nameOverride: ''
nodeSelector: {}
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
readinessProbe:
  failureThreshold: 6
  httpGet:
    path: /health
    port: http
    scheme: HTTP
  initialDelaySeconds: 15
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 5
registry: docker.io
replicaCount: 1
resources: {}
securityContext: {}
service:
  annotations: {}
  externalTrafficPolicy: Cluster
  loadBalancerIP: null
  nodePort: null
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: false
  name: ''
strategy:
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
  type: RollingUpdate
tolerations: []
volumeMounts: []
volumes: []
global:
  cattle:
    systemProjectId: ''
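One more thing worth checking in the ingress configuration: if the `XXXXXXX-tls` secret was created by hand and its `tls.crt` holds only the leaf certificate, clients will fail with exactly this error even though browsers (which often fetch missing intermediates themselves) show the site as fine. If cert-manager is installed in the cluster, letting it issue the certificate via an ingress annotation guarantees a full chain in the secret; the issuer name below is a placeholder:

```yaml
ingress:
  annotations:
    # Placeholder issuer name; requires cert-manager in the cluster.
    cert-manager.io/cluster-issuer: letsencrypt-prod
```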

@sjproost Did you find any solution to this issue?