Hi all, I am looking for a way to deploy Rasa Open Source on its own and link it to the action server via Kubernetes. Does anyone have documentation, or can anyone help me? I am using a Linux virtual machine.
@stephens anything that could help ?
Yes, the Rasa Open Source Helm chart.
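A minimal sketch of installing it, assuming the public Rasa Helm repository and a release name of your choosing (`my-rasa` below is a placeholder; check the Rasa docs for the current chart values):

```shell
# Add the Rasa Helm repository and install the rasa chart
helm repo add rasa https://helm.rasa.com
helm repo update
# my-rasa is an arbitrary release name; values.yml holds your overrides
helm install my-rasa rasa/rasa -f values.yml
```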
Hi there. I deploy my Rasa bot on k8s. Here are my Dockerfile and YAML file.
Dockerfile

```dockerfile
FROM leftsky/rasa
WORKDIR /app
COPY . .
RUN rasa train
```
rasa.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rasabot-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rasabot-deployment
  template:
    metadata:
      labels:
        app: rasabot-deployment
    spec:
      containers:
        - name: rasabot-deployment
          image: "the docker image you have built"
          env:
            - name: APP_NAME
              value: RASABOT
            - name: APP_VERSION
              value: "0.0.1"
          ports:
            - containerPort: 9100
```
After this you can run `kubectl apply -f rasa.yaml` to deploy your Rasa bot.
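To reach the bot from inside (or outside) the cluster, you would also want a Service in front of the Deployment. A minimal sketch, assuming the pod labels above (`rasabot-service` is a name I made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rasabot-service
spec:
  selector:
    app: rasabot-deployment   # must match the Deployment's pod labels
  ports:
    - port: 9100
      targetPort: 9100        # the containerPort exposed above
```

Change `type` to `NodePort` or `LoadBalancer` if you need access from outside the cluster.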
By the way, the image used above, leftsky/rasa, was built by me. You can build your own; here are the files it needs.
Dockerfile

```dockerfile
FROM python:3.7.9-slim
RUN apt update
RUN apt install supervisor -y
RUN python -m pip install --upgrade pip
RUN pip install rasa
RUN pip install jieba
RUN pip install paramiko
COPY start.sh /start.sh
RUN chmod +x /start.sh
COPY supervisord.conf /etc/supervisord.conf
EXPOSE 9100
WORKDIR /app
CMD ["/start.sh"]
```
start.sh

```bash
#!/bin/bash
# ----------------------------------------------------------------------
# Start supervisord
# ----------------------------------------------------------------------
exec /usr/bin/supervisord -n -c /etc/supervisord.conf
```
supervisord.conf

```ini
[unix_http_server]
file=/dev/shm/supervisor.sock

[supervisord]
logfile=/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=warn
pidfile=/tmp/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
user=root

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///dev/shm/supervisor.sock

[program:rasa-actions]
command=rasa run actions
autostart=true
autorestart=true
priority=10
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=2048
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=2048

[program:rasa-api]
command=rasa run --enable-api --port 9100 --cors *
autostart=true
autorestart=true
priority=10
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=2048
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=2048
```
Hope this helps.
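To sanity-check the image locally before pushing it anywhere, something like this should work (`myuser/rasa` is a placeholder tag):

```shell
# Build the base image from the directory containing the Dockerfile,
# start.sh and supervisord.conf, then run it and hit the HTTP API
docker build -t myuser/rasa .
docker run --rm -d -p 9100:9100 myuser/rasa
curl http://localhost:9100/   # the Rasa server's root endpoint should answer
```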
Hello, thanks. Do you have a video resource?
Hello, welcome!
I will take a look and let you know. Thanks again.
It assumes that you have a Docker Hub account, right?
Yes, leftsky/rasa is on Docker Hub.
OK. I don't have a Docker Hub account; I think I will set up a registry locally instead.
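For reference, a local registry can be run as a container. A rough sketch, assuming your cluster nodes can reach the registry host on port 5000 (you may also need to allow it as an insecure registry in your container runtime config):

```shell
# Run a local Docker registry
docker run -d -p 5000:5000 --name registry registry:2

# Tag and push your bot image to it
docker build -t localhost:5000/rasabot .
docker push localhost:5000/rasabot

# Then set image: localhost:5000/rasabot in rasa.yaml
```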
For the Rasa models: if mine has been trained locally, how can I link it with the Helm chart?
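One common approach is to serve the trained `model.tar.gz` over HTTP and point the chart at that URL. A hedged sketch, assuming the chart exposes an `applicationSettings.initialModel` value (check your chart's own `values.yaml`; `<your-host>` is a placeholder):

```yaml
# values.yml override for the Rasa Helm chart (key name is an assumption;
# verify it exists in your chart version's values.yaml)
applicationSettings:
  # URL the initial model is downloaded from at startup; you could serve
  # your local model.tar.gz with e.g. `python -m http.server 8000`
  initialModel: "http://<your-host>:8000/model.tar.gz"
```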
Hi @stephens, it seems my GitHub integration (or download-file step) for training an initial model with the Rasa Open Source Helm deployment is not working. I have tried all the recommendations, but the files are not being downloaded. What can I do?