How to add voice integration to a custom Rasa chatbot?

(Nishet Vyas) #1

Hello everyone, how can I add a voice platform to a custom Rasa chatbot? Is there any way to add voice integration to the chatbot? If anyone has done this, please share how, or send me any resources. Thanks in advance.


(Mohan Kumar G) #2

@Nishet check out GitHub - scalableminds/chatroom: React-based Chatroom Component for Rasa Stack. They have implemented a voice recognition feature.


(Nishet Vyas) #3

Thank you @mohan, but I can’t find any voice features there. In that GitHub project we type our query into a textbox, but I want to speak my query instead, like with Google Assistant.


(Mohan Kumar G) #4

@Nishet if you have implemented the above, you might have seen that… here is the index.html from the git repo.



(Nishet Vyas) #5

Okay, thank you so much @mohan, but sorry to say I’m not using this UI. I have my own UI, and now I want to integrate a voice button into the chat. What can I do for that? Sorry for my bad English.
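For anyone landing here with the same question, here is a minimal sketch of one common approach for a custom web UI: capture speech with the browser’s Web Speech API and forward the transcript to Rasa’s REST channel. The `/webhooks/rest/webhook` endpoint and its `{sender, message}` payload are standard Rasa, but the button wiring, sender id, and function names below are illustrative assumptions, not something from this thread.

```javascript
// Sketch: a microphone button for a custom chat UI.
// The transcript from the Web Speech API is sent to Rasa's REST channel.
// Assumes Rasa is running with the `rest` input channel enabled.
const RASA_URL = "http://localhost:5005/webhooks/rest/webhook";

// Build the JSON body Rasa's REST channel expects.
function buildRasaPayload(senderId, text) {
  return { sender: senderId, message: text };
}

// POST the user's utterance to Rasa; resolves to an array of bot
// messages like [{recipient_id: "...", text: "..."}, ...].
async function sendToRasa(senderId, text) {
  const response = await fetch(RASA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRasaPayload(senderId, text)),
  });
  return response.json();
}

// Wire a button to speech recognition. Chrome exposes the API as
// webkitSpeechRecognition; other browsers may not support it at all.
function attachMicButton(button, onBotReply) {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  if (!Recognition) {
    console.warn("Speech recognition is not supported in this browser.");
    return;
  }
  const recognizer = new Recognition();
  recognizer.lang = "en-US";
  recognizer.interimResults = false;

  recognizer.onresult = async (event) => {
    const transcript = event.results[0][0].transcript;
    const replies = await sendToRasa("voice-user", transcript);
    onBotReply(replies);
  };

  button.addEventListener("click", () => recognizer.start());
}
```

This gives speech-to-text input only; for spoken replies you would additionally pass the bot’s text responses to the browser’s `speechSynthesis` API or an external TTS service.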


(Mohan Kumar G) #6

@Nishet I don’t have much knowledge of the UI side… do some research and try to replicate what they have done in their web page. Hopefully somebody from the community will reply.


(Nishet Vyas) #7

Yep, no problem. I will search, and if I find something I’ll let you know. Thanks.


(Syntithenai) #8

Hey Nishet, check out GitHub - syntithenai/hermod: voice services stack from audio hardware through hotword, ASR, NLU, AI routing and TTS bound by messaging protocol over MQTT.

It’s a voice stack using Mozilla DeepSpeech, with NLU and core routing provided by Rasa, and a web client. We’re looking at JOVO to bring Google and Alexa into the dialog protocol.



(Nishet Vyas) #9

@syntithenai thanks for sharing, but can you please help me a bit more? I checked that link but couldn’t find anything about how to integrate it with Rasa. If you have done that, can you please share the steps, or any link, for microphone integration with Rasa? Thanks in advance.


(Syntithenai) #10

Hi Nishet, the quickstart guide on the hermod GitHub page includes a Docker image to get started as easily as possible. The stack requires many microservices and external machine learning services (DeepSpeech and Rasa), and installing and configuring these services is a bit of a mission. Docker is great that way.

Get the quickstart going, then build and mount in your own Rasa NLU and Core models, and the voice stack will issue intents from your model.

Action integration is a work in progress. You can write actions and map them to intents, or use (Rasa Core) actions implemented in Node or Python. Python offers the best integration with Rasa features like forms. There are action server implementations for Node and the browser that listen for messages and call functions based on configuration.
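The intent-to-handler mapping described above can be sketched generically in Node. To be clear, this is not hermod’s actual API; the map, handler names, and fallback below are purely illustrative of the pattern.

```javascript
// Hypothetical sketch of an intent-to-action dispatch table, the pattern
// a Node/browser action server would implement: look up a handler by
// intent name and call it, falling back when no handler is registered.
const actions = {
  greet: () => "Hello!",
  unknown_fallback: () => "Sorry, I did not catch that.",
};

// Dispatch a recognized intent to its handler.
function handleIntent(intentName) {
  const action = actions[intentName];
  return action ? action() : actions.unknown_fallback();
}
```

In a real setup the handler would typically receive the NLU entities/slots as well and reply over the stack’s messaging protocol (MQTT in hermod’s case) rather than returning a string.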


(Nishet Vyas) #11

Okay, thanks @syntithenai for your response. I will try it, and after that I’ll tell you what happens. Thanks again.