Agent.handle_text not working properly with Rasa 1.0

Hello everyone, I use an agent in my bot, but with the new version of Rasa (Rasa 1.0) I am getting a weird message. My basic implementation looks like this:

from rasa.core.agent import Agent
from rasa.core.interpreter import RasaNLUInterpreter
from rasa.utils.endpoints import EndpointConfig

interpreter = RasaNLUInterpreter('./models/nlu/current')
action_endpoint = EndpointConfig(url="http://localhost:5055/webhook")
agent = Agent.load('./models/dialogue', interpreter=interpreter, action_endpoint=action_endpoint)

and the agent handles the messages using the handle_text function as in:

reply = agent.handle_text(userMsg)

Right now, when I use this method, I get the following output:

<coroutine object Agent.handle_text at 0x133236af0>

This is not an error message; it is just the output of the function, and it is not what it used to be in the older version of Rasa that I've used. I'd really appreciate any help.

We switched to async programming and asyncio (see the Python 3.7 asyncio documentation, "asyncio - Asynchronous I/O").

So there are two options:

  • if your function is defined with async def, you should call it with await (this is the preferred way):
reply = await agent.handle_text(userMsg)
  • if your function is synchronous and you cannot convert it to an async def, you can create an event loop and run the function in there:
import asyncio

# ...
loop = asyncio.get_event_loop()
reply = loop.run_until_complete(agent.handle_text(userMsg))
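The difference between the two options can be sketched with a stub coroutine standing in for agent.handle_text (the stub is an assumption here, only so the snippet runs without a trained model):

```python
import asyncio

# Stub coroutine standing in for agent.handle_text -- hypothetical,
# so this example runs without a loaded Rasa model.
async def handle_text(msg):
    return [{"text": f"echo: {msg}"}]

# Calling a coroutine function WITHOUT awaiting it only creates a
# coroutine object -- this is the "<coroutine object ...>" output above.
coro = handle_text("hi")
print(coro)                # <coroutine object handle_text at 0x...>
coro.close()               # discard the never-awaited coroutine

# Option 2: drive the coroutine from synchronous code via an event loop.
loop = asyncio.new_event_loop()
reply = loop.run_until_complete(handle_text("hi"))
loop.close()
print(reply[0]["text"])    # echo: hi
```

So the original code was not broken per se; it was just never awaiting the coroutine, which is why the raw coroutine object was printed instead of the bot's reply.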

But when I tried it, the response I'm getting is just None:

import asyncio
from rasa.core.agent import Agent
from rasa.core.interpreter import RasaNLUInterpreter

agent = Agent.load("models/current")

async def process(msg):    
    output = await agent.handle_text(msg)
    print(output)
    return output

asyncio.run(process("hi"))

#(or)

loop = asyncio.get_event_loop()
reply = loop.run_until_complete(agent.handle_text("hi"))
print(reply)

Please help.

Also, how do I specify a custom action endpoint when using Agent.load(modelpath)?

Currently the restaurant bot code doesn't work with custom actions.

Also, can you please explain how the conversation is tracked across multiple predict calls, since the model seems to be loaded fresh each time?
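On conversation tracking: if the agent is loaded once and reused, Rasa's tracker store keeps a separate conversation tracker per sender_id, and (as far as I can tell) handle_text in Rasa 1.x accepts a sender_id keyword for this. A sketch with a stub agent (the StubAgent class below is hypothetical, standing in for a loaded Agent, so the example runs anywhere):

```python
import asyncio

# Hypothetical stand-in for a loaded Rasa Agent. The point it illustrates:
# one agent instance can serve many conversations, because state is keyed
# by sender_id rather than reset on every call.
class StubAgent:
    def __init__(self):
        self.trackers = {}          # sender_id -> messages seen so far

    async def handle_text(self, text, sender_id="default"):
        history = self.trackers.setdefault(sender_id, [])
        history.append(text)
        return [{"text": f"{sender_id} has sent {len(history)} message(s)"}]

async def main():
    agent = StubAgent()             # in real code: Agent.load(model_path, ...)
    await agent.handle_text("hi", sender_id="alice")
    await agent.handle_text("hello", sender_id="bob")
    # alice's second message: her tracker is separate from bob's.
    return await agent.handle_text("how are you?", sender_id="alice")

reply = asyncio.run(main())
print(reply[0]["text"])             # alice has sent 2 message(s)
```

So the model should be loaded once at startup and the same agent reused, with sender_id distinguishing conversations.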

Here is what I tried:

import asyncio
from typing import Text
from rasa.core.agent import Agent

async def parse(text: Text):
    response = await agent.handle_text(text)
    return response

if __name__ == "__main__":
    agent = Agent.load(model_path)

    loop = asyncio.get_event_loop()
    response = loop.run_until_complete(parse('Hi'))

For the actions endpoints here is what I use:

action_endpoint = EndpointConfig(url="http://localhost:5055/webhook")

agent = Agent.load(model_path, action_endpoint=action_endpoint)

OK, so this was a bit tricky. I believe it can be solved using the input channels provided by Rasa, but I am not sure.

As for what I am doing, I am using webhook with a global agent.

I am getting None from this call as well. Can you please help with this? In Rasa X the bot works, but when I try to invoke it programmatically through the agent, it returns None.

Do you mean using a custom connector?

I am trying to write a custom connector to implement my own custom channel.

Can you please elaborate on that?

Here is something that I think might help, this should probably answer some of your questions.

import os
import asyncio
from typing import Text
from flask import Flask, request
from flask_cors import CORS
from rasa.core.agent import Agent
from rasa.utils.endpoints import EndpointConfig

app = Flask(__name__)
app.secret_key = ''
CORS(app)

action_endpoint = EndpointConfig(url="http://localhost:5055/webhook")
agent = Agent.load(str(modelPath), action_endpoint=action_endpoint)

@app.route('/yourRoute', methods=['POST'])
def webhook():
    userMsg = request.form['Msg']

    response = None
    if agent.is_ready():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        response = loop.run_until_complete(parse(userMsg))

    if response is not None:
        if type(response) is list:
            return response[0].get('text')
        else:
            return response
    else:
        return 'The response was not valid'


async def parse(text: Text):
    response = await agent.handle_text(text)
    return response

if __name__ == '__main__':
    port = int(os.getenv('PORT', xxxx))
    print("Starting app on port %d" % port)
    app.run(debug=False, port=port, host='127.0.0.1')
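As a side note, instead of creating a new event loop on every request, one pattern is to run a single loop in a background thread for the life of the process and submit coroutines to it with run_coroutine_threadsafe. A sketch (the handle_text stub is an assumption standing in for agent.handle_text, so the example runs without a model):

```python
import asyncio
import threading

# Stub coroutine standing in for agent.handle_text (hypothetical).
async def handle_text(msg):
    return [{"text": f"echo: {msg}"}]

# One event loop for the whole process, driven by a daemon thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

def handle_sync(msg):
    # Safe to call from any thread (e.g. a Flask request handler):
    # schedules the coroutine on the background loop and blocks for it.
    future = asyncio.run_coroutine_threadsafe(handle_text(msg), loop)
    return future.result(timeout=10)

reply = handle_sync("hi")
print(reply[0]["text"])    # echo: hi
```

This avoids the per-request loop setup/teardown and keeps all async work on one loop, which is easier to reason about in a threaded WSGI server.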

Thanks for the sample code. For me, when I run the code, agent.is_ready() never becomes true, because I see an empty self.policy_ensemble in the code below. It seems I have to pass a policy when creating the agent instance.

def is_ready(self):
    """Check if all necessary components are instantiated to use agent."""
    return (
        self.tracker_store is not None
        and self.policy_ensemble is not None
        and self.interpreter is not None
    )

But my question is: why did it work for you? And why do we need to pass a policy when doing inference?

I tried doing it this way as well, but it still failed because self.is_ready() is false.

agent = Agent(policies=[KerasPolicy()])
agent = agent.load(str("./models/"))

Can you help, please?

Hi @kothiyayogesh, I'm glad my sample code helped. As for the if agent.is_ready():, I thought that using it might help, but I'm not entirely sure. My logic behind it is that I don't want the agent to parse the message if it is not ready, but this has never actually happened before. I assume that you can safely remove it and it will still work normally.

I use the policies when training Rasa, so I'm assuming that my policies exist in the trained model that I am loading into the agent.

I also passed a policy while training the model.

This is what part of my core model metadata.json looks like.

  "python": "3.6.6",
  "max_histories": [
    7,
    null,
    5
  ],
  "ensemble_name": "rasa.core.policies.ensemble.SimplePolicyEnsemble",
  "policy_names": [
    "rasa.core.policies.keras_policy.KerasPolicy",
    "rasa.core.policies.fallback.FallbackPolicy",
    "rasa.core.policies.memoization.MemoizationPolicy"
  ],
  "trained_at": "20190606-203530",
  "rasa": "1.0.3",
  "tensorflow": "1.13.1",
  "sklearn": "0.20.3"
}

That means the agent should load the policy from my model, right?

For me, self.is_ready() is always false because self.policy_ensemble is None in the is_ready definition.

It's working now. The problem was that my model folder was zipped, so Rasa was not able to extract the policy details. After unzipping it manually, it worked.
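For anyone hitting the same thing: Rasa 1.x saves a trained model as a .tar.gz archive, and Agent.load needs the extracted contents to find the policy ensemble. What "unzipping manually" amounts to can be sketched with the standard library (the synthetic archive below is only there so the example is self-contained; all paths are hypothetical):

```python
import os
import tarfile
import tempfile

# Build a tiny synthetic "model archive" so the example runs anywhere.
workdir = tempfile.mkdtemp()
model_dir = os.path.join(workdir, "model")
os.makedirs(os.path.join(model_dir, "core"))
with open(os.path.join(model_dir, "core", "metadata.json"), "w") as f:
    f.write("{}")

archive = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(model_dir, arcname=".")

# The manual step: extract the archive, then point Agent.load at the
# extracted directory instead of the .tar.gz file.
unpacked = os.path.join(workdir, "unpacked")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(unpacked)     # then: Agent.load(unpacked)

print(sorted(os.listdir(os.path.join(unpacked, "core"))))
```

If is_ready() keeps returning false, checking whether the path you pass to Agent.load contains the extracted core/ directory (with its metadata) is a quick sanity check.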

Thanks


Glad it worked for you

Both methods are not working for me: either the response is None, or I get a TypeError saying the coroutine object is not callable. Also, what about rasa core train and load_data?

I am facing a similar error to yours: the response is None (NoneType).

import logging
import asyncio
from rasa.core import config
from rasa.core import utils
from rasa.utils import io
from rasa.core.agent import Agent
from rasa.core.interpreter import RasaNLUInterpreter
from rasa.utils.endpoints import EndpointConfig

logfile = 'dialogue_model.log'

def run_core(core_model_path, nlu_model_path, action_endpoint_url):
    logging.basicConfig(filename=logfile, level=logging.DEBUG)
    nlu_interpreter = RasaNLUInterpreter(nlu_model_path)
    action_endpoint = EndpointConfig(url=action_endpoint_url)
    agent = Agent.load(core_model_path, interpreter=nlu_interpreter, action_endpoint=action_endpoint)

    print("Your bot is ready to talk!!")
    while True:
        a = input('You: ')
        if a == 'stop':
            break
        loop = asyncio.get_event_loop()
        responses = loop.run_until_complete(agent.handle_text(a))

        print(responses)  # returns None
        for response in responses:
            print(f'Bot : {response["text"]}')
    return agent

if __name__ == '__main__':
    actionConfig = io.read_yaml_file('endpoints.yml')
    print(actionConfig)
    run_core('models/core', 'models/nlu', actionConfig["action_endpoint"]["url"])

Please help me with how you solved it. I tried the asyncio method too, and I still have the same issue.

Would you mind providing the code with all imports too? I am getting errors for "EndpointConfig" and other things.