Principles for Building Ethical Conversational Assistants [Feedback Thread]

Hi Rasa @community

We believe conversational assistants have the potential to make technology more accessible than ever before. Our mission is to empower makers to build AI assistants that work for everyone, so that everyone can benefit from powerful AI technology, not just those with access to big tech.

While conversational assistants offer great promise, they also present challenges. As builders of this technology, we all share the responsibility to make sure that conversational assistants are designed and deployed ethically. We’ve outlined a set of Principles for Building Ethical Conversational Assistants to state Rasa’s position on the responsible use of conversational assistants.

We would love to hear the community’s feedback on these principles and start a discussion on building ethical conversational assistants.

When you create a conversational assistant, you are by extension responsible for its impact on the people it talks to. You should consider how users might perceive the assistant’s statements and how the conversation might affect their lives. At Rasa, we believe it’s important to remain mindful of those potential impacts and, together as a community, build upon a set of principles to help guide us in creating conversational assistants responsibly.

It is in the best interest of all conversational assistant creators that the public perceives these assistants as helpful and friendly. It is also in the best interest of all members of society (including creators) that conversational assistants are not used for harassment or manipulation. Aside from being unethical, we believe that such use cases would create a lasting reluctance among users to engage with conversational assistants.

The following four key points should help you use this technology wisely. Please note, however, that these guidelines are only a first step, and you should use your own judgement as well.

Don’t mislead: A conversational assistant should be accurate

Even though a conversational assistant only exists in the digital world, it can still inflict harm on users. For example, conversational assistants often serve as information sources or decision guides. If the information that the assistant provides is inaccurate or misleading, users may end up making poor (or potentially dangerous) decisions based on their interaction with your assistant. So before you prepare your assistant for production, make sure that all its responses, and all information sources it may access via custom actions, are accurate, well-researched, and do not mislead the user in any way.

Be respectful: A conversational assistant should not encourage or normalize harmful behaviour from users

Although users are free to say anything to a conversational assistant, the assistant itself only follows pre-defined stories. Within those stories, it should never try to provoke the user into engaging in harmful behaviour. And if the user chooses to engage in derogatory behaviour, the assistant should politely refuse to participate.
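In Rasa, one way to implement such a polite refusal is a rule that maps abusive input to a fixed response. A minimal sketch, assuming you have defined and trained a hypothetical `insult` intent (the response name `utter_decline_to_engage` is also illustrative, not a built-in):

```yaml
# rules.yml — assuming an `insult` intent exists in your NLU training data
rules:
- rule: Politely decline to engage with derogatory messages
  steps:
  - intent: insult
  - action: utter_decline_to_engage

# domain.yml — the matching response
responses:
  utter_decline_to_engage:
  - text: "I'd rather keep our conversation respectful. Is there something I can help you with?"
```

Because rules take precedence over learned story behaviour, this guarantees the assistant never improvises a reply to abusive input.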

Identify as a bot: A conversational assistant should always identify itself as one

When asked questions such as “Are you a bot?” or “Are you a human?”, a conversational assistant should always inform the user that it is a software program, not a human. This does not mean that conversational assistants can’t be human-like, but a good assistant helps users accomplish their goals without pretending to be a human. In contrast, impostor bots (algorithms that pose as humans) are used as tools for social media manipulation, which breeds widespread mistrust.
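Rasa’s default starter project (created by `rasa init`) already ships with this pattern: a `bot_challenge` intent triggers the `utter_iamabot` response. A minimal sketch of how it fits together:

```yaml
# rules.yml — ensure the assistant always identifies itself when challenged
rules:
- rule: Say `I am a bot` anytime the user challenges
  steps:
  - intent: bot_challenge
  - action: utter_iamabot

# domain.yml
responses:
  utter_iamabot:
  - text: "I am a bot, powered by Rasa."
```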

Identify the creators: A conversational assistant should provide users with a way to verify its identity

When you design an assistant to represent a company, political party, organization, etc., it is important to allow users to verify that this representation is authorized. You can use existing technologies to do this: for example, if you integrate a conversational assistant into a website served over HTTPS, users can rely on the site’s certificate, issued by a trusted certificate authority, to confirm who operates the site (and, by extension, the assistant embedded in it). Another option is to have the conversational assistant use a verified social media account.

The principles can also be found in the Rasa Open Source GitHub project under

These principles represent a first step toward codifying a set of guidelines for building ethical conversational assistants. As this technology develops and our community grows, we’re committed to listening, learning, and adapting our approach.

We look forward to hearing your thoughts and feedback below. :slight_smile:


There’s good overlap here with the EU ALTAI Assessment List for Trustworthy AI (NB: I know that not all bots use AI). There has been a helpful convergence on high-level principles for ethical AI, but an implementation gap remains in what those principles actually look like in practice, so lists like this with concrete do’s and don’ts are a good start.


Great to see this kind of discussion - looks like a solid start to me.

In addition, I feel that for AI to be ethical, its creators should also be encouraged to consider privacy when collecting and storing personal data, e.g. through conversations or forms.

There’s an emphasis on getting assistants out into the real world quickly so we can start learning from real interactions, which is very valuable. But in the process we should always be mindful of the potential value of the data they will collect, and the responsibilities and liabilities we therefore take on.

Will personally identifiable data be (and remain) encrypted? Who can/could access it? What’s the process for user-requested deletion? etc.

Perhaps this can be further supported by the technology itself, e.g. by identifying and flagging sensitive information to aid developers, creators, and analysts, giving them the tools to handle it in a responsible way. But people like us also play an important role in ongoing guidance and best practice. I’m all for establishing a baseline together.
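As a rough illustration of what “identifying and flagging” could mean in practice, here’s a minimal Python sketch that scans messages for a couple of common PII patterns before storage. The patterns and function names are illustrative; a real deployment would use a dedicated PII-detection library and cover far more categories.

```python
import re

# Illustrative patterns only — real PII detection needs many more
# categories (names, addresses, IDs, ...) and more robust matching.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def flag_pii(message: str) -> dict:
    """Return the PII categories detected in a user message."""
    return {
        name: pattern.findall(message)
        for name, pattern in PII_PATTERNS.items()
        if pattern.search(message)
    }

def redact(message: str) -> str:
    """Replace detected PII with category placeholders before storage."""
    for name, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{name.upper()}]", message)
    return message
```

Flagging at ingestion time gives creators a chance to redact or encrypt sensitive fields before they ever land in long-term storage, which also simplifies later deletion requests.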

As this technology and adoption evolves, AI assistants will continue to learn about their users to help them effectively, and - with digital autonomy in mind - assistants will someday work with one another to do even more. Sharing of data and respect of privacy will surely be a hot topic.

What do others think?