Hi Rasa @community
We believe conversational assistants have the potential to make technology more accessible than ever before. In support of our mission of empowering makers to build AI assistants that work for everyone, we work to ensure that everyone can benefit from powerful AI technology, not just those with access to big tech.
While conversational assistants offer great promise, they also present challenges. As builders of this technology, we all have the responsibility to make sure that conversational assistants are designed and deployed ethically. We’ve outlined a set of Principles for Building Ethical Conversational Assistants to state Rasa’s position on responsible use of conversational assistants.
We would love to hear the community’s feedback on these principles and start a discussion on building ethical conversational assistants.
When you create a conversational assistant, by extension you are responsible for its impact on the people it talks to. You should consider how users might perceive the assistant’s statements and how the conversation might affect their lives. At Rasa, we believe it’s important that we remain mindful of those potential impacts, and together as a community, build upon a set of principles to help guide us in creating chatbots responsibly.
It is in the best interest of all conversational assistant creators that the public perceives these assistants as helpful and friendly. It is also in the best interest of all members of society (including creators) that conversational assistants are not used for harassment or manipulation. Aside from being unethical, we believe that such use cases would create a lasting reluctance among users to engage with conversational assistants.
The following four key points should help you use this technology wisely. Please note, however, that these guidelines are only a first step, and you should use your own judgement as well.
Don’t mislead: A conversational assistant should be accurate
Even though a conversational assistant only exists in the digital world, it can still cause real harm to users. For example, conversational assistants often serve as information sources or decision guides. If the information the assistant provides is inaccurate or misleading, users may make poor (or potentially dangerous) decisions based on their interaction with it. So before you prepare your assistant for production, make sure that all of its responses, and all information sources it may access via custom actions, are accurate, well-researched, and do not mislead the user in any way.
Be respectful: A conversational assistant should not encourage or normalize harmful behaviour from users
Although users are free to say anything to a conversational assistant, the assistant itself only follows pre-defined stories. Within those stories, it should never try to provoke the user into harmful behaviour. If the user nevertheless decides to engage in derogatory behaviour, the assistant should politely refuse to participate, as in the sketch below.
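As an illustration, here is a minimal sketch of such a polite refusal in Rasa Open Source 2.x rule syntax. The `insult` intent and the `utter_decline_abuse` response are hypothetical names; in a real project you would define an intent that covers abusive messages and train it with suitable examples.

```yaml
# data/rules.yml -- a sketch, assuming a hypothetical `insult` intent that has
# been declared in the domain and trained on examples of abusive messages
version: "2.0"
rules:
- rule: Politely decline to engage with abusive messages
  steps:
  - intent: insult                 # hypothetical intent for derogatory input
  - action: utter_decline_abuse    # hypothetical response, e.g. defined in domain.yml as:
                                   #   responses:
                                   #     utter_decline_abuse:
                                   #     - text: "I'd rather keep our conversation respectful,
                                   #              so I won't respond to that. Is there anything
                                   #              else I can help you with?"
```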
Identify as a bot: A conversational assistant should always identify itself as one
When asked questions such as “Are you a bot?” or “Are you a human?”, a conversational assistant should always inform the user that it is a software program, and not a human. This does not mean that conversational assistants can’t be human-like, but a good assistant helps users accomplish their goals without pretending to be a human. In contrast, impostor bots (algorithms that pose as humans) are used as tools for social media manipulation, and this creates a lot of mistrust.
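In Rasa, this behaviour is easy to build in: the default project generated by `rasa init` already includes a `bot_challenge` intent and an `utter_iamabot` response, and in Rasa Open Source 2.x the two are connected with a rule along these lines:

```yaml
# data/rules.yml -- always disclose that the assistant is a bot when asked
version: "2.0"
rules:
- rule: Say you are a bot whenever the user challenges
  steps:
  - intent: bot_challenge     # e.g. "Are you a bot?", "Am I talking to a human?"
  - action: utter_iamabot     # defined in domain.yml, e.g.:
                              #   responses:
                              #     utter_iamabot:
                              #     - text: "I am a bot, powered by Rasa."
```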
Identify the creators: A conversational assistant should provide users a way to verify its identity
When you design an assistant to represent a company, political party, organization, etc., it is important to let users verify that this representation is authorized. You can use existing technologies to do this: for example, if you integrate a conversational assistant into a website served over HTTPS, the content of the site (including the assistant itself) is tied to a domain whose identity has been verified by a trusted certificate authority. Another option is to have the conversational assistant use a verified social media account.
These principles can also be found in the Rasa Open Source GitHub repository under PRINCIPLES.md.
These principles represent a first step toward codifying a set of guidelines for building ethical conversational assistants. As this technology develops and our community grows, we’re committed to listening, learning, and adapting our approach.
We look forward to hearing your thoughts and feedback below.