The L3-AI conference brings together speakers from all over the world who are experts in the conversational AI community. During the conference, they’ll be sharing their work building truly interactive AI assistants.
But before we kick off L3-AI on June 18th, we want to give you a chance to get to know some of our speakers by hosting a series of Ask Me Anything (AMA) sessions in the forum.
How does it work?
On Monday, June 8, we'll open this thread to pre-submitted questions. Once we open the thread, you're free to ask our speaker anything (especially as it relates to conversational interfaces and NLU). On Wednesday, June 10, 6-7am PDT / 3-4pm CEST, Julian will be available live for one hour to answer both pre-submitted and live questions in this forum thread. Be sure to react to the questions you're interested in, so speakers can see which ones have the most community interest. At the end of the AMA, we'll close the thread, but you can catch Julian again at L3-AI!
About Julian Gerhard:
Julian Gerhard is CTO at SUSI&James GmbH, a startup focused on digital voice assistants and artificial intelligence. His interests lie in combining machine learning with natural language processing to create a bridge between humans and machines. Julian is a Rasa superhero, known for his work with Rasa on Raspberry Pi and smart hardware.
This is indeed an exciting question, and I am afraid it is not so easy to answer. First of all, it is important to understand that the GDPR exists to protect personal data. All too easily one drifts into the perception of a protagonist/antagonist relationship - this happens to me too. From the perspective of this tension, as a solution engineer in the field of AI / conversational AI, I must be clear about the following point.
The GDPR contains guidelines for, among other things:
Data controller — The person who decides why and how personal data will be processed. If you’re an owner or employee in your organization who handles data, this is you.
That is an incredibly important point. So if we agree that the issue concerns us as people who use Rasa to build AI assistants that communicate with end users, I would consider the following aspects (in no particular order):
1. Any input into my system should be classified as "personal".
2. It should be possible to trace the route taken by the data at all times.
3. It should in any case be possible to completely delete conversation histories.
4. If third parties are involved, e.g. for speech synthesis and transcription, points 2 and 3 must also apply to those third parties to a reasonable extent.
5. It must be transparent to the end user what happens to their data.
6. As a system designer, you should always keep in mind how you would want your own data to be handled.
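To make the deletion point concrete: if conversation events live in a SQL-backed tracker store, erasing a user's history reduces to removing every event keyed to that user. This is a minimal sketch, assuming an `events` table keyed by `sender_id`; the table and column names are illustrative, not an official Rasa schema.

```python
import sqlite3


def delete_conversation(conn: sqlite3.Connection, sender_id: str) -> int:
    """Erase every stored event for one user; return the number of rows removed.

    Assumes a Rasa-style SQL tracker store with an `events` table keyed by
    `sender_id` (names are assumptions for this sketch).
    """
    cur = conn.execute("DELETE FROM events WHERE sender_id = ?", (sender_id,))
    conn.commit()
    return cur.rowcount


# Demo against an in-memory database standing in for the tracker store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (sender_id TEXT, type_name TEXT, data TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("alice", "user", "hello"), ("alice", "bot", "hi!"), ("bob", "user", "hey")],
)
removed = delete_conversation(conn, "alice")   # removes alice's 2 events
remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]  # bob's 1 event
```

In practice you would also propagate the deletion to logs, analytics exports, and any third-party services (point 4 above), not just the primary store.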
Of course, this seems like a burden at first - it definitely creates additional work.
When talking to others about this topic, I often mention that you can't expect individualization of the service if you are not willing to provide criteria for individualization. Netflix's algorithm will not be able to make any interesting suggestions if I don't accept that it analyzes my previous viewing habits.
Another, equally important aspect, it seems to me, is so-called "proper use".
Let’s assume that my system is supposed to allow changing master data via chat interface. If I now accidentally copy and send my clipboard into the chat and there is private data in the clipboard, then I have used the system improperly at first. Privacy policies now usually ensure that there is a way out of this situation - but they also presuppose a sense of responsibility on the part of each individual regarding privacy. I hope that this was understandable.
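One defensive measure against scenarios like the accidental clipboard paste is to redact obvious personal data from incoming messages before they are logged or stored. This is a hedged sketch with two illustrative regex patterns (email and phone); real PII detection needs far broader coverage and ideally a dedicated library.

```python
import re

# Illustrative patterns only; real-world PII detection needs more than this.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s/-]{7,}\d"), "[PHONE]"),
]


def redact(text: str) -> str:
    """Replace obvious personal data with placeholders before storage."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


msg = "Please update my number to +49 170 1234567, mail jane.doe@example.com"
print(redact(msg))
# -> Please update my number to [PHONE], mail [EMAIL]
```

Redaction of this kind reduces the blast radius of improper use, but it does not replace the deletion and transparency obligations discussed above.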
In general I have always liked the topic of IoT. I see the Raspberry Pi as a good example of beginner friendliness. Some time ago I took a look on the internet to see what people do with it in general - there are some really amazing things.
In fact, the idea came to me more as a consequence of my actual project - building a SmartSpeaker. I reported about it at one of the Rasa Meetups. With such a project it is and was important for me not to spend a lot of time with “side issues”, but to get to the challenges as quickly as possible. Since I come from the software world and I’m not very familiar with words like GPIO and all that goes with it, the Raspberry came in handy - especially given the fact that it is based on a Unix operating system.
In fact, I have found that no matter how well you model your UseCase, real users always find a way to behave unexpectedly.
You have to imagine, for example, that groups of 5-10 people sit together and think about how a conversation about “booking a flight” might go. You go through the individual paths, catalogue all the information you need, translate it into the logic of the framework of your choice and then believe that you have found a feasible way.
Most of the time in such meetings there are people who say: “But what if the user says the following: …”. - it’s usually quiet for a moment until someone says, “They won’t do it.”
I believe that they will do exactly that. But that’s also what I love - being confronted with the unexpected on a regular basis.
As someone who works in this field with, among other things, an industrial background, i.e. in both B2C and B2B contexts, I usually have a good sense of what is and isn't currently possible - regardless of whether we offer it within the company itself or it is the latest research result.
I therefore find it very pleasant to ask myself from time to time: what would I find really ingenious? I think we are moving further in the direction of personalization here, and further up the ladder towards level 5.
Personally, I find the topic of inference incredibly exciting. For something like a personal assistant to really help me, it must not only be able to react but also to act. At present, the smart systems in my environment tend to classify what I do and derive recommendations and even instructions for action from that. Of course there are causal chains for proactivity, but the feeling is different. For example, what would happen if my system prepared something that I only realized was necessary at some point in the future? Of course, this behavior presupposes a certain degree of autonomy, and there is also the question of which implications a necessity arises from, but I firmly believe that computing capacity, directed in the right way, can draw these conclusions much faster than I can. Children seem to infer complex issues effortlessly - perhaps without being aware of the underlying principle - but they do. For a machine at the moment, incredible amounts of energy, money and preparation are necessary. I am curious to see how far we will get in the next 5 years.
Maybe you can describe in more detail what you still need for “full end to end”. In my opinion there is already almost everything you need in the toolchain. Of course some aspects require more effort than others, but I don’t think it’s impossible.
Well, I'm sure there are various answers to that. In any case I would recommend reading the documentation carefully. The Rasa team has put a lot of effort into it, and I would say that at least 70% of the questions you come across in the beginning can be answered there by yourself.
A second important aspect is certainly the perspective of editing the code yourself. If Rasa OSS can't do something you'd like it to do, I would say to that person: you are the code tamer - do something about it. But keep in mind that things have a purpose.
Otherwise I would recommend keeping an eye on the community. Usually, the problems you run into have already been encountered by many other people, and you can find very good ideas and approaches by browsing the forum or GitHub. If you don't find what you're looking for there, you should always post it in the forum - every entry is important and might help many other people get there faster.
Then there is of course the design idea. You should always focus on the benefit the bot should bring to the user. An incredibly complex, truly impressive software architecture is useless if it doesn’t help in the end. If you are not able to pitch the UseCase in a few sentences, you should go back to the drawing board.
Thank you @JulianGerhard! And thank you to everyone who joined the conversation. You can catch Julian’s talk at L3-AI on June 18. See the full speaker line-up and schedule, and register to get your free ticket here.