If you look at a hype cycle, you see how an innovation goes through a rollercoaster of different perceptions. First everybody frowns at it and asks “is this even a thing?” Then, once it has proven to work and generated some business impact, everybody tries to cram it into all sorts of more or less far-fetched use cases, until people start realizing how much money they have spent on what were essentially nothing but very smart toys.
At the peak of its hype cycle, the topic of virtual assistants and chatbots caught the attention of our technical business development team. Everybody was building Alexa skills, and tech evangelists went around preaching that natural interfaces would replace conventional reading and typing within a few years. Some even went as far as proclaiming the death of mobile altogether. Those good old days.
Since the hype about personal assistants was omnipresent, it was somehow clear that we had to build one. But it was not even that easy to build a sane case around the topic. The two really tough questions that kept us thinking were:
● How is a chatbot going to benefit your communication game anyway?
● And, even though it was not obvious at all at first: how is the use of NLG going to benefit a chatbot system?
The first question almost had too many answers. For instance, bots change communication strategies by adding the element of individualization. It’s quite obvious if you look at how chatbots publish their content: only one reader ever gets to see it at a time. By not addressing your content to all readers at the same time, you can tailor it to the needs of a single one of them. Your content doesn’t have to contain every piece of information relevant to all the different groups in your audience. Instead, you can boil it down to the things relevant to one particular individual.
While thinking about it, our approach started feeling more and more radical: we didn’t want to find the information a user was asking for. We wanted to strip away every bit and piece they were *not* asking for. If, for instance, they were looking for weather information, we would give them only the forecast for today, and only for their current location.
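The principle can be sketched in a few lines. This is a minimal, hypothetical illustration (the data and function names are made up for this post, not part of any real API): instead of returning the full forecast payload, the bot discards everything the user did not ask about.

```python
# Hypothetical sketch: narrow a full weather payload down to only
# what this one user asked for, discarding everything else.
def narrow_answer(forecasts, location, day="today"):
    """Keep only the forecast matching the user's location and day."""
    return [
        f for f in forecasts
        if f["location"] == location and f["day"] == day
    ]

forecasts = [
    {"location": "London", "day": "today", "summary": "light rain"},
    {"location": "London", "day": "tomorrow", "summary": "sunny"},
    {"location": "Berlin", "day": "today", "summary": "cloudy"},
]

# The user in London asking "what's the weather?" gets exactly one entry.
print(narrow_answer(forecasts, "London"))
```

The interesting part is the inversion: the filter is not “find matching information” but “remove everything that does not match this individual”.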
And this proved useful when it came to the attention economy of the reader. Because you can cut away the information that is not relevant for this particular individual, you can make sure your chatbot’s user gets the most informational value out of the time they spend with your bot. They are not distracted by information that is not meant for them, and they understand the relevant conclusions much more quickly and deeply. And there we had our case: you can generate business impact from a chatbot by making it talk about information that needs to be understood. Chatbots have a niche where information really needs to be conveyed with no misinterpretation and no time wasted.
But that leads us to the second question: the example above can be built using Amazon’s Alexa toolkit or Google’s Dialogflow without all too much effort. So how does Natural Language Generation even fit in here, let alone generate additional value? Let’s have a look at how a modern dialogue system usually works:
Frameworks like Alexa or Dialogflow are really strong at understanding what you want from them. They offer a multitude of services for ingesting and deconstructing natural language, mining it for information, and learning how to extract intents and entities from it. Basically, they specialize in finding out what you wish the bot to do and which parameters you give it in order to do its task. Then they come to a decision and select an answer for you from a so-called bucket.
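To make the intent/entity idea concrete, here is a simplified sketch. The field names are modeled loosely on what such frameworks extract (they vary between platforms), and the answer bucket is a deliberately naive stand-in:

```python
# Simplified illustration of what an NLU framework extracts from an utterance.
# Field names are illustrative; real platforms use their own request formats.
utterance = "What's the weather like in London tomorrow?"

nlu_result = {
    "intent": "get_weather",      # what the user wants the bot to do
    "entities": {                 # the parameters it extracted for that task
        "location": "London",
        "date": "tomorrow",
    },
}

# A bucket-based system then selects a canned answer keyed on the intent
# and fills in the extracted entities:
answer_bucket = {
    "get_weather": "Here is the full weather report for {location}...",
}
response = answer_bucket[nlu_result["intent"]].format(**nlu_result["entities"])
print(response)
```

Note that the heavy lifting happens on the *understanding* side; the response itself is just a template picked from a fixed set.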
But the crux lies exactly in that principle of selection. If you ask Alexa for the weather in London, the answer will tell you all there is to know about the weather in London, in a very general way. You probably know the impulse to growl “Alexa, STOP!” once she gets going. This vagueness has a very obvious reason: it is just too costly and time-consuming to configure particular answers for all those edge and corner cases.
So our idea was to build a system that wasn’t based on bucket selection. We dreamed up a bot toolchain that would accept very general questions, not even necessarily on topic, and return super-narrow answers that nailed what the user had meant to ask for, while taking up as little of their precious attention span as possible.
And that’s exactly what we did: we set up a weather chatbot in Dialogflow, but instead of giving it an answer for each endpoint in its decision tree, we just gave it a fulfillment link and told it to pass on everything it had analyzed to AX Semantics. A microservice we hooked between the two platforms would preprocess the intent and entities and gather data from several APIs; AX Semantics would then take all that information and generate a contextual answer from it.
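The shape of that glue microservice can be sketched roughly as below. This is an assumption-laden outline, not the actual implementation: the request structure is modeled loosely on Dialogflow’s webhook format, the weather call is stubbed, and the payload field names for the NLG side are invented for illustration.

```python
# Hypothetical sketch of the glue service between a bot framework and an
# NLG platform. Names and payload fields are illustrative, not a real API.

def fetch_weather(location, day):
    # Stand-in for one or more real weather API calls.
    return {"temperature_c": 14, "condition": "light rain"}

def handle_fulfillment(webhook_request):
    """Take the analyzed intent/entities from the bot framework,
    enrich them with live data, and build the input document that
    the NLG engine would turn into a contextual answer."""
    intent = webhook_request["queryResult"]["intent"]["displayName"]
    params = webhook_request["queryResult"]["parameters"]

    # 1. Preprocess: normalize the extracted entities.
    location = params.get("location", "current location")
    day = params.get("date", "today")

    # 2. Gather data from external APIs (stubbed above).
    weather = fetch_weather(location, day)

    # 3. Everything the NLG platform needs to generate the answer text.
    return {"intent": intent, "location": location, "day": day, **weather}

# Example: what the service would build for a typical weather question.
doc = handle_fulfillment({
    "queryResult": {
        "intent": {"displayName": "get_weather"},
        "parameters": {"location": "London"},
    },
})
print(doc)
```

The point of the architecture is that the answer is *generated* from this data document, not selected from a bucket, so every edge case gets its own wording for free.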
The result was quite astonishing. Our bots would not tell you the whole story once they had found out what you actually wanted. They would tell you the bold statement and then allow you to ask additional questions within that context. If you really wanted additional information, you would have to ask for it. A side effect of throwing a rather powerful NLG system at the task of answer generation was that we could also give very contextual and specific answers to questions that were simple and open-ended to the point of absurdity. A user would not have to ask for the weather, but could ask about their BBQ plans, and the logic capabilities of AX Semantics would direct them to the weather information they were actually asking about.
Ever since then, we have tried to go through our own hype cycle and adapt the bot concept to all kinds of contexts. We programmed a weather bot, Business Intelligence bots, IoT bots that would let you talk to a relatively “dumb” car, service bots, e-commerce bots that consult customers on a complex product portfolio, some really thrilling cases in industrial IoT, and so on. Meanwhile, the first customers have started building their own bots based on our toolchain. Stay tuned to read about those bot-based cases in the next posts.