How to Overcome Intent Limitations on Amazon Lex and other NLP Engines
The importance of training data in creating compelling conversational AI solutions cannot be overstated. Yet growing a bot's training data set can be constrained by the chosen natural language processing (NLP) engine as intent limits are reached.
How Intents Define the Scope of a Bot
A conversational AI bot relies heavily on sufficient training data in order to respond to a wide enough range of questions to be practical and useful. Although this seems simple and intuitive, these two things – defining the scope of questions and providing sufficient training data – can be difficult to achieve for practical use cases because of the intent limitations imposed by many of the public NLP engines such as Amazon Lex, Microsoft LUIS, IBM Watson, and Google Dialogflow. This blog examines the practical ways in which a multi-model NLP architecture can overcome the intent limitations associated specifically with the Amazon Lex NLP engine.
Amazon Lex is the NLP service from AWS, powering conversational AI solutions for voice and chat. If you have developer access to AWS, it is a powerful tool that is comparatively easy to use. Building your first Amazon Lex chatbot doesn't require any deep understanding of AI or data science, just a handful of simple concepts: intents, entities, utterances, and fulfillment.
An intent describes what a user actually means when they pose a question. A customer may ask a question in many different ways (these phrasings are often called training utterances) but with the same intent. For example, a customer may ask "how much is in my account?", or "what funds do I have?", or "will I have enough for my loan payment?" with the intent of checking their account balance. So the intent, in this case, is "account balance", and there are myriad ways in which customers can ask questions that yield the same result, i.e. the balance of their account. Usually, there is a single answer or response to an intent. Bot developers create an intent for every different question they want to respond to, so the number of intents defines the scope of what the bot is capable of responding to.
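The relationship between an intent, its training utterances, and its single response can be sketched as a simple data structure. This is an illustrative model only, not Lex's actual API; the names and the response template are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """One intent: a single meaning, many phrasings, usually one response."""
    name: str
    utterances: list[str]  # training phrases that all express the same intent
    response: str          # the single answer the bot gives for this intent

# The "account balance" example from above, modeled as one intent.
account_balance = Intent(
    name="AccountBalance",
    utterances=[
        "how much is in my account?",
        "what funds do I have?",
        "will I have enough for my loan payment?",
    ],
    response="Your current balance is {balance}.",
)

print(account_balance.name, len(account_balance.utterances))
```

Three quite different phrasings all map to one intent, which is exactly why the per-bot intent count, not the utterance count, is what bounds a bot's scope.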
The Problem of Intent Limitations of the NLP Engine
The problem begins with the limitation that the AWS Lex NLP Engine (and other NLP vendors) place on any individual bot. The Amazon NLP engine only allows 100 intents per bot, which means that the bot can only handle 100 different questions.
But is 100 intents not enough?
The first problem is that the 100-intent limit also includes the ancillary topics that make a conversation feel less robotic. To make conversational bots more human-like, they also need to handle things like greetings and small talk, i.e. being able to respond to (and initiate) phrases like "how are you today?", "what time is it?", "what's the weather like?", and "who made you?"
We have seen small talk consume 20, 30, and even 50 intents as developers try to accommodate all the weird and wonderful things that customers ask AI digital assistants. And customers often complain about the experience if the chatbot can't handle these simple phrases.
The second problem is that more complex business processes, like processing an order, managing a refund, or resolving a complaint, often consume many intents because of the number of permutations in what the customer can ask or what the process can involve. This reduces the number of usable intents even further as the designer accounts for permutations and complexity.
Other factors that reduce the net number of usable intents include the number of languages the bot needs to support, the number of product lines it handles, and the stage of the customer lifecycle. For example, the intents required for a customer onboarding scenario or an annual renewal are different from the intents for inbound customer service.
Many companies trying to provide a good customer experience will run out of intents very quickly if they can only use 100 (or even 180, in the case of Microsoft LUIS).
The Answer Lies with a Dispatcher or Intelligent Orchestrator
We believe the answer lies in the architecture. If I need 700 intents to create a good experience, then I need at least 7 Lex bots all working together and each handling different parts of the conversation. That means I need some type of dispatcher or intelligent orchestrator to route the customer request to the correct bot that can handle that intent.
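The routing idea above can be sketched as a minimal dispatcher. This is a hedged illustration, not a production design: each sub-bot is assumed to expose a `classify(text)` method returning an intent and a confidence score (in practice that call would wrap the relevant Lex bot's recognition API), and the keyword bots below are toy stand-ins for real NLP engines:

```python
class KeywordBot:
    """Toy stand-in for an NLP sub-bot: matches keywords to intents."""
    def __init__(self, keywords):
        self.keywords = keywords  # {keyword: intent_name}

    def classify(self, text):
        for kw, intent in self.keywords.items():
            if kw in text.lower():
                return (intent, 0.9)
        return (None, 0.0)

class Dispatcher:
    """Route each customer utterance to whichever sub-bot is most confident."""
    def __init__(self, bots):
        self.bots = bots  # {"smalltalk": bot, "billing": bot, ...}

    def route(self, text, threshold=0.5):
        best_name, best_intent, best_score = None, None, 0.0
        for name, bot in self.bots.items():
            intent, score = bot.classify(text)
            if score > best_score:
                best_name, best_intent, best_score = name, intent, score
        if best_score < threshold:
            return ("fallback", None)  # no sub-bot is confident enough
        return (best_name, best_intent)

dispatcher = Dispatcher({
    "smalltalk": KeywordBot({"weather": "SmallTalkWeather"}),
    "billing": KeywordBot({"balance": "AccountBalance"}),
})
print(dispatcher.route("What's my account balance?"))
```

With 7 such sub-bots of 100 intents each, the customer still experiences a single assistant while the dispatcher quietly picks the bot that owns the relevant intent; utterances no bot recognizes fall through to a fallback handler.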
To simplify the dispatcher's job, it makes sense to group the intents within a single NLP agent by topic or relevance. It also becomes easy to split each language into a separate bot, and to gather all the small-talk phrases into a single bot that expresses the character or culture of the organization. All of these groupings help simplify the ongoing support and maintenance of the bot once in production.
Dividing the problem into multiple skills-based groups also encourages collaboration within the organizational teams responsible for the different areas. The onboarding team can create the data model and training set for their bot and the customer service people can take care of their data set. Product managers can be responsible for individual products and billing and finance can be responsible for finance-related intents. The chatbot begins to reflect how the organization itself is structured – into teams (groups of intents) based around skills and competencies.
What about the Intent Limitations of other NLP engines?
Although intent limits differ by NLP engine, with some offering higher capacity than AWS, the advantages of using an orchestrator still apply. It is easier to build complex conversational AI solutions by breaking the problem up into skills and topics that can be managed by subject matter experts and quickly maintained and updated using a microservices-type architecture. This approach also allows mixing and matching NLP engines to work around other limitations, such as natural language support or slot filling; for example, Lex does not natively support languages other than English, and few engines natively support Arabic.
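Mixing engines behind one orchestrator only requires that every engine satisfy a common interface. The sketch below assumes that shape; the class names are hypothetical, and the `recognize` bodies are stubs where real vendor API calls (e.g. to Lex via boto3) would go:

```python
from abc import ABC, abstractmethod

class NlpEngine(ABC):
    """Common interface so the orchestrator can mix NLP vendors per skill."""
    @abstractmethod
    def recognize(self, text: str) -> dict:
        """Return {'intent': ..., 'score': ...} for the given utterance."""

class LexEngine(NlpEngine):
    def recognize(self, text):
        # In production this would call Amazon Lex; stubbed for illustration.
        return {"intent": "AccountBalance", "score": 0.92}

class ArabicEngine(NlpEngine):
    def recognize(self, text):
        # A hypothetical engine chosen for its native Arabic support.
        return {"intent": "RasidAlHisab", "score": 0.88}

# The orchestrator picks the best engine per skill, not one engine for all.
engines = {"billing-en": LexEngine(), "billing-ar": ArabicEngine()}
print(engines["billing-en"].recognize("what's my balance?")["intent"])
```

Because the orchestrator only ever sees the `recognize` contract, a skill can be moved from one vendor to another without touching the rest of the solution.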
Having a flexible multi-bot architecture that allows you to mix and match best of breed conversational AI solutions opens up a whole world of possibilities in how and where this technology can be deployed.
If this issue resonates with your business as it deploys more conversational AI solutions, we'd love to talk to you. You can contact us at any time and we'll provide a deeper explanation of this intelligent architecture.