
6 Elements of a Multi-Bot Approach to Overcome NLP and User Experience Issues


A Multi-bot Approach to Enterprise Chatbot Implementations.

In part one of this multi-part blog series, we outlined four problems that business chatbots can encounter as they expand their capabilities and scope, often leading to bot failures that stem from the limitations of current NLP technology. While it's one thing to recognize these drawbacks, namely the limits on the volume of intents and utterances a bot can handle well while still meeting the needs of a specific business use case, an approach to solving the problem has been lacking.

As we enter 2020, an increased rollout of chatbot solutions across large organizations will undoubtedly lead to increased awareness of the limitations of NLP and the resulting impact on user experience. As this happens, we expect enterprises to start embracing the concept of a collaborative multi-model approach to orchestrating bots across an organization.

Bots = Skills

In our previous blog, we introduced the concept of organizing bots around skills, similar to how a company organizes around employee skills. Just as no single person in a business can do everything, the same logic applies to bots. Although the concept of individual skills isn't unique, and has been applied by the likes of Amazon's Alexa and Google Assistant, making these skills work together collaboratively using natural language is a nut the enterprise has yet to crack.

Introducing a Multi-Bot Orchestration Model

To enable more coherent and consistent experiences across multiple business bots, a collaborative approach is required. Think of this in terms of how skilled employees collaborate to fulfill a customer or business need efficiently.

We'll take a look at the precedent for multi-model approaches and the shortcomings of first-generation bot solutions, which route users between bots in much the same structured way that traditional IVRs handle incoming requests, routing you to a service according to predetermined selections.

The blog then proposes a multi-bot orchestration model that creates the opportunity for collaboration and overcomes the problems of poor experience and NLP limitations, outlining the elements of this approach and why it matters in the context of enterprise chatbot implementations.

The Precedent for Chatbots to Organize around Skills

The fact that a single bot can't do everything, and that a multi-model approach is needed, isn't a new concept. For example, Amazon's Alexa and Google's Assistant, when first launched, had only a handful of first-party skills, resulting in limited uses and poor experiences.

However, Alexa now boasts over 100,000 skills, largely contributed by third parties rather than Amazon itself. As skills have been added, these assistants' capabilities have expanded, making them more effective across a wider variety of functions. Google took a similar path, although it had already integrated its AI into the Assistant and could open up a new channel to it, broadening its reach and uses.

Despite the growth in the number of skills these assistants offer, the user experience is still fragmented, made up of disparate skills that aren't designed to work together. Hence they suffer some key shortcomings, such as:

  • There is very limited collaboration across skills, often resulting in disjointed and frustrating user experiences.
  • The user must know and use the skill invocation name to switch skills, rather than relying on intent detection. This places the burden of knowledge on the user, with ongoing training needed to keep them informed of all skill names and capabilities, and it can lead to confusion and errors in the assistant's understanding.
  • They still don't handle compound questions very well, e.g. "I need a copy of my bill so I can pay off the balance because I am moving house."

The Challenge for Enterprise Virtual Assistants = Collaboration across Skills

As enterprises aim to convert core business processes into self-service, bot-powered capabilities, they need to follow a similar path and build their own Alexa- or Google-like assistant, aligned with their business requirements and made up of multiple skills (or bots) that fulfill different capabilities.

Enterprises are building first-party bot skills because they need to control the brand of their virtual assistant, but as they add skills, the challenge they face is providing a coherent and consistent experience across all of them.

Building bots with different skills isn't the key challenge for the enterprise; the precedent is already there with the likes of Alexa and Google Assistant. The bigger issue is how to orchestrate multiple skills within a single conversation, i.e. how can bots collaborate to resolve a compound query?

Most conversations contain parts that are not related to the task at hand. If a single bot has to handle every task, it becomes too large to manage. With multiple bots you can specialize the execution of tasks and keep each bot simple, but that means the conversation must transition across bots to reach completion.
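
To make the compound-query problem concrete, here is a minimal sketch of how an orchestrator might split one compound utterance into intents owned by different specialized bots. The bot names, intent labels, and the simple pattern matching are illustrative assumptions, not a description of any particular product's implementation.

```python
# Minimal sketch: decompose a compound utterance into intents owned by different
# specialized bots. Bot names, intent labels, and patterns are illustrative only.
import re

# Each specialized "thin" bot owns a small set of intents.
INTENT_OWNERS = {
    "get_bill_copy":  "BillingBot",
    "pay_balance":    "PaymentsBot",
    "change_address": "AccountBot",
}

# Naive pattern-based intent spotting, purely for illustration; a production
# system would rely on an NLU model per bot rather than regular expressions.
INTENT_PATTERNS = {
    "get_bill_copy":  r"copy of my bill",
    "pay_balance":    r"pay (off )?(the )?balance",
    "change_address": r"moving house|change my address",
}

def decompose(utterance: str) -> list[tuple[str, str]]:
    """Return (intent, owning_bot) pairs detected in a compound utterance."""
    return [(intent, INTENT_OWNERS[intent])
            for intent, pattern in INTENT_PATTERNS.items()
            if re.search(pattern, utterance, flags=re.IGNORECASE)]

print(decompose("I need a copy of my bill so I can pay off the balance "
                "because I am moving house"))
# -> [('get_bill_copy', 'BillingBot'), ('pay_balance', 'PaymentsBot'),
#     ('change_address', 'AccountBot')]
```

Each bot then only needs to understand its own slice of the request, while the orchestration layer is responsible for sequencing the hand-offs within the one conversation.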

Google and Amazon haven't yet solved this, as their skills are not collaborative, but it is something the enterprise will need to work towards in order to create consistent brand experiences.

Six Elements of a Multi-Bot Orchestration Approach

So what does it mean to have a collaborative orchestration approach to enterprise bot projects?

Here are six key concepts that underpin a multi-bot orchestration model.

  1. Everything is Addressable from a Single Point or Conversation. Contextually relevant skills should be invoked automatically with natural language, not invocation names, throughout a conversation. For example, if a Capital One customer wants to check their bank balance via Alexa, they first have to invoke the skill by saying "start Capital One". When everything is addressable, the onus is no longer on the user to know the required skill and invoke it by name.
  2. The Business Use Case is Not Constrained by the AI Model. To date, a single bot AI model may be too small to fulfill the needs of a business use case. This was discussed in our blog about why enterprise chatbots fail, i.e.:
    – There is a practical upper bound on AI capabilities, in terms of the maximum number of intents and topics that can be handled well within a single model.
    – When a single model is stretched beyond one specific task, that upper bound is often not sufficient to accommodate all of the business use cases.
    – Orchestrating multiple models together leads to an additive increase in overall AI understanding.
    – If too much is packed into a single model, you have to trade off quality against functionality. This happens all too often when an organization creates its first few bots: they take feedback from users and add chit-chat and other pleasantries to their single bot, which effectively lowers the upper bound of functionality the bot can deliver and jeopardizes its quality of understanding.
  3. The Concept of Different Bot Types Comes Into Play. Not all bots are the same. There can be bots for customer journeys, or bots that handle transactions, small talk, language, and other functions. This matters when you start to consider stories, history, and state. A stateful bot remembers the user's waypoint and history on their journey, while a stateless bot doesn't need to know what has gone before in order to respond correctly. For example, a journey bot needs to remember state, whereas an FAQ bot is stateless. In a collaborative bot approach, these can all work together when needed to execute the necessary tasks (the sketch after this list shows one way these pieces might fit together).
  4. Disambiguation is Necessary to Know Which Bot Should Take Action. How users ask for something, i.e. their utterances, can vary widely. When multiple bots are working together, the less ambiguous the intent, the greater the likelihood that it will be routed to the appropriately skilled bot. For example, a customer of a multi-product insurance company may request a quote without specifying a product line: "I'd like a quote". The system needs to disambiguate (i.e. remove the ambiguity) and figure out which bot can serve the user, so it may ask "do you need insurance for your boat, car, or home?"
  5. Information Sharing across Multiple Bots. All bots can access shared services across business processes, in much the same way that skilled employees can access shared resources such as IT, HR, etc. This creates greater efficiency and consistency in the model and a better experience for the customer. For example, a customer is not asked to identify themselves or provide their account number each time the conversation switches context to a new bot.
  6. Oversight and Management Help Thread Context Together. A key function here is not just addressing the right bot, but tying individual responses to the context of the overall conversation. This is critical to delivering a good brand experience in the way a human would. Oversight is also important in determining the impact on the user: identifying and dealing with exceptions, sentiment, and missed inputs or utterances, and knowing what to do when a bot fails to answer.
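
To show how several of these elements might fit together in practice, here is a deliberately simplified Python sketch: a single entry point routes each utterance by intent, a journey bot keeps state and disambiguates, an FAQ bot stays stateless, and a shared context object carries the customer's identity between bots. The class names, the keyword-based classifier, and the responses are all hypothetical stand-ins, not an actual orchestration API.

```python
# Toy orchestrator: one entry point, a stateless FAQ bot, a stateful journey bot,
# and a shared context object. All names and behaviors are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SharedContext:                       # element 5: information shared by all bots
    customer_id: str | None = None
    journey_state: dict = field(default_factory=dict)   # element 3: state for journey bots

class FaqBot:                              # stateless: no memory of prior turns needed
    def handle(self, utterance: str, ctx: SharedContext) -> str:
        return "Our branches are open 9am-5pm, Monday to Friday."

class QuoteJourneyBot:                     # stateful: remembers where the user is in the journey
    def handle(self, utterance: str, ctx: SharedContext) -> str:
        if "product" not in ctx.journey_state:
            ctx.journey_state["product"] = None
            # element 4: disambiguate before acting
            return "Do you need insurance for your boat, car, or home?"
        return f"Great, let's get your quote started, customer {ctx.customer_id}."

BOTS = {"faq": FaqBot(), "quote_journey": QuoteJourneyBot()}

def classify_intent(utterance: str) -> str:
    """Stand-in for an NLU model; keyword rules used purely for illustration."""
    return "quote_journey" if "quote" in utterance.lower() else "faq"

def orchestrate(utterance: str, ctx: SharedContext) -> str:
    """Element 1: one entry point; the right bot is chosen from natural language."""
    return BOTS[classify_intent(utterance)].handle(utterance, ctx)

ctx = SharedContext(customer_id="C-1001")
print(orchestrate("I'd like a quote", ctx))   # -> disambiguation question
print(orchestrate("I'd like a quote", ctx))   # -> journey continues, using shared context
```

In a real deployment the classify_intent stand-in would be an NLU model (or one model per bot), and the shared context would be backed by the kind of shared services described in element 5.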

What Multi-Bot Orchestration Looks Like

Once these elements are in place, you end up with a set of thin bots, each defined by its subject matter area, type, or function, and each aware of its individual limits.

This is akin to subject matter experts within different departments. For example, in an insurance company, the claims department may have different subject matter experts (or skills) for different insurance product lines, or for different stages of the complete claims process: responding to and filing a claim, processing it, reimbursing the customer, and so on.

These thin bots are then orchestrated by a brain, with certain functions centralized and accessible to all. These shared services are analogous to different business areas sharing common resources such as IT, HR, Finance, and Administration.
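
One way to picture the "thin bots plus shared services" arrangement is as a declarative registry that the orchestrating brain reads at runtime. The structure below is a hypothetical example only; the bot names, types, and shared services are assumptions for illustration rather than a prescribed schema.

```python
# Hypothetical registry of thin bots and the shared services they can all use.
# The orchestrating "brain" would consult a structure like this to know which
# bots exist, what each covers, and which centralized functions are available.
BOT_REGISTRY = {
    "bots": {
        "claims_auto": {"type": "journey",  "stateful": True,  "domain": "auto claims"},
        "claims_home": {"type": "journey",  "stateful": True,  "domain": "home claims"},
        "billing_faq": {"type": "faq",      "stateful": False, "domain": "billing questions"},
        "small_talk":  {"type": "chitchat", "stateful": False, "domain": "pleasantries"},
    },
    # Centralized functions every bot can call, analogous to IT or HR in a business.
    "shared_services": ["authentication", "customer_profile", "sentiment_analysis", "escalation"],
}

def bots_for_domain(keyword: str) -> list[str]:
    """Convenience lookup: which thin bots cover a given subject area."""
    return [name for name, meta in BOT_REGISTRY["bots"].items()
            if keyword in meta["domain"]]

print(bots_for_domain("claims"))   # ['claims_auto', 'claims_home']
```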

[Figure: Multi-Bot Approach. An example of a multi-bot model in customer service]

Another good example of this is how customer service centers are organized around skills, e.g. billing, complaints, orders, account status, activations, returns, etc. When the bot approach is not collaborative, the solution can become very unwieldy.

We have seen some large organizations deploy multi-bot solutions that cannot context switch using natural language. These solutions invariably involve dumb routing to subject matter bots, based on keyword matching and picking from pre-defined lists, an approach that relies on error-prone heuristics rather than natural conversational routing. They require triggers, much like IVR systems or Alexa skill invocations, putting the onus on the user to understand the structure, and they route the user back to the start if they switch context.

Natural conversational routing, on the other hand, is intent-based: it uses semantics to detect intent and then determine the next best step. Routing on intent rather than heuristics delivers a marked improvement in accuracy and efficiency, and it represents a smarter way to orchestrate across multiple bots. Machine learning has advanced enough that we can now create smart routing that maintains context within a single conversation, which is why we created a collaborative orchestration model based on AI and machine learning rather than heuristics.
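
As a rough sketch of the difference, the snippet below contrasts keyword-list routing with routing on similarity to example utterances. The bag-of-words cosine similarity here is a crude stand-in for whatever semantic model a real system would use, and the bot names and example phrases are assumptions for illustration.

```python
# Contrast: keyword-list routing (heuristic) versus routing on similarity to
# example utterances (a crude stand-in for semantic, intent-based routing).
from collections import Counter
from math import sqrt

KEYWORD_ROUTES = {"bill": "BillingBot", "claim": "ClaimsBot"}   # heuristic approach

INTENT_EXAMPLES = {                                             # intent-based approach
    "BillingBot": ["I need a copy of my bill", "how much do I owe"],
    "ClaimsBot":  ["I want to file a claim", "my car was damaged"],
}

def route_by_keyword(utterance: str) -> str | None:
    return next((bot for kw, bot in KEYWORD_ROUTES.items()
                 if kw in utterance.lower()), None)

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def route_by_intent(utterance: str) -> str:
    """Pick the bot whose example utterances look most similar to the input."""
    words = Counter(utterance.lower().split())
    scores = {bot: max(_cosine(words, Counter(ex.lower().split())) for ex in examples)
              for bot, examples in INTENT_EXAMPLES.items()}
    return max(scores, key=scores.get)

utterance = "how much is outstanding on my account"
print(route_by_keyword(utterance))   # None: no keyword matched, so routing fails
print(route_by_intent(utterance))    # BillingBot: closest match to its example utterances
```

The heuristic router fails as soon as the user's wording drifts from the keyword list; the intent-based router degrades more gracefully because it scores the whole utterance against each bot's examples.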

Summary: The Benefits of a Multi-Bot Approach

As enterprises expand and mature their implementations of Conversational AI across different use cases, the argument for a multi-bot orchestration model will become hard to ignore. The benefits of the collaborative orchestration model that ServisBOT has created are:

  • You Can Keep Track of Multiple Conversations: Even with an overall conversation in play, our model also tracks the user's interaction with each individual bot in isolation. It can also track state within journeys, so a bot can remember prior history. This enables more sophisticated use cases and improves the user experience.
  • The Complexity and Costs of Maintenance are Reduced: Maintaining thin bots is a lot easier. You can test each bot's functionality individually to be assured that it works as expected. Complexity is drastically reduced, so it's easier to tune the model and maintain the overall quality of the experience. Consequently, bot maintenance costs are lower.
  • You Can Easily Turn On and Off Bots: You can remove a bot from live execution dynamically without having to rebuild the entire Virtual Assistant experience. This makes it easy to make changes quickly if the need arises.
  • You Can Get Higher Accuracy Without Hitting NLP Limitations: The accuracy of AI models drops when the number of configured intents and topics becomes large. To achieve high accuracy levels, you can limit the number of intents per model and then incrementally add more models (a small sketch of this sharding idea follows this list).
  • You Can Expand the Breadth of Use Cases and Functionality: Since you have the ability to keep adding more skills, the functionality or the range of use cases can be expanded across the business. Amazon’s Alexa, for example, currently can differentiate between 100,000 skills by name. The question then is how many skills define your business?
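
As a tiny illustration of the sharding idea mentioned above, the snippet below caps the number of intents per model and spreads the remainder across additional models. The cap of 50 is an arbitrary assumption; the practical limit depends on the NLU engine and on how similar the intents are.

```python
# Illustrative only: cap intents per model and shard the rest into more models,
# rather than packing everything into one large model.
MAX_INTENTS_PER_MODEL = 50   # assumed cap; the real limit varies by NLU engine

def shard_intents(intents: list[str], cap: int = MAX_INTENTS_PER_MODEL) -> list[list[str]]:
    """Split a flat list of intent names into model-sized groups."""
    return [intents[i:i + cap] for i in range(0, len(intents), cap)]

all_intents = [f"intent_{n}" for n in range(170)]
models = shard_intents(all_intents)
print(len(models), [len(m) for m in models])   # 4 [50, 50, 50, 20]
```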

For more information on our collaborative Multi-bot Approach for enterprise chatbot implementations you can schedule a demo or talk with our technical experts.

To see a great chatbot project in action check out our customer case study for insurance.
