
Overcome Chatbot Failure with a Multi-Bot Architecture

10 Minute Read

Chatbot successes are on the rise as natural language and machine learning technologies advance. However, enterprise chatbot failures often stem from the limitations of current NLP technology.

Table of Contents

  1. The Rise and Fail of Enterprise Chatbot Solutions
  2. Chatbot Evolution: The Hype versus Chatbot Failure
  3. The Current State of Chatbot Evolution
  4. Four Chatbot Problems that are Challenging Enterprises
  5. Introducing a Multi-bot Model to tackle NLP Limitations and Chatbot Failure
  6. Conclusion

1. The Rise and Fail of Enterprise Chatbot Solutions

The appetite for natural language solutions continues to grow as businesses realize the value that a good chatbot can deliver in terms of the ability to further automate business processes, reduce costs, and improve overall business results. Despite suffering bot failures, many organizations are undeterred in their chatbot journey, choosing to forge ahead rather than abandon their efforts. 

So why are companies undeterred in pursuing chatbot solutions, despite having witnessed failures? 

It’s hard not to be enticed by the business case for a chatbot. The bot can interact with customers and employees day and night, respond to their requests using natural language and across multiple channels, and automate the tasks required to fulfill their needs. And it does this all at minimal transactional cost.

Like any technology that has the power to transform business models and generate multiple compelling benefits for a business, enterprises have quickly caught on to the chatbot opportunity and bot-building has taken off. Plenty of chatbot success stories are being reported. However, some companies are also hitting unforeseen issues as they pack their chatbots with additional capabilities. As a chatbot expands its capabilities it can become smarter and more useful, but there is also a breaking point beyond which the bot experience deteriorates.

This blog explores some common issues with current natural language technology and the limitations it places on enterprise bot solutions as they grow and expand in complexity, sometimes to the point of failure. We'll explore the drawbacks of specific single chatbot scenarios and introduce the concept of a multi-bot architecture, which will be described in more detail in subsequent blog posts.

2. Chatbot Evolution: The Hype versus Chatbot Failure

First-generation chatbots emerged over ten years ago but failed to live up to their over-hyped expectations, with some being flat-out disasters. There were different reasons behind these early failures. Automation of workflows as part of early chatbot implementations was minimal at best as the focus was on reducing human intervention and associated costs.

The biggest reason for these early failures, however, was that the NLP technology was so immature that many bots had to quickly hand off to a human who could understand and answer the user's query, defeating the promise of reduced manual intervention.

In this first phase of evolution, the primary use cases for bots were driven largely by customer service and not by other areas of the business. Many were launched with insufficient capabilities or skills to do more than answer basic customer service questions, and even in doing this, users were often left feeling more frustrated than before. First-gen bots failed on both the customer experience and the business outcome fronts.

Chatbot evolution chart

Source: Everest Group

Despite the failures, many businesses still believed that advancements in NLP and ML combined with a more sophisticated approach would bear fruit. So rather than abandon the technology, enterprises continued to explore the possibilities for chatbots and started taking a more strategic approach, identifying a broader set of business use cases for bots and automating simple tasks. 

And so the second generation of enterprise chatbots emerged around 2015, offering more contextual and multi-lingual support combined with the ability to automate basic process tasks. However, these projects have not been without significant investment in time and resources. For example, Bank of America's Erica chatbot cost an estimated $30 million and took two years to build with a development team of 100 people. While this is not a typical project, it is also not unheard of in large-scale enterprise chatbot deployments.

3. The Current State of Chatbot Evolution

Now, as we've accelerated towards a third generation of business chatbots, advancements in machine learning and automation are further helping to increase productivity across complete user journeys. However, since natural language technology is still relatively new and advancing, designing and maintaining good conversational experiences is often challenging.

Best practices and benchmarks are also still evolving, so many companies are starting with a blank slate. And when it comes to training chatbots there is a lack of data for real business scenarios. Use cases are industry- or business-specific, so the data that they need to train on differs considerably from that of a more general consumer use case, such as locating a restaurant (e.g., OpenTable). For many businesses, there are also no good "hello world" examples that help get them started.

These factors have contributed to some of the disillusionment and failures that surround today’s chatbot technology. Our hypothesis is that the current approach to chatbots with the current state of NLP technology is not sufficient for the complexity of human conversations.

4. Four Chatbot Problems that are Challenging Enterprises

1. The Problem of Too Many Intents

The issue of intents and utterances and how many a single chatbot can handle well is often overlooked when a company architects their bot solution. Before continuing, here are some definitions to distinguish between intents and utterances. 

Intent: An intent is the user’s intention. For example, if a user types “how much does the car cost?”, the user’s intent is to get pricing information on the car. Intents are given a name, such as “GivePrice”.

Utterance: An utterance is what the user says or types. For example, for the above intent, there are different ways of asking for the same information, e.g., a user may type or say "show me pricing for the car", "what does the car cost?", or "how much is the car?". The entire sentence is the utterance.

A single intent generally has many utterances. In some business intents the number of utterances can be high, representing the many different ways that different users request the same information.
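To make the intent/utterance distinction concrete, training data is commonly expressed as a mapping from an intent name to its sample utterances. Here is a minimal sketch in Python; the "BookTestDrive" intent and the exact structure are illustrative assumptions, not the schema of any particular NLP platform:

```python
# Illustrative training data: each intent maps to many sample utterances.
# "GivePrice" follows the naming used above; "BookTestDrive" is a
# hypothetical second intent added for illustration.
training_data = {
    "GivePrice": [
        "how much does the car cost?",
        "show me pricing for the car",
        "what does the car cost?",
        "how much is the car?",
    ],
    "BookTestDrive": [
        "I'd like to test drive the car",
        "can I book a test drive?",
    ],
}

total_intents = len(training_data)
total_utterances = sum(len(examples) for examples in training_data.values())
print(total_intents, total_utterances)  # 2 intents, 6 utterances
```

Real platforms each use their own format for this data, but the intent-to-utterances shape is the common core, and counting utterances per intent is a first rough gauge of how much capacity a use case will consume.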

For those starting off with a first chatbot, the volume of intents and utterances often doesn't arise as an issue, since it's hard to generate a large number of utterances for each intent in the first place. But as intents and utterances are added over time, issues can begin to emerge.

So how many intents can you put into a single chatbot?

Although there is no hard and fast number, as a single bot handles up to and beyond 100 intents it can come close to the limit that existing NLP solutions can support. Unfortunately, it's impossible to define the exact breakpoint at which bot performance starts to decline. Different use cases require different sets of intents and utterances, with some intents packing in a wide range of different utterances. For example, there can be hundreds of ways to ask for the same information or transaction. Often you won't know what is going to happen when you add intents or utterances until you start to see the bot performance decline.

2. The Use Case Doesn't Fit in a Single Bot

When it comes to the different use cases that a business has for chatbots there are varying degrees of complexity, both from a conversational as well as from a flow or journey perspective. Often the business use case is more complex than a current single-bot solution can support. And because organizations are becoming more sophisticated and are pushing out an increasing number of chatbots, some are coming to the realization that one chatbot may not be sufficient to handle a single use case successfully.

For example, a company may build a chatbot to handle customer FAQs and roll this out in an initial phase. Over time, they may decide that the FAQ bot should also let the customer transact, guide them through a multi-step journey, add more personality via small talk, handle context switching, or some combination of these capabilities. By adding capability, you can potentially erode the capacity that the bot has to deal with the actual use case, i.e., answering FAQs.

The concept of fitting your use case into the bot also helps explain a phenomenon that some companies are seeing, where their chatbot experience declines when they expand the functionality. 

So let's say that a company decides to add more personality to their FAQ bot by adding in some small talk, and that their small talk model has somewhere between 20 and 30 intents. If the bot already has around 80 FAQ intents, close to the limit, the extra 20-30 eats into the capacity available for the original use case. The FAQ experience starts to drop off.

This has implications for how you architect your bot solution to meet the requirements of your use case. Will a single bot be sufficient? If not, how will you architect multiple bots so that they can be coordinated and work together to fulfill the need?

3. Increasing the Accuracy of your Bot

Another way in which a bot can be impacted is when the accuracy level needs to be raised. This may mean adding more utterance variants per intent, i.e., the greater the accuracy around certain bot capabilities, the less capacity there may be for other feature capabilities.

Let’s say your use case requires some mission-critical tasks or you are trying to avoid false positives in the bot’s responses. Accuracy will then be an important consideration where you may have to trade off the number of intents for the quality of responses. Since language is fluid and has many nuances, aiming for accuracy can consume considerable bot capacity.

4. The Issue of Bot Maintenance

A chatbot is not a one-and-done project, nor can it be left to its own devices. Chatbots need to be maintained as user interactions grow, ultimately generating additional utterances that may not be recognized by the bot, i.e., so-called missed utterances that the bot needs to learn to handle.

Language is very fluid, with vocabulary growing all the time. When it comes to conversations, the need for reviewing and refreshing is even more critical compared with other digital content assets. Your chatbots need to keep up, and to do this means feeding the bot more utterances and intents, expanding its capabilities, and correcting errors. This requires a lot of configuration, and once conversation stories are written they are hard to maintain.
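A common maintenance practice is to capture the utterances a bot classifies with low confidence so they can be reviewed and folded back into the training data. The following is a hedged sketch of that loop; the confidence threshold, the `classify` stub, and the queue are all illustrative assumptions standing in for whatever NLP engine and review tooling you actually use:

```python
# Sketch: collect "missed utterances" (low-confidence inputs) for review.
CONFIDENCE_THRESHOLD = 0.6  # assumed value; tune per use case

def classify(utterance):
    # Placeholder classifier: a real bot would call its NLP engine here.
    # Returns (best_intent, confidence_score).
    known = {"how much is the car?": ("GivePrice", 0.92)}
    return known.get(utterance.lower(), ("None", 0.1))

missed_utterances = []

def handle(utterance):
    intent, confidence = classify(utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Queue for human review; a reviewer later assigns the right
        # intent and the example is added to the training set.
        missed_utterances.append(utterance)
        return "fallback"
    return intent

handle("how much is the car?")        # recognized with high confidence
handle("what's the damage on that?")  # slang the bot hasn't learned yet
print(missed_utterances)
```

The point of the sketch is the feedback loop, not the stub classifier: every low-confidence miss becomes a candidate training example, which is what keeps the bot from going stale as vocabulary drifts.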

User demands also shift, as do business priorities. A bot represents your company and if not monitored and managed it can get stale and deliver a poor experience, potentially damaging your brand image.

Building a chatbot is not like building an application. At any stage, a chatbot is tuned for a certain amount of data, and any change can potentially disrupt the whole model. This is a continuous, ongoing, and arduous task.

5. Introducing a Multi-bot Model to tackle NLP Limitations and Chatbot Failure

The current uni-dimensional state of NLP is not suitable for a growing number of enterprise chatbot implementations. The state of existing technology means that you may be limited in being able to do a full customer journey with a single chatbot. 

While a single-bot model works for many initial and less complex use cases, the way to overcome the problems outlined above is to think of bots as having different skills. Stringing together multiple bots (or skills) can then meet the needs of the use case without hitting the breakpoint of NLP limitations.

The way to tackle single bot issues is to design your solutions with a multi-bot architecture in mind.

Think of bots in a similar way to how you think of employees in an organization, i.e., where each has a different role and skillset. No one person in a company has the skills to undertake all aspects of the business. It has long been recognized that training staff with specific skills to carry out certain roles and functions enables them to be subject matter experts rather than generalists spread too thin. It's the same with chatbots: it's easier to train a bot to do one thing and do it well than to pack too much functionality and expectation into a single bot, which sets it up for failure.

Just like a human, there is only so much a bot can learn. And considering that bots are being trained by the people who are involved with the use case or business department it makes sense to train them for the skills that are specific to that use case or business area. 

This brings to mind the concept of different business units having ownership of the bots that represent their area of responsibility. So in an insurance company, for example, the claims department handles claims and helps train the bots that are associated with different claims use cases. Similarly, customer support owns the support bots, sales may own bots for online conversion or customer acquisition objectives, and so on.

Although individuals can own a use case, the overarching experience is formed by the collaboration of all these individuals and their skills. Likewise in our chatbot world, individual bots bring specific skills to a use case, but the overall customer or employee experience is shaped by how these bots work and collaborate together. At an enterprise level, it is all about delivering superior experiences that reflect the brand.

But in a multi-bot architecture, how do you manage and coordinate all the individual skills specific to a use case? This is where the enterprise virtual assistant comes in, as a central brain that orchestrates and coordinates multiple bots that work towards fulfilling a use case or journey.

Multi-bot architecture

Representation of a Multi-Bot Architecture


The Virtual Assistant blends independently managed bots into a unified experience, routing to the bot best skilled to respond to user requests. It also monitors the ebb and flow of conversations, and it centralizes skills such as language detection, translation, sentiment analysis, PHI/PII detection, and human escalation, making them available to all individually skilled bots.

Orchestration is then critical to navigating operations across multiple bots, executing both conversational and procedural logic. This multi-bot model allows you to scale your business horizontally with thin-sliced bots by blending together FAQs, business processes, and transactions.
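As a rough illustration of the orchestration idea, the virtual assistant can be modeled as a router that scores an incoming utterance against each skill bot and dispatches to the best match, escalating to a human when no bot is confident. The bot names, the keyword-based scoring, and the threshold below are illustrative assumptions, not how any particular platform (including ServisBOT's) implements routing:

```python
# Sketch of a multi-bot architecture. Each bot owns a narrow skill and
# reports how confident it is that it can handle an utterance.
class SkillBot:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords  # toy matcher; real bots use NLP models

    def confidence(self, utterance):
        words = utterance.lower().split()
        hits = sum(1 for k in self.keywords if k in words)
        return hits / max(len(self.keywords), 1)

    def handle(self, utterance):
        return f"[{self.name}] handling: {utterance}"

class VirtualAssistant:
    """Central brain: routes each utterance to the best-skilled bot."""
    def __init__(self, bots, threshold=0.3):
        self.bots = bots
        self.threshold = threshold  # assumed escalation cutoff

    def route(self, utterance):
        best = max(self.bots, key=lambda b: b.confidence(utterance))
        if best.confidence(utterance) < self.threshold:
            return "Escalating to a human agent."  # centralized escalation
        return best.handle(utterance)

assistant = VirtualAssistant([
    SkillBot("faq-bot", ["hours", "open", "holiday"]),
    SkillBot("claims-bot", ["claim", "accident", "damage"]),
])
print(assistant.route("I need to file a claim after an accident"))
```

The design choice this sketch captures is that each bot stays thin-sliced and independently trainable, while cross-cutting concerns (routing, escalation) live once in the orchestrating assistant rather than being duplicated in every bot.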

6. Conclusion

In summary, current NLP technology places limits on how much or how accurately a single bot can handle intents successfully, with evidence pointing to an intent limit that varies by the use case and number of utterances.

For this reason, enterprises are beginning to witness bot failures over time as they pack extra functionality into their chatbots. The approach to overcoming this limitation is to break the problem up into individual skills (or bots) and layer it with a virtual assistant to orchestrate these skills and centralize the handling of things like small talk, languages, etc. This is the approach that we took at ServisBOT as we built our Conversational AI platform, focusing on an architecture that supports businesses in creating multiple thin-sliced bots that are skilled in very specific tasks but that can work in harmony when orchestrated by the virtual assistant.

Customers and employees are expecting more from chatbots than just question and answer capabilities. Natural language is opening up a whole new way to engage with them, in the way that they think and not the way that the organization thinks. In 2020, as businesses expand their chatbot solutions across their business, their bot architecture and orchestration will play an important role in their ability to deliver consistent experiences and reap the rewards of chatbot success.

Chatbots are now being conceived with greater degrees of conversational and automation capabilities that can deliver better experience and business results. Don’t let the limits of existing NLP hamper your bot journey. Think less in terms of a single bot approach and more in terms of a multi-bot model that allows you to expand your chatbot horizon.

The next blog in the series covers the 6 elements of a multi-bot approach in more detail.

Want to read more about ServisBOT’s chatbot approach? Download our Conversational AI Platform datasheet.
