A Chatbot Architecture for Growing and Managing your AI Bots
My previous blog covered the challenges of scaling conversational AI solutions and creating better chatbot experiences. A related challenge is growing chatbot projects beyond their initial scope. There are two aspects to this that I want to address here:
#1: Broadening the Scope of an AI Bot by Adding More Capabilities and Features
This is often a natural progression of chatbot maturity: as a business adopts the technology, it progresses the functionality of its chatbot from basic Q&A or informational capabilities to addressing more complex customer needs, fulfilling multi-step workflows, and/or adding more free-form conversation. Adding features and capabilities to an AI bot can have unwanted side effects, as a single bot becomes overloaded with intents at the expense of accuracy, performance, and ultimately the customer experience.
The other challenge we see is that many businesses embark on a large-scale conversational AI project but start by targeting a single group of customers or a certain product/market segment, testing and adapting the bot’s performance before a larger rollout to broader customer segments, countries (i.e., languages), and/or products. As they scale up the audience or functionality, the complexity of these large-scale bot projects rapidly increases. Building in more and more intents can lead to a breaking point or even hit a limit of the NLP engine being used. How do you train and manage a monolithic bot without breaking what you already have?
- In one instance, a large customer service organization in the telco sector found that expanding the scope of its chatbot beyond a certain level resulted in a decline in its Net Promoter Score (NPS), effectively capping how much extra functionality it could pack into a single bot.
- Another example is from a large technology company that has multiple products in different geographies that they want to provide customer tech support for using digital assistants. A single bot struggles to handle multiple products in an array of different languages, yet the company wants customers to have a consistent experience.
- Then there are the large multi-national businesses that have thousands of employees in different countries and want to provide more self-service for IT support or HR using digital assistants but need to have these handle geo-specific company policies, regulations, and languages.
#2: Expanding Chatbot Projects Across the Business
There’s another side to chatbot scalability, and that’s when chatbots proliferate throughout a department or across a whole business. Use cases for conversational bots can quickly multiply, whether these are customer-centric or internal employee-focused bots. While chatbots are a natural fit for customer service, there are many other interaction points across a customer and/or employee journey where the technology has been successfully applied. This can lead to individual departments working on their own chatbot projects, perhaps using different natural language processing (NLP) engines, chatbot vendor solutions, or conversational bots embedded in an existing enterprise software system.
What happens then is that there is no consistent customer experience across the different solutions and channels. Additionally, the complexity of handling a diverse and fragmented system landscape becomes difficult and costly to manage. Organizations that reach this point in their conversational AI journeys are now looking for a better way to either manage this landscape or shift to a more common set of tools and skills.
An example we have seen of this is an organization that has grown through acquisition and is now trying to manage solutions built on different NLP engines with each potentially integrating with a very diverse landscape of business systems. Other businesses have deployed multiple digital assistants that are very mission-focused but want to bring these together into a more unified and consistent experience rather than have siloed bots that are individually maintained and not necessarily providing a consistent brand experience.
The solution to the above scenarios lies in a different approach to how the solution is architected. Rather than stuffing all intents and functionality into a single unwieldy monolith, the solution can be broken up into more manageable components, with a conversation manager or central Virtual Assistant acting as a bot orchestrator at the forefront of the conversation. This orchestrator then routes the conversation to and from individual skills-based bots that fulfill the request.
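As a rough sketch of the idea, an orchestrator can classify each incoming message and hand it to the skill bot registered for that intent. All class names and the keyword-based classifier below are illustrative stand-ins, not a specific vendor's API; a production system would use a real NLP intent model.

```python
# Minimal sketch of a bot orchestrator: classify each message's intent
# and route it to the skill bot that can handle it. Illustrative only.

class SkillBot:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords  # stand-in for a trained intent model

    def can_handle(self, message):
        return any(k in message.lower() for k in self.keywords)

    def handle(self, message):
        return f"[{self.name}] handling: {message}"

class Orchestrator:
    def __init__(self, bots, fallback="Sorry, let me connect you to an agent."):
        self.bots = bots
        self.fallback = fallback

    def route(self, message):
        for bot in self.bots:
            if bot.can_handle(message):
                return bot.handle(message)
        return self.fallback  # no skill matched: hand over to a human

orchestrator = Orchestrator([
    SkillBot("billing", ["bill", "invoice", "payment"]),
    SkillBot("orders", ["order", "delivery", "status"]),
])
print(orchestrator.route("I have a question about my last invoice"))
```

Because each skill bot only sees traffic the orchestrator routes to it, a bot can be retrained or replaced without touching the others.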
This approach is not new; history has shown that breaking things down into more manageable components can be very advantageous. Two examples help explain: the contact center IVR and the microservices software architecture.
IVR: Routing Inbound Calls Based on Skills in the Contact Center
Digital assistants essentially act like customer service agents, and in the world of the contact center, the concept of the Interactive Voice Response (IVR) system has been around for decades. IVRs have been instrumental in automating a portion of inbound calls away from human customer service agents, and even when calls are transferred, the IVR is used to route each one to the agent best able to handle the request.
Basically, the IVR routes an inbound request according to the skills or subject matter expertise needed to address it. When a customer chooses an option, they are directed to the skills that will help resolve their issue, whether that’s an automated response or a transfer to an agent who is trained to help them with their specific issue, such as paying a bill, checking the status of a delivery, or buying a product. This is similar to the receptionist or concierge in a brick-and-mortar business who is there to help guide a visitor to the right department or person. So the idea of organizational hierarchies based on skills and teams isn’t a novel one, and it can be applied very effectively to the world of digital or virtual workers.
Microservices: Software Applications Break a Problem into Smaller Pieces
Looking at this from a different angle, think about how software application development has evolved throughout the years. The days of developing applications as huge monoliths have given way to what is termed a microservices architecture.
To quote from “The What, Why, and How of a Microservices Architecture”:
The single responsibility principle, first articulated by Robert C. Martin, suggests that we “group together the elements that change for the same reason, and separate the elements that change for different reasons.” A microservices architecture extends this strategy to loosely coupled services that can be created, deployed, and maintained independently. Each service is responsible for a specific function and can interact with other services via straightforward APIs to solve more complicated business problems.
The logic behind this approach lies in breaking a problem down into smaller, more manageable pieces and the advantages this brings are:
- Easier to grow and scale.
- Add functionality faster.
- When functionality is added things are less likely to break elsewhere.
- More cost-effective and faster to maintain.
In the contact center and in software development, the concept of breaking things into more manageable skills or components that can work together to address the ultimate need has worked for years.
Shouldn’t this be the approach to how we build Conversational AI solutions?
Multi-Bot Approach: Breaking Conversation Bots Up By Skills
In the realm of Conversational AI and chatbots, we propose a multi-chatbot AI architecture, or multi-bot orchestration approach, that addresses the two scaling issues mentioned at the beginning of this blog. Indeed, even for new adopters of chatbot solutions, starting with a multi-bot mindset and architecting for this has many benefits. It sets the organization up to better manage complexity as implementations advance and grow.
The concept behind this approach is to consider all the use cases for AI bots that help a customer and allow the individual teams with the appropriate skills to build their solutions. Even though digital assistants are often associated with customer service use cases, there are a plethora of other areas where the technology can be applied successfully.
When a customer initiates their first interaction with a business, for example, applying for a loan or membership, seeking a quotation, or looking for help with a product choice, the teams responsible for customer acquisition, online sales, or quotations will have the skills to design the experience and create the content for these types of bots. On the operations side, the teams responsible for claims, billing, scheduling, or collections have the skills for these bot projects. Across the complete customer lifecycle, there are many interaction points and workflows that are owned by different teams that have the skills and training to manage this. These then turn out to be the best teams to create the content for their own digital assistants.
You need to make it easy for these teams to build their bots without having to worry about architecting a unified experience or managing the ebb and flow of the conversation. That’s where the central Virtual Assistant (VA) or Conversation Manager comes in.
The Benefits of a Multi-Bot and VA Model
Let’s say a customer asks for billing information. That can be handled by the billing bot. But then the customer switches gears and asks for a status update on a recent order. The billing bot doesn’t have the skills to handle this, so the query has to be routed to a different bot. Human conversation switches context all the time, so a conversation manager needs to sit at the forefront of the conversation interface, understanding intent and routing correctly between skill-based bots.
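One way to sketch this context switching (with illustrative names and a toy keyword classifier in place of a real NLP engine) is a conversation manager that re-classifies every turn, so a mid-conversation jump from billing to orders is routed transparently:

```python
# Sketch of mid-conversation context switching: the conversation manager
# re-classifies each turn and changes the active skill bot when the
# customer's intent changes. Names and keywords are illustrative only.

INTENT_KEYWORDS = {
    "billing": ["bill", "invoice", "charge"],
    "orders": ["order", "delivery", "shipped"],
}

def classify(message):
    for intent, words in INTENT_KEYWORDS.items():
        if any(w in message.lower() for w in words):
            return intent
    return None  # unrecognized intent

class ConversationManager:
    def __init__(self):
        self.active_bot = None  # skill bot currently holding the turn

    def route(self, message):
        intent = classify(message)
        if intent and intent != self.active_bot:
            self.active_bot = intent  # context switch between skill bots
        if self.active_bot is None:
            return "agent-handover"
        return self.active_bot

cm = ConversationManager()
print(cm.route("Why was I charged twice?"))    # routed to billing
print(cm.route("Also, has my order shipped?")) # routed to orders
```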
Handling disambiguation is another example where the central Virtual Assistant plays a key role. There are plenty of scenarios where disambiguation can become an issue. An insurance customer may ask to “file a claim,” but what if they hold several policies (e.g., car, home, or life insurance)? The VA handles this by going back to the customer and asking which policy they want to claim against. Similarly, when a bank customer with deposit, savings, and credit card accounts asks for their balance, the VA asks them to specify which account.
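A minimal sketch of that VA-side disambiguation step, with hypothetical bot names: when one intent maps to several candidate skill bots, the VA either routes directly (one match) or returns a clarifying question instead of guessing.

```python
# Sketch of VA disambiguation: "file a claim" maps to one skill bot per
# policy type. Bot names and the return shape are illustrative.

POLICY_BOTS = {"car": "car-claims-bot",
               "home": "home-claims-bot",
               "life": "life-claims-bot"}

def disambiguate(customer_policies):
    """Return ("route", bot) when unambiguous, else ("ask", question)."""
    candidates = [p for p in customer_policies if p in POLICY_BOTS]
    if len(candidates) == 1:
        return ("route", POLICY_BOTS[candidates[0]])
    question = "Which policy is this claim for: " + ", ".join(candidates) + "?"
    return ("ask", question)

print(disambiguate(["car"]))          # unambiguous: route straight through
print(disambiguate(["car", "home"]))  # ambiguous: ask the customer first
```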
So the VA manages the conversation and routing so that individual bots don’t have to. It can also centralize functionality such as authentication, security, and human handover so that this doesn’t have to be built into each underlying bot. The bots are there simply to solve the customer issues they are trained to handle, so they can be trained and tuned individually without impacting the other skill-based bots.
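The centralization idea can be sketched as a VA that wraps every turn with the cross-cutting checks, so the skill bots themselves contain no authentication or handover logic. The session shape, function names, and messages below are all assumptions for illustration:

```python
# Sketch of centralizing cross-cutting concerns in the VA: skill bots
# stay focused on their domain; the VA handles auth and human handover.

def billing_bot(message):
    # A skill bot is just domain logic; no auth or handover code here.
    return f"billing answer for: {message}"

class VirtualAssistant:
    def __init__(self, bots):
        self.bots = bots  # name -> skill bot callable

    def handle(self, session, message, bot_name):
        if not session.get("authenticated"):
            return "Please sign in first."          # central auth gate
        bot = self.bots.get(bot_name)
        if bot is None:
            return "Transferring you to an agent."  # central handover
        return bot(message)

va = VirtualAssistant({"billing": billing_bot})
print(va.handle({"authenticated": False}, "my bill", "billing"))
print(va.handle({"authenticated": True}, "my bill", "billing"))
```

Because the auth gate and handover live in one place, changing either policy never requires retraining or redeploying the individual skill bots.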
Bot orchestration has many advantages for organizations that are maturing from their initial chatbot and want to create and manage more complexity, diversity, and functionality in their conversational AI solutions.
For more information, or to request a demo of a multi-bot approach to architecting your conversation bots, please contact us.