re:Invent 2018: Validation for a Serverless Technology Stack

4 Minute Read

Last week saw 55,000 technology executives, builders, and operators descend on a relatively chilly Las Vegas for a week of AWS re:Invent.

As this was my first re:Invent, I was greeted with a whole host of interesting presentation sessions and demos, videos of which will be available in the coming days.

At ServisBOT, we’ve built an entire AWS technology stack. We’ve spent the past year on a journey to replace all of our traditional compute with a serverless stack. This means replacing things like EC2 instances and Docker containers with AWS Lambda, and moving from relational RDS-based databases to serverless-first solutions like DynamoDB and Aurora Serverless. We’ve also introduced cloud compute into our data science workloads, so we were very interested to see any announcements around Amazon SageMaker, the AWS-hosted environment for machine learning.
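To make the EC2-to-Lambda shift concrete, here is a minimal sketch (not ServisBOT’s actual code; the names and shape are illustrative) of the kind of handler that replaces an always-on service: the function only runs, and only bills, when a request arrives.

```python
import json

def lambda_handler(event, context):
    """Echo-style handler for an API Gateway proxy integration."""
    body = json.loads(event.get("body") or "{}")
    message = body.get("message", "")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": f"Received: {message}"}),
    }

# Invoked locally for illustration; in production, API Gateway
# supplies the event and the Lambda runtime supplies the context.
if __name__ == "__main__":
    event = {"body": json.dumps({"message": "hello"})}
    print(lambda_handler(event, None)["statusCode"])  # 200
```

There is no server to patch or scale here; the same code that a container would serve behind a load balancer becomes a single function invoked per request.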

This led to a week focused on learning all we could about new features launching at the event, along with emerging best practices for what we’ve already built. We work very closely with AWS for advice on architecting our serverless & Machine Learning (ML) stack, and many sessions were validation for what we’ve already built, but we learned plenty along the way.

Here are two key observations that were not only important to us, but, we feel, will also be important to our customers.

Serverless is the epicentre of cloud innovation – begin your adoption, or be left behind.

The announcements made at this week’s re:Invent had one very clear focus: Serverless computing is the space in which AWS is focusing a huge portion of its R&D effort. Traditional compute has begun to feel like table stakes when compared to the rapid pace of serverless innovation. Of course, there is a downside to working with technology at the bleeding edge of innovation. The fastest way to end a conversation with a vendor on the expo floor was to ask “what can you do for our entirely serverless stack?”

There are a tiny number of niche vendors doing great things to monitor and instrument the Lambda ecosystem, but the major players in the space have yet to catch up. Organizations may not be ready to jump into the deep end as we did, with production workloads running on a serverless stack, but the time to start experimenting is now.

DevOps and automation workloads on the non-critical path are a great place to start, a pattern we saw many organizations following.

We’re encouraging organizations to integrate their CRM vendors, systems of record, and any custom applications they may have using serverless integration patterns, helping to make these systems ready for the switch to Conversational AI that ServisBOT brings.

Machine Learning workloads should be orchestrated in the cloud

Amazon SageMaker is a fantastic tool for orchestrating Machine Learning workloads, using GPU and memory-optimized instances to offer efficiencies in processing that a developer’s machine, or even a traditional data centre, couldn’t provide. Just launched at re:Invent are productized versions of Amazon Retail’s own machine learning expertise, such as Forecast and Personalize.

Combine this with the newly launched human labeling workflows for training SageMaker models, and the benefits of cloud-based Machine Learning make it the natural choice for a business developing new models.
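As a sketch of what “orchestrating in the cloud” means in practice, the snippet below builds the request payload that SageMaker’s CreateTrainingJob API expects. The job name, role ARN, image URI, hyperchoices like instance type, and bucket layout are all our own illustration, not ServisBOT’s actual configuration; building the payload as plain data makes the GPU instance choice and S3 locations explicit.

```python
def build_training_job(job_name, role_arn, image_uri, bucket):
    """Assemble a SageMaker CreateTrainingJob request as a plain dict."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        # GPU-backed instance type: the efficiency the article mentions.
        "ResourceConfig": {
            "InstanceType": "ml.p3.2xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/train/",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/output/"},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# In a real workflow you would pass this dict to
# boto3.client("sagemaker").create_training_job(**job).
job = build_training_job(
    "intent-model-001",
    "arn:aws:iam::123456789012:role/SageMakerRole",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/intent:latest",
    "my-ml-bucket",
)
```

The appeal is that the GPU instance exists only for the duration of the job; when training finishes, the model artifact lands in S3 and the compute goes away.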

At ServisBOT, we’re supporting a bring-your-own-model approach built around the SageMaker ecosystem for companies with established Machine Learning practices. Our dedicated AI team is also prepared to assist in the development of custom models on SageMaker for difficult-to-predict and highly proprietary scenarios.

Powering Agility with AWS Amplify and AppSync

Changing the face of Customer Engagement has forced our engineering team to think long and hard about traditional methods of bottom-up engineering. Too often, a team of engineers can be presented with an ask, disappear into the deepest and darkest depths of their minds, and emerge, caffeine-addicted, with a six-month plan from which a solution may or may not appear. Unfortunately, this is the polar opposite of the agility required by our core mission.

At ServisBOT, we want to drive home agility through the use of Rapid Prototyping and luckily, serverless is an excellent means to that end.  Nowhere is that more apparent than in the excellent talk we attended at re:Invent 2018, “Bridging the Gap Between Real Time/Offline and AI/ML Capabilities in Modern Serverless Apps”.  As with many excellent conference talks, it delivered far more than the title promised, in this case, an absolute tour de force in Rapid Prototyping with Serverless utilizing AWS AppSync and AWS Amplify.

Utilizing the AWS Amplify Command Line Interface (CLI), the speakers demonstrated, through commands like “amplify add auth”, “amplify add api”, and “amplify add storage”, how to stitch together serverless AWS services such as Amazon Cognito User Pools, an AppSync GraphQL API (backed by DynamoDB and Amazon Elasticsearch Service), and Amazon S3. All of this was done without writing a single line of code (but we love code!). The final architecture is detailed in the GitHub repository of the React-based chat and AI interaction app used in the demo.
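For readers who want to try this themselves, the flow the speakers walked through looks roughly like the following setup fragment (each command launches an interactive prompt; the category choices shown in the comments reflect the demo as we understood it, not an exhaustive recipe):

```shell
# Scaffold Amplify in an existing app directory
amplify init

# Add an Amazon Cognito User Pool for authentication
amplify add auth

# Add an AppSync GraphQL API; Amplify can generate a DynamoDB
# backend from the GraphQL schema you supply
amplify add api

# Add an S3 bucket for user file storage
amplify add storage

# Provision everything in your AWS account via CloudFormation
amplify push
```

Nothing runs until `amplify push`, at which point the CLI turns your local choices into CloudFormation stacks, which is what makes the result repeatable across environments.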

Leaving the session, there was definitely an air of “Amplify and AppSync are going to change the world”, but alas, we have to come back down to earth for a moment. As in the Ruby on Rails days, the promise of Rapid Prototyping through CLI-driven development can suddenly lose its appeal when unforeseen errors and problems start arising from the ether. Not unfamiliar with this situation, at ServisBOT we are evaluating Amplify and AppSync as production tools and frameworks, with an emphasis on both repeatability across environments and integration with existing architectures. We are hoping to enable our Rapid Prototypes to evolve from “throw-away” proving grounds into the solutions powering your conversational UI, delivering a performant, reliable, and fun experience. Watch this space!

Our ServisBOT Takeaways on Keeping Pace

The plethora of new AWS announcements may feel overwhelming, but remember that not all of them are relevant to your business.

Two things which have helped us keep pace:

  1. We’ve found a need to filter out the noise. New EC2 volume types are launched every year – we no longer need to know their inner workings unless they relate to Machine Learning workloads.
  2. We’ve found that some technologies need everybody to be an expert (e.g. Lambda, DynamoDB).
    But for more niche areas, elect a subject-matter expert (SME) and encourage them to share their learnings.
    We’ve successfully used this model for Cognito, SageMaker, CloudFront, Lex, and many more besides.

Of course, a whole host of other announcements were made at re:Invent that we are incredibly excited about (DynamoDB on-demand scaling, anyone?!), but hopefully this has provided a concise summary of those we feel will have the most impact on our customers.

 

About the Authors:

Cian Clarke is co-founder and Director of Engineering at ServisBOT where he is focused on scaling out the engineering organization.  Cian manages software architecture, frontend & microservices team development, and the delivery of a reliable and efficient platform.

John Rellis is an Engineering Team Lead for ServisBOT based in Waterford, Ireland.  John likes long walks on the beach, sunsets and draining every last drop of productivity from the Engineering Team to bring ServisBOT to our customers through software that tests itself while we sleep.
