How to put together an enterprise serverless app, and how to communicate the architecture.
This is the third article in a series about implementing an AWS Serverless Web Application for global clients of a large enterprise. If you are lost, please read the first article.
Serverless is a relatively new technology for the enterprise. Software developers like myself pick up the technology easily, and vendors offer unprecedented support for adoption. Despite all this, there is very little literature or practical knowledge written for the line manager’s benefit. For example, how does serverless architecture change life for a manager planning a migration from an expensive monolithic ASP.NET application to an AWS serverless Angular/.NET Core application? How does the manager respond to questions about timelines, availability and the like? When planning a large-scale enterprise migration to AWS serverless, it is important to communicate correctly, so that managers can form the correct expectations and relay them upwards. Through correct communication, managers can drive operational changes as well as set the promise for long-term, business-transforming changes.
In this article, I want to focus on the operational change aspect and how to communicate it. Within that, I want to focus only on the “Composition” of an AWS Serverless App.
First of all, most architectural knowledge from monolithic single- or dual-region systems may not apply directly to AWS Serverless Architecture. It is important to acknowledge the differences and lay out a geographic and capability map early on. This helps everyone understand what they are getting into. An extreme example of distribution is the Content Delivery Network (CDN for short). Amazon CloudFront (AWS’ CDN) has more than 100 geographic locations and a variety of capabilities that can be served from them. For my client, I designed the CDN layer not only to host static content, but also to do some dynamic response processing (compute) using Lambda@Edge. On the other side of things, we use DynamoDB “Global Tables” to replicate data across two AWS regions, and in some cases we duplicate the same data in multiple tables.
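To make the Lambda@Edge idea concrete, here is a minimal sketch of a viewer-response handler that modifies a response right at the edge location. The specific headers added are illustrative assumptions, not my client’s actual logic; the event shape is the standard CloudFront event structure.

```python
def handler(event, context):
    """Minimal Lambda@Edge viewer-response handler (Python runtime).

    CloudFront hands the function the response it is about to return
    to the viewer; the function can mutate it before it leaves the
    edge location. The headers added below are illustrative only.
    """
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]
    # CloudFront represents each header as a list of {key, value}
    # dicts, keyed by the lowercase header name.
    headers["strict-transport-security"] = [
        {"key": "Strict-Transport-Security",
         "value": "max-age=63072000; includeSubDomains"}
    ]
    headers["x-frame-options"] = [
        {"key": "X-Frame-Options", "value": "DENY"}
    ]
    return response
```

The same mechanism can run heavier per-request compute (request routing, A/B cohorting, response rewriting) without any origin round trip.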
To start the conversation, I circulated a diagram (Figure 1) early on. In that diagram, I showed vertical stacks of components that realize functionality for static assets, dynamic APIs and authentication. Each component is either already distributed by AWS natively or can be distributed by developers by well documented means.
This picture stoked questions from all stakeholders. Leaving aside the content of questions for a moment, the very fact that we were having deep conversations was a huge leg up in migration efforts. People brought up a bunch of concerns, beliefs and questions that we discussed before coming up with solutions. This process was of immense value to the enterprise in speedy adoption of serverless technology.
One of the most discussed questions was around the ability to change software as needed in releases. My client was used to building long-term roadmaps with quarterly releases. They understood very well that this was not the best way forward and wanted to release software more frequently. At the same time, such a change in Release Cadence would have been a giant leap for them. Serverless, distributed software is inherently amenable to quick, small releases, and the requirement was to support both “Small and Frequent” as well as “Big Bang and Quarterly” releases. This was challenging but eventually successful. For the sake of this article, I am going to sidestep “Big Bang and Quarterly” and focus on “Small and Frequent”.
Figure 2 is another visual that was very useful for communication with stakeholders. It shows Vertical Stacks of Functionality and how those are realized by Horizontal Stacks of Technology. This goes hand in hand with Figure 1. The key information here is that stakeholders should deliberately split large monolithic products into smaller, independently manageable products. (Here, “manageable” means that they can be announced to customers independently, supported separately, and possibly also budgeted separately.) Each change to such an independent “product” is then rolled out in a specific sequence. First, any changes to DynamoDB are rolled out. These could include indexing changes, new tables, replications and so on. The key constraint is that the change must be backward compatible. For example, if the data type of an indexed column is to change, instead of changing the index, a new index should be created on a new column. This allows both indexes and columns to co-exist. Once the DynamoDB change is rolled out, the new version of the Lambda code should be rolled out; once again, this should be backward compatible. Then, the API Gateway changes should be rolled out so that existing or new APIs use the new version of the Lambda code. Finally, the consumers (such as web or mobile apps) should be rolled out to use the newly minted API endpoints. (I am sure someone has coined a name for this process.)
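The backward-compatibility constraint on the Lambda stage can be sketched like this: new code must tolerate both the old and the new item shape, so it works whether or not the DynamoDB change has reached a given record. The attribute names below are hypothetical, chosen only to illustrate the pattern.

```python
def read_price_cents(item: dict) -> int:
    """Read a price from a DynamoDB item during a schema migration.

    Hypothetical scenario: the old schema stored "price" as a decimal
    string, and the new schema adds an integer "price_cents" attribute
    (for example, to back a new index on a new column). New Lambda
    code prefers the new attribute but falls back to the old one, so
    both item shapes co-exist while the rollout is in flight.
    """
    if "price_cents" in item:
        return int(item["price_cents"])
    # Fall back to the legacy attribute, converting dollars to cents.
    return round(float(item["price"]) * 100)
```

Once every consumer reads the new attribute, a later release can backfill and retire the old one, completing the migration without any step that breaks the code running beside it.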
This allows full deployments without any downtime anywhere in the application, yielding an “always-on” type of architecture. It is achieved by full automation of all “Horizontals” via AWS CodePipeline.
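One way to express that ordering in automation is to give the pipeline one stage per rollout step, in the same sequence as above. This is a sketch only; the stage names, role ARN and bucket are placeholder assumptions, and a real pipeline would fill in source, build and deploy actions.

```python
# Hypothetical stage names encoding the rollout order from the text:
# data layer first, then compute, then API, then consumers.
ROLLOUT_STAGES = [
    "DeployDynamoDB",
    "DeployLambda",
    "DeployApiGateway",
    "DeployWebApp",
]

def pipeline_skeleton(name: str) -> dict:
    """Build a skeleton pipeline definition in the shape accepted by
    boto3's codepipeline.create_pipeline. Actions are left empty here;
    the point is that stage order enforces the rollout sequence.
    """
    return {
        "name": name,
        "roleArn": "arn:aws:iam::123456789012:role/PipelineRole",  # placeholder
        "artifactStore": {"type": "S3", "location": "my-artifact-bucket"},  # placeholder
        "stages": [{"name": s, "actions": []} for s in ROLLOUT_STAGES],
    }
```

Because CodePipeline runs stages strictly in order and stops on failure, a backward-incompatible change caught in an early stage never reaches the API or the consumers.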
A lot more to it
Enterprise Serverless has a lot more to it. I hope to cover some of the automation tools and micro-services thinking in another article. Here, I wanted to lay out two key items that we reviewed over and over throughout the migration and used as our touchstones.