Basic cloud application patterns


As I stated in my previous post, I started working a year ago on new projects using the SAFe agile methodology. However, that's not the only thing I'm learning right now.

We have also decided to develop these applications as cloud native, on Amazon Web Services infrastructure.

Before I ramble on about two of the concepts I've come across in the trainings and literature I've gone through over the past months, I'll take a few minutes of your time to clarify what I believe a cloud native application to be. I know these concepts have been discussed and described intensely over the past years, but I'd like to give you my perspective on them and would love to hear your comments.

In my previous projects, we usually developed and deployed our applications on servers physically located in our company datacenters. 

This of course started with physical servers, each running a single operating system, but it now usually goes through the allocation of dynamic virtual machines and the automation of a large part of the OS installation and configuration.

The software installation itself still relies heavily on tedious technical documentation executed manually by remote operators.

Two of the major issues we run into in almost every project are:

  • Lead time: if you need specific servers, it will take you a good six months to get them, counting specification, the purchase process and installation. For some projects we already had the opportunity to use VMware virtualization services and get VMs in a matter of minutes, but that turned out to be a surprisingly small percentage of the projects I worked on.
  • Installation discrepancies: for one reason or another, infrastructure and software installed by the same teams with the same procedure tend to diverge systematically.

We also started experimenting with software-defined networking and infrastructure through projects like OpenStack, but without great success yet.

Lift and Shift

Once we get to running our applications on virtual machines or containers on premises, taking those virtual machines and running them on a cloud infrastructure like AWS or Google Cloud seems like a natural evolution.


It's an approach that Amazon refers to as lift and shift, I guess because it is limited to running existing software from its container (Docker, VMDK...) in a new location, as if you were putting the servers on a forklift and bringing them to the cloud provider.

I've discussed this approach for applications where the objectives were to reduce the cost of infrastructure and to ensure extreme scale adaptation for a planned, short period (for instance, multiplying the number of HR portal frontends during the yearly interview season).

Cloud aware infrastructure

Let's consider our applications are now running smoothly on cloud infrastructure. The next obvious step is to start using managed services and their APIs to make the application more efficient without actually changing the application itself.

I'd put in that step the usage of EC2 Auto Scaling, and to some extent the Elastic Beanstalk service.

The basics are simple: AWS and other cloud providers deliver load balancing services that applications can use without having to invest in infrastructure. These services forward the traffic to a group of servers (in AWS, usually an Auto Scaling group). An Auto Scaling group is defined by a launch configuration (a server image with the application running) and an objective in terms of number of servers. AWS will automatically spawn the requested number of servers and monitor their health (health checks at the application level will need to be defined). This enables two interesting features:

  • Fault tolerance at cluster level: if a server fails, another will be spawned without human intervention.
  • Adaptation to load: you can use autoscaling rules to modify the objective of the Auto Scaling group based on CPU load, or on any metric from the load balancing or server management services, to increase or decrease the number of servers.
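As a rough illustration, the setup described above could be sketched in a CloudFormation template like this. The resource names, AMI and target group are placeholder assumptions, not taken from a real project:

```yaml
# Hypothetical sketch: a load-balanced Auto Scaling group with a
# CPU-based scaling rule. ami-xxxxxxxx and WebTargetGroup are placeholders.
Resources:
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-xxxxxxxx        # server image with the application pre-installed
      InstanceType: t3.micro
  WebGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref WebLaunchConfig
      MinSize: "2"
      MaxSize: "10"
      DesiredCapacity: "2"         # the "objective" number of servers
      TargetGroupARNs:
        - !Ref WebTargetGroup      # load balancer target group, defined elsewhere
      HealthCheckType: ELB         # use application-level health checks
  CpuScalingPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WebGroup
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 60.0          # add or remove servers to keep CPU around 60%
```

The scaling policy is what implements the "adaptation to load" part: the group grows or shrinks on its own to track the CPU target.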

Since you only pay for the servers you use, by the second, this way of working opens up very interesting business cases for applications with elastic workloads (for instance, commerce sites with a strong increase in clients during sales periods, or HR applications during yearly performance reviews...).

There are however strong requirements for this type of infrastructure to work:

The part of the application that needs to scale (for instance web frontends) needs to be as stateless as possible: if there are server-side sessions, TCP persistence, or any need for client context to be persisted on the server, then autoscaling might be inefficient.

The software running on these servers, and its usage licences, must be compatible with just-in-time machine creation: if your software requires dedicated licence files or manual operations to get it enabled, scaling up will just not work.

Cloud native applications

Applications designed to work on cloud infrastructure have several advantages when running on cloud:

They can leverage advanced managed services (messaging services like SNS and SQS, image/audio processing services, translation services, chatbot services...).

They can easily interact with the managed services' APIs to adapt or create resources on demand, or use automated triggers like a file upload to S3, or even CDN calls in CloudFront, to adapt to end-user requests.

The new fad in IT solutions is serverless. It may be obvious, but I'd like to state anyway that a serverless application still runs on servers; you just don't have to care about them.

If we take the example of the Lambda service in AWS, the point is to identify the function of code that should be launched and its trigger. Lambda will automatically spawn compute to run this code when it is triggered. It can be used for on-demand processing (like creating a thumbnail when end users upload an image) but also for end-user requests (a Lambda function can reply to HTTP calls from end users behind an API Gateway, making it possible to run a web application without having to run any web server).
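To make the thumbnail example concrete, here is a minimal, hypothetical sketch of such a handler in Python. The event shape follows S3 event notifications; the actual image processing and S3 calls are only indicated in comments, and all names are illustrative:

```python
import json

def handler(event, context):
    """Hypothetical Lambda entry point, triggered by an S3 upload notification."""
    results = []
    for record in event.get("Records", []):
        # Each record describes one uploaded object.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would fetch the object with boto3, generate the
        # thumbnail (e.g. with Pillow) and upload it back to S3.
        results.append({"source": f"{bucket}/{key}",
                        "thumbnail": f"{bucket}/thumbnails/{key}"})
    return {"statusCode": 200, "body": json.dumps(results)}
```

No web server is involved: S3 (or API Gateway) invokes the function directly, and you only pay while it runs.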

This type of infrastructure is extremely interesting when the flow of processing is very elastic, since you only pay for the execution time you use. But if you use it round the clock, it is more costly and less efficient in terms of compute than plain VMs.

Interesting realizations:

Bad optimizations and bad implementations will cost you money. For instance, if everything you do runs on Lambda and you go for a microservice architecture, you might end up calling synchronous Lambdas from Lambdas, which means you pay for two executions for the entire wait time of the first Lambda. Making everything asynchronous is not the right way to go either, because Lambda also has a small minimum billed duration (100ms of execution), so if you over-optimize your function to use only 20ms, you will still pay for 100ms. That makes some good practices extremely unnatural.
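A small sketch of this billing arithmetic, assuming the 100ms granularity mentioned above (the actual granularity and rates are set by the provider and may change):

```python
import math

def billed_ms(duration_ms, granularity_ms=100):
    """Round an execution time up to the billing granularity (100 ms assumed here)."""
    return max(granularity_ms,
               math.ceil(duration_ms / granularity_ms) * granularity_ms)

# An over-optimized 20 ms function is still billed 100 ms:
print(billed_ms(20))   # 100

# Synchronous Lambda-from-Lambda: the caller is billed for the whole time
# it waits on the callee, so the same wall-clock time is paid twice.
callee = billed_ms(250)        # 300 ms billed for the callee
caller = billed_ms(10 + 250)   # caller's own 10 ms + 250 ms of waiting -> 300 ms
print(callee + caller)         # 600 ms billed for ~260 ms of real work
```

Both effects push design in directions that feel odd at first: don't shave a function below the granularity, and avoid synchronous chains of functions.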

DevOps for cloud native applications is a dual pipeline: it builds the resources, and it builds the code that runs on these resources. The skillset needed to build these two pipelines is not exactly the same, and version management becomes extremely tricky. (Do I update the minor increment if I only change the default objective of an Auto Scaling group in CloudFormation, and nothing in the actual code of the application?)

There are a number of challenges to this for a company like Airbus:

1 - The Oasis in the desert (a concept I borrow from my colleague Frederic Fenoglietto)

Although all of the cloud infrastructure services provide the basic utilities (power, network connectivity, ...), very few of our applications are self-sufficient. They all need to communicate with others to use reference data, exchange enriched information...

Microsoft Active Directory has been so central to the infrastructure of most companies that we easily forget all the services it offers to central IT teams (single accounts, privilege management, centralized security logs...). All of these luxuries have to be rebuilt or forgotten.

This can be compared to settling in an oasis in the desert. You'd have fruit, water and sunshine, everything you need to survive, but each time you want to go out for groceries it's a three-week ride on the back of a camel.

2 - Security in the open world

On prem, applications benefit from the apparent security offered by the perimeter (firewalls, IDS, proxies, DMZ, ...). The network zones and deployment procedures are also usually precisely defined and enforced. This frees the mind of application developers, who can focus on their features without worrying too much about the technical aspects of securing communication, storage and APIs.

In the early sprints of the applications I work on now, we've had to tune down our security requirements to make things happen (for instance, opening the authorizations more than strictly required in order to iterate quicker on the development). But one of the important points is that security is a hot topic at each level (storage, communication, authentication, authorization, ...), and developers, system operators and application administrators need to be aware of and constantly attentive to security. This also means they should understand the networking and operations of the resources they manipulate, while on prem they only had to focus on storage and presentation.

On our serverless applications, however, there is a huge opportunity for increased security. Since we use mostly managed services, we automatically benefit from AWS's Identity and Access Management and logging services between these resources. This makes it possible to ensure, for instance, that by design a specific Lambda can only read from or write to specific DynamoDB tables. We can also benefit from a central log and metrics service to identify alerts and interesting patterns.
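The restriction that a given Lambda can only touch specific DynamoDB tables is typically expressed as an IAM policy on the function's execution role. A hypothetical sketch, where the table name, region and account ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyOrdersTable",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/Orders"
    }
  ]
}
```

Anything not explicitly allowed here is denied by default, so the function cannot read or write any other table even if its code tries to.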

Now I am far from having seen and done everything there is to the cloud of course, and I'd be happy to hear your comments on this article.
