Containers, Serverless, yada -- make this change to your app and go to lunch with the money + time you saved

WARNING: Cloud native architecture ahead. Prepare to have your web application scale, cost less, exceed productivity goals, and trigger an addiction to increasing your business capabilities.

Even if the feature I'm going to talk about isn't present in your application, spending time decomposing your system into its constituent cloud managed services (the "physical view" of your architecture) will surface plenty of low-hanging fruit with the same effect.

So here it is: stop uploading files through the compute of your web tier.

Benefits

  • Overall system reliability will go up
  • Your web tier will cost you less and scale even better
  • Your storage costs will drop
  • File processing costs will go down

How is this even possible? The answer is...cloud storage. In the case of Azure, Azure Blob Storage.
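The core mechanism that makes this possible is the shared access signature (SAS): the web tier hands the browser a short-lived, signed URL, and the browser uploads straight to Blob Storage, bypassing your compute entirely. Here's a rough sketch of how such a signature is produced. Note the caveat: the real string-to-sign layout (permissions, start/expiry, canonicalized resource, API version, and so on) is defined by the Azure Storage REST docs, and the account name, key, and field values below are made up for illustration.

```python
import base64
import hashlib
import hmac

def sign_sas(account_key_b64: str, string_to_sign: str) -> str:
    """HMAC-SHA256 signature over a string-to-sign, as Azure SAS tokens use.

    NOTE: this is a simplified illustration; the exact string-to-sign
    format is specified by the Azure Storage documentation.
    """
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical values for illustration -- not a real account key.
account_key = base64.b64encode(b"not-a-real-key").decode()
string_to_sign = "r\n2024-01-01T00:00Z\n2024-01-01T01:00Z\n/blob/myacct/uploads/photo.jpg"

sig = sign_sas(account_key, string_to_sign)
sas_url = (
    "https://myacct.blob.core.windows.net/uploads/photo.jpg"
    f"?sp=r&se=2024-01-01T01%3A00Z&sig={sig}"
)
```

In practice you wouldn't hand-roll this; the `azure-storage-blob` SDK has helpers that generate SAS tokens for you. The point is that issuing a signed URL is cheap, stateless work for your web tier, while the heavy byte-streaming goes directly to storage.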

A little history first

In my years designing software for the cloud, I frequently meet folks who think their application is unique--that their requirement of uploading images to generate thumbnails is an industry trade secret. Or perhaps it's uploading PDFs that need to get processed and have to be handled a particular way.

Yes, it's true that there's some secret sauce in there somewhere, but just like cooking, the mechanics of a heat source, a pan, and some foundational ingredients will always be the same.

"Oh Yea?" You say.

"How am I going to do it then? Huh? The user needs to upload those files somehow. We need to process them too. How are we going to get those files from a browser to disk so we can copy them onto our file server? And what about processing them? We have to do some upfront checks and metadata stuff (which we do on the web server right now), and then we copy it over to our file share. How are we going to correlate the file info that we store in our database for that file? How is our file system watcher going to know when a file needs to be handled if we do do this? What about virus scan??? Great, you're exposing us to malicious files now!"

The conversation or thought process may be a little different, but they're all very similar.

Use cloud to do the boring stuff

This scenario of uploading files is pretty boring. Right?

We've been uploading files using a browser for some time now, and it's time to move on and do some more interesting stuff, yeah? Let's focus our attention on the bits of code that really do matter--that really are proprietary.

To get on the same page, here's the boring stuff:

  • Streaming bytes from a browser to an endpoint
  • Storing the bytes as a file
  • Processing said file (including virus scanning, and so on)
  • Persisting metadata about the file in a database and making a logical connection to some business model (customer, user, etc.)
  • Knowing when we're sure all the processing is done, so we can do some other interesting stuff
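The metadata and "are we done yet?" items on that list tend to cause the most anxiety, so here's one common shape for them (a sketch with hypothetical names, not the only way to do it): the web tier records a row keyed by the blob path before issuing the upload URL, and each processing stage advances that row's status, so "is all the processing done?" becomes a simple status check.

```python
from dataclasses import dataclass

# Hypothetical processing pipeline; your stages will differ.
STAGES = ["pending", "uploaded", "scanned", "processed"]

@dataclass
class FileRecord:
    blob_path: str      # e.g. "uploads/customer-42/photo.jpg"
    customer_id: int    # the business entity this file belongs to
    status: str = "pending"

    def advance(self, new_status: str) -> None:
        # Only allow moving forward one stage at a time, so a replayed
        # or out-of-order event can't corrupt the record's state.
        if STAGES.index(new_status) != STAGES.index(self.status) + 1:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

record = FileRecord(blob_path="uploads/customer-42/photo.jpg", customer_id=42)
record.advance("uploaded")   # browser finished its PUT to Blob Storage
record.advance("scanned")    # virus scan stage completed
record.advance("processed")  # final processing stage completed
```

The forward-only transition check matters in an event-driven world, where messages can be delivered more than once or out of order.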

"OK, sure, but this doesn't answer any of my questions! And now I have even more questions than before."

Fair enough.

The cloud native solution

Here's how we're going to do it.

[Diagram: cloud native file upload architecture]

Azure Managed Services

This solution will work for any cloud, but here are the Azure managed services that will be used in this example:

  • App Service (Web Apps)
  • Blob Storage
  • Event Grid
  • Service Bus
  • Functions

The role of the managed services

Web Apps' only responsibility is handling HTTP requests from the client. We've reduced the HTTP traffic types it handles from two (files plus sending/receiving JSON) to just one: JSON. With its compute and memory needs reduced, use autoscale to scale the cluster in and out with traffic load patterns.
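Concretely, the web tier's entire file-handling surface shrinks to a small JSON endpoint that tells the browser where and how to upload. A minimal sketch of that response body (the account name and SAS query string here are hypothetical; the SAS would come from your storage SDK, as earlier). The `x-ms-blob-type` header is a real Azure requirement when creating a block blob with a single PUT:

```python
import json

def make_upload_ticket(container: str, blob_name: str, sas_query: str) -> str:
    """Build the JSON body a browser needs to PUT a file straight to Blob Storage."""
    return json.dumps({
        "url": f"https://myacct.blob.core.windows.net/{container}/{blob_name}?{sas_query}",
        # Required by Azure when uploading a block blob via Put Blob.
        "headers": {"x-ms-blob-type": "BlockBlob"},
        "method": "PUT",
    })

# The browser would fetch this ticket, then PUT the file bytes to ticket["url"].
ticket = json.loads(make_upload_ticket("uploads", "photo.jpg", "sp=cw&sig=abc"))
```

Issuing tickets like this is milliseconds of CPU per request, which is what lets the web tier scale on cheap, small instances.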

Blob Storage will be our anchor for receiving the files. Say goodbye to your file server. With its compute costs eliminated, we can realize even more cost savings by archiving files to Azure Archive Storage whenever we want.

Event Grid and Service Bus will be our integration services for the asynchronous, event-driven patterns we've now implemented.

Functions' role is to act as our serverless, on-demand compute that will process the files. You'll move the code that used to run in a daemon, probably on a batch schedule, to being triggered when an Event Grid event is published.
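To make that trigger concrete, here's a sketch of what a Function receives and how it finds the blob to process. The `eventType`, `subject`, and `data.url` fields follow the documented Event Grid schema for storage events; the event payload is trimmed down, and the processing itself is left as a stub:

```python
import json

# A trimmed-down Microsoft.Storage.BlobCreated event in the Event Grid schema.
SAMPLE_EVENT = json.dumps({
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/uploads/blobs/photo.jpg",
    "data": {"url": "https://myacct.blob.core.windows.net/uploads/photo.jpg"},
})

def handle_blob_created(raw_event: str) -> str:
    """Extract the blob name from a BlobCreated event; stub for real processing."""
    event = json.loads(raw_event)
    if event["eventType"] != "Microsoft.Storage.BlobCreated":
        raise ValueError("unexpected event type")
    # The blob name is the part of the subject after ".../blobs/".
    blob_name = event["subject"].split("/blobs/", 1)[1]
    # Stub for the real work: virus scan, thumbnailing, metadata update, ...
    return blob_name

name = handle_blob_created(SAMPLE_EVENT)
```

Because the event carries the blob's identity, there's no polling and no file system watcher: the function runs only when there is a file to handle, and you pay only for that run.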

Managed service variations

If you're thinking, "OK, but we're using (or want to use) Kubernetes"--love it. Here's what this will look like when you're running AKS (Azure Kubernetes Service) and still want to leverage serverless with Functions.

[Diagram: AKS variation of the architecture]


The difference now is that the traffic no longer runs through your Web App, but through your Kubernetes Ingress to whatever Service is associated with the Deployment of Pods that handles the HTTP traffic (I'm cheating here on the diagram and not showing that in its entirety). The processing pipeline is now deployed to the cluster, scaled on demand by KEDA (Kubernetes Event-driven Autoscaling).

What about the code that does this?

Good question. I'm going to save that for another post; I'm currently working on an end-to-end working example of how to do this.

Cheers!

More articles by Kevin Hillinger

Explore content categories