Log4Shell and lessons learned

by Jehanzeb Khan

I am old.

I have been kicking around long enough to have seen software development done in its own shell, with custom-made libraries (which I often questioned, considering there was very little chance of them being reused, but more on that later) that were concerned mostly with fulfilling the functional requirements of the software solution. It was a simpler time, where you created your own logging solution (if you needed one), or your own variation of a serializer to send data to another machine over TCP/IP (or UDP, if you didn't care about the order of arrival).

The problem we have now is that while we have completely overhauled and fully transformed how we build solutions and services, we still haven't strayed far enough from thinking and working the way we did twenty to thirty years ago. This seems like a very damning statement; one that might be read as an oversimplification by those who know me, or as pompous arrogance by those who don't, but I stand by it, at least until the end of this post!

How we normally build solutions.

Think about the last solution you were building, and how you thought about the multi-threading in it. I am pretty sure you thought through your design, gave all the bottlenecks some thought, and concluded that the things that can be done in parallel should be parallelized. I am pretty sure I would do the same. I would take the 15.24 km view (or 50,000 feet, for those still using a measurement system favoured by medieval armies) [I had to put in a pun! Can't forget my American family and friends], work on the large n-tiered design, look at the problems I have in terms of performance, service, scalability, and accessibility, and fill my stack with the right tools and languages to make it happen. I would also talk to my team about the support processes and teams we would need to create for the service to run, and start the cycle of iterations.

Herein lies the kicker. In the step between the "big picture" view and selecting the right technology for our system, we rarely ever consider the underlying technology, because "we will replace it once we have the system off the ground". That replacement never happens, and when it does, it comes at a large cost.

I am unsure how many people (from big or small companies) might be saying "no! that's not our team" (inaudibly), but I do know how many people I know would be nodding their heads at this point, at a sight all too familiar.

But how do we get out of this?

Well, one way of course is to write everything yourself, and I mean everything: write the OS, write the utility libraries, write everything you need, then put it through the most intense scrutiny process you can think of, and hope and pray to God that all your labours bear fruit. After all, that is what a truly custom, fully-tailored, fully secure system would look like. I recall a friend (and mentor) who used to say that if your OS doesn't allow something, just write your own (of course, this was more than a quarter century ago, and he didn't say it to me; he was the recipient, from his own mentor, but you get my point).

One memorable quote I have from my mentor Saqib Ilyas: the only secure system is one sitting unplugged, powered down, in a vault, guarded by armed men, in a bunker whose location is unknown; all the rest are varying levels of insecure.

This in itself isn't scalable, sensible, or even remotely possible in this day and age! Or is it? You see, I used to work for large defence contractors, on software that had to be (at least) DO-278B compliant, which meant (in a very broad sense) that every single line of the requirements had to be traceable to every single line of code. This ensures that you remove all the issues you might find with extra code that could cause problems later on, and that every single line has known use-cases and tests that traverse all paths through the software. It also meant, broadly, that we could only use libraries we could either fully test or that were known to be highly stable. While this works in the military, where cost is rarely a concern (unless one is being questioned by a Senate hearing committee), I am pretty sure I would never put it into action in any other industry.

The other obvious way that I can think of is to work out all your dependencies during design and check for vulnerabilities that have been reported in the past. That would give you a good measure of how stable the library/service/technology is. I really hope everyone does at least some form of this. It would not only improve your RCA process when the time comes, but might also show you better ways of solving the problem you are working on.
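As a minimal sketch of what "check for past vulnerabilities" can look like in practice: the public OSV database (osv.dev) exposes a query endpoint that takes a package coordinate and a version and returns known advisories. The helper names below (`osv_query_payload`, `check_package`) are my own, but the endpoint and request shape are OSV's documented ones; the Log4Shell-era coordinate is just used as the illustrative input.

```python
import json
from urllib import request

# OSV (osv.dev) is a public vulnerability database; its /v1/query endpoint
# accepts a package coordinate plus a version and returns known advisories.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def osv_query_payload(ecosystem: str, name: str, version: str) -> dict:
    """Build the JSON body for an OSV query, e.g. for Maven's log4j-core."""
    return {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }


def check_package(ecosystem: str, name: str, version: str) -> list:
    """POST the query and return the list of matching vulnerabilities."""
    body = json.dumps(osv_query_payload(ecosystem, name, version)).encode()
    req = request.Request(
        OSV_QUERY_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # OSV returns {"vulns": [...]} when anything matches, {} otherwise.
        return json.load(resp).get("vulns", [])
```

Querying `check_package("Maven", "org.apache.logging.log4j:log4j-core", "2.14.1")` would, at the time of writing, surface the Log4Shell advisory among the results. Running this kind of check against every entry in your dependency list, as a design-time step and again in CI, is the cheap version of the discipline described above.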

Kamía panákeia (Greek: "no panacea")

I have no panacea. I am pretty sure I don't want to be associated with the word either! That being said, I am confident that if you generate a software bill of materials for your software (whether while conceptualizing it or at the first stable build), going a few layers deep (ideally as deep as possible), it will help you later on when someone finds a vulnerability in one of the libraries you used, whether you added that library directly or it came in as part of another one you added [there might be another Azer Koçulu].
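To make the idea concrete, here is a deliberately minimal sketch of such an inventory for a Python environment, using only the standard library's `importlib.metadata`. Real SBOM tooling (CycloneDX and SPDX generators, for example) also records hashes, licences, and the full transitive tree; this flat listing of name, version, and declared requirements is just the "first layer" the text describes.

```python
from importlib import metadata


def simple_sbom() -> list:
    """Enumerate every installed distribution as a small SBOM entry.

    Each entry records the distribution's name, its version, and the
    dependency specifiers it declares; together these give you the list
    to check when a vulnerability is announced in some library.
    """
    sbom = []
    for dist in metadata.distributions():
        sbom.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
            "requires": dist.requires or [],
        })
    # Sort for stable, diff-friendly output between builds.
    return sorted(sbom, key=lambda entry: (entry["name"] or "").lower())
```

Dumping `simple_sbom()` to JSON at every stable build, and diffing it against the previous build, is a cheap way to notice when a transitive dependency silently changes underneath you.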

How we've done it

Most of the tools that my team writes and is responsible for* don't use Java, so we aren't fixing anything on our team. Yes, we were fortunate this time; but had this vulnerability been found in one of the libraries we do use, it would have been a simple case of patching and sending an email to all our users explaining what we changed, why, and how they would be impacted [we would have used our software bill of materials for this]. That, however, is all after the fact, and I personally don't like dealing with risks that way. We take a hybrid approach in our designs: we document the libraries we use, but mostly opt for very stable software and services, which is easier for us because we have a large team and a large organization to bank on while we build the services the right way. We also ensure that test automation for the golden paths through our software runs all the time, not just after builds (and yes, there's a reason for that as well).

What I fear!

Like most of the first world, I too enjoy having over-engineered, senseless smart appliances around the house (despite my better judgement), and what I fear most is that the applications running in my fridge, microwave, washing machine, dryer, air conditioner, etc. will never be patched and will be left exposed for malicious actors to exploit as they please. Thank you, Java, for being pervasive enough to make it to my blender!

* We ensure that the Frostbite engine is bug free, with high-quality services and tools that cater to our developers. If you would like to join our team (which plays Battlefield as a means of bonding, camaraderie, and cohesion), send me a message; I am hiring for roles in Sweden and the UK.
