Why Source Code Dependency is a Challenge for Mainframe Workload Rehosting
Would you like to read just one document from LzLabs to better understand why we say our Software Defined Mainframe is so different from other mainframe rehosting approaches?
Then read the white paper linked below, “Why Source Code Dependency is a Challenge for Mainframe Workload Rehosting”. It clearly explains the power of the binary approach implemented in the LzLabs Software Defined Mainframe® (SDM).
Beyond the important fact that the LzLabs approach does not need source code, SDM provides several powerful capabilities compared with standard recompilation:
- Storing data on Linux in its legacy format is the foundation of SDM. It is required for compatibility with legacy programs that perform “fancy” bit manipulations on data, say bit shifting to multiply by two to optimise performance, even though programmers were told not to do that! Those programs clearly assume the underlying mainframe binary representation of data: big-endian numbers, EBCDIC collating sequences for strings, and so on. Such programs cannot cope, unchanged, with the x86 view of the world: little-endian numbers, ASCII encoding, etc. Even a slight alteration of the original encoding when data is rehosted through a traditional recompilation solution would lead those programs to produce erroneous results. SDM’s binary program compatibility allows those “optimisations” to keep working on Linux without any source code.
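As an illustration of the point above (a plain Python sketch runnable on any platform, not SDM itself), the same bytes mean different numbers under big-endian and little-endian interpretation, and strings sort differently under EBCDIC (code page 037) than under ASCII:

```python
# The same four bytes read as a big-endian (mainframe-style) integer
# versus a little-endian (x86-style) integer:
raw = bytes([0x00, 0x00, 0x00, 0x02])
print(int.from_bytes(raw, "big"))     # 2
print(int.from_bytes(raw, "little"))  # 33554432

# The bit-shift "multiply by two" trick only works when the byte order
# matches what the original programmer assumed:
doubled = int.from_bytes(raw, "big") << 1  # 4, as intended on the mainframe

# Collating sequence: letters sort before digits in EBCDIC, after them in ASCII.
words = ["a", "1"]
print(sorted(words))                                    # ASCII order: ['1', 'a']
print(sorted(words, key=lambda s: s.encode("cp037")))   # EBCDIC order: ['a', '1']
```

A program that compares or sorts such values would silently produce different answers after a naive re-encoding, which is exactly why SDM keeps the data in its original binary form.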
- Many user applications interact with control blocks, which presents a problem when those applications are migrated to an environment that does not support them. Such applications typically require some rewriting to remove these dependencies before any migration to a new platform. Because SDM supports the control blocks needed to ensure interoperability of legacy applications on the SDM platform, such applications need not be rewritten, which reduces the cost, risk, and delay of the migration effort.
- Standard floating-point operations computed by an x86 processor produce, in some cases, slightly different results from identical operations performed on a mainframe with identical input values, because the two architectures use different number representations. When thousands of such operations are compounded to reach a result, the same algorithm can end up with different numbers. Would bankers accept a different outcome when those results deal with billions of dollars? This does not mean the x86 representation is better or worse; the core issue is that the numbers are different, which can impact business stability! SDM reproduces mainframe computation results for floating-point operations, while most recompilation solutions rely on the target platform processor’s mathematical capabilities.
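A small sketch of the discrepancy (plain Python, not SDM): classic mainframe hexadecimal floating point (HFP) uses a base-16 exponent and a 24-bit fraction, while x86 uses IEEE 754 binary floating point. Decoding the HFP bit pattern for 0.1 and comparing it with the IEEE 754 single-precision value shows the two platforms do not even agree on the starting value:

```python
import struct

def decode_ibm_hfp32(data: bytes) -> float:
    """Decode a 4-byte IBM hexadecimal float: sign bit, excess-64
    base-16 exponent, 24-bit fraction (value = sign * fraction * 16**exp)."""
    word = int.from_bytes(data, "big")
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64
    fraction = (word & 0xFFFFFF) / 0x1000000
    return sign * fraction * 16.0 ** exponent

# 0.1 as stored on the mainframe (HFP) vs. rounded to IEEE 754 single precision:
hfp = decode_ibm_hfp32(bytes([0x40, 0x19, 0x99, 0x9A]))
ieee = struct.unpack(">f", struct.pack(">f", 0.1))[0]
print(hfp)          # 0.10000001192092896
print(ieee)         # 0.10000000149011612
print(hfp == ieee)  # False
```

Neither value is exactly 0.1, but they differ from each other, and repeating arithmetic on such values thousands of times is how the compounded divergence described above arises.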
I could go on and on from here: the binary approach has many other advantages. To find out more, go to this page to download the complete white paper. And as always, get in touch with us if you want to discuss these advantages applied to your specific context!