Building an "Interoperable Distributed Ledger Middleware"

Introduction - 

Not a day goes by without a new cryptocurrency or ICO (Initial Coin Offering) launching on exchanges or via private issue somewhere in the world. Each announcement is usually followed by a group of developers and marketing executives explaining the merits and use cases of the new cryptocurrency. There are over a hundred cryptocurrencies and tokens with their own blockchains built around specific use cases. In most cases, these blockchains replicate the same functions as one another instead of working on interoperability.

Then there are shills jumping into the crypto markets to pump an ICO, make unsubstantiated claims, make a quick buck, and run off. Some ICOs are offered by a team of marketing execs and a single developer, hyping up the use cases of their blockchain technology.

The more we read, the more confusing it gets: what is real and what is the proverbial "snake oil", what you should invest in or back, and what you should run away from.

There is not much I can advise you on in this realm, as I too have been taken in by false promises, left holding an email of promised tokens worth nothing more than the bits and bytes they were transmitted on. They say hindsight is 20/20 and you learn from your mistakes, so after a few failed ICO backings, and with a healthy dose of skepticism toward the majority of blockchains, I decided to put my effort into what I know best: an enterprise solution built to enable rapid integrations.

I also wanted to focus on managed service providers, multi-tenancy, and quicker return on investment (ROI) as a matter of practice.

 Purpose – 

I would like to discuss my thoughts on how to close the interoperability gap between various cryptocurrencies and, more importantly, between the various distributed ledgers that are the backbone of these cryptocurrencies and of functional distributed applications.


I am proposing something not unlike a cryptocurrency exchange, but:

A) much more use-case specific (built with the purpose of bridging diverse distributed ledgers),

B) currency and blockchain agnostic, and

C) developed as a hosted solution: the Interoperable Distributed Ledger Middleware (IDLM).

IDLM would be built on a highly redundant architecture across multiple cloud service providers (think AWS, Azure, GCP, IBM Bluemix, etc.) in a scalable, geo-redundant fashion.

The end goal is that we should be able to use the best features of various blockchain technologies, despite their individual limitations. For example, we should be able to deploy a smart contract on the Ethereum blockchain that interacts with the IOTA Tangle (a Directed Acyclic Graph, or DAG). This can happen by deploying a middleware that runs full nodes for both ledgers and passes normalized messages between them.

There is always value in building additional functionality into a blockchain or DAG, but wouldn't it make more sense to let the developers who know their own distributed ledger optimize its best features, while we, the consumers, pick and choose the best functions to deploy with the help of a middleware that can communicate with many types of distributed ledgers?

I am not proposing building a middleware solution from scratch, although that is an option. Initially, we should review whether we can optimize and enhance existing solutions, such as MS Azure's Project Bletchley or Interledger, with a big data back-end to fit our needs, and add connectivity from a DAG to IDLM rather than limiting ourselves to blockchain-specific transaction interoperability.

High Level Logical Architecture –

The key infrastructure features of the hosted IDLM are listed below, along with some high-level integration features:

Each IDLM POP (point of presence) will run a full node talking to the relevant blockchain or DAG. 

On the integration side, Web3.js can be used to pull data from the full nodes into a service bus and to send instructions back to the full node.
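To make this concrete, here is a minimal sketch of that integration, assuming a GETH full node with its WebSocket RPC enabled and a RabbitMQ broker acting as the service bus; the exchange name idlm.ledger.events is an illustrative assumption, not part of any existing IDLM implementation.

```typescript
// Sketch: pull block headers from a GETH full node with Web3.js and publish them
// to an AMQP exchange on the service bus. Node endpoint, broker URL, and the
// exchange name "idlm.ledger.events" are illustrative assumptions.
import Web3 from "web3";
import * as amqp from "amqplib";

async function main() {
  const web3 = new Web3("ws://localhost:8546");        // GETH WebSocket RPC
  const conn = await amqp.connect("amqp://localhost"); // service bus broker
  const channel = await conn.createChannel();
  await channel.assertExchange("idlm.ledger.events", "topic", { durable: true });

  // Publish every new Ethereum block header onto the bus.
  web3.eth.subscribe("newBlockHeaders", (err: Error, header: any) => {
    if (err || !header) return;
    channel.publish(
      "idlm.ledger.events",
      "ethereum.block",                                 // routing key per ledger
      Buffer.from(JSON.stringify(header))
    );
  });
}

main().catch(console.error);
```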

On the service bus, data will be encapsulated in AMQP (Advanced Message Queuing Protocol). AMQP is very well suited for high-speed transmission, especially on a publish-and-subscribe bus, and it provides security and reliability that are a good fit for this solution.

Data published to the bus will be subscribed to by the required ERP, data store, operations, and analytics applications, with additional controls offered via VLANs to partition the applications based on their functions.
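As a sketch of the subscriber side (again with hypothetical queue and exchange names), an analytics application might bind its own queue to the topic exchange and consume only the routing keys it needs:

```typescript
// Sketch: an analytics application subscribing to ledger events on the bus.
// Queue and exchange names are illustrative assumptions.
import * as amqp from "amqplib";

async function consume() {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertExchange("idlm.ledger.events", "topic", { durable: true });

  // Each downstream application (ERP, data store, analytics, ...) gets its own
  // queue bound to the routing keys it cares about.
  const { queue } = await channel.assertQueue("analytics.ethereum", { durable: true });
  await channel.bindQueue(queue, "idlm.ledger.events", "ethereum.*");

  channel.consume(queue, (msg) => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString());
    // ... hand off to the analytics pipeline / big data store ...
    channel.ack(msg);
  });
}

consume().catch(console.error);
```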

Data published back to the service bus by the various applications will likewise be encapsulated in AMQP (possibly Spring AMQP, TBD) and routed back to the distributed ledger full nodes to request additional information or execute the next set of instructions.

Geo-Redundancy / Multiple Cloud Vendor Deployment. - As noted in the introduction, the solution would be built across multiple (minimum of 3) global cloud vendors. The platform would be designed for secure access by APIs and end users, and would be highly redundant within each data center design.


No Single Point of Failure at a Cloud Point of Presence. - The architecture will involve redundant data links into each site, with an active/active load balancer handling all incoming and outgoing traffic.

The data will be transferred into the redundant service bus and moved across the architecture into big data stores, with a connection to the service bus for analytics. The solution will be monitored internally from the infrastructure layer to the application layer, and the end-user experience will be monitored via external monitors performing synthetic checks.

Multi-Tenant Architecture. - The IDLM build will have silos created for each customer's data, with room for expansion. Multi-tenancy is paired with a high level of logging, using Elasticsearch, for security compliance with HIPAA, PCI, SOX, FISMA, GLBA, etc., and the logs are offloaded to a central NOC/SOC for analysis.
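A minimal sketch of what that tenant-siloed compliance logging could look like, assuming the v8 @elastic/elasticsearch Node.js client and an illustrative per-tenant index naming scheme:

```typescript
// Sketch: writing a tenant-tagged audit event to Elasticsearch for compliance
// logging. The index name and document shape are illustrative assumptions.
import { Client } from "@elastic/elasticsearch";

const es = new Client({ node: "http://localhost:9200" });

async function logAuditEvent(tenantId: string, action: string, detail: object) {
  await es.index({
    index: `idlm-audit-${tenantId}`,     // one index (silo) per tenant
    document: {
      tenantId,
      action,
      detail,
      timestamp: new Date().toISOString(),
    },
  });
}

// Example: record that a smart-contract instruction was sent for tenant "acme".
logAuditEvent("acme", "contract.instruction.sent", { ledger: "ethereum" })
  .catch(console.error);
```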

No Vendor-Specific Technology (as applicable). - Focus on using open source or commercial technology that is not tied to a specific cloud provider. Based on ROI, customer requirements, and the applicable functions, this can be determined on a case-by-case basis.

Phase 1 – Specific Use Case for Ethereum Blockchain to IOTA Tangle. 

Ethereum's smart contracts are well ahead of the rest of the blockchains in terms of smart contract deployments. Using Solidity, smart contracts for various enterprise and cross-enterprise functions are already being deployed.

IOTA, with its Tangle (a distributed ledger), is built exactly for M2M (machine-to-machine) communication. The full nodes running in each IDLM POP will provide fast transaction support, and we will be able to maintain a full copy of the Tangle database in each of our IDLM POPs.

Both the Ethereum full node running GETH and the IOTA full node support JSON for communication. This allows us to encapsulate the JSON in AMQP and push the data onto the service bus.
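As a rough sketch of why this matters, the same thin HTTP wrapper can talk to either node type, since both accept and return JSON. The endpoints below are assumptions (a local GETH JSON-RPC port of 8545 and a local IOTA IRI node on 14265), and the built-in fetch requires Node 18+:

```typescript
// Sketch: both node types speak JSON over HTTP, so a thin wrapper can feed
// either ledger's responses into AMQP messages. Endpoints are assumptions.

async function ethBlockNumber(): Promise<unknown> {
  const res = await fetch("http://localhost:8545", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", method: "eth_blockNumber", params: [], id: 1 }),
  });
  return res.json();                       // { jsonrpc, id, result: "0x..." }
}

async function iotaNodeInfo(): Promise<unknown> {
  const res = await fetch("http://localhost:14265", {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-IOTA-API-Version": "1" },
    body: JSON.stringify({ command: "getNodeInfo" }),
  });
  return res.json();                       // node name, milestone indexes, etc.
}

// Either JSON payload can then be wrapped in an AMQP message body
// (see the publisher sketch above) and pushed onto the service bus.
```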

Phase 2 – Expand solution with additional distributed ledgers full nodes. 

Starting with the Bitcoin blockchain, add functionality for new distributed ledgers. Expand the solution into additional cloud providers' space as the need for newer POPs arises due to latency and other environmental factors.

Trade-Off –

The main trade-off here is that we will be taking the enterprise approach, which means moving instruction sets "off-chain" from the perspective of the blockchain or DAG. In other words, we are introducing an external solution into the "blockchain" or "directed acyclic graph" flow, depending on the distributed ledger we are communicating with.

Conclusion –

My take on the trade-off is that even though we are taking the data “off-chain” in our solution, we are not breaking the immutability or reliability of the ledger. 

Secondly, there are many positives, such as rapid deployment and the ability to utilize the most mature technology from each distributed ledger instead of waiting for the same functionality to be developed in one that lacks it.

We can develop applications that need to interact with multiple distributed ledgers without having to go out and find specific blockchain developers who are already in high demand and short supply. 

Because the IDLM solution can normalize the data and send the appropriate instructions to the distributed ledger nodes, we can create, test, and deploy new applications using our standard SDLC (Software Development Life Cycle).

This will allow us to move away from a non-performing distributed ledger and still reuse most of the application we have developed, by changing only the instruction set sent to the application programming interface (API) or application binary interface (ABI).
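One way to picture that portability is a normalized ledger-adapter interface inside IDLM. The sketch below is purely illustrative; the interface and class names are hypothetical, not an existing API.

```typescript
// Sketch: a normalized ledger adapter. Applications code against this interface,
// so swapping out a non-performing ledger means writing a new adapter rather
// than rewriting the application. All names here are illustrative assumptions.
interface LedgerAdapter {
  // Normalized instruction sent to the underlying full node (API or contract ABI).
  submit(instruction: { action: string; payload: object }): Promise<string>;
  // Normalized read of ledger state.
  query(reference: string): Promise<object>;
}

class EthereumAdapter implements LedgerAdapter {
  async submit(instruction: { action: string; payload: object }): Promise<string> {
    // Translate the normalized instruction into a contract call via Web3.js / ABI.
    return "0x-placeholder-tx-hash";
  }
  async query(reference: string): Promise<object> {
    return { ledger: "ethereum", reference };
  }
}

class IotaAdapter implements LedgerAdapter {
  async submit(instruction: { action: string; payload: object }): Promise<string> {
    // Translate the normalized instruction into a Tangle transaction via the node API.
    return "PLACEHOLDER9TANGLE9HASH";
  }
  async query(reference: string): Promise<object> {
    return { ledger: "iota", reference };
  }
}

// The application only ever sees LedgerAdapter, so changing ledgers is a one-line swap.
const ledger: LedgerAdapter = new EthereumAdapter();
```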

From a Finance, Healthcare, or Government perspective, there is a need for granular logging for compliance purposes, and this IDLM architecture allows us to meet that need through the centralized, compliance-oriented logging described in the multi-tenant architecture above.

Disclaimer - 

The opinions expressed here are my own & not those of my employer. As I research and learn different technologies my thoughts and opinions will change. As technology changes, so will my opinions. 

Thanks for taking the time to read. 

Dan
