
Bridges between blockchains: connecting the DeFi space

The blockchain space has seen a burst of creativity that has resulted in an impressive abundance of products. Yet, an uneven distribution of liquidity, together with the rapid rise of DeFi, has created an urgent need for interoperability between blockchains.

Bridges are sets of smart contracts that promise to enable information exchange between protocols at lower fees than more traditional means. In this blog post, we propose a mental framework that could ease the work on such a demanding project for managers, developers, and product managers.

Introduction to the DeFi space

The recent explosion in the blockchain space can be intimidating. At the time of writing this blog post, there are around 140 blockchains serving as the backbone for approximately 19,577 cryptocurrencies. Their overall market capitalization is estimated at 1.27 trillion USD. Within this market, NFT collections like Bored Ape Yacht Club and CryptoPunks have estimated market caps of 1,097,268.38 ETH and 417,239.67 ETH, respectively. This abundance comes with a very particular set of pros and cons.

Namely, on the one hand, this fragmentation is excellent. It gives blockchain creators the freedom to investigate different design options in search of more effective consensus methods, faster transaction rates, or more versatile smart contract languages. As a result, users have a plethora of options to choose from and can find the protocol that suits them best. This opens the door to fierce competition between blockchains and pushes their evolution even faster. During this process, new stars are born, and their early adopters are handsomely rewarded.

On the other hand, this fragmentation is terrible. Tokens are locked within the confines of their blockchains, resulting in data isolation. In addition, despite their potential, many protocols struggle to attract liquidity, as movement between protocols is costly. Finally, smart contracts from distinct blockchains cannot exchange information, which limits their use cases.

Motivations for building bridges between blockchains

We should add two peculiarities to the mentioned market characteristics that have recently started to play an important role. First, most of the market liquidity is divided between only two contenders, i.e., Bitcoin and Ethereum. Secondly, the rapid rise of decentralized finance (DeFi) pushes protocol creators to attract liquidity quickly. Yet, how do you get liquidity if it is locked among competitors? Currently, the answer for many clients is to use a bridge.

In essence, a bridge is a set of smart contracts that lock tokens on a root blockchain and mint a wrapped equivalent of them on a target one. Such a project is interesting for a developer because of a distinct set of inherent challenges. Namely, because a bridge links blockchains that usually use different programming languages, a developer needs to be fluent in all of them. Moreover, a developer needs to understand the protocols to a great degree, as most of the work will be spent consolidating their discrepancies.
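The lock-and-mint flow can be illustrated with a toy accounting sketch (plain shell arithmetic, not real smart contract code; all variable and function names are invented for illustration):

```shell
# Toy model of a lock-and-mint bridge. A deposit locks tokens on the root
# chain and mints the same amount of the wrapped token on the target chain;
# a withdrawal burns wrapped tokens and releases the locked originals.
ROOT_BALANCE=100   # user's balance on the root chain
LOCKED=0           # tokens held by the bridge contract on the root chain
WRAPPED=0          # wrapped tokens minted on the target chain

bridge_deposit() {
  ROOT_BALANCE=$((ROOT_BALANCE - $1))
  LOCKED=$((LOCKED + $1))
  WRAPPED=$((WRAPPED + $1))
}

bridge_withdraw() {
  WRAPPED=$((WRAPPED - $1))
  LOCKED=$((LOCKED - $1))
  ROOT_BALANCE=$((ROOT_BALANCE + $1))
}

bridge_deposit 40
echo "locked=$LOCKED wrapped=$WRAPPED"   # locked=40 wrapped=40
```

The key invariant is that every mint is matched by a lock (and every burn by a release), so LOCKED always equals WRAPPED; a bridge that breaks this invariant can no longer back all user claims.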

Let’s be clear: the work will be demanding, as even running a test version of the product will be a DevOps project of its own. Finally, you need a substantial budget for the project, as building and auditing the code is expensive. Nevertheless, this is the current frontier in computer science, and there is joy in being one of its explorers.

Design phase

As the market research progresses and management decides to start the project, it is essential, in our opinion, to explore the following topics that will help shape the product:

User personas

This section covers the most fundamental problem that a bridge must resolve, i.e., a bridge needs to be a response to the client’s needs. These, in turn, will vary based on numerous factors like:

user experience: how will we explain how our bridge works? If an inexperienced user makes an error, will we have a way to recover the funds?

products that the user is using: are we dealing with a sophisticated DeFi trader interested in high-volume, frequent bridging, or a crypto enthusiast who only occasionally exchanges tokens?

tokens that the user is holding in the wallet: if our bridge links blockchains with small liquidity, it is possible that a user will not hold their native currency;

trust level: while exploring the different personas that are going to interact with the product, it is worth noting that each of them could have a different trust level, e.g., an advanced user with coding experience would prefer a fully decentralized solution, whereas a user who is just exploring the space would prefer a system with a way to recover funds, i.e., a less decentralized one.

Technology

The crucial decision to be made is choosing the proper level of bridge decentralization. Do we want to have an administrative user? How much power should this user have? Should they have the authority to recover some funds, change the contracts, or freeze assets? If we want an administrator with all the mentioned features, should we create an administrative panel for this role?
Next, the development team needs to choose tools to build a solution that matches the described user needs and the decided level of decentralization. During the research, it is worth answering the following questions:

  • Is the tool open source, and does it have a permissive license?
  • Is the project active and backed by an organization?
  • How many people are using it?
  • Are the developers familiar with the tool’s programming language?
  • Has the tool been audited for security?
  • How easy will it be to implement proper tests with it?

Fees

This could be one of the trickiest features to implement. On the one hand, your client wants to use the bridge to lower the costs imposed by more traditional exchanges. On the other hand, your product needs to generate revenue. Balancing these conflicting requirements requires careful calculations.
A minor issue that the team needs to resolve is in which currency the fees will be collected.
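As an illustration only (the rate, and the choice of charging the fee in the bridged token itself, are assumptions, not recommendations), a flat fee expressed in basis points can be computed with integer arithmetic, which is how smart contracts typically avoid floating point:

```shell
# Hypothetical flat bridge fee of 30 basis points (0.30%), charged in the
# bridged token's smallest unit so no separate fee currency is needed.
AMOUNT=1000000                        # amount the user wants to bridge
FEE_BPS=30                            # assumed fee: 30 basis points
FEE=$((AMOUNT * FEE_BPS / 10000))     # integer math, truncates toward zero
NET=$((AMOUNT - FEE))                 # amount minted on the target chain
echo "fee: $FEE, bridged: $NET"       # fee: 3000, bridged: 997000
```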

Monitoring

Quickly discovering a problem with bridge assets could be critical to the project’s long-term success. Thus, it is important to keep in mind a logging/monitoring function for each feature of the bridge.

Discrepancies between blockchains

An effective approach to unifying the differences between blockchains is key to the bridge’s success. It needs to resolve issues like:

  • Different number of decimals in tokens
  • Managing distinct data types
  • Separate ways of creating tokens
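For example, bridging between an 18-decimal token and a 6-decimal one means dropping 12 decimal places, and the bridge must decide what happens to the remainder (the "dust"). A minimal sketch with shell integer arithmetic (the values are illustrative):

```shell
# Convert an amount from an 18-decimal token to a 6-decimal wrapped token.
AMOUNT_18DEC=1234567890123456789      # 1.234567890123456789 tokens
SCALE=1000000000000                   # 10^12: the 12 dropped decimal places
AMOUNT_6DEC=$((AMOUNT_18DEC / SCALE)) # truncating division
DUST=$((AMOUNT_18DEC % SCALE))        # remainder the 6-decimal token cannot hold
echo "6-decimal amount: $AMOUNT_6DEC (dust: $DUST)"
```

The bridge has to refund, accumulate, or reject the dust explicitly; otherwise users silently lose it on every transfer.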

Scenarios

During the design phase, it is also worth spending some time exploring different scenarios that could happen, especially the edge cases, by creating spreadsheet simulations. Your team needs to be able to answer questions like:

  • What is the plan if the bridge is hacked, as happened to the Wormhole bridge?
  • How do we avoid a situation in which the bridge is not able to back all the users’ claims, a case that allegedly happened to the EVODeFi Bridge?

Implementation

For product managers (PMs), the implementation phase could be particularly demanding. Projects using blockchain as their backbone do not easily adhere to mainstream Agile values, e.g.:

  • How to make frequent releases and respond to change if smart contract code is immutable?
  • How to adhere to the principle of working software over documentation if auditors and more advanced users want a detailed code description?

These are just a few of the problems that a PM will face while working on such a project. As a result, similarly to the developers, the PM will find that a bridge opens up new perspectives and challenges.
In our opinion, given that smart contracts currently face many technological constraints and orchestrating interactions between them on distinct blockchains is time-consuming, the cost of change is substantial. Thus, the design and planning phase is more important than in typical IT projects.

Testing

While creating a financial product, testing is the most crucial part of the development process. There is no room for error, as even a tiny bug could result in significant damage that could hamper the bridge’s reputation. Thus, testing should be an iterative process split into separate phases. We propose the following ones:

  • Unit tests of functions inside smart contracts
  • End-to-end tests of the whole bridge process
  • Tests on the chosen test network
  • Code audit by a separate institution
  • Tests on mainnet

Conclusion

The expansion of DeFi, with its need for liquidity, together with the dominance of smart contracts, has created a significant need among users for bridges that connect previously isolated blockchains. Projects like these are demanding, as they force managers, developers, and product managers to leave their comfort zones and quickly adapt to a mostly unknown environment. Yet, in the present post, we have proposed a mental framework that could help reduce the cognitive burden.

Disclaimer

All data is based on coinmarketcap.com. Data related to cryptocurrencies is subject to rapid change and could differ substantially by the time the reader checks it.
This blog post is for informational purposes only and is neither legal nor financial advice. Cryptocurrencies are high-risk investments with the potential to lose all invested capital. Thus, before you invest, consult the proper authorities and perform due diligence.


How to deploy a Hyperledger Fabric network on Kubernetes?

Enterprise-class companies need secure and efficient solutions that can fluently scale with the needs of such large organizations, regardless of the technology of choice. That can be a huge challenge that forces developers to demonstrate their skills, but also the ability to think outside the box and make unobvious decisions.

Watch our webinar by Marcin Wojciechowski related to this article.

And that was the exact case of the collaboration with one of our recent clients.

After the consultation phase, our client opted for a solution built on Hyperledger Fabric’s blockchain network. Hence, our job was to fit the blockchain network into the technical and organizational requirements of the client.

The business consulting client for whom we created this solution has multiple independent branches spread worldwide. That was one of the main challenges, as the branches are independent both technically and organizationally. The client also has a very demanding security policy for any data coming in and out of a branch’s IT environment.

Decentralized blockchain network on multiple clusters with Kubernetes

We created a secure, decentralized blockchain network on multiple clusters, using Kubernetes to scale and orchestrate the system. With Kubernetes in place, the solution enables efficient use of resources according to system load, which makes it efficient in terms of both cost and computing power. Equally important, it is designed to work according to the client’s technical requirements and procedures, including monitoring and approving any inbound or outbound traffic.

However, to make this possible, a complex network with precise requirements had to be created. For example, each of our client’s local organizations was supposed to deploy all the elements of a network (which is not straightforward with the Hyperledger Fabric architecture).

The need for one Orderer Certificate Authority

It turned out that creating a network that is exactly the same for every branch was not possible, due to the need for a single Orderer Certificate Authority server. However, making it very similar was possible. This situation arose because the branches of enterprise-class companies are independent of each other both legally and technically, so separate, different development environments had to be connected and managed by one CA server.

This is not only directly related to technology but also to regions. As different regions have their own environments, even within computing clouds, a multi-regional solution was needed. Fortunately, blockchain makes this possible.

However, to achieve these goals, we had to think outside the box and implement a solution that was not originally envisioned. It was also challenging to find information about ready-made, documented solutions of this type.

We achieved a distributed Hyperledger Fabric network that is easy to deploy and expand. This PoC is an excellent base for future projects where a network is required to be spread on multiple servers and orchestrated by Kubernetes. Therefore, guided by the principle of transparency and a simple desire to share knowledge, we have decided to describe the process of building such network architecture in this article.

Technologies we used to deploy Hyperledger Fabric

As mentioned above, to create a distributed network we decided to use Kubernetes, as it is more flexible and easier to scale than Docker Swarm. Later in the project, we would also introduce automated deployment of the network for each department, where Kubernetes should also do a better job.

During the research phase of the PoC, we found Hyperledger Fabric Operator, a tool created by David Viejo, a software architect from Kung Fu Software. It helps deploy an HLF network on Kubernetes in a declarative way. All configuration files and crypto material are handled under the hood, meaning the developer only needs to worry about declaring which elements to add to the network.

We needed at least two Kubernetes clusters to test how the tool works. In the beginning, we had two ideas for the deployment: first, to use KIR’s sandbox as one of the machines and a local computer as the second server; second, to set up local clusters using kind. The sandbox did not work out: we were given a connection to an already prepared blockchain network that we could fiddle with, but we could not make any changes to the network configuration. We also decided against setting up the clusters locally, as that could require additional work later to adapt the script for cloud clusters.

Instead, we gave DigitalOcean a try, an American cloud infrastructure provider with servers all over the world that offers free credit valid for the first two months, which was perfect for our PoC needs. To allow communication between the clusters, we needed a domain; we ended up using the free provider freenom.com, as it also provides DNS management.

Workflow

We started by creating a simple network diagram to show the network topology and allow us to visualize what we are creating.

For clarity purposes, not all of the connections are visible on the diagram – peers can communicate with any orderer, and they also communicate with each other using a gossip protocol.

Then, we started learning how the HLF Operator works. Thankfully, we found a presentation of the tool from the Hyperledger Budapest meetup by the tool creator himself, which sped up the introduction process a lot.

The third step was to try out the Operator using the aforementioned tools. We decided to start with a single cluster setup and later expand it to achieve a distributed network. This step was relatively easy, as following the steps from the meetup was successful, and the network was running in no time.

Lastly, we expanded the network by another cluster. With this step done, we would have all the required knowledge to add even more clusters to the network. The Hyperledger Fabric Operator documentation describes how to set up a single-cluster deployment using Istio; thanks to that, we could figure out a way for the clusters to communicate.

Solution

The work resulted in a script that handles the deployment on two clusters. All we need to do is provide it with the correct configuration, execute it, and adjust the DNS settings.

Resource estimates

The network set up with the script consumes the following resources:

  • Disk space: ~6 GB per cluster
    • Peer + CouchDB: 2 GB (depending on chaincode and the amount of data stored)
    • CA: 1 GB
    • Orderer: 1 GB
  • Memory: ~3 GB per cluster
    • Peer + CouchDB: 1.1 GB
    • CA: 0.25 GB
    • Orderer: 0.5 GB

If you want to follow our deployment exactly, prepare the following:

  • 2 DigitalOcean clusters
    • Each cluster consists of 3 nodes (2.5 GB usable RAM, 2 vCPUs each)
    • Kubernetes version 1.22.8-do.1
  • 3 free domains on freenom.com, one for each organization
    • OrdererOrg
    • Org1
    • Org2

How it works:

  • Install HLF-operator and Istio on both clusters
  • Wait for Istio to assign public IP to the cluster
  • Set up DNS on freenom.com
    • To do that, go to freenom.com client area
    • Go to Services -> My Domains
    • For each domain open “Manage Domain” in a new tab
    • On each tab go to Manage Freenom DNS
    • Add the following records:
Domain     Name    Type   TTL    Target
org1.com   peer0   A      3600   cluster1ip
org1.com   peer1   A      3600   cluster1ip
org2.com   peer0   A      3600   cluster2ip
org2.com   peer1   A      3600   cluster2ip
ord.com    ca      A      3600   cluster1ip
ord.com    ord1    A      3600   cluster1ip
ord.com    ord2    A      3600   cluster2ip

where each cluster’s IP can be retrieved with this command:
kubectl get svc istio-ingressgateway -n istio-system -o json | jq -r '.status.loadBalancer.ingress[0].ip'

  • Deploy CAs on both clusters and wait for them to be running
    • For the Orderer CA, remember to add the flag --hosts $DOMAIN; otherwise Istio won’t be able to redirect to the correct cluster
  • Deploy Peers and Orderers on both clusters and wait for them to be running
    • The --hosts $DOMAIN flag is also necessary here for all deployments, since they need to communicate with each other
    • When deploying the orderer on Cluster2, it will not recognize the Orderer CA, as it is running on Cluster1
    • To work around this, temporarily use the CA of Org2 for generating the deployment config, and before applying it change the following variables
      • .spec.secret.enrollment.component.cahost – to Orderer CA domain
      • .spec.secret.enrollment.component.caport – to Istio gateway port (443 default)
      • .spec.secret.enrollment.component.catls.cacert – copy from Orderer1 config
      • .spec.secret.enrollment.tls.cahost – to Orderer CA domain
      • .spec.secret.enrollment.tls.caport – to Istio gateway port (443 default)
      • .spec.secret.enrollment.tls.catls.cacert – copy from Orderer1 config
      • .spec.secret.enrollment.tls.csr.hosts – to include Orderer CA domain
  • Create yaml connection configuration files for all organizations on both clusters
    • Use yq to merge them together
  • Install chaincode (can be run in the background)
  • Generate initial channel block on Cluster1
    • As a consenter, for now include only Org1; Org2 won’t be visible on Cluster1 yet
  • Join Peers and Orderer from Cluster1 to the channel 
  • Generate Org2 definition and add Org2 to the channel
  • To add the Orderer from Cluster2 as a consenter, the channel needs to be modified manually
    • Inspect channel config
    • Edit channel config to include Orderer2 in:
      • .channel_group.groups.Orderer.groups.OrdererMSP.values.Endpoints.value.addresses
      • .channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters
    • The TLS cert can be found at the following path inside the Orderer2 pod: /var/hyperledger/tls/server/pair/tls.crt
      • Because of corrupted line endings, it needs to be trimmed (using sed -e "s/\r//g"), or the certificate comparison will fail
      • The certificate needs to be encoded in base64
    • Compute channel changes
    • Encode the update
    • Sign the update by OrdererOrg
    • Update the channel
  • Join Peers and Orderer from Cluster2 to the channel
  • Wait for chaincode to finish installing, approve, commit and init chaincode
  • All peers should now be able to read and write transactions
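Two of the manual steps above, patching the Orderer2 enrollment config and preparing its TLS certificate, can be sketched as shell commands. The file name orderer2.yaml, the pod name orderer2-0, and the domain ca.ord.com are illustrative assumptions for your own deployment, and mikefarah’s yq v4 syntax is assumed:

```shell
# Patch the generated Orderer2 deployment config so enrollment goes through
# the Orderer CA on Cluster1 via the Istio gateway (names are examples).
yq -i '.spec.secret.enrollment.component.cahost = "ca.ord.com"' orderer2.yaml
yq -i '.spec.secret.enrollment.component.caport = 443' orderer2.yaml
yq -i '.spec.secret.enrollment.tls.cahost = "ca.ord.com"' orderer2.yaml
yq -i '.spec.secret.enrollment.tls.caport = 443' orderer2.yaml

# Extract Orderer2's TLS certificate for the manual channel update: strip the
# corrupted CRLF line endings (or the certificate comparison fails) and
# base64-encode it. "orderer2-0" is a placeholder pod name.
kubectl exec orderer2-0 -- cat /var/hyperledger/tls/server/pair/tls.crt \
  | sed -e 's/\r//g' \
  | base64 -w0 > orderer2-tls.b64
```

The cacert/csr fields listed above still need to be copied from the Orderer1 config by hand, as they depend on your generated crypto material.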

Conclusions

We achieved something that is not yet well documented. There are a few articles about deploying a Hyperledger Fabric network on Kubernetes, but they are usually confusing for people who do not have any prior experience with this tool.

The HLF Operator, on the other hand, generates most of the necessary configuration, making deployment a relatively easy task. Deploying distributed networks with this tool is still not well documented; there are only a few tips on how to deploy using Istio, but nothing that explains how to do it in a multi-cluster setup, so we hope this article will help many of you do it smoothly.

To see the exact commands needed to deploy this network, please have a look at our GitHub repository, prepared especially for this article.