Enterprise-class companies need secure, efficient solutions that scale smoothly with the needs of a large organization, regardless of the technology of choice. Meeting that need can be a huge challenge, one that forces developers to demonstrate not only their skills but also the ability to think outside the box and make unobvious decisions.
Watch the webinar related to this article, presented by Marcin Wojciechowski.
And that was exactly the case in our collaboration with one of our recent clients.
After the consultation phase, our client opted for a solution built on Hyperledger Fabric’s blockchain network. Hence, our job was to fit the blockchain network into the technical and organizational requirements of the client.
The business consulting client for whom we created the solution has multiple independent branches spread worldwide. That was one of the main challenges, as the branches are independent both technically and organizationally. The client also has a very demanding security policy for any data coming in and out of each branch's IT environment.
Decentralized blockchain network on multiple clusters with Kubernetes
We created a secure, decentralized blockchain network spanning multiple clusters, with Kubernetes handling the scaling and orchestration of the system. With Kubernetes in place, the solution uses resources efficiently according to system load, which pays off in terms of both cost and computing power. Equally importantly, it is designed to follow the client's technical requirements and procedures – including monitoring and approving any inbound or outbound traffic.
However, a complex network with precise requirements had to be created to make this possible. For example, each of our client's local organizations was supposed to deploy all the elements of a network (which is not an obvious fit for the Hyperledger Fabric architecture).
The need for one Orderer Certificate Authority
It turned out that creating a network that is exactly the same everywhere was not possible, due to the need for a single Orderer Certificate Authority server. Making it very similar, however, was possible. This situation arises because the branches of enterprise-class companies are independent of each other from a legal point of view, but also technically independent, so separate and different development environments had to be connected and managed by one CA server.
This is related not only to technology but also to geography. As different regions have their own environments, even within public clouds, a multi-regional solution was needed. Fortunately, blockchain makes this possible.
However, to achieve these goals, we had to think outside the box and implement a solution that was not originally envisioned. It was also hard to find any documented, ready-made solutions of this type.
We achieved a distributed Hyperledger Fabric network that is easy to deploy and expand. This PoC is an excellent base for future projects where a network is required to be spread across multiple servers and orchestrated by Kubernetes. Therefore, guided by the principle of transparency and a simple desire to share knowledge, we have decided to describe the process of building such a network architecture in this article.
Technologies we used to deploy Hyperledger Fabric
As mentioned above, to create a distributed network we decided to use Kubernetes, as it is more flexible and easier to scale than Docker Swarm. Later in the project we would also need automated deployment of the network for each branch, where Kubernetes should also do a better job.
During the research phase of the PoC, we found Hyperledger Fabric Operator, a tool created by David Viejo, a software architect at Kung Fu Software. It helps deploy an HLF network on Kubernetes in a declarative way. All configuration files and crypto material are handled under the hood, meaning the developer only needs to worry about adding specific elements to the network.
We needed at least two Kubernetes clusters to test how the tool works. In the beginning, we had two ideas for the deployment: first, to use KIR's sandbox as one of the machines and a local computer as the second server, and second, to set up local clusters using kind. The sandbox did not work out – we were given a connection to an already prepared blockchain network that we could fiddle with, but we could not make any changes to the network configuration.
We also decided against setting everything up locally, as it could require additional work later to adapt the script for cloud clusters. Instead, we gave DigitalOcean a try – an American cloud infrastructure provider with servers all over the world, offering free credit valid for the first two months, which was perfect for our PoC needs. To allow communication between the clusters, we needed a domain and ended up using the free provider freenom.com, as it also offers DNS management.
Workflow
We started by creating a simple diagram to show the network topology and help us visualize what we were building.
For clarity, not all connections are shown on the diagram – peers can communicate with any orderer, and they also communicate with each other using the gossip protocol.
Then, we started learning how the HLF Operator works. Thankfully, we found a presentation of the tool from the Hyperledger Budapest meetup by the tool creator himself, which sped up the introduction process a lot.
The third step was to try out the Operator using the aforementioned tools. We decided to start with a single cluster setup and later expand it to achieve a distributed network. This step was relatively easy, as following the steps from the meetup was successful, and the network was running in no time.
Lastly, we expanded the network with another cluster. With this step done, we had all the knowledge required to add even more clusters to the network. The Hyperledger Fabric Operator documentation describes how to set up a single-cluster deployment using Istio – thanks to that, we could figure out a way for the clusters to communicate.
Solution
The work resulted in a script that handles the deployment on two clusters. All we need to do is provide it with the correct configuration, execute it, and adjust the DNS settings.
Resource estimates
The network set up by the script consumes the following resources:
- Disk space: ~6GB per cluster
  - Peer + CouchDB: 2GB (depending on the chaincode and the amount of data stored)
  - CA: 1GB
  - Orderer: 1GB
- Memory: ~3GB per cluster
  - Peer + CouchDB: 1.1GB
  - CA: 0.25GB
  - Orderer: 0.5GB
If you want to follow our deployment exactly, prepare the following:
- 2 DigitalOcean clusters
  - each cluster consisting of 3 nodes – 2.5GB usable RAM and 2 vCPUs per node
  - Kubernetes version 1.22.8-do.1
- 3 free domains on freenom.com – one for each organization:
  - OrdererOrg
  - Org1
  - Org2
How it works:
- Install HLF-operator and Istio on both clusters
- Wait for Istio to assign public IP to the cluster
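A minimal sketch of the install step, assuming Helm, krew, and istioctl are available on your machine; the chart repository follows the hlf-operator README, but versions may differ:

```bash
# Add the Kung Fu Software chart repo and install the HLF operator
helm repo add kfs https://kfsoftware.github.io/hlf-helm-charts --force-update
helm install hlf-operator kfs/hlf-operator

# Install the kubectl-hlf plugin used by the commands below
kubectl krew install hlf

# Install Istio with the default profile
istioctl install --set profile=default -y
```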
- Set up DNS on freenom.com
- To do that, go to the freenom.com client area
- Go to Services -> My Domains
- For each domain, open “Manage Domain” in a new tab
- In each tab, go to Manage Freenom DNS
- Add the following records:
| Domain   | Name  | Type | TTL  | Target     |
|----------|-------|------|------|------------|
| org1.com | peer0 | A    | 3600 | cluster1ip |
| org1.com | peer1 | A    | 3600 | cluster1ip |
| org2.com | peer0 | A    | 3600 | cluster2ip |
| org2.com | peer1 | A    | 3600 | cluster2ip |
| ord.com  | ca    | A    | 3600 | cluster1ip |
| ord.com  | ord1  | A    | 3600 | cluster1ip |
| ord.com  | ord2  | A    | 3600 | cluster2ip |
Where cluster1ip/cluster2ip is the external IP assigned to the Istio ingress gateway on the given cluster, returned by this command:

```bash
kubectl get svc istio-ingressgateway -n istio-system -o json | jq -r '.status.loadBalancer.ingress[0].ip'
```
- Deploy CAs on both clusters and wait for them to be running
- For the Orderer CA, remember to add the --hosts $DOMAIN flag, otherwise Istio won't be able to redirect traffic to the correct cluster
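A minimal sketch of the CA deployment with the kubectl-hlf plugin; the storage class is DigitalOcean's, and ca.ord.com stands in for your Orderer CA domain:

```bash
# Orderer CA on Cluster1 – exposed through Istio under its public domain
kubectl hlf ca create --storage-class=do-block-storage --capacity=1Gi \
  --name=ord-ca --enroll-id=enroll --enroll-pw=enrollpw \
  --hosts=ca.ord.com --istio-port=443

# Wait until the CA reports Running before enrolling anything against it
kubectl wait --timeout=180s --for=condition=Running fabriccas.hlf.kungfusoftware.es --all
```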
- Deploy Peers and Orderers on both clusters and wait for them to be running
- Here the --hosts $DOMAIN flag is also necessary, for all deployments, since they need to communicate with each other
- When deploying orderer on Cluster2 it will not recognize the Orderer CA, as it is running on Cluster1
- To work around it, temporarily use the CA of Org2 for generating the deployment config, and before applying it change the following variables (see the sketch after this list):
- .spec.secret.enrollment.component.cahost – to Orderer CA domain
- .spec.secret.enrollment.component.caport – to Istio gateway port (443 default)
- .spec.secret.enrollment.component.catls.cacert – copy from Orderer1 config
- .spec.secret.enrollment.tls.cahost – to Orderer CA domain
- .spec.secret.enrollment.tls.caport – to Istio gateway port (443 default)
- .spec.secret.enrollment.tls.catls.cacert – copy from Orderer1 config
- .spec.secret.enrollment.tls.csr.hosts – to include Orderer CA domain
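A sketch of that workaround using yq v4; orderer2.yaml is our name for the generated FabricOrdererNode manifest and ca.ord.com for the Orderer CA domain:

```bash
# Repoint both enrollment sections from Org2's CA to the Orderer CA on Cluster1
yq -i '.spec.secret.enrollment.component.cahost = "ca.ord.com"' orderer2.yaml
yq -i '.spec.secret.enrollment.component.caport = 443' orderer2.yaml
yq -i '.spec.secret.enrollment.tls.cahost = "ca.ord.com"' orderer2.yaml
yq -i '.spec.secret.enrollment.tls.caport = 443' orderer2.yaml
yq -i '.spec.secret.enrollment.tls.csr.hosts += ["ca.ord.com"]' orderer2.yaml
# The catls.cacert values are pasted in manually from the Orderer1 config
kubectl apply -f orderer2.yaml
```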
- Create YAML connection configuration files for all organizations on both clusters
- Use yq to merge them together
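For example, with the kubectl-hlf plugin and yq; the file names are ours:

```bash
# Dump a connection profile for each organization (run against its own cluster)
kubectl hlf inspect --output org1.yaml -o Org1MSP -o OrdererMSP

# Deep-merge the per-organization profiles into one network config
yq eval-all '. as $item ireduce ({}; . * $item)' \
  ordererorg.yaml org1.yaml org2.yaml > networkConfig.yaml
```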
- Install chaincode (can be run in the background)
- Generate initial channel block on Cluster1
- As consenter, for now, include only Org1; Org2 won't be visible yet on Cluster1
- Join Peers and Orderer from Cluster1 to the channel
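A sketch of the last three steps with kubectl-hlf; the chaincode label, channel name, and identity file are example values:

```bash
# Install the chaincode on the Org1 peer (can run in the background)
kubectl hlf chaincode install --path=./chaincode --config=networkConfig.yaml \
  --language=golang --label=asset --user=admin --peer=org1-peer0.default

# Generate the genesis block – Org1 is the only consenter for now
kubectl hlf channel generate --output=demo.block --name=demo \
  --organizations Org1MSP --ordererOrganizations OrdererMSP

# Join the Cluster1 orderer and the Org1 peers to the channel
kubectl hlf ordnode join --block=demo.block --name=ord-node1 \
  --namespace=default --identity=admin-tls-ordservice.yaml
kubectl hlf channel join --name=demo --config=networkConfig.yaml \
  --user=admin -p=org1-peer0.default
```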
- Generate Org2 definition and add Org2 to the channel
- To add the Orderer from Cluster2 as a consenter, the channel needs to be modified manually (see the sketch after this list)
- Inspect channel config
- Edit channel config to include Orderer2 in:
- .channel_group.groups.Orderer.groups.OrdererMSP.values.Endpoints.value.addresses
- .channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters
- The TLS cert can be found inside the Orderer2 pod at /var/hyperledger/tls/server/pair/tls.crt
- Because of corrupted line endings, it needs to be trimmed (using sed -e "s/\r//g"), or the certificate comparison will fail
- The certificate needs to be encoded in base64
- Compute channel changes
- Encode the update
- Sign the update by OrdererOrg
- Update the channel
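A sketch of this manual update using the standard Fabric configtxlator tool; the pod name ord-node2-0, the channel name demo, and the file names are our examples:

```bash
# Pull Orderer2's TLS cert from the pod, strip CRs, and base64-encode it
kubectl exec ord-node2-0 -- cat /var/hyperledger/tls/server/pair/tls.crt \
  | sed -e 's/\r//g' | base64 -w 0 > orderer2-tls.b64

# config.json is the inspected channel config; modified.json adds Orderer2
# to the Endpoints and consenters sections listed above
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified.json --type common.Config --output modified.pb
configtxlator compute_update --channel_id demo --original config.pb \
  --updated modified.pb --output update.pb

# Wrap the computed delta in an envelope, ready to be signed by OrdererOrg
configtxlator proto_decode --input update.pb --type common.ConfigUpdate --output update.json
echo "{\"payload\":{\"header\":{\"channel_header\":{\"channel_id\":\"demo\",\"type\":2}},\"data\":{\"config_update\":$(cat update.json)}}}" > envelope.json
configtxlator proto_encode --input envelope.json --type common.Envelope --output update_envelope.pb
```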
- Join Peers and Orderer from Cluster2 to the channel
- Wait for the chaincode to finish installing, then approve, commit, and init the chaincode (example below)
- All peers should now be able to read and write transactions
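For example (chaincode name, version, and endorsement policy are ours; PACKAGE_ID comes from the install step):

```bash
# Approve the chaincode definition for each organization
kubectl hlf chaincode approveformyorg --config=networkConfig.yaml --user=admin \
  --peer=org1-peer0.default --package-id=$PACKAGE_ID --version "1.0" --sequence 1 \
  --name=asset --policy="OR('Org1MSP.member','Org2MSP.member')" --channel=demo

# Commit the definition to the channel
kubectl hlf chaincode commit --config=networkConfig.yaml --user=admin --mspid=Org1MSP \
  --version "1.0" --sequence 1 --name=asset \
  --policy="OR('Org1MSP.member','Org2MSP.member')" --channel=demo

# Init and smoke-test – every peer should now read and write transactions
kubectl hlf chaincode invoke --config=networkConfig.yaml --user=admin \
  --peer=org1-peer0.default --chaincode=asset --channel=demo --fcn=initLedger
```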
Conclusions
We achieved something that is not yet well documented – there are a few articles about deploying a Hyperledger Fabric network on Kubernetes, but they are usually confusing for people without prior experience with the tool.
The HLF Operator, on the other hand, generates most of the necessary configuration, making deployment a relatively easy task. Deploying a distributed network with this tool is not well documented either – there are only a few tips on deploying with Istio, but nothing that explains how to do it in a multi-cluster setup – so we hope this article will help many of you do it smoothly.
To see the exact commands needed to deploy this network, please have a look at our GitHub repository, prepared especially for this article.