
Micronaut vs Spring Boot: which one is better?

Some time ago, I attended the Devoxx Conference in Kraków, the biggest Java conference in Poland. I searched for inspiration and wanted to hear about the newest trends in the industry. Among many great lectures, one topic left me with a feeling that I should give it more attention – the Micronaut framework and GraalVM native image.

Conference specifics

Conference talks are time-limited, so they usually don't give you a ready solution you can just grab and use in your projects. They show only some aspects, highlighting the best features. A closer look and a real-life application are necessary if you want to actually use a technology. The most popular framework for creating Java applications is currently Spring with Spring Boot. It is very powerful and easy to get started with. Many features work out of the box, and you can start local development by just adding a single dependency to the application.

However, Spring heavily depends on reflection and proxy objects. For this reason, Spring Boot applications need some time to start. Micronaut, on the other hand, is designed to use AOT (Ahead Of Time) compilation and avoid reflection, which should make it easier to use GraalVM and its native image. We may ask whether startup time is really important for Java applications. The answer is not obvious. If we have an application running all the time, the startup time is not crucial.

The situation changes in serverless environments, where applications are started to handle an incoming request and shut down afterwards. Our application may prepare a response within milliseconds, but if it needs a couple of seconds to start, it may become a serious problem if we are billed for each minute of running our application. As a rough illustration, a handler that answers in 100 ms but needs 3 seconds of JVM startup spends over 95% of its billed time just booting.

Applications used for testing

To make a comparison, I decided to write two simple CRUD applications (CRUD stands for Create, Retrieve, Update, Delete and represents the most common set of actions on various entities).

One application was created using Spring Boot 2.7.1, the second one using Micronaut Framework 3.5.3. Both used Java 17 and Gradle as a build tool. I wanted to compare the following aspects:

  • application startup time;
  • how easy it is to write the application;
  • how easy it is to test the application;
  • how easy it is to prepare the native image.

To look more like a real application, I defined three entities: Order, Buyer, and Product.

Example: 3 entities of model application

Each entity is persisted in a database, has its own business logic and is exposed through a REST API. As a result, each application has 9 beans (3 REST controllers, 3 domain services and 3 database repositories).

Example entity structure
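For illustration, the Product entity in the Spring variant could look roughly like this. This is a sketch of my own, assuming Spring Data JPA; the fields follow the repository and DTO shown later, but the exact demo code may differ:

package eu.espeo.springdemo.db;

import java.math.BigDecimal;
import java.util.UUID;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// A database identity plus a UUID business id, matching the
// findByBusinessId / deleteByBusinessId repository methods shown later.
@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private UUID businessId;
    private String name;
    private BigDecimal price;

    protected Product() {
        // required by JPA
    }

    public Product(UUID businessId, String name, BigDecimal price) {
        this.businessId = businessId;
        this.name = name;
        this.price = price;
    }

    public UUID getBusinessId() { return businessId; }
    public String getName() { return name; }
    public BigDecimal getPrice() { return price; }
}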

Startup time and memory consumption

* all measurements done on MacBook Air M1

We can see that even without GraalVM, Micronaut achieves better startup times and lower memory consumption. Combining the applications with GraalVM gives a huge performance boost. In both scenarios, Micronaut gives us better numbers.

Micronaut from the Spring developer perspective

I work with Spring on a daily basis. Whenever I thought about writing a new application, Spring Boot was my first choice. When I heard about Micronaut for the first time, I was afraid that I would need to learn everything from scratch and that writing applications wouldn't be as easy as with Spring. Now, I can confidently say that using Micronaut is just as easy as writing a Spring Boot application.

Production code

Here we have mainly similarities:

  • Both have a nice starting page (https://start.spring.io/ and https://micronaut.io/launch/) where you can choose the basic configuration (Java version, build tool, testing framework), add some dependencies and generate a project. Both are well integrated with IntelliJ IDEA.
Spring perspective
Micronaut perspective
  • Database entities and domain classes are identical.
  • DB repositories also look identical – the only difference is in the imports:
package eu.espeo.springdemo.db;

import java.util.List;
import java.util.Optional;
import java.util.UUID;

import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface ProductRepository extends CrudRepository<Product,Integer> {
  @Override
  List<Product> findAll();

  Optional<Product> findByBusinessId(UUID businessId);

  void deleteByBusinessId(UUID businessId);
}
package eu.espeo.micronautdemo.db;

import java.util.List;
import java.util.Optional;
import java.util.UUID;

import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.CrudRepository;

@Repository
public interface ProductRepository extends CrudRepository<Product,Integer> {
  @Override
  List<Product> findAll();

  Optional<Product> findByBusinessId(UUID businessId);

  void deleteByBusinessId(UUID businessId);
}
  • REST controllers are very similar
package eu.espeo.springdemo.rest;

import static java.util.stream.Collectors.toList;

import java.util.List;
import java.util.UUID;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;


import eu.espeo.springdemo.domain.ProductService;
import lombok.RequiredArgsConstructor;

@RestController
@RequestMapping(value = "/products", produces = MediaType.APPLICATION_JSON_VALUE)
@RequiredArgsConstructor
public class ProductController {

  private final ProductService productService;

  @GetMapping
  public List<ProductDto> listProducts() {
    return productService.findAll().stream()
          .map(ProductDto::fromProduct)
          .collect(toList());
  }

  @GetMapping(value = "/{productId}")
  public ProductDto getProduct(@PathVariable("productId") String productId) {
    return ProductDto.fromProduct(productService.findByBusinessId(UUID.fromString(productId)));
  }

  (…)
}

package eu.espeo.micronautdemo.rest;

import static java.util.stream.Collectors.toList;

import java.util.List;
import java.util.UUID;

import eu.espeo.micronautdemo.domain.ProductService;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Delete;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.PathVariable;
import io.micronaut.http.annotation.Post;

import lombok.RequiredArgsConstructor;

@Controller(value = "/products", consumes = MediaType.APPLICATION_JSON, produces = MediaType.APPLICATION_JSON)
@RequiredArgsConstructor
public class ProductController {

  private final ProductService productService;

  @Get
  public List<ProductDto> listProducts() {
    return productService.findAll().stream()
        .map(ProductDto::fromProduct)
        .collect(toList());
  }

  @Get(value = "/{productId}")
  public ProductDto getProduct(@PathVariable("productId") String productId) {
    return ProductDto.fromProduct(productService.findByBusinessId(UUID.fromString(productId)));
  }

  (…)
}

Basically that’s it. There are no other differences in production code. 

The advantage of Micronaut

However, there is one big advantage of Micronaut: it checks many things at compile time instead of at runtime. Let's look at the following repository method, findByName. We used the wrong return type – we return String instead of Product. How will both frameworks behave?

package eu.espeo.micronautdemo.db;

import java.util.List;
import java.util.Optional;
import java.util.UUID;

import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.CrudRepository;

@Repository
public interface ProductRepository extends CrudRepository<Product,Integer> {
  (…)
  String findByName(String name);
}

Micronaut will fail during project compilation with the following error:

error: Unable to implement Repository method: ProductRepository.findByName(String name). Query results in a type [eu.espeo.micronautdemo.db.Product] whilst method returns an incompatible type: java.lang.String

String findByName(String name);

Very clear. And how about Spring? It will compile successfully and everything will be fine until we try to call this method:

java.lang.ClassCastException: class eu.espeo.springdemo.db.Product cannot be cast to class java.lang.String (eu.espeo.springdemo.db.Product is in unnamed module of loader 'app'; java.lang.String is in module java.base of loader 'bootstrap')

Let's hope everyone writes tests. Otherwise, such an error will be thrown in production.
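For completeness, even a single repository test would surface this before release. A minimal sketch of my own, not part of the demo, assuming Spring Data JPA with an embedded test database; the Product constructor used here is illustrative:

package eu.espeo.springdemo.db;

import static org.assertj.core.api.BDDAssertions.then;

import java.math.BigDecimal;
import java.util.UUID;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest
class ProductRepositoryTest {

    @Autowired
    private ProductRepository productRepository;

    @Test
    void shouldFindProductByName() {
        // given a persisted product (constructor is illustrative)
        productRepository.save(new Product(UUID.randomUUID(), "Apple MacBook", BigDecimal.valueOf(11499.90)));

        // when / then: with the mistyped findByName(String) this line fails with the
        // ClassCastException quoted above, so the bug never reaches production
        then(productRepository.findByName("Apple MacBook")).isNotNull();
    }
}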

Spring Boot vs Micronaut 0:1

Test code for the REST controller

For Spring we can write:

  • An integration @SpringBootTest with TestRestTemplate injected. This test starts the web server, so we can easily send requests to our application.
  • An integration @SpringBootTest using MockMvc. This test doesn't start the web server, so it is faster, but sending requests and parsing responses is not as easy as with TestRestTemplate (a minimal sketch follows this list).
  • For more specific scenarios you can mock some classes using the @MockBean annotation.
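As an illustration of the MockMvc option, here is a minimal sketch of my own (it assumes a test database is configured for the application context; the assertions are illustrative):

package eu.espeo.springdemo.rest;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

// No real HTTP server is started – MockMvc calls the MVC layer directly.
@SpringBootTest
@AutoConfigureMockMvc
class ProductControllerMockMvcTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void shouldListProducts() throws Exception {
        mockMvc.perform(get("/products"))
            .andExpect(status().isOk());
    }
}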

For Micronaut you can write:

  • An integration @MicronautTest with HttpClient injected. This test starts the Netty server and is as easy to use as a Spring Boot test with TestRestTemplate.
  • For more specific scenarios you can mock some classes using the @MockBean annotation.
package eu.espeo.springdemo.rest;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.boot.test.web.server.LocalServerPort;
import org.springframework.http.HttpStatus;

import java.math.BigDecimal;
import java.util.UUID;

import static org.assertj.core.api.BDDAssertions.then;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ProductControllerWebIntegrationTest {

  @LocalServerPort
  private int port;

  @Autowired
  private TestRestTemplate restTemplate;


  @Test
  void shouldSaveAndRetrieveProduct() {
    // given
    var productBusinessId = UUID.randomUUID();
    var productName = "Apple MacBook";
    var price = BigDecimal.valueOf(11499.90);

    // when
    var createResponse = restTemplate
        .postForEntity("http://localhost:" + port + "/products",
            new ProductDto(productBusinessId, productName, price), ProductDto.class);
    then(createResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
    var product = restTemplate.getForObject(
        "http://localhost:" + port + "/products/" + productBusinessId, ProductDto.class);

    // then
    then(product).isNotNull();
    then(product.businessId()).isEqualTo(productBusinessId);
    then(product.name()).isEqualTo(productName);
    then(product.price()).isEqualByComparingTo(price);
  }
}


package eu.espeo.micronautdemo.rest;

import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import jakarta.inject.Inject;
import org.junit.jupiter.api.Test;

import java.math.BigDecimal;
import java.util.UUID;

import static org.assertj.core.api.BDDAssertions.then;

@MicronautTest
class ProductControllerWebIntegrationTest {

  @Inject
  @Client("/")
  private HttpClient client;


  @Test
  void shouldSaveAndRetrieveProduct() {
    // given
    var productBusinessId = UUID.randomUUID();
    var productName = "Apple MacBook";
    var price = BigDecimal.valueOf(11499.90);

    // when
    var createResponse = client.toBlocking()
        .exchange(HttpRequest.POST("/products", new ProductDto(productBusinessId, productName, price)));
    then((CharSequence) createResponse.getStatus()).isEqualTo(HttpStatus.OK);
    var product = client.toBlocking()
        .retrieve(HttpRequest.GET("/products/" + productBusinessId), ProductDto.class);

    // then
    then(product).isNotNull();
    then(product.businessId()).isEqualTo(productBusinessId);
    then(product.name()).isEqualTo(productName);
    then(product.price()).isEqualByComparingTo(price);
  }
}

Generally, there is no big difference. A good thing about Micronaut is that it provides a default database configuration using Testcontainers, so we can just start writing tests and everything should work. Spring, on the other hand, provides no such test configuration, so we need to remember to configure a database ourselves. This gives Micronaut a slight advantage over Spring Boot.

Spring Boot vs Micronaut 0:2

Native image (GraalVM)

Both frameworks support compilation to a native image. We can produce either a native executable or a Docker image containing the native application. Micronaut already uses AOT compilation, so creating a native image should be easier. To do it, I needed only three steps:

  • install GraalVM (if not already installed) and native-image tool
    • sdk install java 22.1.0.r17-grl
    • gu install native-image
  • Add Gradle dependency
    • compileOnly("org.graalvm.nativeimage:svm")
  • Annotate DTO classes used by the REST API with @Introspected to generate BeanIntrospection metadata at compile time. This information can be used, for example, to render a POJO as JSON using Jackson without reflection.
package eu.espeo.micronautdemo.rest;

import java.math.BigDecimal;
import java.util.UUID;

import eu.espeo.micronautdemo.domain.Product;
import io.micronaut.core.annotation.Introspected;

@Introspected
public record ProductDto(
    UUID businessId,
    String name,
    BigDecimal price
) {
  (…some methods mapping DTO to domain model…)
}
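As a side note, the metadata generated by @Introspected can also be used directly in application code, without java.lang.reflect. A minimal sketch (the wrapper class is my own illustration, not part of the demo):

package eu.espeo.micronautdemo.rest;

import java.math.BigDecimal;
import java.util.UUID;

import io.micronaut.core.beans.BeanIntrospection;

public class IntrospectionExample {

    public static void main(String[] args) {
        // The introspection is generated at compile time thanks to @Introspected – no reflection involved.
        BeanIntrospection<ProductDto> introspection = BeanIntrospection.getIntrospection(ProductDto.class);

        // Instantiate the record and read one of its properties through the generated metadata.
        ProductDto dto = introspection.instantiate(UUID.randomUUID(), "Apple MacBook", BigDecimal.valueOf(11499.90));
        String name = introspection.getRequiredProperty("name", String.class).get(dto);
        System.out.println(name);
    }
}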

That’s it. Now you can just execute ./gradlew nativeCompile (to build just an application) or ./gradlew dockerBuildNative (to build a Docker image – but currently it does not work on Macs with M1 chip), wait a couple of minutes (native compilation takes longer than standard build) and here you go.

I was really curious whether Spring Boot applications would be as easy to convert as Micronaut applications. There is the Spring Native project, which currently has beta support (meaning that breaking changes may happen). The changes I needed to make were slightly different, but still not complicated:

  • install GraalVM (if not already installed) and native-image tool
  • add a Spring AOT Gradle plugin
    • id 'org.springframework.experimental.aot' version '0.12.1'
  • add a spring-native dependency (but when using Gradle you can skip it because Spring AOT plugin will add this dependency automatically)
  • configure the build to use the release repository of spring-native (both for dependencies and plugins)
    • maven { url 'https://repo.spring.io/release' }

Now you can just execute ./gradlew nativeCompile (to build just an application) or ./gradlew bootBuildImage (to build a Docker image – but for some reason the process gets stuck on my Mac with an M1 chip), wait a couple of minutes (native compilation takes longer than a standard build) and here you go.

As we can see, both frameworks compile well to a native image, and both have some problems with generating Docker images on M1 Macs 🙂 Neither of them is perfect yet.

Spring Boot vs Micronaut 1:3

Summary

It’s great to have a choice. My test confirmed that Micronaut can be a good alternative to Spring Boot. It has some advantages but Spring is still very strong. I will keep an eye on both.


Automation testing trends to look out for in 2022

In recent years, interest in automation testing has grown significantly, and although manual testing still holds the leading position, automation may replace it quite soon.

Automation testing is an integral part of the software development process, and it is becoming more sophisticated in its techniques and more user-friendly in its design. With this in mind, which automation testing tools are in use in 2022? To figure it out, I did some research and collected the most popular test automation trends that will determine the future of testing.

Microservices and API testing

The so-called monolithic architecture is becoming legacy; instead, microservice architecture is becoming more popular and widely used. The reason is that as applications mature and new features are added, they turn into one gigantic monolith, and it becomes harder to deploy the application frequently, since each new feature requires a lot of time and effort.

Nowadays, microservice architecture is a good solution, and it is used in more and more projects. It structures an application as a collection of services that are highly maintainable, independently deployable, and built around business capabilities. Such applications are highly scalable and perform better.

In addition, companies are starting to invest more in API testing. The reason is that it allows developer operations, quality assurance, development, and other teams to begin testing an application's core functionality before the user interface is ready. As a result, it is easier to find bugs earlier and fix them faster.

Integration testing

Some specialists argue that the testing pyramid is no longer relevant and has to be modified.


However, much of the pyramid is still valid in a microservice architecture: applications consist of many microservices, and testing them together with the intercommunication among them is of vital importance. Integration testing evaluates application components on a broader level than unit testing and does it much better than UI testing. It ensures that we 'communicate' correctly with databases, file systems and network appliances, and receive accurate responses.

Cloud growth

It is clear that because of COVID-19, more companies are switching from hosted technology to cloud-based solutions. It is not as it used to be in the past, when everything was hosted and one had to develop everything from scratch. Nowadays, you just take open-source libraries built for the cloud and start using them for your purposes, shaping them to your needs. According to the latest IT market research, cloud spending rose 37% to $29 billion in 2020 and will continue to rise in the following years.

Testing distributed cloud systems

Testing this type of system requires certain knowledge, techniques, and tools that I believe are already widely used by testers. A great example of such a tool is Testcontainers, a library that runs Docker containers within a test, so engineers can provide the external dependencies of a distributed architecture (such as databases, streams, mocks and anything else that can be dockerized) to their tests.

Sometimes we don't discover an issue until it's in production, but with this type of technology such issues can be found during testing. With cloud services there are many external dependencies, and testing them can be difficult. However, technologies like Testcontainers allow you to create self-contained test environments that mimic real-world conditions. LocalStack is another good example of a tool for testing distributed cloud systems.
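As a small illustration of the idea, here is a sketch in Java, assuming JUnit 5, the Testcontainers postgresql module and the PostgreSQL JDBC driver on the test classpath:

import static org.assertj.core.api.Assertions.assertThat;

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

// Each test run gets its own throwaway PostgreSQL instance in Docker
// instead of a shared, manually managed database.
@Testcontainers
class PostgresIntegrationTest {

    @Container
    static final PostgreSQLContainer<?> POSTGRES = new PostgreSQLContainer<>("postgres:14-alpine");

    @Test
    void shouldConnectToDockerizedDatabase() throws Exception {
        try (Connection connection = DriverManager.getConnection(
                POSTGRES.getJdbcUrl(), POSTGRES.getUsername(), POSTGRES.getPassword())) {
            assertThat(connection.isValid(1)).isTrue();
        }
    }
}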

Non-functional testing

There is a growing demand for analyzing non-functional requirements, an area that most companies ignored for many years. Currently, there is a huge number of applications on the market. Companies compete for each and every user, and to make a product more attractive to them, such types of tests as accessibility, security, performance, reliability, and reusability testing are becoming more important than ever.

Understanding the basics of these non-functional areas is a must for all quality assurance engineers, so they can apply these tests in their testing strategy and increase the satisfaction of end users.

Codeless and low code automation

One other trend is a significant increase in low-code and codeless test automation. According to Gartner, 65% of all application development, including automation testing tools, will be low-code by 2024. However, making easily readable code and creating codeless test automation also takes time and effort. The remote working environment complicates this process even more, as communication in teams and implementation processes are slower than before the pandemic.

What is more, as we try to ship software faster and faster, we need to do more kinds of testing. One of them is exploratory testing. At this point, many people involved in the testing process are not necessarily developers or people with technical backgrounds, so codeless automation is exactly what is needed so that they can help with the testing process and understand exactly what to test.

Currently many easy-to-use codeless test automation tools are being created in order to help quality assurance engineers with the testing process.

AI-assisted automation

AI (Artificial Intelligence) is already changing testing in many ways. The following are a few test automation scenarios that already leverage AI:

  • Automated, visual validation UI testing
  • Testing APIs
  • Running more automated tests that really matter
  • Creating more reliable automated tests

Of course, such automation requires a lot of knowledge, specific skills, analysis, and an understanding of complex data structures, statistics and algorithms. However, once you have obtained them, it becomes a piece of cake. In addition, there are also many tools that can be used for AI-assisted automation, such as GitHub Copilot, DiffBlue and many more. Just try them out and explore the full potential of AI-assisted automation!

Mobile Automation Testing

With the tremendous increase in mobile usage, mobile testing has become a buzzword in the technical world. As mobile applications become more prevalent and their software is continuously updated, there is more pressure to make them fast and responsive for users.

The following are some top trends in mobile testing. They show that testing cycles are becoming shorter and that faster testing tools are the common theme, and they will help greatly in overcoming mobile automation testing challenges:

  • Changing release schedules (due to the fact that mobile applications are released very often)
  • Usage of open source tools
  • Usage of Artificial Intelligence and Machine Learning

What’s next for automation testing?

Since mobility is a promising field, it needs an improved and robust mechanism for automation testing. As there are many challenges, this field requires excellent technical skills from both developers and testers.

All in all, while working with clients, tech teams often forget one vital thing: the client is their top priority and their work has to meet the clients’ expectations and needs in order to succeed. When it comes to software testing, automation solutions are based on clients’ unique goals. They pay off quicker due to lower operating costs, reduced lead times and increased output, so that is a very good way to make clients happy.

Keeping up with automation testing trends in 2022 requires learning more, understanding new technologies, and paying much more attention to users' needs than ever, in order to deliver high-quality software faster. Given that we live in a fast-developing world, the above trends cannot be ignored. For this reason, I strongly encourage you to have a closer look at them. Good luck!


The role, skills, and business impact of software architects on product success

Having well-designed software architecture means having architectural integrity, short-term and strategic guidelines, manageable complexity, and reduced maintenance costs, resulting in increased business competitiveness. Consequently, well-made decisions on databases, user interfaces, cloud services, and data security guarantee optimal usability of digital solutions.

At Espeo Software, we place special emphasis on architecture in order to ensure the highest quality of the solutions we deliver for our clients. This is why our architects have deep knowledge of technology, which supports effective decision making.

What makes software architects so valuable to the projects they work on?

There are many things that can go wrong during the software development process. To name a few, code can be hard for the development team to test, the system can be tightly coupled, new modules can pose integration problems, scaling can be expensive, and the solution may be inefficient or lack resilience.

If you have a software architect on your project, these issues are minimized, resulting in improved quality, stability, and scalability of your digital solution. How is that possible? Software architects make high-level design choices and frame the technical standards that meet your business needs. This includes decisions regarding software architecture recommendations, coding standards, working methods, and team composition later in the development stage.

At Espeo Software, the software architects have extended senior-level experience from different customer environments. As a result, they ensure that the best practices to support digital transformation are incorporated into your project. What is more, the architects are responsible for collecting all technical requirements and are your point of contact with the technical leaders. They also serve as a liaison between the consulting team and the software development team.

In my role as a software architect at Espeo, I am responsible for determining which processes and technologies the development team should use in order to meet our clients’ business requirements. This includes not just decisions about technology or tools, but also daily processes that ensure the highest quality of the delivered solution. As a result of aligning the client’s vision for the project with my team’s day-to-day activities, I can have a direct impact on the project’s transparency and quality while decreasing delivery risks at the same time.

Maciej Tomaszewski, Software Architect at Espeo Software

Combining extensive technical knowledge with soft skills that drive project performance

Hiring a software architect can simplify, accelerate, and streamline the process of solving your project's challenges. Software architects not only have impeccable technology-related experience, but they also possess soft skills that enhance performance and bring team members together for smooth development of your project. As well as overseeing the project and coordinating teams of developers, they prioritize tasks and juggle team members' assignments throughout the development process to arrive at the best results within the agreed-upon timeline.

Additional benefits of having a software architect in your project

  • Fast and easier testing of your digital solution.
  • Incorporating new features as well as integration between modules is easier and faster.
  • The rate of code degeneration decreases as development progresses.
  • The infrastructure chosen by the software architect provides flexibility and scalability.
  • As the business environment changes, the digital solution that is being developed adapts to deliver the value needed.
  • The solution solves business problems better and more effectively.
  • The potential issues are identified before they occur or possible solutions are prepared in advance.
  • The software architect creates documentation that reflects a deep understanding of how the system operates on multiple levels of complexity.

Want to hire a software architect to guarantee success of your project? Contact our Sales team for more information using the form below.


How to deploy a Hyperledger Fabric network on Kubernetes?

Enterprise-class companies need secure and efficient solutions that can fluently scale with the needs of such large organizations, regardless of the technology of choice. That can be a huge challenge that forces developers to demonstrate their skills, but also the ability to think outside the box and make unobvious decisions.

Watch our webinar related to this article by Marcin Wojciechowski.

And that was the exact case of the collaboration with one of our recent clients.

After the consultation phase, our client opted for a solution built on Hyperledger Fabric’s blockchain network. Hence, our job was to fit the blockchain network into the technical and organizational requirements of the client.

The business consulting client for whom we created this solution has multiple independent branches spread worldwide. That was one of the main challenges, as the branches are independent both technically and organizationally. The client also has a very demanding security policy for any data coming in and out of a branch's IT environment.

Decentralized blockchain network on multiple clusters with Kubernetes

We created a secure, decentralized blockchain network on multiple clusters, with Kubernetes managing the scaling and orchestration of the system. The solution with Kubernetes in place enables efficient use of resources according to system load, which results in efficiency in terms of both cost and computing power. Equally important, it is designed to work according to the client's technical requirements and procedures – including monitoring and approving any inbound or outbound traffic.

However, a complex network with precise requirements had to be created to make this possible. For example, each of our client's local organizations was supposed to deploy all the elements of a network (which is not obvious with the Hyperledger Fabric architecture).

The need for one Orderer Certificate Authority

It turned out that creating a network that is exactly the same for every branch was not possible, due to the need for a single Orderer Certificate Authority server. However, making it very similar was possible. This situation occurred because the branches of enterprise-class companies are independent of each other from a legal point of view but are also technically independent, so separate and different development environments had to be connected and managed by one CA server.

This is not only directly related to technology but also to regions. As different regions have their own environments, even within computing clouds, a multi-regional solution was needed. Fortunately, blockchain makes this possible.

However, to achieve the goals, we had to think out of the box and implement a solution that was not originally envisioned. It was also challenging to find information about ready-made, described solutions of this type.

We achieved a distributed Hyperledger Fabric network that is easy to deploy and expand. This PoC is an excellent base for future projects where a network is required to be spread on multiple servers and orchestrated by Kubernetes. Therefore, guided by the principle of transparency and a simple desire to share knowledge, we have decided to describe the process of building such network architecture in this article.

Technologies we used to deploy Hyperledger Fabric

As we mentioned above, to create a distributed network we decided to use Kubernetes, as it is more flexible and easier to scale than Docker Swarm. Later in the project, we would also introduce automated deployment of the network for each department, where Kubernetes should also do a better job.

During the research phase of the PoC, we found Hyperledger Fabric Operator, a tool created by David Viejo, a software architect from Kung Fu Software. It helps deploy an HLF network on Kubernetes in a declarative way. All configuration files and crypto material are handled under the hood, meaning the developer only needs to worry about adding specific elements to the network.

We needed at least two Kubernetes clusters to test how the tool works. In the beginning, we had two ideas for the deployment: first, to use KIR's sandbox as one of the machines for the deployment and a local computer as the second server provider, and second, to set up local clusters using kind. The sandbox did not work out – we were provided with a connection to an already prepared blockchain network that we could fiddle with.

Still, we could not make any changes to the network configuration. We also decided to skip setting the clusters up locally, as it could require additional work later to adapt the script for cloud clusters. Instead, we decided to give DigitalOcean a try; it is an American cloud infrastructure provider with servers all over the world, offering free credit valid for the first two months, which was perfect for our PoC needs. To allow communication between the clusters, we needed a free domain and ended up using the provider freenom.com, as it also provides DNS management.

Workflow

We started by creating a simple network diagram to show the network topology and allow us to visualize what we are creating.

For clarity purposes, not all of the connections are visible on the diagram – peers can communicate with any orderer, and they also communicate with each other using a gossip protocol.

Then, we started learning how the HLF Operator works. Thankfully, we found a presentation of the tool from the Hyperledger Budapest meetup by the tool creator himself, which sped up the introduction process a lot.

The third step was to try out the Operator using the aforementioned tools. We decided to start with a single cluster setup and later expand it to achieve a distributed network. This step was relatively easy, as following the steps from the meetup was successful, and the network was running in no time.

Lastly, we expanded the network with another cluster. With this step done, we had all the knowledge required to add even more clusters to the network. The Hyperledger Fabric Operator documentation describes how to set up a single-cluster deployment using Istio – thanks to that, we could figure out a way for the clusters to communicate.

Solution

The work resulted in a script that handles the deployment on two clusters. All we need to do is provide it with the correct configuration, execute it and adjust the DNS settings.

Resource estimates

The network that is set up with the script consumes the following resources:

  • Disk space: ~6 GB per cluster
    • Peer + CouchDB: 2 GB (depending on chaincode and the amount of data stored)
    • CA: 1 GB
    • Orderer: 1 GB
  • Memory: ~3 GB per cluster
    • Peer + CouchDB: 1.1 GB
    • CA: 0.25 GB
    • Orderer: 0.5 GB

If you want to follow our deployment exactly, prepare the following:

  • 2 DigitalOcean clusters
    • Each cluster consists of 3 nodes – 2.5 GB usable RAM, 2 vCPUs
    • Kubernetes version 1.22.8-do.1
  • 3 Free domains on freenom.com – one for each organization
    • OrdererOrg
    • Org1
    • Org2

How does it work:

  • Install HLF-operator and Istio on both clusters
  • Wait for Istio to assign public IP to the cluster
  • Set up DNS on freenom.com
    • To do that, go to freenom.com client area
    • Go to Services -> My Domains
    • For each domain open “Manage Domain” in a new tab
    • On each tab go to Manage Freenom DNS
    • Add the following records:
Domain    Name   Type  TTL   Target
org1.com  peer0  A     3600  cluster1ip
org1.com  peer1  A     3600  cluster1ip
org2.com  peer0  A     3600  cluster2ip
org2.com  peer1  A     3600  cluster2ip
ord.com   ca     A     3600  cluster1ip
ord.com   ord1   A     3600  cluster1ip
ord.com   ord2   A     3600  cluster2ip

Where the cluster IP is obtained with this command (run against the respective cluster):
kubectl get svc istio-ingressgateway -n istio-system -o json | jq -r '.status.loadBalancer.ingress[0].ip'

  • Deploy CAs on both clusters and wait for them to be running
    • For the Orderer CA remember to add the --hosts $DOMAIN flag, otherwise Istio won't be able to redirect to the correct cluster
  • Deploy Peers and Orderers on both clusters and wait for them to be running
    • The --hosts $DOMAIN flag is also necessary here for all deployments, since they need to communicate with each other
    • When deploying the orderer on Cluster2, it will not recognize the Orderer CA, as it is running on Cluster1
    • To work around this, temporarily use the CA of Org2 to generate the deployment config and, before applying it, change the following variables:
      • .spec.secret.enrollment.component.cahost – to Orderer CA domain
      • .spec.secret.enrollment.component.caport – to Istio gateway port (443 default)
      • .spec.secret.enrollment.component.catls.cacert – copy from Orderer1 config
      • .spec.secret.enrollment.tls.cahost – to Orderer CA domain
      • .spec.secret.enrollment.tls.caport – to Istio gateway port (443 default)
      • .spec.secret.enrollment.tls.catls.cacert – copy from Orderer1 config
      • .spec.secret.enrollment.tls.csr.hosts – to include Orderer CA domain
  • Create yaml connection configuration files for all organizations on both clusters
    • Use yq to merge them together
  • Install chaincode (can be run in the background)
  • Generate initial channel block on Cluster1
    • As consenter for now include only Org1, Org2 won’t be visible yet on Cluster1
  • Join Peers and Orderer from Cluster1 to the channel 
  • Generate Org2 definition and add Org2 to the channel
  • To add Orderer from Cluster2 as consenter, channel needs to be modified manually
    • Inspect channel config
    • Edit channel config to include Orderer2 in:
      • .channel_group.groups.Orderer.groups.OrdererMSP.values.Endpoints.value.addresses
      • .channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters
    • TLS cert can be found on this path inside the Orderer2 pod /var/hyperledger/tls/server/pair/tls.crt
      • Because of corrupted line endings they need to be trimmed (using sed -e "s/\r//g") or the certificate comparison will fail
      • Certificate needs to be encoded in base64
    • Compute channel changes
    • Encode the update
    • Sign the update by OrdererOrg
    • Update the channel
  • Join Peers and Orderer from Cluster2 to the channel
  • Wait for chaincode to finish installing, approve, commit and init chaincode
  • All peers should now be able to read and write transactions (a client-side sketch follows below)
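To illustrate that last point from the application side, here is a rough sketch using the Fabric Gateway Java SDK (fabric-gateway-java). It is not part of our deployment script, and the wallet contents, connection profile, channel name, chaincode name and chaincode functions are all assumptions:

import java.nio.file.Path;
import java.nio.file.Paths;

import org.hyperledger.fabric.gateway.Contract;
import org.hyperledger.fabric.gateway.Gateway;
import org.hyperledger.fabric.gateway.Network;
import org.hyperledger.fabric.gateway.Wallet;
import org.hyperledger.fabric.gateway.Wallets;

public class SubmitTransactionExample {

    public static void main(String[] args) throws Exception {
        // An identity enrolled with the Org1 CA, stored in a local file system wallet (paths are illustrative).
        Wallet wallet = Wallets.newFileSystemWallet(Paths.get("wallet"));
        Path connectionProfile = Paths.get("org1-connection.yaml"); // the merged connection config mentioned above

        Gateway.Builder builder = Gateway.createBuilder()
                .identity(wallet, "org1-admin")
                .networkConfig(connectionProfile);

        try (Gateway gateway = builder.connect()) {
            Network network = gateway.getNetwork("demo");      // channel name is an assumption
            Contract contract = network.getContract("asset");  // chaincode name is an assumption

            // Submit a write and read it back; the chaincode functions are hypothetical.
            contract.submitTransaction("CreateAsset", "asset1", "blue");
            byte[] result = contract.evaluateTransaction("ReadAsset", "asset1");
            System.out.println(new String(result));
        }
    }
}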

Conclusions

We achieved something that is not yet well documented – there are a few articles about deploying a Hyperledger Fabric network on Kubernetes, but they are usually confusing for people who do not have prior experience with these tools.

The HLF Operator, on the other hand, generates most of the necessary configuration, making deployment a relatively easy task. Deploying a distributed network with this tool is not well documented: there are only a few tips on how to deploy using Istio, but nothing that explains how to do it in a multi-cluster setup, so we hope this article will help many of you do it smoothly.

To see the exact commands needed to deploy this network, please have a look at our GitHub repository, prepared especially for this article.