
Blockchain – Hyperledger vs Ethereum: a comparison

In some respects, blockchain technologies overlap in their intent; in others, they differ greatly in their distinct features and scope. Let's take a look at Hyperledger and Ethereum.



What is a blockchain?

Before we start working on our Hyperledger vs Ethereum comparison, let's quickly recap what a blockchain is. If you strip away all the buzzwords and advanced crypto jargon, you can think of a blockchain as… a stone. What is written in stone will remain forever, right? But there is no single stone that holds the ultimate truth. A blockchain is made of nodes communicating with each other, and every blockchain node has its own stone. Users update it over time with new transactions. Before an entry is accepted into a stone, the information is reviewed and confirmed according to the blockchain's consensus algorithm, which is simply a way of agreeing on the acceptance of valid information and transactions.

After an entry is accepted, every stone in a public network of such stones must reflect the change. So, when a transaction happens, every stone is updated. Stones are compared to deduce the state of the blockchain. This is the main reason why blockchain is so appealing: you can have a source of true history, written in stone, without a central authority. And this source cannot be changed.

Is that all?

Looking at Hyperledger and Ethereum, we must first introduce all the contenders. While Bitcoin can be described as a public blockchain that handles money transactions (a decentralized cryptocurrency), Ethereum and Hyperledger Fabric are much more than that. We call them blockchains 2.0: evolved blockchain concepts that include virtual machine engines. This means they can execute almost arbitrary Turing-complete code deployed onto the blockchain (a computer program of some sort).

So, returning to our stone metaphor, we now have scripts that can be written into the stone, and executing them triggers automatic actions that alter the stone or perform calculations.

What is Ethereum?

Ethereum is a public blockchain launched in 2015 by a smart guy called Vitalik Buterin. He had a vision of extending the idea of Bitcoin's decentralized cryptocurrency to decentralized applications (what are dapps about?). Those applications would be built from smart contracts. Ethereum can be seen as an open-source platform that runs those smart contracts. They are executed exactly as programmed, without any possibility of downtime, censorship, fraud or third-party interference.

What is Hyperledger?

Hyperledger is not a blockchain in itself. Hyperledger is an open-source project and a hub for many blockchain projects under the Linux Foundation umbrella. Some projects that reside under the Hyperledger umbrella are:

  • Hyperledger Sawtooth: developed by Intel. It uses a new consensus algorithm called Proof of Elapsed Time (PoET), which helps to build networks with a large number of nodes while keeping the CPU consumption footprint small. It supports both permissioned and permissionless blockchain networks.
  • Hyperledger Iroha: made by Japanese developers who originally created this solution for mobile use cases. It is designed for simple creation and management of assets and aims to be easy to incorporate into infrastructure projects. It uses a new chain-based Byzantine Fault Tolerant consensus algorithm. iOS and Android support is planned for an upcoming release.
  • Hyperledger Indy: created to provide independent identity on distributed ledgers. Private information is never stored on the ledger. With this solution, you stay in control of how your identity is shared with others.
  • Hyperledger Burrow: a permissioned blockchain node that executes Ethereum smart contract code (usually written in Solidity). You can specify the amount of gas needed to execute a given contract. To invoke a contract, you need sufficient permissions rather than coins or tokens. It provides high transaction throughput thanks to the proof-of-stake Tendermint consensus engine.
  • Hyperledger Fabric: developed by IBM. It supports only permissioned networks and provides a modular architecture in which different components can be plugged in or out to fit the use case.

Comparison:

So let’s get to it — Hyperledger versus Ethereum! We’ll find out what the differences are between those blockchains in a few selected aspects. Then, hopefully, it will be easier to choose one over the other in certain blockchain applications.

Permissioned blockchains vs public blockchain networks

Ethereum is a public blockchain and a permissionless network, which means it is accessible to anyone for both read and write operations. On the other hand, Hyperledger Fabric is a private network aimed at solving problems specific to the enterprise landscape. Fabric is a permissioned network. This means that only privileged entities and nodes can participate in this decentralized platform. To receive access, one must enroll with and be granted permission by a trusted Membership Service Provider (MSP). The MSP is a component that issues and validates certificates and later handles user authentication.

Private transactions between members

In Ethereum there are no means to issue a private transaction between members (that is, if we ignore the fact that Quorum, JP Morgan's fork of the Ethereum implementation, provides that feature). In our Hyperledger vs Ethereum comparison, this contrasts starkly with Hyperledger Fabric, which does offer that capability. Hyperledger Fabric can have multiple ledgers.

Each such ledger lives on a channel, and a channel can be made available only to certain members. This is a very important use case in enterprise integration scenarios, because offering full transparency of internal business processes to competitors isn't especially desirable. Certain contractual agreements should stay private.

Consensus Mechanism

Ethereum also shares the burden of most public blockchains: the requirement of a costly consensus algorithm, such as proof of work, to validate transactions and secure the network.

In this consensus model, a transaction broadcast by one node must be confirmed by the network in the form of a valid block before it is accepted. The node that creates a valid block first gets a reward, which serves as an incentive for honest transaction validation. Nodes that create valid blocks and are rewarded for their work will receive smaller and smaller rewards over time.

This consensus algorithm requires a lot of CPU power. At some point in the not-so-distant future, Ethereum will migrate to a proof-of-stake consensus algorithm. In contrast, Hyperledger Fabric in its default setup uses no consensus algorithm at all (No-op). However, due to its pluggable architecture, other consensus algorithms like Practical Byzantine Fault Tolerance can be configured. Consensus is abstracted away in a component called the Ordering Service. You can even design and code your own consensus component if your needs can't be satisfied by the existing implementations.

Hyperledger vs Ethereum: Cost of execution

Every transaction in Ethereum costs some gas, which is the way computing resources (CPU, storage) are valued in Ethereum. My colleague wrote a more in-depth article about Ethereum gas, if you'd like to dive in. Ether is Ethereum's native cryptocurrency. You can exchange it on crypto exchanges for local currencies like USD or EUR.

Since the value of Ether against national currencies fluctuates, gas costs expressed directly in Ether would fluctuate as well. That's the reason why in Ethereum we have the notion of gas and, separately, the gas price (the value of one unit of gas expressed in Ether). This way, computational costs can stay roughly constant even with high Ether market volatility. It also means that every transaction invoked on the Ethereum Virtual Machine will cost some real money.
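To make the arithmetic concrete, here's a minimal sketch in Go of how a transaction fee follows from gas and gas price. The 21,000 figure is the fixed gas cost of a plain Ether transfer; the 20 gwei gas price is just an assumed market value for illustration.

```go
package main

import "fmt"

func main() {
	// The 21,000 figure is the fixed gas cost of a plain Ether transfer;
	// the 20 gwei gas price is an assumed, illustrative market value.
	const gasUsed = 21_000    // gas units consumed by a simple transfer
	const gasPriceGwei = 20.0 // assumed gas price in gwei (1 gwei = 10^-9 ether)

	feeGwei := gasUsed * gasPriceGwei // total fee in gwei
	feeEther := feeGwei / 1e9         // 1 ether = 10^9 gwei
	fmt.Printf("fee: %.0f gwei = %.5f ETH\n", feeGwei, feeEther)
}
```

At a higher gas price the same transfer simply costs proportionally more Ether, which is exactly why the two values are kept separate.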

Cryptocurrency is the fuel for the costly proof-of-work consensus algorithm which secures the network. The gas cost not only incentivizes validating nodes to perform their duties; it also deters potentially hostile nodes from executing DDoS attacks, which would cost them a small fortune. On the other hand, in Hyperledger Fabric there is no notion of gas. Every participant knows all the other participants of the network. In a setup like that, it's easy to detect malicious users and activities and revoke their access to the entire blockchain if the need arises.

Smart contracts

When writing smart contracts for Ethereum, the Solidity programming language is the main choice for blockchain developers (here's our Solidity tutorial). In Hyperledger Fabric, smart contracts are called chaincode. Here, blockchain developers have the option to write chaincode in mainstream programming languages such as Golang or Node.js.

Chaincode runs in a secured Docker container, isolated from the other processes. There is also one notable difference: the Solidity language was designed to ensure that smart contracts written in it give deterministic results. Node.js and Go, together with their runtimes, weren't designed with this rule in mind. So, as a programmer, you must be careful not to use non-deterministic functions (random numbers, the current time and so on) when designing your chaincode.
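To give a feel for what that looks like in practice, here's a minimal chaincode sketch in Go, written against the fabric-contract-api-go package. The contract and its key/value semantics are purely illustrative; note that it only touches ledger state and avoids non-deterministic calls.

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// AssetContract is a toy contract that stores plain key/value pairs on the ledger.
type AssetContract struct {
	contractapi.Contract
}

// Put writes a value under a key. Only ledger state is touched, so the result
// is deterministic across all endorsing peers.
func (c *AssetContract) Put(ctx contractapi.TransactionContextInterface, key, value string) error {
	return ctx.GetStub().PutState(key, []byte(value))
}

// Get reads a value back from the ledger.
func (c *AssetContract) Get(ctx contractapi.TransactionContextInterface, key string) (string, error) {
	data, err := ctx.GetStub().GetState(key)
	if err != nil {
		return "", err
	}
	if data == nil {
		return "", fmt.Errorf("key %q not found", key)
	}
	return string(data), nil
}

func main() {
	chaincode, err := contractapi.NewChaincode(&AssetContract{})
	if err != nil {
		panic(err)
	}
	if err := chaincode.Start(); err != nil {
		panic(err)
	}
}
```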

Hyperledger Fabric vs Ethereum – modularity

Ethereum has no notion of modularity. Hyperledger Fabric is designed to be modular, and different components can be switched on and off.

Transaction flow

In Ethereum, when a transaction happens, it basically goes through two steps. First, the transaction is added to the ledger in some order and propagated to all peers. Then the transaction is executed by all peers. Transactions must always execute deterministically to make sure that all peers end up in the same state. In Hyperledger Fabric, it's a slightly different story. Clients send the transaction to endorsing peers chosen from those specified in the channel's endorsement policy. Endorsement policies are defined using a domain-specific language.

First, the transaction is executed with the chaincode, in any order, by the chosen endorsing peers. The submitting client collects the endorsements and validates the signatures, and then the transaction is sent to the ordering service. The ordering service establishes the order of transactions for each channel and distributes them to all peers in the channel. Peers validate each transaction, mark it as valid or invalid, and the state of the ledger is updated. As you can see, not all peers in the channel execute the same steps, as was the case in Ethereum.

Which blockchain should I use?

I liked the Hyperledger Fabric vs Ethereum battle metaphor, but in truth Hyperledger Fabric and Ethereum were created to solve different problems. So they shouldn’t really be perceived as competition to each other. They are different products that happen to use the same backbone technology, which is the blockchain itself.

Ethereum vs Hyperledger – conclusion

Hyperledger Fabric was made for business usage scenarios, mostly enterprise solutions. Organizations can therefore use the Hyperledger Fabric blockchain for their internal (inter-departmental) integration purposes, or for more general cross-enterprise integration scenarios where the parties to the transactions know each other and have valid contractual agreements in place. On the other hand, Ethereum was designed for public blockchain solutions, with the notion of fully transparent and objective transaction execution even in an arbitrarily hostile environment. While to some chaos can be a ladder, this doesn't work for application developers. At Espeo Blockchain, we also help people decide which tech they should pick and guide them through it.


MVP Development for Startups and Mature Enterprises

Developing an app can be a time-consuming, expensive endeavor. As such, it's important to work with a methodology that will deliver on ROI and business objectives. Many companies, however, choose the wrong approach. Numerous startups have failed because they took an idea, developed it for months or even years, and never market-tested it until launch. The results of taking this approach can range from disappointing to disastrous. To address this, companies have started working with MVPs, or Minimum Viable Products.

The MVP Methodology

The methodology starts with identifying a problem and then building a bare-bones product known as the MVP, in order to test assumptions and customer reactions to it. With each iteration of the MVP, the company gathers actionable data and metrics in order to determine various cause-and-effect relationships.

If everything goes as expected, the MVP is developed further by adding new features and functionalities. However, if at a certain point it becomes clear that the product idea is not viable, the company can pivot at a relatively low cost and revert to a previous functional step of the development process. For example, if it becomes clear that the current development path will lead to a product that will not be financially viable in the current marketplace, the company can roll back changes easily, or even scrap the project in its entirety. However, the latter option is fairly rare when working with an MVP.

How to Develop an MVP

The process of developing an MVP starts with a planning phase. During this phase, you will want to map out the long-term goals of the product, identify the reasoning behind its development, and define the criteria that will indicate whether the product is successful or not. You then need to take a look at your users. This is where you create user personas, identify use cases, and map out the journey each user needs to undertake in order to achieve their end goal with the product.

Once the conceptual framework is in place, it's time to start thinking about features. In an MVP, features are ranked according to importance, and the most important features relate to your user map and your end business goals. For example, if you own a chain of coffee shops and would like to build an app that reduces the amount of time people wait in line for their coffee, you might be looking to implement features that allow customers to pre-order coffee online before they reach the coffee shop.

In this case, you will need several essential features: clients have to be able to pay through the app, they will have to be registered in a database, and they will need to receive a proof of payment on their phone in order to redeem their order when they reach the coffee shop. At this point, we've identified three essential features for your app. In order to build an MVP, we will have to break these features down even further, and perhaps only market-test one or two of them, so that we can reduce costs and development time.

Let’s say we’ve decided to test the concept: are clients really interested in pre-ordering coffee? The MVP will perhaps allow customers to simply pre-order coffee without paying for it, and the app will log their order into a database. The experiment can be run for a couple of days with a handful of loyal customers, in order to test results. If everything looks good, the mobile payment feature can be added and tested next.


Features to avoid in an MVP

That said, there are features which are almost universally poorly suited for an MVP. Features that are completely aesthetic in nature, for example, do not have to be added to the MVP, because they provide very little quantifiable value. These features should be added only after the core functionality has been tested. Other features such as social media integration also fall into this category. 

You then have copycat features. Adding features that are similar to more established apps will extend your timeline and budget, but they will not provide new insights into the usefulness of your product. Copycat features have already been tested by the larger, more successful app, and as such, they can be added at a later stage of the product development.


Finally, you have features which are requested by early users. This may seem a little counterintuitive, because one of the main purposes of the MVP is to test the users' response to a product. However, features requested by early users might not actually be a good fit for your business goals. For example, some users may request social media integration at a very early stage of the product, which would take time and resources to implement without providing any quantifiable value to the MVP. These requested features should be noted down and kept in mind for later versions of the product.

The Most Common MVP Development Pitfall

The main purpose of the MVP is to provide validated learning. Validated learning is the iterative process which measures the effectiveness of a product in reaching the set business goals. As such, it’s important to keep in mind that any feature added to the MVP has to have a measurable impact across relevant metrics. Some companies will make the mistake of not taking this into account, and they will go on to view the MVP as the most stripped down version of a product, removing essential features in the process. 

To avoid this mistake, always keep the business goal in mind, as this will help you reach a balance between cost-effectiveness and validated learning. However, balance is the key word here. Some companies will go overboard with the initial features of the MVP, to the tune of the MVP occupying 97% of their backlog, which contradicts the whole idea of an MVP. The MVP should have several key features that will be tested, with the rest of the functionality being implemented once the market responds positively to the concept behind your product.

The Benefits of Developing an MVP

So what are the benefits of developing an MVP? The first benefit is a rapid development process. It usually takes one or two months to have a market-ready MVP. You will also need a much smaller budget to develop an MVP, since most features will not make it into the product.

Since the MVP is a lightweight version of a larger final product, the risk for investors is much smaller. With a reduced development timeframe and budget, investors are much more willing to test an idea and see if it is well received by the market. And, once the concept has proven itself within the market, stakeholders and investors will be much more willing to buy in.

Finally, the MVP development process itself builds an audience. At first, the MVP may be used to test market reactions and see if a product is welcomed by users. But once the MVP starts to gain traction, some users will become early adopters and develop loyalty towards the product. This means that by the time you get to market, not only will your basic assumptions about the user base be validated, you will also have a core set of users ready to support the product.



App Security: Why is it worth it to implement JWT based authentication in your app?

App speed and security are of huge importance. Our main goal, aligned with our customers' goals, is to deliver a satisfactory product as quickly as possible and within budget. There are a number of practices, recommendations and even formal proposals on how to run projects, but implementation details are extremely important: small changes can be what allows us to write competitive applications.

When creating a web service, one of the most important things is choosing the right authorization method. There are a number of choices: OpenID, SAML, Kerberos and OAuth2, which is the most popular open-standard authorization framework. In OAuth2, the authorization process consists of the client sending an authorization request with its credentials so that the authorization server issues a random token. This token is then sent to the resource server, which verifies whether the client is authorized to use the resource and perform specific operations.

The framework is supported by Microsoft (for several APIs and the Azure Active Directory service), Facebook (only the Graph API, the primary way for apps to read and write to the Facebook social graph) and Google (the authorization mechanism for all of their APIs). The fact that OAuth2 is used by such important companies in the IT market vouches for the quality and benefits of this method. In most cases a username and token are required in OAuth2, and additionally it specifies the way in which tokens are to be sent… and this is where JWT (JSON Web Token) comes in. JWT is not a protocol or an authorization framework, but it determines the format of the authorization token, which, as it turns out, can have a measurable impact on the functionality of the entire mechanism.

Why is JWT gaining so many supporters?

JWT became an open standard in 2015, and in the same year an RFC was also created for the JSON Web Token Profile for OAuth 2.0 Client Authentication and Authorization Grants, suggesting the possibility of using the OAuth2 protocol with the JWT token format. It gained many fans because of its simplicity and ease of use. As the name suggests, the token is expressed in JavaScript Object Notation (JSON), a very common data format used for communication between the browser and the server. This gives us a concise and clear way of transferring information between the two parties as a JSON object.

Further, thanks to the huge number of parsers available across programming languages, we can directly convert the received token into an object. It is also worth comparing it to other popular formats such as SAML, SWT or even an ordinary UUID string. As for the structure, it is more concise: we do not need to parse a huge token presented in the form of XML, as in the case of SAML (a problem that mostly concerns the front end). However, the biggest advantage is the possibility of transferring a large amount of additional information in one token, in the form of so-called claims. Support in the form of a huge number of token-signing and token-verification libraries for virtually any programming language suggests that JWT can be used at scale while maintaining an adequate level of security.

What can we pass?

A big advantage is the ability to transfer claims inside the token. Claims are statements about an entity (typically, the user) plus additional data. This additional token content allows you to limit the number of database queries: only basic information about the user is collected at the moment of logging in. It should also be noted that the format works well in the context of microservices. It is possible to transfer data between microservices without having to ask a central session microservice for it. As long as the signature is correct, the received data is reliable and can be trusted.


Let's analyze the content of the individual elements of a JWT token, in particular the Payload, which contains our additional information:

  • Header – informs about the algorithm that was used to generate the token (the standard defines algorithms such as HMAC SHA256 and RSA).
  • Payload – the content looks like a typical JSON response for a specific resource in REST services. Creating claims can be compared to preparing a DTO (Data Transfer Object). The set of information to be returned within the token is defined in the server code.
  • Signature – the digital signature of our token, encoded in Base64 and calculated from the combination of the Header and Payload together with the secret key; in our case the hash function is HMAC-SHA256.
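As an illustration of what such a payload can carry, here's a minimal sketch in Go using the golang-jwt library; every claim value below is made up for the example, and the secret key is a placeholder.

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	// Illustrative claims only: a few custom claims about the user plus the
	// registered "exp" and "jti" claims discussed in the text.
	claims := jwt.MapClaims{
		"id":       "42",                        // user id in our system
		"username": "jkowalski",                 // illustrative username
		"roles":    []string{"USER", "MANAGER"}, // user's roles
		"hobby":    "sailing",
		"country":  "PL",
		"currency": "EUR",
		"exp":      time.Now().Add(15 * time.Minute).Unix(), // token expiration time
		"jti":      "3f1c8d2e-1111-2222-3333-444455556666",  // token id to prevent reuse
	}

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	signed, err := token.SignedString([]byte("our-secret-key")) // placeholder secret
	if err != nil {
		panic(err)
	}
	fmt.Println(signed) // header.payload.signature, each part Base64URL-encoded
}
```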

As you can see in the example above, information about the user's hobby, country of residence, the currency in which they perform transactions, and the standard system information about the user (id, username and roles) will circulate in the token. It is not a coincidence that I pointed out the types of claims: it is worth stressing that you have to choose your own claim names carefully, because they may already be registered at iana.org and cause name conflicts. In the example, we're using two registered claims: exp (token expiration time) and jti (a token id to prevent reuse).

Security

Security is often associated with hiding all data. The JWT standard does not encrypt data; it only encodes it in Base64. The creators' assumption is to make sure that the data being sent to the authorization server was created by a reliable source, thus preventing unauthorized access. This is why it's also worth encrypting the transport using SSL/TLS to prevent theft of the token. The most important thing is that we are sure about the sender's identity and that the received data has not been tampered with in any way. Verification of the token is quite simple.
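You can see for yourself that the payload is merely encoded, not encrypted: split any JWT on the dots and Base64URL-decode the middle part, and the claims are readable without any key. A quick sketch in Go (the token string is a placeholder):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	// Any JWT has the shape header.payload.signature; the first two parts are
	// only Base64URL-encoded, so anyone holding the token can read them.
	token := "<header>.<payload>.<signature>" // placeholder token

	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		fmt.Println("not a JWT")
		return
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		fmt.Println("cannot decode payload:", err)
		return
	}
	fmt.Println(string(payload)) // plain JSON claims, readable without the secret key
}
```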


Servers have secret keys and use them to generate JWT tokens. This is of great importance, because it is what allows them to verify a token's correctness. If the credentials match, a JWT token is generated at the client's request. Because the application server knows the secret key, it can recompute the signature for the JWT token received from the user and compare it with the signature embedded in the token. If everything matches, the user is authenticated and can access certain data. Otherwise, it may indicate some kind of attack. Thanks to this, access to resources on the server is safer, particularly because authentication with user credentials is a one-time process.
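A minimal verification sketch in Go, again using the golang-jwt library (the secret and the token are placeholders), shows the server recomputing the signature and rejecting anything that doesn't match:

```go
package main

import (
	"fmt"

	"github.com/golang-jwt/jwt/v5"
)

// verifyToken recreates the signature with the server's secret key and compares
// it with the one embedded in the token; registered claims such as "exp" are
// checked as well.
func verifyToken(tokenString string, secret []byte) (jwt.MapClaims, error) {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Make sure the token really uses the HMAC family we expect.
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return secret, nil
	})
	if err != nil {
		return nil, fmt.Errorf("invalid token: %w", err)
	}
	if !token.Valid {
		return nil, fmt.Errorf("invalid token")
	}
	return token.Claims.(jwt.MapClaims), nil
}

func main() {
	secret := []byte("our-secret-key") // placeholder secret shared with the token issuer
	claims, err := verifyToken("<token received from the client>", secret)
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("authenticated user:", claims["username"])
}
```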

Microservices

It would be a sin not to mention microservices, especially since JWT does so well within this architecture. At this point, it is worth distinguishing between two approaches: writing monolithic applications, and writing applications based on microservices that are independent of each other.

Protection for monolithic applications is quite simple. There is an application server that manages user authorization and authentication up front. In addition, we are sure that all services we query are implemented on one and the same server, so we do not need to worry about whether each service has authenticated the user separately. It is fully centralized.
Microservices are different. The main problems are the communication between them and the answer to the question: how can we authenticate the user and be sure that the other microservices know about it? As it turns out, JWT lets us pass information between all microservices about the fact that a given user has access to specific resources.

We use the OAuth2 client credentials grant, which allows clients to obtain access tokens by providing their client id and secret. As a result, each microservice receives its own client identity and credentials. This data is then sent along with the token request to the authorization server.
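Here's what that exchange might look like in Go, using the golang.org/x/oauth2/clientcredentials package; the client id, secret, token URL and scopes are all placeholders for whatever your authorization server defines.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	// Each microservice gets its own client id and secret; the token URL
	// points at the authorization server. All values here are placeholders.
	conf := clientcredentials.Config{
		ClientID:     "orders-service",
		ClientSecret: "orders-service-secret",
		TokenURL:     "https://auth.example.com/oauth/token",
		Scopes:       []string{"orders:read", "orders:write"},
	}

	// Exchange the client credentials for an access token (a JWT, if the
	// authorization server issues tokens in that format).
	token, err := conf.Token(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("access token:", token.AccessToken)

	// conf.Client returns an *http.Client that attaches the token to every
	// outgoing request and refreshes it when it expires.
	httpClient := conf.Client(context.Background())
	_ = httpClient
}
```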

The advantage of this solution is that you can withdraw a microservice's access at any time if we find that credentials have been compromised. In addition, the management of scopes, roles and credentials for microservices is completely under control. Thanks to JWT claims (more precisely, the permissions contained in the token) we know exactly which resources the user has access to, which makes JWT a perfect fit for this kind of architecture.


Summary

The JSON Web Token is an increasingly popular format for representing tokens; it is slowly becoming the standard token format, and the number of users is growing every day. Thanks to its compact size, lightness and self-containment, it offers great customization possibilities. A token containing all the information needed for verification, without constant database queries, is amazingly useful. Of course, we could implement a custom solution based on a traditional user session, but this is unnecessary when such a popular solution is in place. Thanks to JWT, we can easily identify the user across particular services, and the classic session model could not do it better.



How blockchain traceability can change your organization

Businesses, governments – various entities can benefit from decentralization. However, even criminals may derive some use from decentralized operation modes and various cryptographic primitives. The goals and objectives of those three categories of organizations are different, so the way blockchain traceability can be used also varies. I'll look at prediction markets, DAOs… and crypto anonymity.

Businesses and blockchain traceability & transparency

Businesses (for-profit organizations) owned by groups of shareholders often value transparency in the way the organization is run. Those organizations aren't led by a single person, but by an elected group of directors. The most fundamental feature of businesses governed by smart contracts is that all of the transactions and decisions are stored in a publicly verifiable ledger. This sort of business can be called a DAO (a decentralized autonomous organization). No director can deny the decisions they have made, as their cryptographic signatures can't be forged (direct accountability).

In DAOs, shareholders or members also have a direct and immediate impact on the direction of growth and future decisions. All costs and expenses in organizations like that are accounted for, including employee remuneration. In an environment like that, gender, religious, political or other biases can't take hold.

Governments and prediction markets

Governments run in a decentralized mode are a form of a larger DAO. There are some thought experiments about organizing governments in the form of a futarchy. In a futarchy, the legislative branch bases its decisions on the results of prediction markets. Prediction markets are sort of like betting or voting systems, and they have proved to be an accurate way of extracting value from the wisdom of crowds. Any citizen participating in nation-wide prediction markets can have an immediate impact on the bills passed. What's more, they can easily see the impact of those bills on their own welfare. That's probably the best form of traceability that exists!

As the decisions are transparent (and the allocation of assets and value is transparent too), the allocation of money to projects and bills is unbiased, and the money is assigned to objectively the best contractors. We won't see any shady connections between government representatives and their families or friends.

Criminal organizations – blockchain traceability vs. anonymity

Speaking of shady: there's something we should be aware of. Sadly, criminals can also use the blockchain in creative ways. For example, let's look at markets that trade in illicit goods or those offering nefarious services. The objective there is to tangle up all the dealings and transactions in order to hide both the nature of a transaction and the parties to it. Seems like something that just won't work on a blockchain? Wrong. Some distributed systems offer, most importantly, full anonymity of the transacting parties, paired with encryption of the data the parties exchange. This allows such organizations to reach their objectives.

Cryptocurrency systems where you can't see the parties to the transaction or the actual amounts (but with a full guarantee of the actual value transfer!) are perfect tools for organizations that value their… privacy. If those parties get caught, deniability of the transaction is a vitally important feature. (Un)fortunately, that's what some of the more complex cryptocurrency systems can offer. These are problems regulators should definitely research.

Blockchain for different needs

To conclude, various organizations have different needs. Distributed and blockchain technologies are not one-size-fits-all techniques. Organizations can combine and match these technologies in different ways. Prediction markets won't work for all, and neither will an anonymous crypto system. The goal is to end up with features that suit the needs of a given organization. As we can see, these technologies can sometimes answer even contradictory needs, such as both transparency and deniability.
Just so you know, I'm organizing a workshop on Stellar soon; check it out. Also, if you're wondering how traceability (or any other feature I wrote about) can work in practice, say, in your company, write to me using the box below.