Categories
Entrepreneurship Finance Financial Services Software

Micropayments: How your Business Can Benefit From Them

To understand micropayments, you have to understand the technology behind them. Technology is constantly improving and affects every aspect of our lives. One of the biggest challenges is simplifying our finances, and the best way to do this is via fintech (financial technology), a developing area that brings together the best financial products worldwide at the lowest possible price. A crucial part of this is the processing of very small payments that traditional credit card companies cannot handle economically.

This has energized micropayment systems and the whole infrastructure connected with them. While sending small amounts of money used to be seen as costly, because transaction fees still had to be paid, there were some early indicators that micropayments could work:

The popularity of mobile devices

Access to financial services used to be limited by a lack of technology and appropriate devices. As a result, many transactions that occurred outside the normal banking system were completed on a cash-only basis. The huge popularity of mobile services, however, created an opportunity to provide financial services over wireless networks. Research showed a significant and growing market demand, which was particularly important for GSM (Global System for Mobile Communications) operators, and suggested that it was technically feasible and profitable to deploy financial services over mobile networks. The big picture showed that mCommerce might fill a major service gap in developing countries, one that is critical to their social and economic evolution. Practice shows that the range of features available in one environment can be applied elsewhere if the target markets are similar. With only minor variations, the features of all such systems need to include:

  • Over-the-air prepaid top-ups using the cash already in the account (like BLIK).
  • The ability to transfer any amount of money between users’ accounts.
  • Provision for cash deposits and withdrawals.
  • The ability for a third party to make deposits into a customer account (employer, family member or a microfinance organization).
  • The ability to charge for bill payments.
  • The ability to make retail purchases at selected economic outlets.
  • The ability to transfer airtime credits between users.

Since all the above points are now achievable, micropayment has recently been reconsidered as a viable technology, largely due to the development of cellular networks. The main reason for this is not technological but comes down to simple economics. Independent online service providers earn significant revenue from mobile users, and mobile networks often charge users for access to low-cost services on a fixed network. Alongside this, many applications involving mass data storage and message exchange require a solution for the commissions placed on small transactions.

Since micropayment systems are designed for purchasing exceptionally low-cost items, it is crucial that the cost of each individual transaction remains very small.


The ad blocker plague

As ad blockers have gained popularity, there has been renewed interest in micropayments. Originally the main focus in design was on content, but emerging technologies using blockchain have created great opportunities for artists, journalists and other creators, as content no longer has to be ad-friendly. Micropayments allow the author to stay in absolute control of content distribution and its economic worth. Simply put, micropayments are making their way into browsers, empowering creators and the audiences that follow them.

Closer examination reveals that when ads are kept out of the way, micropayments let authors control their own income more directly. They help reveal the true value of content, which in turn helps determine an author's economic sustainability. Micropayments can be expected to keep evolving, just as content monetization evolved from paid ads into its current form.

Technicalities – the protocol

As established above, the main issue with low-value transactions has been that processing and transaction fees eat into the final settlement amount. Payment processors add costs for a multitude of reasons, including infrastructure costs, administrative costs, and mechanisms for fraud prevention and dispute resolution. Over the past two decades, much research has gone into using digital communications and cryptography to reduce or eliminate these costs. Ideally, the fees for such transactions need to come down to the fraction-of-a-cent range.

Content servers on the global information infrastructure can be expected to handle billions of these low-value transactions, so each one must be computationally cheap. Costly cryptographic protocols are impractical at this scale; instead, the micropayment process can be bootstrapped with already well-known payment protocols for larger amounts, without depending on them for each individual micro-transaction. Special attention is given to its integration into IBM's Internet Keyed Payment Systems (iKP) at the most basic level.
The scheme also allows for a payment protocol in wireless networks. Such a protocol usually assumes two modes of transaction execution:

  • In on-line mode with the participation of a trusted website – for macropayments,
  • In off-line mode using electronic money, mainly for small value transactions – for micropayments.

The main purpose is to predict scenarios of various events and transactions in the protocol, and to be able to analyze any part of it. Paramount here are the aspects of payment security, such as asymmetric cryptography techniques and public key infrastructures. Needless to say, when evaluating any such protocol, performance has to be weighed against criteria specific to the wireless environment.


In summary

Micropayment platforms dedicated to processing small transactions work in two main ways.

In the first model, a seller or service provider establishes an account with a third-party micropayment provider who accumulates, stores and distributes the money accrued. Both the seller and the user/buyer are required to hold an account with the same micropayment provider for easier and safer processing. The provider manages a digital wallet where all the payments are stored until they reach a larger amount and can be sent to the recipient.

As an example, let's say a site called 'The Freelance' is a marketplace where freelancers connect with companies to deliver small projects. A company hires a developer from 'The Freelance' to make a few changes to their website for $1/hr. If the developer works on it for 8 hours, the task giver, the company, pays 'The Freelance'. In this case 'The Freelance' collects all the fees and stores the remainder in the developer's digital wallet. If the developer is good and accumulates many fees, 'The Freelance' keeps collecting these IOUs until the wallet contains a significant sum, say $500, which is then sufficient to be withdrawn. 'The Freelance' then pays the developer directly into his account.

The second model is that micropayment systems can operate as a system for prepaid transactions. A user/buyer funds a micropayment processor account by depositing a certain amount of money in advance. As long as the seller (the other side of the primary transaction) uses the same account provider, everything works smoothly: the user's account with the provider is simply debited for the amount of the purchase. Simply put, the payment is made from a micropayment processing account. Let's illustrate this with the most common example: PayPal. PayPal is a very popular micropayment provider with its own rules about the maximum size of a micropayment transaction: according to PayPal, a micropayment is a transaction of less than $10. So let's imagine a PayPal user deposits $200 into their account. From that point, the user can become a buyer by purchasing an item for $5 from a webstore. The purchase price is debited from the PayPal account and used to cover the payment. On completion, the balance in the buyer's PayPal account is $195 minus PayPal's fees for micropayment transactions, the webstore's balance is plus $5, and PayPal earns the commission.
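To make the prepaid flow concrete, here is a minimal sketch of the bookkeeping involved. The fee formula (a percentage plus a fixed amount per transaction) is only an illustrative assumption, not PayPal's actual pricing:

```javascript
// Minimal sketch of a prepaid micropayment account; fee values are illustrative.
const MICROPAYMENT_LIMIT = 10.0; // the provider treats transactions below this as micropayments
const FEE_RATE = 0.05;           // assumed: 5% of the transaction value
const FEE_FIXED = 0.05;          // assumed: flat $0.05 per transaction

function charge(buyer, seller, amount) {
  if (amount >= MICROPAYMENT_LIMIT) throw new Error('Not a micropayment');
  const fee = amount * FEE_RATE + FEE_FIXED;
  if (buyer.balance < amount + fee) throw new Error('Insufficient funds');

  buyer.balance -= amount + fee; // as in the example above, the buyer's balance covers price plus fee
  seller.balance += amount;      // the webstore receives the purchase price
  return fee;                    // the provider keeps the fee
}

const buyer = { balance: 200.0 };  // the $200 deposited up front
const webstore = { balance: 0.0 };
const fee = charge(buyer, webstore, 5.0);
console.log(buyer.balance, webstore.balance, fee); // roughly 194.70, 5, 0.30
```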

In all these scenarios, commercial organizations have much to gain from replacing physical cash transactions with micropayments. Cash is not only more difficult to handle, but a lot of time is wasted moving it outside the banking sector.

In the 21st century no country exists entirely beyond the reach of the banking sector, so for the sake of their own economic progress, countries should be encouraged to move away from cash. The extra motivation is that the resulting low-cost solutions and mechanisms that work in these environments can then be applied efficiently in all types of developed economies.

Want to know more about online payment solutions and our recommendations? Read this post:

How to Choose Online Payment Solution!

Categories
Software Technology

Build your own error monitoring tool

In this tutorial, I’ll describe how you can create your own error watcher. The final version of this project can be found on GitHub. I’ll later refer to the code from this repository, so you can check it out.

Why use even a simple error monitoring tool?

If your application is in production, or will be in the near future, you should look for some kind of error monitoring tool; otherwise you'll have a really big problem. And as a developer, let me tell you that looking for errors manually in your production environment isn't cool.

Find the cause of the problem before your customers notice

For example, let's say your application does some background processing which isn't visible at first glance to the end user, and the process fails at one of the background steps. If you have an error monitoring tool, you'll have the chance to fix the bug before your customers notice it.

Reduce the time from detection to fix

Without a monitoring tool, when a bug is reported your team would probably start looking through logs manually. This significantly extends the fix time. Now, imagine that your team gets a notification right away when the error appears – you can now skip that time-consuming part.

Monitoring infrastructure

In this tutorial, we'll use the Elasticsearch + Logstash + Kibana (ELK) stack to monitor our application. ELK is free when you use the Open Source and Basic subscriptions. If you want to use premium functionalities, e.g. alerting, security or machine learning, you'll need to pay.
Unfortunately, alerting isn't free. If you'd like to send an alert message to a Slack channel or email someone about a critical error, you'll need to use the "semi-paid" X-Pack (some parts of it are free in the Basic subscription).
However, we can implement our own watcher to bypass Elastic's high costs. I've got good news for you: I've implemented it for you already. We'll get back to it later.
The image below describes how our infrastructure is going to look.
monitoring infrastructure
Logstash reads the logs, extracts the information we want, and then sends the transformed data to Elasticsearch.
Our custom Node.js Elasticsearch Watcher will query Elasticsearch for recent logs with the error log level. The Watcher will send alert messages to a Slack channel when the query returns results. The query will be executed every 30 seconds.
Kibana is optional here, but it's bundled in the repo, so if you'd like to analyze application logs in some fancy way, here you go. I won't describe it in this article, so visit the Kibana site to see what you can do with it.

Dockerized ELK stack

Setting up Elasticsearch, Logstash and Kibana manually is quite boring, so we'll use an already Dockerized version. To be more precise, we'll use the Docker ELK repository, which contains what we need. We'll tweak this repo to meet our requirements, so either clone it and follow the article or browse the final repo.
Our requirements:

  • Reading logs from files
  • Parsing custom Java logs
  • Parsing custom timestamp

We’re using Logback in our project and we have a custom log format. Below, you can see the Logback appender configuration:
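A sketch of such a file appender, with the log path, appender name and pattern details as assumptions, matching the timestamp format used later in this article (e.g. 20180103 00:01:00.518):

```xml
<!-- Sketch of a Logback file appender; path and names are assumptions -->
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>/var/log/myapp/application.log</file>
  <encoder>
    <!-- produces e.g. "20180103 00:01:00.518 [main] ERROR com.example.MyService - message" -->
    <pattern>%d{yyyyMMdd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```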

Here are the sample logs:
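Illustrative entries in that format (the class names, messages and stack trace are invented for the example):

```
20180103 00:01:00.518 [main] INFO  com.example.OrderService - Processing order 1234
20180103 00:01:02.143 [main] ERROR com.example.OrderService - Order processing failed
java.lang.IllegalStateException: Payment gateway timeout
    at com.example.PaymentClient.charge(PaymentClient.java:42)
    at com.example.OrderService.process(OrderService.java:27)
```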

Firstly, we need to update the docker-compose.yml file to consume our logs directory and custom patterns for Logstash. The Logstash service needs two extra lines in its volume section:
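A sketch of those two lines in the Logstash service's volumes section (the container paths are assumptions based on the default Logstash image layout):

```yaml
# excerpt from docker-compose.yml, logstash service (paths are assumptions)
logstash:
  volumes:
    - ./logstash/patterns:/usr/share/logstash/patterns   # our custom Grok patterns
    - ${LOGS_DIR}:/usr/share/logstash/logs               # application logs to watch
```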

The first line binds our pattern directory. The second attaches the logs to the container. The $LOGS_DIR variable will later be added to an .env file, which will give us the ability to change the logs directory without modifying the repository. That's all we need.
If you'd like to persist data between container restarts, you can bind the Elasticsearch and Logstash data directories to directories outside Docker.
Here's the .env file; you can replace the logs directory with your own path.
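A minimal .env only needs the one variable (the path below is an example):

```
LOGS_DIR=/path/to/your/application/logs
```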

How to configure Logstash to consume app logs

Logstash’s pipeline configuration can be divided into three sections:

  • Input – describes the sources Logstash will be consuming.
  • Filter – processes logs, e.g. data extraction and transformation.
  • Output – sends data to external services.
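A minimal pipeline sketch, assuming the container paths from the docker-compose excerpt above and the custom timestamp pattern described below (field names are assumptions):

```
# Sketch of logstash.conf; paths, pattern and field names are assumptions
input {
  file {
    path => "/usr/share/logstash/logs/*.log"
    start_position => "beginning"
    codec => multiline {
      patterns_dir => ["/usr/share/logstash/patterns"]
      pattern => "^%{CUSTOM_TIMESTAMP}"
      negate => true
      what => "previous"   # lines not starting with a timestamp belong to the previous entry
    }
  }
}

filter {
  grok {
    patterns_dir => ["/usr/share/logstash/patterns"]
    match => { "message" => "%{CUSTOM_TIMESTAMP:timestamp} \[%{DATA:thread}\] %{LOGLEVEL:level}\s+%{JAVACLASS:class} - %{GREEDYDATA:log_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```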


Let's walk through the configuration section by section.
The input section is quite simple. We define basic input properties, such as the logs path and the position to start reading from when Logstash starts up. The most interesting part is the codec, where we configure handling of multiline Java exceptions. It looks for the beginning of an entry, in our example a custom timestamp, and treats all the text up to the next timestamp as one log entry (one document in Elasticsearch).
I've included a patterns directory, so we can use our custom pattern in the multiline regex. It's not required; you can use a normal regex here.
The filter section is the most important part of a Logstash configuration. This is the place where the magic happens. Elastic defined plenty of useful plugins which we can use to transform log events. One of them is Grok, which we’ll use in the monitoring tool.
Grok parses and structures text, so you can grab all the fields from your logs, e.g. timestamp, log level, etc. It works like a regex: just define your log pattern with corresponding field names, and Grok will find the matching values in the event. You can use the default Grok patterns or create your own to match custom values.
In our example we use a custom timestamp, so we need to define a custom pattern. Grok allows us to define custom patterns ad hoc inside the message pattern. However, we want to use it more than once, so we define a patterns file which we can include wherever we need the pattern, e.g. in the multiline codec and in Grok. If you use a standard timestamp format, just use the default one.
Here’s the pattern file:
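One line is enough; the pattern name CUSTOM_TIMESTAMP is an assumption, built from default Grok patterns to match the 20180103 00:01:00.518 format:

```
CUSTOM_TIMESTAMP %{YEAR}%{MONTHNUM}%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
```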

The file structure is the same as in other Grok pattern files: the first word in the line is the pattern name and the rest is the pattern itself. You can use default patterns while defining your own, or fall back to a regex if none of the defaults matches your needs. In our case, the log format is e.g. 20180103 00:01:00.518, so we're able to build it from already defined patterns.
In the output section, we define that the transformed logs will be sent to Elasticsearch.

Docker file permissions

One thing that took me some time to figure out was configuring the file permissions of the logs accessed by the dockerized Logstash.
If your logs are created by a user with id 1000, you won't notice the problem and can skip this step. However, you're most likely dealing with the opposite. For example, you run your application on Tomcat, the logs are created by the Tomcat user and then bound as a volume to the Logstash container. The Tomcat user is not the first user (1000) in the system, so its user id won't match inside the container. The default Logstash image runs as user 1000, so it can only read files that user 1000 is allowed to read; it doesn't have access to other users' files.

The trick is to switch to root in the Dockerfile, create a new group with an id matching the group id of the log creator, and add the Logstash user to it. On the host, we also add the user which runs the container to the group which owns the logs (sudo usermod -a -G <group> <user>). After that, we switch back to the Logstash user to keep the container secure.
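A sketch of that part of the Dockerfile (the group name and the gid 1001 are assumptions; use the gid that owns your logs on the host):

```dockerfile
# In the Logstash Dockerfile, after the base image line:
USER root
# assumed: gid 1001 is the group that owns the application logs on the host
RUN groupadd -g 1001 applogs && usermod -a -G applogs logstash
USER logstash
```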

Filebeat – log agent

The implementation described so far works for a single application. We could scale it to support many applications, but it wouldn't be fun or easy: Logstash reads lines one by one and sends them, after transformation, to Elasticsearch.
I've got a better solution for you. Elastic created a family of software called Beats. One of them is Filebeat, which takes the pain out of log access. Filebeat is simply a log shipper: it takes logs from the source and transfers them to Logstash or directly to Elasticsearch. Its job is to forward your data, although it can also do some of the things Logstash does, e.g. transforming logs or dropping unnecessary lines. Logstash, however, can do more.
If you have more than one application, or more than one instance of an application, then Filebeat is for you. Filebeat transfers logs to Logstash on the port defined in the configuration, as sketched below. You just define where it should look for logs, and define the listening part in Logstash.
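A minimal Filebeat setup, as a sketch (the paths, port and host name are assumptions, and the exact keys depend on your Filebeat version):

```yaml
# filebeat.yml (sketch)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log

output.logstash:
  hosts: ["logstash:5044"]
```

On the Logstash side, the file input would then be replaced with a beats input listening on the same port, e.g. `beats { port => 5044 }`.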
The file permission problem will of course still be present if you want to run the dockerized version of Filebeat, but that's the cost of virtualization.
I suggest using Filebeat for production purposes. That way, you can deploy ELK on a server which isn't the actual production server. Without Filebeat (with Logstash only), you need to place Logstash on the same machine where the logs reside.

Sending Slack alert message

Elastic delivers the Watcher functionality within X-Pack, bundled into Elasticsearch, and there is already a predefined Slack action which can send custom messages to your Slack channel, among other actions. However, as stated before, it's not free: the Watcher is available in the Gold subscription. If that's fine for you, you can skip the rest of the article. If not, let's go further.
When I noticed that the Elastic Watcher is a paid option, I thought I could write my own watcher which would send alert messages to Slack. It's just a scheduled job which checks if there's something to send, so it shouldn't be hard, right?
The Watcher
I created an npm package called Elasticsearch Node.js Watcher, which does the basics of what X-Pack's Watcher does, namely watching and executing actions when specified conditions are satisfied. I chose Node.js for the Watcher because it's the easiest and fastest option for a small app which does everything I need.
This library takes two arguments when creating a new instance of a watcher:

  • Connection configuration – defines the connection parameters to Elasticsearch. Read more about it in the documentation.
  • Watcher configuration – describes when and what to do if there's a hit from Elasticsearch. It contains five fields (one is optional):
    • Schedule – used by the Watcher to schedule the cron job.
    • Query – the query to be executed in Elasticsearch; its result is forwarded to the predicate and the action.
    • Predicate – tells whether the action should be executed.
    • Action – the task executed once the predicate is satisfied.
    • Error handler (optional) – the task executed when an error appears.

We need a server which starts our Watcher, so let's create index.js with an Express server. To make the environment variables defined in the .env file visible across the Watcher's files, let's also include the dotenv module.
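A sketch of that index.js; the watcher package's import name and constructor below are assumptions based on its description here, so check the final repository for the real API:

```javascript
// index.js (sketch)
require('dotenv').config();           // load variables from .env
const express = require('express');

// hypothetical import; see the final repository for the actual package name
const Watcher = require('elasticsearch-watcher');

const connection = require('./config/connection'); // Elasticsearch connection parameters
const errorWatch = require('./watches/errors');    // the watch configuration described below

// the library takes two arguments: connection configuration and watcher configuration
const watcher = new Watcher(connection, errorWatch);

// a tiny Express server just to keep the process alive and expose a health check
const app = express();
app.get('/health', (req, res) => res.send('ok'));
app.listen(process.env.PORT || 3000);
```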

The meat. In our configuration, we tell the Watcher to query Elasticsearch every 30 seconds using cron notation. In the query field we define the index to be searched. By default, Logstash creates indices named logstash-<date>, so we set it to logstash-* to query all existing indices.

To find the logs, we use the Query DSL in the query field. In the example, we're looking for entries with the ERROR log level which appeared in the last 30 seconds. In the predicate field, we define the condition that the number of hits is greater than 0, since we don't want to spam Slack with empty messages. The action field references the Slack action described in the next paragraph.
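Put together, the watch configuration could look roughly like this; the field names follow the list above, and the `level` field comes from the Grok sketch earlier, so both are assumptions:

```javascript
// watches/errors.js (sketch)
module.exports = {
  schedule: '*/30 * * * * *',            // every 30 seconds, cron notation with seconds
  query: {
    index: 'logstash-*',                 // all Logstash indices
    body: {
      query: {
        bool: {
          must: [
            { match: { level: 'ERROR' } },                   // only error entries
            { range: { '@timestamp': { gte: 'now-30s' } } }  // from the last 30 seconds
          ]
        }
      }
    }
  },
  predicate: (result) => result.hits.total > 0,  // only act when there are hits
  action: require('./slackAction'),              // described in the next paragraph
  errorHandler: (err) => console.error(err)
};
```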

Slack alert action

To send a message to a Slack channel or a user, you first need to set up an incoming webhook integration. As a result, you'll get a URL which you should put in the Watcher's .env file:
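For example (the variable name is an assumption; the URL is the one Slack generates for your webhook):

```
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/XXXX/YYYY/ZZZZ
```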

Ok, the last part. Here, we send a POST request to Slack's API containing JSON with a formatted log alert. There's no magic here: we just map the Elasticsearch hits to message attachments and add some colors to make it fancier. In the title you can find the class where the error occurred and the timestamp. See how you can format your messages.
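A sketch of that action; axios is an assumed HTTP client, and the hit fields match the Grok sketch above:

```javascript
// slackAction.js (sketch)
const axios = require('axios');

module.exports = (result) => {
  // one attachment per error hit returned by the query
  const attachments = result.hits.hits.map((hit) => ({
    color: 'danger',                                            // red bar for errors
    title: `${hit._source.class} at ${hit._source.timestamp}`,  // error class + timestamp
    text: hit._source.log_message                               // the log message itself
  }));

  // POST the payload to the incoming webhook configured in .env
  return axios.post(process.env.SLACK_WEBHOOK_URL, {
    text: 'Errors detected in the application logs',
    attachments
  });
};
```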

Dockerization

Finally, we’ll dockerize our Watcher, so here’s the code of Dockerfile:
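A typical Node.js image build is enough; the sketch below assumes a node:8-alpine base image:

```dockerfile
FROM node:8-alpine

WORKDIR /usr/src/app

# install dependencies first so they are cached between builds
COPY package*.json ./
RUN npm install --production

# copy the watcher sources
COPY . .

CMD ["node", "index.js"]
```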

For development purposes of the Watcher, that's enough: ELK keeps running and you can restart the Watcher server after each change. For production, it's better to run the Watcher alongside ELK. To run the prod version of the whole infrastructure, let's add a Watcher service to a copy of the docker-compose file with a "-prod" suffix (docker-compose-prod.yml).
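A sketch of that service entry (the build path and network name are assumptions based on the docker-elk layout):

```yaml
# excerpt from docker-compose-prod.yml (sketch)
watcher:
  build: ./watcher
  env_file: .env
  depends_on:
    - elasticsearch
  networks:
    - elk
```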

Then, we can start up our beautiful log monitor with one docker-compose command.
docker-compose -f docker-compose-prod.yml up -d
In the final repository version, you can just execute the make run-all command to start the prod version. Check out the Readme.md; it describes all the needed steps.

But…

This solution is the simplest one. I think the next step would be to aggregate errors. In the current version, you'll get errors one by one in your Slack channel. This is fine for dev/stage environments, because they're used by few users. Once you're in production, you'll need to tweak the Elasticsearch query, otherwise you'll be flooded with messages. I'll leave that to you as homework 😉

You need to weigh the pros and cons of setting all of this up by yourself. There are good tools on the market, such as Rollbar or Sentry, so you need to choose between the "free" option (well, almost free, because some work needs to be done) and the paid one.
I hope you found this article helpful.

Categories
Design Software

7 Tips for Good UI Design

A good UI design increases conversion rates. That's simple. But how does human-oriented design play into it? The mobile revolution, like the web revolution before it, has constantly forced us to restructure and reconsider what simplicity means for human-centered design and for practically every experience we create. In the UI design community, this has grown into a trend called "human-centered design" that helps you focus and lead every design project in a particular way.

There are numerous human experiences associated with using an application. They allow you to take different perspectives on a project and see why one solution is better than another. The best ways of solving problems should be based on human needs, and ideally you should fulfil those needs in a way that combines best practices with technological innovation.

Our team at ESPEO works daily to improve UX at the user interface level, to understand how to integrate each project with the UX toolkit, and to make sure we create our products in a comprehensive way.


Tip one: Let yourself be direct – and stay honest

The rule of thumb is to be direct instead of indecisive. This way, you convey your message with certainty and confidence, and your design presents ideas or products determined to contribute to the user's success. Leaders don't end their messages with question marks or hedge with expressions such as "perhaps", "maybe", "interested?" and "want to?". Your UI can be a bit more authoritative.

Honesty with the user pays off, so across the whole narrative consider social proof instead of just talking about qualities. Nothing will increase your conversion as reliably as being sincere with users. Showing endorsements alongside your offering significantly improves the performance of a call to action, so point towards proof of customer satisfaction via references or testimonials. If the numbers are large, consider visualizing the data, as it validates your point in a clear way.
 

Tip two: Conversion – ease of use

A good way of making a page easy to use is simply trying a one-column layout instead of a multicolumn one. A one-column layout gives clarity and a more consistent narrative: users follow an easy, predictable path, whereas a multi-column approach pulls their attention towards other features, so there's a greater risk of distraction from the core purpose of the page. Lead people with plain content and a smooth experience and, at the end, a visible call to action.

Features of your design style such as color, depth, and contrast can be deployed as reliable tools for communicating with the user. With these basic elements, users are able to follow the fundamental language of navigating any interface: what can I do, and how can I do it? This needs a consistent road map covering the styles of clickable actions (links, buttons), selected elements (chosen items), and plain text. Each style should be easy to distinguish from the others and applied consistently across the interface. This rule, applied properly, keeps users happy and eager to interact further.

Looking for more Conversion tips? Check out our article: How to optimize your ecommerce page for better conversion.


Tip three: Uniting similar functions (merging!)

With every design, it's easy to repeatedly create similar sections, elements and features which all serve the same purpose. It's the same force that governs the whole universe: entropy, the disorder and chaos of matter, increases with the volume of information. Uniting similar functions instead of fragmenting the UI clears out design complications: remove duplicated functions labelled in various ways so they don't clutter the content. In a few words, the more UI fragmentation there is, the steeper the learning curve your users have to climb. To avoid UI refactoring later, the key skill is to merge similar functions together.

Repeating your call to action is a different case. Repeating the CTA is a strategy best suited to longer (or wider) content, and it can also be repeated across numerous pages. Naturally, users will be frustrated by one item persistently displayed several times in a single view. The wise choice amid all the noise is to have one soft actionable element at the top and a second, prominent one at the bottom. Users reaching the bottom of a page tend to pause and consider their next step, so that spot has great potential for repeating a solid offer or closing a deal.

Tip four: Expose the options (have a clear view)

Pull-down menus tend to hide the very set of actions that are the proper goal of a page or app, and they introduce the unnecessary effort of searching and discovering functions. Those actions should be centralized, almost like the app's 'spine', so that the user's path keeps all the goals in the right order. Clarity and space for content are needed to bring attention to the right places, and possible actions should be visible upfront in an obvious way. On the other hand, options that don't need to be predicted or learned (such as sets of date and time references) can be placed in menus.

Maintaining the focus on goals is a demanding task for designers. It's much easier to drown the user in links: it's easy to create a page with lots of links going left and right in the hope of meeting as many customer needs as possible. If, however, the designer is a true artist, he or she is capable of creating a page with a large volume of content built towards a specific call to action at the bottom. Caution is required at every step, because any link above the primary CTA risks pulling the user's attention away from the established purpose; users then simply veer in directions they weren't intended to follow. Lowering the number of distractions, such as the number of links on a page, and toning down discovery-style pages (heavier on links) in favour of tunnel-style pages (fewer links, higher conversions) helps accomplish this.
 

Tip five: Showing state and benefit (give the users the tips and feedback they need!)

A nice way of building an understanding between users and designers is a UI that shows the different states of displayed elements (read or unread emails, sent or packed orders). The customer feels secure knowing an item's condition, as long as the message about its state provides the kind of feedback that is expected. It also brings satisfaction, as users can tell whether their actions were carried out successfully or whether it's time for the next step.

And here is another piece of truth: buttons which reinforce a benefit might lead to higher conversions. Users simply know what they want. The designer's job is to know the action from which the user benefits and put it right on the CTA. It's all about looking at the 'transaction' from the right perspective. The benefit can also be placed next to the CTA, as a reminder of why the user is about to take that action.


Tip six: Gradual engagement

To build interest and gradual engagement, some subtlety is required. Instead of a pushy signup form, offer some ease of use first. Expecting clients to sign up immediately may scare them off. A page or app can be an opportunity to show them the product's capabilities, or a chance to perform a task through which something of value is demonstrated. This builds excitement and curiosity while initializing the interaction.

These actions lead to a personalization of the experience. In the end, the user interacts with your product and sees its inner value. Gradual engagement is really a way of building an understanding of the product or service without conscious acknowledgment. It is a great way to postpone the signup process and prolong the user's attention by letting them use and customize the application.


Tip seven: Recognition to recall

This principle of design is grounded in psychology. Psychologists suggest that when people have a choice, it's easier for them to identify something already known and immediately recognizable. To make an experience easier, rely on recognition: hints which help us draw on past experience. Recall, by contrast, requires us to probe the depths of memory on our own, or with some guidance. It's similar to multiple-choice questions on exams, which can be faster to complete than open-ended ones. The challenge with recognition is to give users recognizable items they have been exposed to before, instead of expecting them to come up with an idea of their own.

Looking for more tips for UX and good UI design? Read about UX Design Trends to watch in 2018!

Categories
Blockchain Financial Services Software

What are DApps about? Decentralized applications explained

In a world where the terms "blockchain" and "start an ICO" are a staple of online news, it's not surprising when something new emerges in that field. Enter a new model for building successful and massively scalable applications. Thanks to blockchain technology (and the massive interest surrounding it), we now have a new type of application called a "decentralized application" (DApp). These are sometimes referred to as blockchain applications. So, what are DApps exactly?

What are DApps about?

There are many different explanations as to what DApps are. The term "decentralized application" isn't strictly related to blockchain; however, DApps started to be widely recognized in recent years precisely because of blockchain. Generally, DApps are applications that run on some kind of P2P network, on multiple computers rather than a single one. Think of BitTorrent or Tor as decentralized applications.
For an application to be considered a blockchain DApp that uses tokens, it must meet the following criteria:

  1. The application must be completely open-source. It must operate autonomously, and with no entity controlling the majority of its tokens. The application may adapt its protocol in response to proposed improvements and market feedback. However, all changes must be decided by consensus of its users.
  2. The application’s data and records of operation must be cryptographically stored in a public, decentralized blockchain in order to avoid any central points of failure.
  3. The application must use a cryptographic token (Bitcoin or a token native to its system) which is necessary to access the application. Any contribution of value (from miners/farmers) should be rewarded with the application's tokens.
  4. The application must generate tokens and have an inbuilt consensus mechanism (Bitcoin uses the Proof of Work Algorithm).

So, to be clear: in this article, whenever I’m mentioning “DApp/DApps”, I’m only referring to the ones running on the blockchain (so, blockchain applications) and the ones that use tokens.

Blockchain applications? Three types of DApps

The Ethereum white paper distinguishes between 3 types of DApps.

  • The first category is financial applications that run on the blockchain.

These provide users with a way of managing their own money. Bitcoin is a DApp in this first category: it provides a monetary system that is completely decentralized and distributed. There is no central authority that controls the money; all the power to manage it resides with the users and the protocol. Users are the owners of their money and can do whatever they want with it. Other examples of first-category DApps are the various altcoins.

  • The second category are semi-financial applications which mix money with information outside the blockchain.

For example, insurance apps that refund the price of a plane ticket if the plane is delayed (Fizzy). An ICO itself also belongs to this second category of DApps: it mixes a token sale with crowdsale functionality around the idea for which the ICO is held.

  • Finally, applications that fall within the third category. These DApps utilize all the features that decentralized and distributed systems have to offer.

These don't have to be financial at all. Good examples are online voting or decentralized governance (e.g. a DAO, a decentralized autonomous organization). These types of blockchain applications are the most popular ones. Dubai is considering using blockchain to build the first blockchain-run government. Another possibility in this category is energy distribution apps: the basic idea is that if I have solar panels on my home and they produce more energy than I use, I can sell the excess power directly to my neighbor.
The classification into three types of DApps is based on the Ethereum white paper. There's another classification of DApps out there; you can find it here under "Classification of DApps".

The difference between DApps and smart contracts

Now that you know what DApps are, you may ask yourself another question: how do the smart contracts you've heard about fit into all of this?
Smart contracts are programs that are executed and run on the blockchain. A smart contract defines conditions to which all parties using the contract agree; if the required conditions are met, certain actions are executed. When I buy tokens in a new ICO, the smart contract has rules written into it, for example that if the ICO doesn't raise enough money, all of the money I have invested will be returned to me, or that I cannot transfer the new tokens until the ICO has successfully concluded.
Ethereum-based DApps are blockchain applications in which the smart contract is what connects them to the blockchain. The easiest way to understand this is to compare a DApp with a web app.

Frontend and backend

Traditional web applications have a frontend and a backend. The frontend is everything the user sees when entering a webpage: HTML, CSS and JS are used to display it and to connect to the backend.
The backend is where all the mechanics of the website are implemented, for example connecting to a database and serving the client information about their profile. Java, Python or Node.js are typically used on the backend, combined with an SQL database.
DApps are similar to web apps in that they may have a frontend (a GUI in general), but what differentiates them from web apps is the backend. Instead of a Java API and a traditional database, there is a smart contract that connects to the blockchain and contains all of the logic for the application.
As opposed to traditional, centralized applications, where the backend code runs on centralized servers, DApps have their backend code running on a blockchain network. Each operation needs to meet the consensus of the network and is computed on every node. So, a decentralized application is the whole package, from backend to frontend; the smart contract is only the backend part of the DApp.

What are DApps – best examples

In this section, I will present some interesting projects and blockchain applications built on the Ethereum platform, just to show how varied DApps can be.

EtherTweet

As the EtherTweet website puts it, the service provides basic Twitter-like functionality. Since it's deployed on the Ethereum blockchain, no central authority controls what people publish. As a result, users retain full control over their tweets.

Etheria

Etheria is a game built on the Ethereum platform. It's similar to Minecraft, but what sets it apart is that you can purchase land tiles for ether. The user interacts with the game by sending commands, which go to the smart contract that controls the game's behaviour. It's completely decentralized and everyone can interact with it.

Gnosis

Gnosis is a prediction market platform which allows its users to vote on different predictions. The platform offers users a place to vote on predictions about various topics, from the weather to election results.

FirstBlood

A platform which lets users/players challenge each other in DOTA 2 and win rewards based on the bet or the tournament. FirstBlood offers different tournaments, listed on the website, in which players can participate and win rewards; the platform awards every winner with tokens. As it's based on the blockchain, every match's history is written and stored on the blockchain.

The future of DApps

Blockchain is still a young technology. The Bitcoin whitepaper was released in 2008, and Satoshi Nakamoto mined the first Bitcoin block in 2009. People knew that Bitcoin was revolutionary, but it took a few years for developers to truly figure out why: they later understood that Bitcoin was built upon a genuinely revolutionary technology, the blockchain.
We're still in the process of understanding blockchain, what we can do with it and what its real potential is. Every now and then, more interesting uses of blockchain emerge from the community, often in the form of an interesting ICO for a new DApp. I'd never have thought of using blockchain as a means of distributing energy or for autonomous government projects.

What are DApps compared to the tech of the 90s?

I often compare the current state of blockchain and DApps to the early life of the web. How we used the WWW back in the 90s and 00s is very different from how we use the web now, and something similar will happen with blockchain and DApps. It seems we're still trying to find the real potential of this technology. In my opinion, many more interesting blockchain applications are on their way, and I'm sure we'll encounter uses we've never thought about. I can't wait to see what the future brings. And if you're interested in building your own distributed application or blockchain product, click here – we can help.