Categories
Software Technology

Build your own error monitoring tool

In this tutorial, I’ll describe how you can create your own error watcher. The final version of this project can be found on GitHub. I’ll later refer to the code from this repository, so you can check it out.

Why use even a simple error monitoring tool?

If your application is in production, or will be in the near future, you should look for some kind of error monitoring tool, otherwise you’ll have a really big problem. And as a developer, let me tell you that looking for errors manually in your production environment isn’t cool.

Find the cause of the problem before your customers notice

For example, let’s say your application does some background processing which isn’t visible at first glance to the end user. The process fails at one of the background steps. If you have an error monitoring tool, you’ll have a chance to fix the bug before your customers notice it.

Reduce the time between finding a bug and fixing it

Without a monitoring tool, when a bug is reported your team would probably start looking through logs manually. This significantly extends the fix time. Now, imagine that your team gets a notification right away when the error appears – you can now skip that time-consuming part.

Monitoring infrastructure

In this tutorial, we’ll use the Elasticsearch + Logstash + Kibana (ELK) stack to monitor our application. ELK is free under the Open Source and Basic subscriptions. If you want premium functionalities, e.g. alerting, security or machine learning, you’ll need to pay.
Unfortunately, alerting isn’t free. If you’d like to send an alert message to a Slack channel or email someone about a critical error, you’ll need the “semi-paid” X-Pack; only some of its parts are free in the Basic subscription.
However, we can implement our own watcher to bypass Elastic’s high costs. I’ve got good news for you: I’ve already implemented it. We’ll get back to it later.
The image below shows what our infrastructure is going to look like.
[Image: monitoring infrastructure]
Logstash reads the logs, extracts the information we want, and then sends transformed data to Elasticsearch.
We will query Elasticsearch for recent logs with the ERROR log level using our custom Node.js Elasticsearch Watcher. The Watcher will send alert messages to a Slack channel when the query returns results. The query will be executed every 30s.
Kibana is optional here; however, it’s bundled in the repo, so if you’d like to analyze application logs in some fancy way, here you go. I won’t describe it in this article, so visit the Kibana site to see what you can do with it.

Dockerized ELK stack

Setting up Elasticsearch, Logstash and Kibana manually is quite tedious, so we’ll use an already-Dockerized version. To be more precise, we’ll use the Docker ELK repository, which contains what we need. We’ll tweak this repo to meet our requirements, so either clone it and follow the article, or browse the final repo.
Our requirements:

  • Reading logs from files
  • Parsing custom Java logs
  • Parsing custom timestamp

We’re using Logback in our project and we have a custom log format. Below, you can see the Logback appender configuration:
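The original snippet isn’t reproduced here, but a minimal sketch might look like this (the file path and the exact pattern are assumptions, chosen to match the 20180103 00:01:00.518 timestamp format used later in the article):

```xml
<!-- Hypothetical logback.xml sketch; paths and pattern are assumptions -->
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/app/application.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/var/log/app/application.%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder>
      <!-- Compact custom timestamp, e.g. "20180103 00:01:00.518" -->
      <pattern>%d{yyyyMMdd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```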

Here are the sample logs:
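A hedged example of what such logs could look like (class names and messages are made up; what matters is the custom timestamp and the multiline stack trace):

```
20180103 00:01:00.518 INFO  [main] com.example.app.Application - Started Application in 6.234 seconds
20180103 00:01:30.774 ERROR [pool-2-thread-1] com.example.app.job.ReportJob - Report generation failed
java.lang.NullPointerException: null
	at com.example.app.job.ReportJob.generate(ReportJob.java:42)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
```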

Firstly, we need to update the docker-compose.yml file to mount our logs directory and custom patterns for Logstash. The Logstash service needs two extra lines in its volumes section:
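A sketch of the relevant part of the Logstash service; the container paths follow the docker-elk layout, and the last two lines are the additions (exact paths are assumptions):

```yaml
# docker-compose.yml (excerpt, hypothetical paths)
logstash:
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    # the two extra lines: custom Grok patterns and the logs directory
    - ./logstash/patterns:/usr/share/logstash/patterns:ro
    - ${LOGS_DIR}:/var/log/app:ro
```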

The first line binds our patterns directory. The second attaches the logs directory to the container. The $LOGS_DIR variable will later be added to an .env file, which gives us the ability to change the logs directory without modifying the repository. That’s all we need.
If you’d like to persist data between container restarts, you can bind the Elasticsearch and Logstash data directories to directories outside Docker.
Here’s the .env file. Replace the logs dir with your own path.
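A minimal example, with a made-up path:

```
# .env — adjust LOGS_DIR to point at your application's log directory
LOGS_DIR=/home/me/apps/my-app/logs
```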

How to configure Logstash to consume app logs

Logstash’s pipeline configuration can be divided into three sections:

  • Input – describes the sources Logstash will consume.
  • Filter – processes logs, e.g. data extraction and transformation.
  • Output – sends data to external services.
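Putting the three sections together, a sketch of a pipeline configuration for our use case might look like this (paths, field names and the CUSTOM_TIMESTAMP pattern name are assumptions, kept consistent with the rest of the article):

```
# logstash/pipeline/logstash.conf (hypothetical sketch)
input {
  file {
    path => "/var/log/app/*.log"
    start_position => "beginning"
    codec => multiline {
      patterns_dir => ["/usr/share/logstash/patterns"]
      pattern => "^%{CUSTOM_TIMESTAMP}"
      negate => true
      what => "previous"   # lines without a timestamp belong to the previous entry
    }
  }
}

filter {
  grok {
    patterns_dir => ["/usr/share/logstash/patterns"]
    match => { "message" => "%{CUSTOM_TIMESTAMP:timestamp} %{LOGLEVEL:level}%{SPACE}\[%{DATA:thread}\] %{JAVACLASS:class} - %{GREEDYDATA:message}" }
    overwrite => ["message"]
  }
  date {
    match => ["timestamp", "yyyyMMdd HH:mm:ss.SSS"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```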


The code above is the full Logstash configuration.
The input section is quite simple. We define basic input properties, such as the logs path and the position to start reading from when Logstash starts up. The most interesting part is the codec, where we configure the handling of multiline Java exceptions: each line beginning with (in our example) the custom timestamp starts a new log entry, and all text up to the next timestamp is treated as one entry (one document in Elasticsearch).
I’ve included a patterns directory, so we can use our custom pattern in the multiline regex. This isn’t required; you can use a plain regex here.
The filter section is the most important part of a Logstash configuration. This is the place where the magic happens. Elastic defined plenty of useful plugins which we can use to transform log events. One of them is Grok, which we’ll use in the monitoring tool.
Grok parses and structures text, so you can grab all fields from your logs, e.g. timestamp, log level, etc. It works like regex. Just define your log pattern with corresponding field names and Grok will find matching values in the event. You can use default Grok patterns or create your own to match custom values.
In our example, we’ll use a custom timestamp, so we need to define a custom pattern. Grok allows us to use custom patterns ad hoc in message pattern. However, we want to use it more than once, so we defined a patterns file which we can include in places where we need the pattern e.g. multiline codec and Grok. If you use a standard timestamp format, just use the default one.
Here’s the pattern file:
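The file could look like this minimal sketch (the pattern name CUSTOM_TIMESTAMP is an assumption used throughout these examples):

```
# logstash/patterns/custom — matches e.g. "20180103 00:01:00.518"
CUSTOM_TIMESTAMP %{YEAR}%{MONTHNUM2}%{MONTHDAY} %{TIME}
```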

The file structure is the same as in other Grok pattern files. The first word in each line is the pattern name, and the rest is the pattern itself. You can use default patterns while defining your own, or fall back to a regex if none of the defaults matches your needs. In our case, the log timestamp looks like 20180103 00:01:00.518, so we’re able to compose it from already-defined patterns.
In the output section, we define that transformed logs are sent to Elasticsearch.

Docker file permissions

One thing that took me some time to figure out was configuring file permissions for the logs accessed by the dockerized Logstash.
If your logs are created by a user with id 1000, you won’t notice the problem and you can skip this step. However, you’re most likely dealing with the opposite case. For example, you run your application on Tomcat; the logs are created by the tomcat user and then bound as a volume into the Logstash container. The tomcat user isn’t the first user (id 1000) in the system, so the user ids won’t match inside the container. The default Logstash image runs as user 1000, so it can only read files that user 1000 has permission for; it doesn’t have access to other users’ files.

The trick here is to switch to root in the Dockerfile, create a new group with an id matching the log creator’s group id, and add the logstash user to it (on the host, you can add a user to the log-owning group with sudo usermod -a -G <group> <user>). After that, we switch back to the logstash user to keep the container secure.
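As a sketch, assuming the log-owning group on the host has gid 1001 (check yours with stat -c '%g' <logfile>):

```dockerfile
# Hypothetical addition to the Logstash Dockerfile; 1001 is an assumption
USER root
RUN groupadd -g 1001 applogs && usermod -a -G applogs logstash
# switch back to the unprivileged user
USER logstash
```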

Filebeats – log agent

The implementation described so far works for one application. We could scale it to support many applications, but it wouldn’t be easy or fun: Logstash reads lines one by one and sends them, after transformation, to Elasticsearch, so it needs direct access to every log file.
I’ve got a better solution for you. Elastic created a family of software called Beats. One of them is Filebeat, which kills the pain of log access. Filebeat is simply a log shipper: it takes logs from the source and transfers them to Logstash or directly to Elasticsearch. Its main job is forwarding your data, although it can also do some of the things Logstash does, e.g. transforming logs, dropping unnecessary lines, etc. Still, Logstash can do more.
If you have more than one application, or more than one instance of an application, then Filebeat is for you. Filebeat sends logs to Logstash on the port defined in the configuration. You just define where it should look for logs, and define the listening part on the Logstash side.
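A minimal sketch of the Filebeat side, with made-up paths and the conventional Beats port 5044:

```yaml
# filebeat.yml (hypothetical sketch): ship logs to Logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
output.logstash:
  hosts: ["logstash:5044"]
```

The Logstash pipeline then needs a matching listener, e.g. a beats { port => 5044 } block in its input section instead of the file input.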
The file permission problem will of course still be present if you run the dockerized version of Filebeat, but that’s the cost of containerization.
I suggest using Filebeat for production purposes. You’ll then be able to deploy ELK on a server that isn’t the actual production server; without Filebeat (with Logstash only), Logstash needs to run on the same machine where the logs reside.

Sending Slack alert message

Elastic delivers the Watcher functionality within X-Pack, bundled into Elasticsearch. What’s more, there is an already-defined Slack action which can send custom messages to your Slack channel, among other actions. However, as stated before, it’s not free: the Watcher is available in the Gold subscription. If that’s ok for you, you can skip the rest of the article. If not, let’s go further.
When I noticed that the Elastic Watcher is a paid option, I thought that I could do my own watcher which would send alert messages to Slack. It’s just a scheduled job which checks if there’s something to send, so it shouldn’t be hard, right?
The Watcher
I created an npm package called Elasticsearch Node.js Watcher, which does the basics of what X-Pack’s Watcher does, namely watching and executing actions when specified conditions are satisfied. I chose Node.js for the Watcher because it’s the easiest and fastest option for a small app that does everything I need.
This library takes two arguments when creating a new instance of a watcher:

  • Connection configuration – it defines connection parameters to Elasticsearch. Read more about it in the documentation.
  • The Watcher configuration – describes when and what to do if there’s a hit from Elasticsearch. It contains five fields (one is optional):
    • Schedule – the Watcher uses it to schedule the cron job.
    • Query – the query to be executed in Elasticsearch, the result of which is forwarded to the predicate and the action.
    • Predicate – tells whether the action should be executed.
    • Action – the task executed when the predicate is satisfied.
    • Error handler (optional) – the task executed when an error appears.

We need to create a server which will start our Watcher, so let’s create index.js with an Express server. To make the environment variables defined in the .env file visible across the Watcher’s files, let’s also include the dotenv module.

The meat. In our configuration, we tell the Watcher to query Elasticsearch every 30 seconds, using cron notation. In the query field, we define the index to be searched. By default, Logstash creates indices named logstash-<date>, so we set it to logstash-* to query all existing indices.

To find logs, we use the Query DSL in the query field. In the example, we’re looking for entries with the ERROR log level which appeared in the last 30 seconds. In the predicate field, we define the condition that the number of hits is greater than 0, since we don’t want to spam Slack with empty messages. The action field references the Slack action described in the next paragraph.
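A sketch of such a watcher configuration as a plain object. The field names follow the list earlier in the article; the level field and the sendToSlack placeholder are assumptions:

```javascript
// Sketch of the Watcher configuration; field names follow the article's list.
const watcherConfig = {
  // Run every 30 seconds (cron notation with a seconds field).
  schedule: '*/30 * * * * *',
  // Search all logstash-<date> indices for ERROR entries from the last 30s.
  query: {
    index: 'logstash-*',
    body: {
      query: {
        bool: {
          must: [
            { match: { level: 'ERROR' } },
            { range: { '@timestamp': { gte: 'now-30s' } } },
          ],
        },
      },
    },
  },
  // Only fire the action when the query actually returned hits.
  predicate: (response) => response.hits.total > 0,
  // Forward the hits to the Slack action.
  action: (response) => sendToSlack(response.hits.hits),
};

// Placeholder so the sketch is self-contained; the real implementation
// posts the hits to the Slack webhook (see the Slack action section).
function sendToSlack(hits) {
  return hits.length;
}
```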

Slack alert action

To send a message to a Slack channel or user, we first need to set up an incoming webhook integration. As a result, you’ll get a URL which you should put in the Watcher’s .env file:
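For example (the URL below is a placeholder in Slack’s webhook format):

```
# added to the Watcher's .env file; replace with the URL Slack generated
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```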

Ok, the last part. Here, we’re sending a POST request to Slack’s API containing JSON with a formatted log alert. There’s no magic here: we’re just mapping Elasticsearch hits to message attachments and adding some colors to make it fancier. In the title, you can find the class of the error and the timestamp. See how you can format your messages.

Dockerization

Finally, we’ll dockerize our Watcher, so here’s the code of Dockerfile:
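A minimal sketch (the Node base image version is an assumption):

```dockerfile
# Hypothetical Dockerfile for the Watcher
FROM node:8-alpine
WORKDIR /usr/src/watcher
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```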

For development purposes of the Watcher, that’s enough: ELK keeps running and you can restart the Watcher server after each change. For production, it’s better to run the Watcher alongside ELK. To run the production version of the whole infrastructure, let’s add a Watcher service to a copy of the docker-compose file with a “-prod” suffix.
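A sketch of such a service entry, assuming the Watcher code lives in a ./watcher directory and the stack uses docker-elk’s elk network:

```yaml
# docker-compose-prod.yml (excerpt, hypothetical paths and network name)
watcher:
  build: ./watcher
  env_file: .env
  networks:
    - elk
  depends_on:
    - elasticsearch
```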

Then, we can start up our beautiful log monitor with one docker-compose command.
docker-compose -f docker-compose-prod.yml up -d
In the final version of the repository, you can just execute the make run-all command to start the production version. Check out the README.md; it describes all the needed steps.

But…

This solution is the simplest one. I think the next step would be to aggregate errors. In the current version, you’ll get errors one by one in your Slack channel. This is fine for dev/stage environments, because they’re used by few users. Once you’re in production, you’ll need to tweak the Elasticsearch query, otherwise you’ll be flooded with messages. I’ll leave that to you as homework 😉

You need to analyze the pros and cons of setting all of this up by yourself. On the market, we have good tools such as Rollbar or Sentry, so you need to choose between the “free” option (well, almost free, because some work needs to be done) and the paid one.
I hope you found this article helpful.

Categories
Design Software

7 Tips for Good UI Design

A good UI design increases conversion rates. That’s simple. But how does human-oriented design play into it? The mobile revolution, as well as the web revolution before it, has constantly forced us to keep restructuring and reconsidering what simplicity means for human-centered design and practically every experience we create. In the UI community, we’ve embraced a trend called “human-centered” design that helps you focus and lead every design project in a particular way.

There are numerous human experiences associated with using an application. They allow you to take a different perspective towards various projects to see how one solution is better than the other. The best ways of solving problems should be based on human needs. Ideally, you should fulfil those needs in a way that involves best practices and technological innovations.

Our team at ESPEO is working daily to accelerate UX performance at the user interface level, understand how to integrate the project with the UX toolbar and make sure that we create our products in a comprehensive way.


Tip one: Let yourself be direct – and stay honest

The rule of thumb is to be direct instead of indecisive. This way, you convey your message with certainty and confidence. Your design presents ideas or products determined to contribute to a user’s success. Leaders don’t end their messages with question marks or hedging expressions such as “perhaps”, “maybe”, “interested?” and “want to?”. Your UI can be a bit more authoritative.

Honesty with a user pays off, so throughout the whole narrative, consider social proof instead of just talking about qualities. Nothing will increase your conversion as much as being sincere with your users. Showing endorsements alongside your offering significantly boosts the performance of a call to action. Therefore, point towards proof of customer satisfaction via references or testimonials. If the numbers are large, consider visualizing the data, as it validates your point in a clear way.

Tip two: Conversion – ease of use

A good way of achieving that is simply trying out a one-column layout instead of multicolumn. A one-column layout gives clarity and a more consistent narrative. Users have an easy path to move into a predictable scenario, whereas a multi-column approach moves the users’ attention onto other features. Therefore, there’s a great risk of being distracted from the core purpose of a page. Lead people with plain content and a smooth experience, and, at the end, a visible call to action.

Features of your own design style such as color, depth, and contrast may be deployed as a reliable tool to build communication with a user. With basic elements of specific design, users may be able to follow the fundamental language of navigating in any interface: what can I do, how can I do it? This needs a specific road map containing styles of clickable actions (links, buttons), selected elements (chosen items), and views of plain text. Each style should be easy to differentiate from the other in order to be applied consistently across an interface. This rule, if applied properly, keeps users happy and eager to interact further.

Looking for more Conversion tips? Check out our article: How to optimize your ecommerce page for better conversion.


Tip three: Uniting similar functions (merging!)

With every design, it’s easy to repeatedly create similar sections, elements and features which all have the same purpose. It’s the same force as in the whole universe: entropy – the disorder and chaos of matter – increases with the volume of information. Uniting similar functions instead of fragmenting the UI will clear out some design complications: remove duplicated functions labelled in various ways, so they don’t clutter the content. In a few words, the more UI fragmentation there is, the steeper the learning curve your users have to climb. To prevent UI refactoring, the key competence is merging similar functions together.

Repeating your call to action is a completely different case. Repeating the CTA is a strategy more applicable to longer (or wider) content; it can also be repeated across numerous pages. Naturally, users will be frustrated by one item persistently displayed several times in one view. The wise choice in all the noise is to have one soft actionable element at the top and a second, prominent one at the bottom. This is because users reaching the bottom of a page tend to pause and consider their next step. So, that position has great potential to repeat a solid offer or close a deal.

Tip four: Expose the options (have a clear view)

Pull-down menus hide the set of actions which are the proper goal of every page or app, and they introduce the unnecessary effort of searching and discovering functions. All those actions should be centralized, almost like an app’s ‘spine’, so the scenario of the user’s path keeps all the goals in the right order. Clarity and space for content are needed to bring attention to the right places. Possible actions should be visible upfront in an obvious way. On the other hand, options that don’t need prediction or don’t require learning (such as sets of date and time references) can be placed in menus.

Maintaining the focus on goals is a demanding task for designers. It’s much easier to drown the user in links. It’s easy to create a page with lots of links going left and right in the hope of meeting as many customer needs as possible. If, however, the designer is a true artist, he or she is capable of creating a page with a wide content volume built towards a specific call to action at the bottom. Caution is required at every step, because any link above the primary CTA increases the danger of taking the users’ attention away from established purposes. Users then simply veer into directions they weren’t primarily intended to follow. Lowering the number of distractions, such as the number of links on a page, can be achieved by toning down discovery-style pages (a bit heavier on the links) in favour of tunnel-style pages (with fewer links and higher conversions).

Tip five: Showing state and benefit (give the users the tips and feedback they need!)

A nice way of building an understanding between users and designers is a UI with different states of displayed elements (read or unread emails, sent or packed orders). The customer feels secure with the knowledge of an item’s condition. The message about its state must provide the kind of feedback which is expected. It also brings satisfaction for users, as they can establish whether their actions were successfully carried out, or whether it’s time for the next step.

And here we have another piece of truth: buttons which reinforce a benefit might lead to higher conversions. Users simply know what they want. The designer is supposed to know the action from which the user benefits and put it right on the CTA. It’s all about looking at the ‘transaction’ from the right perspective. The benefit can also be placed next to the CTA, as a reminder why they are about to take that action.


Tip six: Gradual engagement

To build interest and gradual engagement, some subtlety is required. Instead of a pushy signup form, there must be some ease of use. The expectation that clients sign up immediately may scare them off. A page or app can be an opportunity to show them the product’s capabilities or a chance for them to perform a task through which something of value is demonstrated. It can build excitement and curiosity alongside initializing interactions.

These actions lead to a personalization of experiences. At the end, the user interacts with your product and sees its internal values. Gradual engagement is really a way of building an understanding of the product or service without conscious acknowledgment. It is a great way to postpone the signup process and prolong the user’s attention through using and customizing the application.


Tip seven: Recognition to recall

This principle of design is grounded in psychology. Psychologists suggest that when people have a choice, it’s easier for them to identify with something already known and immediately recognizable. To make an experience easier, recognition should rely on hints which help us consider our past experience. Recall requires us to probe the depths of memory all on our own, alternatively, it requires some guidance. This is similar to multiple choice questions on exams – they can be faster to complete than open-ended ones. The challenge with recognition is to give users recognizable items which they have been exposed to before, instead of expecting them to have an idea of their own.

Looking for more tips for UX and good UI design? Read about UX Design Trends to watch in 2018!

Categories
Blockchain Financial Services Software

What are DApps about? Decentralized applications explained

In a world where the terms “blockchain” and “start an ICO” are a staple of news in the online press, it’s not surprising when something new emerges in that field. Enter a new model for building successful and massively scalable applications. Thanks to blockchain technology (and the massive interest surrounding it), we now have a new type of application called the “Decentralized Application” (DApp). These are sometimes referred to as blockchain applications. So, what are DApps exactly?

What are DApps about?

There are many different explanations as to what DApps are. The term “decentralized applications” isn’t strictly related to blockchain; however, DApps started to be recognized in recent years precisely because of blockchain. Generally, DApps are applications that run on some kind of P2P network – multiple computers rather than a single one. Think of BitTorrent or TOR as decentralized applications.
For an application to be considered a blockchain DApp that uses tokens, it must meet the following criteria:

  1. The application must be completely open-source. It must operate autonomously, and with no entity controlling the majority of its tokens. The application may adapt its protocol in response to proposed improvements and market feedback. However, all changes must be decided by consensus of its users.
  2. The application’s data and records of operation must be cryptographically stored in a public, decentralized blockchain in order to avoid any central points of failure.
  3. The application must use a cryptographic token (Bitcoin, or a token native to its system) which is necessary to access the application. Any contribution of value from miners or farmers should be rewarded with the application’s tokens.
  4. The application must generate tokens and have an inbuilt consensus mechanism (Bitcoin uses the Proof of Work Algorithm).

So, to be clear: in this article, whenever I’m mentioning “DApp/DApps”, I’m only referring to the ones running on the blockchain (so, blockchain applications) and the ones that use tokens.

Blockchain applications? Three types of DApps

The Ethereum white paper distinguishes between 3 types of DApps.

  • The first category is financial applications that run on the blockchain.

These provide users with a way of managing their own money. Bitcoin is a DApp in this first category: it provides a monetary system that is completely decentralized and distributed. There is no central authority that controls the money; all the power of managing money resides with the users and the protocol. Users are the owners of their money and they can do whatever they want with it. Other examples of DApps from the first category are the various “altcoins”.

  • The second category are semi-financial applications which mix money with information outside the blockchain.

For example, insurance apps that refund the money for a plane ticket if the plane is delayed (Fizzy). An ICO itself also belongs to the second category of DApps: it mixes a token sale with all the crowdsale functionality for the idea the ICO is held for.

  • Finally, applications that fall within the third category. These DApps utilize all the features that decentralized and distributed systems have to offer.

These don’t have to be financial at all. Good examples are online voting or decentralized governance (e.g. a DAO, a decentralized organization). These types of blockchain applications are the most popular ones. Dubai is thinking of using blockchain to build the first blockchain-run government. Another possibility in this category is energy distribution apps. The basic idea is that if I have solar panels on my home, and these solar panels produce more energy than I use, then I can sell the excess power directly to my neighbor.
The classification of the three types of DApps above is based on the Ethereum white paper. There’s another classification of DApps out there; you can find it here, under “Classification of Dapps”.

The difference between DApps and smart contracts

Now that you know what DApps are, you may ask yourself another question. How do these smart contracts you’ve heard about fit into all that?
Smart contracts are programs that are executed and run on the blockchain. A smart contract defines the conditions to which all parties using the contract agree; if the required conditions are met, certain actions are executed. When I buy tokens in a new ICO, the smart contract has rules written into it – for example, that if the ICO doesn’t raise enough money, all of the money I have invested will be returned to me, or that I cannot transfer new tokens until the ICO is successfully concluded.
DApps (Ethereum-based) are blockchain applications where the smart contract is what connects them to the blockchain. The easiest way to understand this is to compare a DApp with a web app.

Frontend and backend

Traditional web applications have a frontend and a backend. The frontend is everything the user sees when entering a webpage; HTML, CSS and JS are used to display it and to connect to the backend.
The backend is where all the mechanics of the website are implemented, for example connecting to a database and serving the client information about their profile. Java, Python or Node.js are used on the backend, combined with an SQL database.
DApps are similar to web apps: they may have a frontend (a GUI in general), but what differentiates them from web apps is the backend. Instead of a Java API and a traditional database, we have a smart contract that connects to the blockchain and contains all of the logic for the application.
As opposed to traditional, centralized applications, where the backend code runs on centralized servers, DApps have their backend code running on a blockchain network. Each operation needs to meet the consensus of the network and is computed on every node of the network. So, decentralized applications consist of the whole package, from backend to frontend. The smart contract is only the backend part of the DApp.

What are DApps – best examples

In this section, I will present some interesting projects and blockchain applications that are built on the Ethereum platform just to show how varied DApps can be.

EtherTweet

From the EtherTweet website: the service provides basic Twitter-like functionality. As it’s deployed on the Ethereum blockchain, no central authority controls what people publish or post. As a result, users have full control over their EtherTweets.

Etheria

Etheria is a game built on the Ethereum platform. It’s similar to Minecraft, but what differentiates it from the rest is that you can purchase all the land tiles for ether. The user interacts with the game by sending commands, which go to the smart contract that controls the game’s behaviour. It’s completely decentralized and everyone can interact with it.

Gnosis

Gnosis is a prediction market platform which allows its users to participate in voting on different predictions. The platform offers users a place where they vote on predictions regarding various topics, from the weather to election results.

FirstBlood

A platform which lets users/players challenge each other in DOTA 2 and win rewards based on the bet or the tournament. FirstBlood offers different tournaments, listed on the website, in which players can participate and win rewards. The platform awards every winner with tokens. As it’s based on the blockchain, every match’s history is written and stored on the blockchain.

The future of DApps

Blockchain is a really young technology. The Bitcoin whitepaper was released in 2008, and Satoshi Nakamoto mined the first Bitcoin block in 2009. People knew that Bitcoin was revolutionary, but it took a few years for developers to truly figure out why. They later understood that Bitcoin was built upon a genuinely revolutionary technology: blockchain.
We’re still in the process of understanding blockchain and what we can do with it – and understanding its real potential. Every now and then, more interesting uses of blockchain emerge from the community in the form of a really interesting ICO for a new DApp. I’d never have thought of using blockchain as a means of distributing energy or for autonomous government projects.

What are DApps compared to the tech of the 90s?

I often compare the current state of blockchain and DApps to the early life of the Web. How we used the WWW back in the 90s and 00s is very different from how we use the web now. Something similar will happen to blockchain and DApps. It seems we’re still trying to find the real potential of this technology. In my opinion, many more interesting blockchain applications are on their way. I’m sure we’ll encounter uses that we’ve never thought about. I can’t wait to see what the future brings. And if you’re interested in building your own distributed application or blockchain product, click here – we can help.

Categories
Blockchain Financial Services

Adapting blockchain for business

The intricacies of blockchain technology can still perplex some people. So, how does a business adapt blockchain to its needs? That’s the question I’d like to answer in this article. I’ll start off with a few typical purposes, go through a popular funding strategy (ICO) and finish with some interesting blockchain-based projects.

The interest in blockchain seems to be constantly on the rise. Startups rise and fall on its hype. Corporations spend huge amounts on blockchain-related R&D. Some projects can potentially be disruptive for current businesses and economies.

What can I do with it?

As you might have heard, blockchain is the technology powering Bitcoin. But it’s so much more than just cryptocurrencies! I’m not going to dive into the minute details of how blockchain works. Instead, I’ll tell you about some of its core strengths.

Blockchains leverage modern cryptography, especially private and public keys. You can think of the public key as your username and the private key as your password. Just like a password, a private key should never be shared. This means it strictly represents a particular digital identity and allows each transaction to be signed. This makes blockchain perfect for representing asset ownership by private key ownership. Assets can be either digital (e.g. Bitcoins) or physical (represented by digital tokens).

What are its strengths?

Think of blockchain as a distributed database. Instead of having a trusted party which controls a single server (as is the case with a traditional database), all the data is present on all nodes and each change is agreed upon by achieving consensus. Strong cryptography ensures that data can’t be tampered with after consensus has been achieved. This accounts for the auditability, immutability and verifiability required in many applications, e.g. storing healthcare records.
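The tamper-evidence property is worth making concrete: each block commits to the hash of its predecessor, so altering any historical record invalidates every later link. Here is a minimal toy sketch of that idea (an illustrative data structure only, not a real blockchain protocol):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Link records into a chain: each block stores its predecessor's hash."""
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    """Recompute every link; any tampering breaks a downstream hash."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob:5", "bob->carol:2"])
assert is_valid(chain)
chain[0]["data"] = "alice->bob:500"   # tamper with an early record
assert not is_valid(chain)            # the chain no longer verifies
```

A real blockchain adds consensus and proof-of-work (or similar) on top, but the hash-linking shown here is what makes retroactive edits detectable.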

A blockchain (e.g. Bitcoin) is usually a digital ledger, storing account balances and transactions. The possibilities don’t end there, however, because with the arrival of smart contracts blockchains became platforms for virtually any distributed application. Such a smart contract can model a DAO (decentralized autonomous organization), a crowdfunding campaign or a very specific set of rules for a particular transaction. This way, blockchain becomes a global platform for computation, while maintaining all the strengths I’ve mentioned.
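To make the smart-contract idea concrete, here is a hedged Python sketch of the rules a crowdfunding contract might encode (the class and its methods are purely illustrative, not Solidity or any real platform's API): pledges are held by the contract until a goal is reached, and backers of a failed campaign can reclaim their funds.

```python
class CrowdfundingContract:
    """Toy model of the rules a crowdfunding smart contract could enforce."""

    def __init__(self, goal: int, deadline: int):
        self.goal = goal            # target amount, in tokens
        self.deadline = deadline    # block number after which funding closes
        self.pledges = {}           # backer -> amount held by the contract

    def pledge(self, backer: str, amount: int, current_block: int) -> None:
        if current_block > self.deadline:
            raise ValueError("funding period is over")
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def total(self) -> int:
        return sum(self.pledges.values())

    def refund(self, backer: str, current_block: int) -> int:
        """After the deadline, backers of a failed campaign get refunds."""
        if current_block <= self.deadline or self.total() >= self.goal:
            raise ValueError("refunds only for failed campaigns")
        return self.pledges.pop(backer, 0)

c = CrowdfundingContract(goal=100, deadline=10)
c.pledge("alice", 60, current_block=5)
assert c.refund("alice", current_block=11) == 60  # goal missed -> refund
```

On a real platform these rules would run on every node, so no single party (not even the campaign owner) could bend them after deployment.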

Initial coin offerings

So, you have an idea of how to incorporate a new cryptocurrency into your project. However, you still lack the funds. This is where an ICO (initial coin offering) comes in.

Initial public offerings serve the purpose of raising capital in exchange for ownership in a company. ICOs are similar in purpose, but instead of selling ownership, a newly launched coin is offered. What the coin represents is solely up to your business idea, e.g. just the coin itself, bought for internal use on the platform (see Humaniq ICO), or a share in organization profits (see vDice ICO).

However, successfully gathering funds through an ICO is not an easy task. You need to invest time and money into a campaign, generate hype and excitement around your product, and provide a clear, informative whitepaper describing both the product and the coin. Because of the many scams and failed ICOs of the past, you need to convince potential buyers of your coin to trust you with their money. Last but not least, you need to allocate funds for development: the ICO implementation itself, plus at least a proof-of-concept implementation of your product, which will strengthen your position considerably.

There’s a lot more to the success of an ICO; make sure you’ve got everything right, because you might only get one shot at it. Your business idea is important, but you need to put a lot of effort in if you want to make your new cryptocurrency truly revolutionary. That includes getting the technical part right.

Interesting projects

According to tokenmarket.net, there are 27 open ICOs at the time of writing this article. This shows how many new blockchain projects are being launched on the current market. Some of them are solving really interesting challenges. I’ll describe a few blockchain projects that caught my eye. I’m not going to dig too deep into details; I just want to show you the wide spectrum of ideas where blockchain is applicable.

Ethereum

Ethereum is one of the most successful blockchain platforms for running smart contracts. It has its own internal currency called “ether”. Each transaction, be it a transfer of ether or an interaction with a smart contract, costs a certain amount of “gas”, which is paid in ether. The initial crowdsale of ether tokens raised around $18m. Ethereum has a vibrant community, and many blockchain projects are introduced with Ethereum as their base.
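The fee arithmetic behind gas is simple: the fee in ether is the gas consumed times the price per unit of gas. A quick sketch (a plain ether transfer costs a fixed 21,000 gas; the 20 gwei gas price here is just an example value, as prices fluctuate with demand):

```python
GWEI_PER_ETHER = 10**9   # 1 ether = 10^9 gwei = 10^18 wei

def tx_fee_ether(gas_used: int, gas_price_gwei: float) -> float:
    """Transaction fee: gas consumed times the price paid per unit of gas."""
    return gas_used * gas_price_gwei / GWEI_PER_ETHER

# A plain ether transfer costs a fixed 21,000 gas.
fee = tx_fee_ether(gas_used=21_000, gas_price_gwei=20)  # 20 gwei: example only
print(fee)  # 0.00042 ether
```

Contract interactions consume more gas than plain transfers, in proportion to the computation and storage they use, which is how Ethereum meters its global computation platform.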

Bancor

The idea behind Bancor is the introduction of “smart tokens”: coins implemented as smart contracts (currently Ethereum-based) backed by reserves of other tokens. This allows an instant exchange of tokens with no spread, at a continuously calculated price based on the token’s supply and demand.
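The continuous pricing can be sketched with the bonding-curve formulas from the Bancor whitepaper, where a fixed reserve weight links the token's price to its supply and reserve balance. The parameter values below are illustrative only, not those of any real token:

```python
def spot_price(supply: float, reserve: float, weight: float) -> float:
    """Instantaneous smart-token price in reserve-token units:
    reserve_balance / (supply * reserve_weight)."""
    return reserve / (supply * weight)

def purchase_return(supply: float, reserve: float,
                    weight: float, deposit: float) -> float:
    """Smart tokens issued for a reserve deposit, per the Bancor formula:
    supply * ((1 + deposit/reserve)^weight - 1)."""
    return supply * ((1 + deposit / reserve) ** weight - 1)

# Example parameters (illustrative): 1M tokens, 250k reserve, 50% weight.
supply, reserve, weight = 1_000_000.0, 250_000.0, 0.5
print(spot_price(supply, reserve, weight))            # 0.5 reserve tokens each
print(purchase_return(supply, reserve, weight, 10_000.0))
```

Because each purchase grows the reserve and the supply, the price rises smoothly with demand and falls with sell-backs, with no order book and hence no spread.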

Humaniq

Ethereum and Bancor leverage blockchain to further develop the technology itself and the cryptocurrency world. Humaniq’s target, however, is to disrupt the current financial status quo. The project aims at building a mobile bank that requires only a smartphone. It should give access to financial services to people in developing countries, potentially bringing them out of poverty and encouraging entrepreneurship. The ICO closed at a much more modest $5m, but the humanitarian aspect of the project can’t be ignored.

Steem

Posting valuable, high-quality content to social networks generates revenue for the networks themselves. That’s where Steem surfaces, with the idea of rewarding users for generating content, as well as for voting for pieces they especially like. This inspires users to contribute superior content. There are three cryptocurrencies linked with the project: Steem, Steem Power and Steem Dollars, each with its own purpose. The idea is explained in detail in the whitepaper.

Basic Attention Token

Current online advertising is profitable for publishers and advertisers, while users are bombarded with an abundance of ads, so they usually turn to solutions such as AdBlock. The introduction of BAT, an Ethereum-based cryptocurrency, combined with a new browser called “Brave”, tries to change this by rewarding users for the attention they pay to ads, while allowing advertisers to buy and publishers to sell ad space. This should result in better-targeted ads and less fraud. It’s designed to be beneficial for all parties in the Internet ad business. The idea seemed so attractive to investors that the ICO sold out $35m worth of tokens in 30 seconds!

Your move

Now you know blockchain’s basic strengths and how to leverage them. As you can see, it has many applications across all kinds of businesses, even though it’s not applicable everywhere. If your innovative business idea includes launching a new cryptocurrency, you can consider an ICO as a source of funding.

Be aware that although the ICO success stories are spectacular, running one is not exactly a breeze. You can always shoot us a message or email if you’re still confused about how to use blockchain for your business. And if you’d like to see how an ICO can be built on the Ethereum platform, head on here.