
How geolocation and big data can be used to the advantage of your business apps

Nearly all industries have improved their efficiency through the use of geolocation. Combined with big data, it allows for greater client satisfaction through a more individualized customer experience and better targeted marketing, and it lets you learn more about your own company.

Geolocation Apps & Big Data

Geolocation is most commonly used in marketing for geofencing, which targets clients who enter a certain location or area. This is often seen in retail apps that recognize when a client is near one of the brand's stores and then send them special offers, discount coupons, or other promotional information.

In a similar vein, big data analyzes purchases to change or improve offers so they better suit the needs of the consumer, allowing a company to make its own products more attractive than those offered by the competition. For example, a retailer can send special promotions and offers to a client while they are shopping in a competitor's store. Equally, offers with a small time window can be sent to customers who are currently shopping in one of the company's own stores. This last example uses beacon technology, which works like geofencing but on a smaller scale, i.e. tracking the exact location of a client inside the shop.


Transportation and Logistics

Geolocation is indispensable for transportation and logistics, where it gets the most out of the huge amount of data produced. Applications that use geolocation collate information about traffic jams, road works and the quickest routes, and allow the customer and the service or goods provider to share current locations and delivery times. For example, a client can keep track of the person delivering their food, and conversely a restaurant can know exactly when the customer will arrive to collect their order. Even within companies, it is possible to create more realistic schedules and production timelines based on geolocation information shared between employees.

Social apps

Geolocation has also become an integral part of many social apps, allowing users to leave digital markers as they visit restaurants, hotels, and bars so that they can rate their experiences and leave comments for other customers. It also drives higher levels of user engagement, as most recently exemplified by the huge success of Pokémon Go, which showed how augmented reality can make old forms of advertising suddenly come alive. The same technology can be applied to museums, galleries and other architectural spaces to create virtual tours, and it can even help people navigate other public facilities such as hospitals or government buildings.


Summing up

Geolocation and its various applications are constantly improving, as are the analytic tools used to interpret and apply the information they produce. However, it is worth mentioning the security concerns that inevitably come with collecting and using such data. In this regard, we should only ask for the information we really need and always be transparent about how and when it will be used. That way users can feel safe using our products and can make an informed choice about whether or not to disable location in their app.



Micropayments: How your Business Can Benefit From Them

To understand micropayments is to understand the technology connected to them. Technology is constantly improving and affects every aspect of our lives. One of the bigger challenges is simplifying our finances, and fintech (financial technology) – a developing area that brings together the best financial products worldwide at the lowest possible price – is one of the best ways to do it. A crucial part of this is processing very small payments that cannot be handled economically by traditional credit card companies.

This has energized micropayment systems and the whole infrastructure connected with them. While sending small amounts of money used to be seen as costly, because transaction fees still had to be paid, there were some early indicators that micropayments could work:

The popularity of mobile devices

Access to financial services used to be limited by a lack of technology and appropriate devices. As a result, many transactions that occurred outside the normal banking system were completed on a cash-only basis. However, the huge popularity of mobile devices created an opportunity to provide financial services over wireless networks. Research showed a significant and growing market demand, which was particularly important for GSM (Global System for Mobile Communications) operators. This demand indicated that it was technically feasible and profitable to deploy financial services over mobile networks. The big picture showed that mCommerce might fill a major service gap in developing countries, one that is critical to their social and economic evolution. Practice shows that the range of features available in one environment can be applied elsewhere if the target markets are similar. With only minor variations, the features of such systems need to include:

  • Over-the-air prepaid top-ups using the cash already in the account (like BLIK).
  • The ability to transfer any amount of money between users’ accounts.
  • Provision for cash deposits and withdrawals.
  • The ability for a third party to make deposits into a customer account (employer, family member or a microfinance organization).
  • The ability to pay bills.
  • The ability to make retail purchases at selected outlets.
  • The ability to transfer airtime credits between users.

Since all the above points are now achievable, micropayment has recently been reconsidered as a viable technology, largely thanks to the development of cellular networks. The main reason is not technological but comes down to simple economics. Independent online service providers earn a lot of revenue from mobile users, and mobile networks often charge users for access to low-cost services on a fixed network. Alongside this, many applications need a solution for the commissions placed on small transactions involving mass data storage and message exchange.

Since micropayment systems are designed for purchasing exceptionally low-cost items, it is crucial that the cost of handling each individual transaction remains very small.


The ad blocker plague

As ad blockers have gained popularity, there has been renewed interest in micropayments. Originally the main focus of their design was content, but emerging blockchain-based technologies have created remarkable opportunities for artists, journalists and other creators, because content no longer has to be ad-friendly. Micropayments allow the author to stay in full control of content distribution and its economic worth. Simply put, micropayments are finding their way into browsers, empowering creators and the audience that follows them.

Closer examination reveals that when ads are out of the picture, micropayments make it easier for authors to control their own income. They help reveal the true value of content, which in turn helps establish an author's economic sustainability. Micropayments can be expected to keep evolving, just as they evolved from the ad-funded model into their current form.

Technicalities – the protocol

As previously established, the main issue with low-value transactions has been that processing and transaction fees eat into the final settlement amount. Payment processors add costs for a multitude of reasons, including infrastructure, administration, and paid mechanisms for fraud prevention and dispute resolution. Over the past two decades, much research has gone into using digital communications and cryptography to reduce or remove these costs. For such payment facilities, these fees ideally need to come down to the fraction-of-a-cent range.

It can be expected that content servers on the global information infrastructure will soon handle billions of these low-value, computationally demanding transactions. While costly cryptographic protocols are impractical here, the micropayment process can be bootstrapped with well-known payment protocols designed for larger amounts, without depending on them for each individual micro-transaction. Special attention has been given to integration with IBM's Internet Keyed Payment Systems (iKP) at its most basic level.
Such a product also opens up the possibility of a payment protocol for wireless networks. The protocol usually assumes two modes of transaction execution:

  • In online mode, with the participation of a trusted website – for macropayments.
  • In offline mode, using electronic money, mainly for small-value transactions – for micropayments.

The main purpose is to model the various events and transactions that can occur in the protocol, and to be able to analyze any part of it. Paramount within this are payment security aspects such as asymmetric cryptography, public key infrastructure and more. Needless to say, when evaluating any such protocol, performance has to be weighed against the criteria specific to the wireless environment.


In summary

In summary, micropayment platforms dedicated to processing small transactions work in two main ways. In the first, a seller or service provider establishes an account with a third-party micropayment provider, who accumulates, stores and distributes the money accrued. Both the seller and the user/buyer are required to hold an account with the same micropayment provider for easier and safer implementation. The provider manages a digital wallet where all the payments are stored until they add up to a larger amount, at which point they can be sent to the recipient.

As an example, let's say a site called 'The Freelance' is a marketplace where freelancers connect with companies to deliver small projects. A company hires a developer from 'The Freelance' to make a few changes to their website for $1/hr. If the developer works on it for 8 hours, the task giver – the company – pays 'The Freelance'. In this case, 'The Freelance' collects all the fees and stores the balance in the developer's digital wallet. If the developer is good and earns many such fees, 'The Freelance' accumulates these IOUs until the wallet contains a significant sum, say $500, which is then sufficient to be withdrawn. 'The Freelance' then pays the developer directly into his account.
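To make that accumulate-then-settle logic concrete, here's a rough JavaScript sketch – the $500 threshold comes from the example above, everything else (names, amounts) is purely illustrative:

```javascript
// Sketch of a third-party micropayment provider's digital wallet.
// Small IOUs are accrued and a real payout is triggered only once
// the balance crosses a threshold worth the fixed transfer fees.
class MicropaymentWallet {
  constructor(payoutThreshold = 500) {
    this.payoutThreshold = payoutThreshold;
    this.balance = 0;
  }

  // record a small fee, e.g. $1/hr * 8h = $8 for one task
  credit(amount) {
    this.balance += amount;
    return this.maybePayout();
  }

  // settle via a regular bank transfer only when it's worth it
  maybePayout() {
    if (this.balance >= this.payoutThreshold) {
      const payout = this.balance;
      this.balance = 0;
      return { payout };      // hand off to a normal transfer
    }
    return { payout: 0 };     // keep accumulating IOUs
  }
}

// the developer from the 'The Freelance' example, earning $8 per task
const wallet = new MicropaymentWallet(500);
for (let task = 0; task < 70; task++) {
  const { payout } = wallet.credit(8);
  if (payout > 0) console.log(`Paying out $${payout}`); // fires once the balance reaches $504
}
```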

The second way is for the micropayment system to operate as a prepaid scheme. A user/buyer funds a micropayment processor account by depositing a certain amount of money in advance. As long as the seller (the other side of the primary transaction) uses the same account provider, everything works smoothly: the user's account with the provider is simply debited for the amount of the purchase. Let's illustrate this with the most common example: PayPal. PayPal is a very popular micropayment provider with its own rules for what counts as a micropayment, based on the maximum amount of the transaction; according to PayPal, a micropayment is a transaction of less than $10. So let's imagine that a PayPal user deposits $200 in their account. From that point the user can become a buyer, for instance by purchasing a $5 item from a webstore. The purchase price is debited from the PayPal account and used to cover the payment. On completion, the balance in the buyer's PayPal account is $195, the webstore's account is credited with $5 minus PayPal's micropayment fee, and PayPal keeps that fee as its commission.
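The prepaid flow can be sketched the same way – the 5% + $0.05 fee below is only an illustrative placeholder, not PayPal's actual micropayment pricing:

```javascript
// Sketch of a prepaid micropayment account: the buyer funds the account up front,
// each small purchase is debited from it, and the provider keeps a per-transaction fee.
function processMicropayment(buyer, seller, amount, feeRate = 0.05, fixedFee = 0.05) {
  if (amount >= 10) throw new Error('Not a micropayment (at or over the $10 limit)');
  if (buyer.balance < amount) throw new Error('Insufficient prepaid balance');

  const fee = amount * feeRate + fixedFee;  // placeholder fee, charged to the recipient
  buyer.balance -= amount;                  // e.g. $200 -> $195
  seller.balance += amount - fee;           // the webstore receives $5 minus the fee
  return fee;                               // the provider keeps the commission
}

const buyer = { balance: 200 };
const webstore = { balance: 0 };
const fee = processMicropayment(buyer, webstore, 5);
console.log(buyer.balance, webstore.balance.toFixed(2), fee.toFixed(2)); // 195 4.70 0.30
```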

In all these scenarios, commercial organizations have a lot to gain by replacing physical cash transactions with micropayments. Cash is not only more difficult to handle, but moving it around outside the banking sector wastes a lot of time.

In the 21st century no country exists beyond the reach of the banking sector, so for the sake of their own economic progress people should be encouraged to move away from cash. The extra motivation here is that the low-cost solutions and mechanisms that work in these environments can then be applied efficiently in all types of developed economies.



Smart contract use cases: commodity trading (tiqpit)

Tiqpit’s idea to move commodity trading to the blockchain is one of our most recent smart contract use cases. As co-founder Mike Ziemkendorf said, tiqpit was born out of the move to Malta: the confrontation with the situation on the small island, paired with the founders’ insider knowledge of how commodity trading works. See how tiqpit aims to solve trading problems and how Espeo is helping them to achieve that with blockchain.

How did you get the idea for tiqpit?

The final idea came from a necessity, I guess. Bitcoin and the blockchain idea went mainstream in 2011. Malta, our company location, is a very small island. Seeing the everyday struggles people have there to buy goods made us wonder. We're 90 kilometers from Sicily and still, everything is so expensive. As traders in different markets and financial instruments, we know what that reality looks like as well. The highest price makes the profit, betting and speculating… adding to the everyday costs of the end-consumers and leaving the producers with very small margins. Everything is overcomplicated. The confrontation with the local situation, paired with our knowledge about how the end price is set, was the final push for the tiqpit idea to take form.

Since “we cannot solve our problems with the same thinking we used when we created them” (Albert Einstein) blockchain gives us an opportunity to employ a different kind of thinking to solve the problems we created in the past.

So what exactly needed fixing in commodity trading?

Tiqpit – a trading platform over blockchain for commodity markets

Commodity trading is very speculative, centralized, and in the hands of very few market players. It’s a very unfair system. This is exactly what we’re trying to change. We created a vulnerable trading environment, with proprietary matching engines and multiple proprietary protocols. Orders aren’t handled equally and fairly between market participants. This can lead to price manipulation. The system is inefficient and serves only a chosen few. Opportunistic information sharing and intermediary fees add costs and complexity. Of course, the end-consumer suffers.

How is your solution different?

Examples of projects working with blockchain solutions in the energy and commodity sector

I know there are other projects and trading-related smart contract use cases. However, almost all of them are based on private blockchain solutions. They're led by major energy traders, in cooperation with exchanges and banks. Therefore, the projects are kept in-house and for their own use. Everything happens without the involvement of current and future end-customers, producers, suppliers and consumers.
Tiqpit Solutions applies blockchain technology to create an easy-to-use and interoperable trading, insurance, finance, reporting and risk management platform for all kinds of tradable commodity and energy products. But this time, we created it on an open-source blockchain solution. In one platform, we combine modules for each kind of participant involved in a commodity trade. That's unique. Our tiqpit platform has no potential conflict of interest against any network participant. It's tailored to the participants' needs, with the aim to open the commodity and energy market for everyone.

So, blockchain was the best option?


At this moment, we see blockchain technology as the best solution to decentralize commodity trading. In short, commodity and energy supply contracts can be carried out automatically. They can be performed directly between producers and consumers (peer-to-peer). All other solutions, like cloud based services, will always have the ‘centralized touch’ with some control over trade matching, information flow or settlement.

Blockchain technology is the best way to connect all market participants (small and large producers, suppliers, consumers, authorities and auditors) in a direct and efficient way. Also, it allows participants to take control, by providing instant, real time information flow. At the same time, it offers significant cost reductions – by more than 30% per contract.

In your opinion, what are the challenges for blockchain adoption? Will we be seeing more smart contract use cases?

The technology is still emerging – it's relatively new, and people tend to be skeptical about new things. Compare it with the evolving Internet in the early '90s. New technology always has its faults, but, at the same time, it's very exciting and offers lots of opportunities.

  • We currently lack a common set of standards for blockchain transactions. We hope that this will be addressed in the near future.
  • I hope we’ll be able to standardize the wide variety of uses for blockchain and form some guidelines. This would help new products, smart contract use cases and services based on the blockchain evolve.
  • Although auditability and transparency are the benefits of blockchain, highly regulated industries may need to develop new rules. Blockchain’s distributed ledger transactions are likely to necessitate changes to industry regulations governing financial reporting, auditing processes or information-sharing regulations.
  • Also, laws will need to be made to govern blockchain’s smart contracts.
  • Of course, let’s not forget the security, vulnerability and validation of the transactions.

But all this will be addressed in the near future, in line with the current developments in the technology and as blockchain business ideas mature in the industry.

How did you come across Espeo, and what have we done for you?

We were searching for someone who would understand our needs from a technical perspective. Espeo provided just that, with your expertise in web design and smart contract development for our upcoming Token Generation Event (TGE). 
Espeo coded the smart contract (Solidity, Truffle, web3.js), including KYC mechanisms (AWS S3, Lambda), and implemented the Ethereum and Bitcoin payment module for the ICO. We also designed and developed the landing page (React.js). With that in place, tiqpit could start its ICO and proceed with the plan to revolutionize the commodities market.


Where do you see tiqpit in 5 years? What are your plans for the future?

In 5 years, with a 3.0 platform! But we would like to concentrate on the here and now, and create a great commodity trading platform based on the blockchain. See what we’re up to at www.tiqpit.com!
 
 


Build your own error monitoring tool

In this tutorial, I’ll describe how you can create your own error watcher. The final version of this project can be found on GitHub. I’ll later refer to the code from this repository, so you can check it out.

Why use an error monitoring tool, even a simple one?

If your application is in production, or will be in the near future, you should look for some kind of error monitoring tool; otherwise you'll eventually have a really big problem. And as a developer, let me tell you that manually digging through a production environment for errors isn't cool.

Find the cause of the problem before your customers notice

For example, let's say your application does some background processing that isn't visible to the end user at first glance. The process fails at one of the background steps. If you have an error monitoring tool, you have a chance to fix the bug before your customers notice it.

Reduce the time from finding a bug to fixing it

Without a monitoring tool, when a bug is reported your team would probably start looking through logs manually. This significantly extends the fix time. Now, imagine that your team gets a notification right away when the error appears – you can now skip that time-consuming part.

Monitoring infrastructure

In this tutorial, we'll use the Elasticsearch + Logstash + Kibana (ELK) stack to monitor our application. ELK is free under the Open Source and Basic subscriptions. If you want to use premium functionalities, e.g. alerting, security or machine learning, you'll need to pay.
Unfortunately, alerting isn't free. If you'd like to send an alert message to a Slack channel or email someone about a critical error, you need the "semi-paid" X-Pack – only some parts of it are free in the Basic subscription.
However, we can implement our own watcher to bypass Elastic's high costs. I've got good news for you – I've already implemented it for you. We'll get back to it later.
The image below shows what our infrastructure is going to look like.
monitoring infrastructure
Logstash reads the logs, extracts the information we want, and then sends transformed data to Elasticsearch.
We will query Elasticsearch for recent error-level logs using our custom Node.js Elasticsearch Watcher. The Watcher will send alert messages to a Slack channel when the query returns results. The query will be executed every 30 seconds.
Kibana is optional here, but it's bundled in the repo, so if you'd like to analyze application logs in some fancy way, here you go. I won't describe it in this article, so visit the Kibana site to see what you can do with it.

Dockerized ELK stack

Setting up Elasticsearch, Logstash and Kibana manually is quite tedious, so we'll use an already dockerized version. To be more precise, we'll use the Docker ELK repository, which contains what we need. We'll tweak this repo to meet our requirements, so either clone it and follow the article or browse the final repo.
Our requirements:

  • Reading logs from files
  • Parsing custom Java logs
  • Parsing custom timestamp

We’re using Logback in our project and we have a custom log format. Below, you can see the Logback appender configuration:

Here are the sample logs:
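For instance (made-up entries, including a multiline stack trace that the codec below has to stitch back into a single event):

```
20180103 00:01:00.518 INFO  [main] com.example.app.StartupService - Application started
20180103 00:01:05.122 ERROR [pool-1-thread-2] com.example.app.OrderProcessor - Failed to process order 42
java.lang.NullPointerException: customer address is null
	at com.example.app.OrderProcessor.process(OrderProcessor.java:87)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
```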

First, we need to update the docker-compose.yml file so it consumes our logs directory and our custom patterns for Logstash. The Logstash service needs two extra lines in its volumes section:
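Assuming the standard docker-elk layout, the added bindings could look like this (the container paths are assumptions and just have to match the pipeline configuration shown later):

```yaml
# docker-compose.yml – logstash service, only the two added volume lines shown
    volumes:
      - ./logstash/patterns:/usr/share/logstash/patterns:ro
      - ${LOGS_DIR}:/var/log/app:ro
```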

The first line binds our patterns directory. The second attaches the logs to the container. The $LOGS_DIR variable will later be added to an .env file, which gives us the ability to change the logs directory without modifying the repository. That's all we need.
If you'd like to persist data between container restarts, you can bind the Elasticsearch and Logstash data directories to directories outside Docker.
Here's the .env file. You can replace the logs directory with your own path.

How to configure Logstash to consume app logs

Logstash’s pipeline configuration can be divided into three sections:

  • Input – describes sources which Logstash will be consuming.
  • Filter – processes logs, e.g. data extraction and transformation.
  • Output – sends data to external services.
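Put together, a pipeline implementing these three sections could look roughly like this (paths, the CUSTOM_TIMESTAMP pattern name and the extracted field names are assumptions matching the examples in this article):

```
# logstash/pipeline/logstash.conf
input {
  file {
    path => "/var/log/app/*.log"
    start_position => "beginning"
    codec => multiline {
      # lines that do not start with our custom timestamp belong to the previous entry
      patterns_dir => ["/usr/share/logstash/patterns"]
      pattern => "^%{CUSTOM_TIMESTAMP}"
      negate => true
      what => "previous"
    }
  }
}

filter {
  grok {
    patterns_dir => ["/usr/share/logstash/patterns"]
    match => {
      "message" => "%{CUSTOM_TIMESTAMP:timestamp} %{LOGLEVEL:level}\s+\[%{DATA:thread}\] %{JAVACLASS:class} - %{GREEDYDATA:log_message}"
    }
  }
  date {
    match => ["timestamp", "yyyyMMdd HH:mm:ss.SSS"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```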


The configuration above covers the full Logstash pipeline.
The input section is quite simple. We define basic input properties such as the logs path and the position to start reading from when Logstash starts up. The most interesting part is the codec, where we configure the handling of multiline Java exceptions. It looks for the beginning of an entry – in our example, a custom timestamp – and treats all the text up to the next timestamp as one log entry (one document in Elasticsearch).
I've included a patterns directory so we can use our custom pattern in the multiline regex. It's not required – you can use a plain regex here.
The filter section is the most important part of a Logstash configuration. This is the place where the magic happens. Elastic defined plenty of useful plugins which we can use to transform log events. One of them is Grok, which we’ll use in the monitoring tool.
Grok parses and structures text, so you can grab all fields from your logs, e.g. timestamp, log level, etc. It works like regex. Just define your log pattern with corresponding field names and Grok will find matching values in the event. You can use default Grok patterns or create your own to match custom values.
In our example, we use a custom timestamp, so we need to define a custom pattern. Grok allows us to define custom patterns ad hoc in the message pattern. However, we want to use this one more than once, so we define a patterns file that we can include wherever we need the pattern, e.g. in the multiline codec and in Grok. If you use a standard timestamp format, just use the default patterns.
Here’s the pattern file:

The file structure is the same as in other Grok pattern files. The first word in a line is the pattern name and the rest is the pattern itself. You can use default patterns while defining your own, or fall back to plain regex if none of the defaults matches your needs. In our case, the log format is e.g. 20180103 00:01:00.518, so we're able to build it from already defined patterns.
In the output section, we define that the transformed logs will be sent to Elasticsearch.

Docker file permissions

One thing that took me some time to figure out was configuring the file permissions of the logs accessed by dockerized Logstash.
If your logs are created by a user with id 1000, you won't notice the problem and you can skip this step. However, you're most likely dealing with quite the opposite. For example, you run your application on Tomcat: the logs are created by the Tomcat user and then bound as a volume to the Logstash container. The Tomcat user is not the first user (1000) in the system, so the user id won't match inside the container. The default Logstash image runs as user 1000, so it can only read files that user 1000 has permission to read; it doesn't have access to other users' files.

The trick here is to switch to root in the Dockerfile, create a new group with an id matching the group that owns the logs, and then add the Logstash user to it. On the server, we also add the user which runs the container to the group which owns the logs (sudo usermod -a -G <group> <user>). After that, we switch back to the Logstash user to keep the container secure.
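In Dockerfile form, the whole trick is just a few lines (the image tag and the 1001 group id are examples – use whatever gid owns your log files):

```dockerfile
FROM docker.elastic.co/logstash/logstash-oss:6.2.4

# switch to root so we can modify groups inside the image
USER root

# create a group matching the gid that owns the logs on the host
# and add the logstash user to it so it can read the mounted files
RUN groupadd -g 1001 applogs && usermod -a -G applogs logstash

# drop back to the unprivileged user to keep the container secure
USER logstash
```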

Filebeat – the log agent

The implementation described so far works for a single application. We could scale it to support many applications, but it wouldn't be fun or easy, since Logstash reads log lines one by one and sends them to Elasticsearch after transformation.
I've got a better solution for you. Elastic created a family of tools called Beats. One of them is Filebeat, which takes the pain out of log access. Filebeat is simply a log shipper: it takes logs from the source and transfers them to Logstash or directly to Elasticsearch. Its job is forwarding data, although it can also do some of the things Logstash does, e.g. transforming logs or dropping unnecessary lines – but Logstash can do more.
If you have more than one application, or more than one instance of an application, then Filebeat is for you. Filebeat transfers logs to Logstash on the port defined in the configuration. You just tell Filebeat where to look for logs and define the listening part in Logstash, as in the sketch below.
The file permission problem will of course still be present if you want to run the dockerized version of Filebeat, but that's the cost of virtualization.
I suggest you use Filebeat for production purposes. You'll then be able to deploy ELK on a server that isn't actually the production server. Without Filebeat (with Logstash only), you need to place Logstash on the same machine where the logs reside.
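As a minimal illustration (the exact keys differ between Filebeat versions, so treat this as a sketch), the shipping side boils down to a few lines of filebeat.yml:

```yaml
# filebeat.yml on the application server
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/app/*.log

output.logstash:
  hosts: ["elk-server:5044"]
```

On the Logstash side, the file input from earlier is then replaced with a beats input listening on the same port, e.g. input { beats { port => 5044 } }.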

Sending Slack alert message

Elastic delivers the Watcher functionality within X-Pack, bundled into Elasticsearch, and what's more, there is an already defined Slack action (among others) which can send custom messages to your Slack channel. However, as stated before, it's not free – the Watcher is available in the Gold subscription. If that's fine for you, you can skip the rest of the article. If not, let's go further.
When I noticed that the Elastic Watcher is a paid option, I thought I could build my own watcher to send alert messages to Slack. It's just a scheduled job which checks if there's something to send, so it shouldn't be hard, right?
The Watcher
I created an npm package called Elasticsearch Node.js Watcher, which does the basics of what X-Pack's Watcher does, namely watching and executing actions when specified conditions are satisfied. I chose Node.js for the Watcher because it's the easiest and fastest option for a small app that does everything I need.
This library takes two arguments when creating a new instance of a watcher:

  • Connection configuration – it defines connection parameters to Elasticsearch. Read more about it in the documentation.
  • The Watcher configuration – describes when and what to do if there’s a hit from Elasticsearch. It contains five fields (one is optional):
    • Schedule – the Watcher uses it to schedule the cron job.
    • Query – the query to be executed in Elasticsearch, the result of which will be forwarded to the predicate and the action.
    • Predicate – tells whether the action should be executed.
    • Action – task which is executed after satisfying predicate.
    • Error handler (optional) – task which will be executed when an error appears.

We need a server which starts our Watcher, so let's create index.js with an Express server. To make the environment variables defined in the .env file visible across the Watcher's files, let's also include the dotenv module.
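A minimal index.js could look like this (the ./watcher module and the health endpoint are my own layout choices):

```javascript
// index.js – loads .env, starts a tiny Express server and the Watcher
require('dotenv').config();

const express = require('express');
const startWatcher = require('./watcher'); // our watcher setup, sketched below

const app = express();

// trivial endpoint so we can check that the service is alive
app.get('/health', (req, res) => res.send('OK'));

app.listen(process.env.PORT || 3000, () => {
  console.log('Watcher server started');
  startWatcher();
});
```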

The meat. In our configuration, we tell the Watcher to query Elasticsearch every 30 seconds using cron notation. In the query field we define the index to be searched. By default, Logstash creates indices named logstash-<date>, so we set it to logstash-* to query all existing indices.

To find the logs, we use the Query DSL in the query field. In the example, we're looking for entries with the ERROR log level which appeared in the last 30 seconds. In the predicate field, we define the condition that the number of hits is greater than 0, since we don't want to spam Slack with empty messages. The action field references the Slack action described in the next paragraph, and everything comes together in the sketch below.
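Putting that description into code, the two configuration objects could look roughly like this – the constructor and option names are assumptions based on the five fields listed earlier, so check the package docs for the exact API:

```javascript
// watcher.js – sketch of the connection and watcher configuration
const ElasticsearchWatcher = require('elasticsearch-nodejs-watcher'); // package name assumed
const { sendSlackAlert } = require('./slack-action');                 // Slack action, shown below

const connection = { host: process.env.ELASTICSEARCH_URL || 'http://elasticsearch:9200' };

const watcherConfig = {
  schedule: '*/30 * * * * *',     // cron notation: every 30 seconds
  query: {
    index: 'logstash-*',          // all Logstash indices
    body: {
      query: {
        bool: {
          must: [
            { match: { level: 'ERROR' } },                   // error-level entries only
            { range: { '@timestamp': { gte: 'now-30s' } } }  // from the last 30 seconds
          ]
        }
      }
    }
  },
  predicate: (response) => response.hits.total > 0,    // don't spam Slack with empty results
  action: (response) => sendSlackAlert(response.hits.hits),
  errorHandler: (error) => console.error('Watcher error:', error)
};

module.exports = () => new ElasticsearchWatcher(connection, watcherConfig).start();
```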

Slack alert action

To send a message to a Slack channel or a user, we first need to set up an incoming webhook integration. As a result, you'll get a URL which you should put in the Watcher's .env file:
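For example (the variable name is my own – just use the same one in the action code):

```
# .env
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/XXX/YYY/ZZZ
```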

Ok, the last part. Here, we send a POST request to Slack's API containing a JSON payload with a formatted log alert. There's no magic here – we just map the Elasticsearch hits to message attachments and add some colors to make them fancier. In the title you can find the class of the error and the timestamp. See Slack's documentation for how you can format your messages.
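A sketch of that action, using axios for the HTTP call (axios and the field names pulled out of _source are assumptions matching the Grok fields defined earlier):

```javascript
// slack-action.js – maps Elasticsearch hits to Slack attachments and posts them
const axios = require('axios');

function sendSlackAlert(hits) {
  const attachments = hits.map((hit) => ({
    color: 'danger',                                           // red bar for errors
    title: `${hit._source.class} at ${hit._source['@timestamp']}`,
    text: hit._source.log_message
  }));

  return axios.post(process.env.SLACK_WEBHOOK_URL, {
    text: ':rotating_light: Errors spotted in the application logs',
    attachments
  });
}

module.exports = { sendSlackAlert };
```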

Dockerization

Finally, we’ll dockerize our Watcher, so here’s the code of Dockerfile:

For development purposes, that's enough: ELK keeps running and you can restart the Watcher server after each change. For production, it's better to run the Watcher alongside ELK. To run the prod version of the whole infrastructure, let's add a Watcher service to a copy of the docker-compose file with a "-prod" suffix:
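The added service could look roughly like this (the paths and the elk network name are assumptions matching the docker-elk setup):

```yaml
# docker-compose-prod.yml – only the added service shown
  watcher:
    build: ./watcher
    env_file: ./watcher/.env
    depends_on:
      - elasticsearch
    networks:
      - elk
```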

Then, we can start up our beautiful log monitor with one docker-compose command.
docker-compose -f docker-compose-prod.yml up -d
In the final version of the repository, you can just execute the make run-all command to start the prod version. Check out the Readme.md – it describes all the needed steps.

But…

This solution is the simplest one. I think the next step would be to aggregate errors. In the current version, you'll get errors one by one in your Slack channel. That's fine for dev/stage environments, because they're used by only a few users. Once you're in production, you'll need to tweak the Elasticsearch query, otherwise you'll be flooded with messages. I'll leave that for you as homework 😉

You need to weigh the pros and cons of setting all of this up yourself. There are good tools on the market, such as Rollbar or Sentry, so you need to decide whether you want the "free" option (well, almost free, because some work is required) or a paid one.
I hope you found this article helpful.