
Decentralized AI: Blockchain's bright future

Blockchain and artificial intelligence are driving technological innovation worldwide, and both have profound implications for the future of business as well as our personal data. How can the two technologies merge? I’ll discuss the opportunities that could arise from decentralized AI.

Before we look at the possible merging of blockchain and AI into decentralized AI, let’s consider the two separately, starting with the benefits each technology offers on its own.

Artificial intelligence (AI) is a field of computer science dedicated to creating intelligent machines. Often powered by machine learning, AI gives machines skills traditionally reserved for humans, such as problem solving, speech recognition, planning, and learning.

Meanwhile, blockchain is a decentralized technology that runs on a global network of computers. This robust platform allows blocks of linked information to be stored and verified across the network.

PwC predicts that by 2030 AI will add up to $15.7 trillion to the world economy, raising global GDP by 14%. According to Gartner’s prediction, the business value added by blockchain technology will reach $3.1 trillion by the same year. Currently, the cryptocurrency sector makes the most use of blockchain tech. So, is the integration of blockchain and AI possible? Can the two merge and enter other sectors? That’s already happening, and some businesses are beginning to see the potential of integrating blockchain and AI.

Advantages of blockchain technology

Here are some of the advantages of blockchain technology:

  • Blockchain is decentralized. It allows data to be shared without a central unit, which keeps transactions on a blockchain verifiable and processable independently of any central authority.
  • Blockchain is durable and consistent due to its decentralized nature. It can resist malicious attacks because it has no central point of failure to target.
  • Records on a blockchain are accurate: information is timestamped, and its authenticity can be verified.

Benefits of Artificial Intelligence (AI)

AI, or machine intelligence, has a lower error rate than humans on well-defined tasks. As a result, AI offers a greater level of accuracy, speed, and precision.

  • AI has no emotions, so its decisions are driven purely by logic and data.
  • Machines don’t get tired and can operate in hazardous conditions. This enables them to carry out dangerous tasks, such as space exploration or even mining.
  • AI is well suited to data analysis. It can process unstructured data and deliver results in real time, ensuring accuracy in data analytics.

Previous collaboration between blockchain and AI

There has already been notable integration between AI and blockchain. Examples include SingularityNET, a blockchain and AI program created to enhance smart contract testing. The supply chain firm Nahame has also combined blockchain technology and AI to help companies with auditing. A peer-to-peer car rental company has also made public its plans to run a fleet of self-driving cars on blockchain technology.

Decentralized AI – where AI and blockchain could intersect

The best way to use two of the biggest technologies out there today is to capitalize on the strengths of one to support the other.

Data protection

Artificial intelligence largely depends on our data and uses it to improve itself through machine learning. What’s particularly relevant to AI is the gathering of data about human interactions and other details. Blockchain allows data to be stored encrypted on a decentralized system, running a secured and protected database that only authorized users can access. So when we integrate blockchain and AI, we get a protected decentralized AI system for sensitive data, such as financial or medical records. Blockchain technology is therefore a great security advantage.

Let’s take a look at Spotify: it uses listeners’ data to recommend music based on their recent searches and preferences. Most of the time we aren’t concerned about this information, as it isn’t particularly sensitive. However, when it comes to sensitive information stored in a company’s cloud, we are far more concerned about privacy and how it’s guaranteed.

Ensuring security

When an AI system runs as a centralized service on a single processor, hackers or malware can infiltrate it and alter its instructions. With blockchain, though, before any information is accepted and processed on a blockchain platform, it must pass through several nodes of the network. The more nodes a blockchain has, the more difficult it becomes to hack. Although not impossible, hacking a blockchain-based, decentralized AI platform would be far more difficult.
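To make the tamper-resistance concrete, here is a minimal TypeScript sketch of a hash-chained ledger (an illustration of the principle only, not a real blockchain with consensus): each block’s hash depends on the previous one, so altering any record invalidates everything after it.

```typescript
import { createHash } from "crypto";

interface Block {
  index: number;
  data: string;     // payload, e.g. a transaction record
  prevHash: string; // hash of the previous block
  hash: string;     // hash of this block's contents
}

// Hash the block's contents together with the previous block's hash.
function computeHash(index: number, data: string, prevHash: string): string {
  return createHash("sha256").update(`${index}|${data}|${prevHash}`).digest("hex");
}

function appendBlock(chain: Block[], data: string): Block {
  const prev = chain[chain.length - 1];
  const index = prev ? prev.index + 1 : 0;
  const prevHash = prev ? prev.hash : "0".repeat(64);
  const block = { index, data, prevHash, hash: computeHash(index, data, prevHash) };
  chain.push(block);
  return block;
}

// Every node can re-run this check: if any block's data was altered,
// its recomputed hash no longer matches, and the chain is rejected.
function isChainValid(chain: Block[]): boolean {
  return chain.every((b, i) => {
    const expectedPrev = i === 0 ? "0".repeat(64) : chain[i - 1].hash;
    return b.prevHash === expectedPrev && b.hash === computeHash(b.index, b.data, b.prevHash);
  });
}

const chain: Block[] = [];
appendBlock(chain, "alice pays bob 5");
appendBlock(chain, "bob pays carol 2");
console.log(isChainValid(chain)); // true
chain[0].data = "alice pays bob 500"; // tamper with history
console.log(isChainValid(chain)); // false
```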

Trustworthiness

Trust is essential: in order to have credibility, a system must be trustworthy. Blockchain is a more transparent technology than a closed AI system, and it protects data through encryption, so only authorized users can access it and unauthorized parties can’t view anything.

Take blockchain applications in the healthcare sector: patients don’t want their medical information to be accessible to unauthorized viewers, so it remains encrypted to keep third parties out. Keeping medical records on a blockchain would also let healthcare providers easily access patients’ files and provide medical aid in case of an emergency. Adding high-performance AI would further improve blockchain storage by making unstructured data easier to access.

Benefits of Artificial Intelligence & blockchain in the long run

There are many benefits businesses can gain from integrating blockchain with AI. Porsche, in partnership with XAIN AG, is already working on decentralized AI applications in its advanced vehicles. JD.com, a leader in developing AI-based applications, has already started using this integration to build decentralized business applications. So it’s worth considering blockchain and AI as an integrated technology. If you already use only blockchain, or only AI, in your business, that’s not a problem: you can integrate the other technology through your existing website API.
Here are some benefits of Artificial Intelligence merging with blockchain:

Decentralized Intelligence

This is the most obvious result of integrating the two technologies. Blockchain is a decentralized system, while AI is an intelligent one. Combining them would enable business organizations to set up blockchain-based architectures with AI capabilities built in, for example a peer-to-peer network offering image recognition or natural language processing.

Energy-saving and cost-efficient IT architecture

A 2016 report from Deloitte estimated the annual cost of authenticating transactions on a blockchain at $600 million, most of which goes into mining operations. An AI-integrated blockchain would help organizations reduce this energy consumption. Since AI can predict and rapidly process data, it could also tell cryptocurrency miners when they are performing a less important transaction, and it would allow enterprises to execute transactions faster.

In fact, as AI becomes more developed, and as the integration of AI and blockchain technology becomes more common, AI may take over the mining process on blockchains. Given that AI learns and adapts to its environment, combined with blockchain it could well learn the process and the architecture of the blockchain network.

Flexible AI

AI integration with blockchain could pave the way for the development of an artificial general intelligence (AGI) platform, with the blockchain model providing a distributed foundation on which an AGI could be developed.

The integration of blockchain and AI has yet to take off fully. Combining the two technologies into decentralized AI has deep potential to use data in novel ways. A successful integration of both technologies will allow quicker and smoother data management, verification of transactions, identification of illegitimate documents, etc. Therefore, if you’re contemplating the integration of both technologies for your business, don’t hesitate, do it!


Staff-efficient Large Scale Server Management using Open Source Tools (Part 1)

Server management on behalf of the client is a fairly common service today, complementing the portfolio of many software houses and outsourcing companies. There are still, however, two types of companies on the market: those that provide high-quality, competent services at an attractive price (achieved through synergy, not crude cost-cutting), and… the others. If you’re a user of such services, have you ever wondered which type of supplier you currently use?

Question of quality

All of us are and have been customers many times, and we have a better or worse idea of what high quality is. Often we simply identify it with satisfaction with the service. The problem arises with more advanced services, or those we have used only for a short time or for the first time, when we don’t know what we should really expect from a professional service and what is merely solid mediocrity.

Let’s think about the criteria of professionalism for server management services in 2018, not in 2008 or 1998. The first answer that comes to mind is, of course, “customer satisfaction.” The thing is, satisfaction is subjective and relative, for example, to support reaction times or other parameters derived from the purchased hosting plan, or even to personal chemistry with a support engineer.

In 2018, another, completely objective parameter is absolutely critical: security. In fact, this is why server management is entrusted to specialists rather than, say, a full-time programmer who also knows how to install Linux: so that our clients’ data stays safe.


How to provide high-quality services

The question arises, however, on the supplier’s side: how do you provide high-quality services (and therefore with maximum emphasis on security) at competitive market prices, while paying relatively high wages to employees (in Poland today we have an employee’s market, especially in IT, which pushes rates up), and still make money on these services?

The answer is very simple and complicated at the same time. It’s simply choosing the right tools that fit your real business model. This seemingly simple answer gets complicated, however, when we begin to delve into the details.

The key to the correct selection of tools is understanding your own business model at the operational level: not at the level of contracts and money flow, or even marketing and sales strategy, but at the level of actual work hours and possible synergies between analogous activities for different clients. Server management isn’t a production line on which all activities are fully reproducible; the trick is to find certain patterns and dependencies on which you can build synergy and choose the right tools.

Unfortunately, most companies that provide good-quality server administration services instead go to one of two extremes that prevent them from building this synergy and lead to high costs. As a result, their boards stop seeing these services as having prospects, and in time they become only a smaller or larger add-on to development services. In the long run, this pushes good-quality services out of the market in favor of services of dubious quality. But back to the extremes mentioned; companies go in one of the following directions:

  1. Proprietary, usually closed (or sometimes “shared source”) software for managing services. At the beginning it meets the company’s needs perfectly; over time, however, those needs change, because technology changes very quickly and the company itself evolves. As a result, after 3–4 years the company is stuck with a system that isn’t attractive to potential employees (the experience gained with it isn’t transferable to any other company) and that requires constant, growing expenditure on maintenance and “small development.”
  2. Widely-known software, often used and liked, or at least recognized by many IT people, only… it fits someone’s idea of their business model instead of the real one. Why? The reason is very simple: most popular tools are written either for large companies managing homogeneous IT infrastructure (many servers used for common purposes, with common users, etc.) or for hosting companies (serving different clients but offering strictly defined services).

Open source tools

Interestingly, as of 2018, there are still no widely known open-source tools for heterogeneous infrastructure management, supporting different owners, different configurations, different installed services and applications, and, above all, completely different business goals and performance indicators. Presumably the authors of such tools have no interest in publishing them as open source and decreasing their potential profit. All globally used tools (e.g. Puppet, Chef, Ansible, Salt, and others) are designed to manage homogeneous infrastructure. Of course, you can run a separate instance of one of these tools for each client, but that won’t scale to many clients and won’t build any synergy or competitive advantage.

At this point, it’s worth mentioning how we deal with this at Espeo. Espeo Software provides software development services to over 100 clients from around the world. For several dozen clients, these services are supplemented by the management of both production and dev/test servers, and overall DevOps support. It’s a very specific business model, completely different, at least at the operational level, from e.g. a web hosting company or a company that manages servers for one big client.

Therefore, one should ask what the key factors for building synergy are in such a business model, and, above all, what this synergy should be built on, so that it doesn’t come at the expense of professionalism. In the case of Espeo, we decided on a dual-stack model in which, in simplified terms, server support is divided into an infrastructure level and an application level. This division is conceptual rather than rigid, since the two levels overlap in many respects.

This division, however, provides the basis for building synergy at the infrastructure level, where, unlike at the application level, the needs of very different clients converge on one thing: security. At the infrastructure level, we use the open-source micro-framework Server Farmer, which is in fact a collection of over 80 separate solutions closely related to Linux system security and to various aspects of managing heterogeneous infrastructure based on this system.

Server Farmer’s physical architecture is very similar to that of Ansible, the framework we use at the application level. Thanks to the similar architecture of both tools, it’s possible, for example, to use the same network architecture on both levels and for all clients. Most of all, however, we’re able to build huge synergy in the area of security by moving from the management of separate contracts (which, in an era of tens of thousands of machines scanning the whole Internet for vulnerabilities and automatically infecting computers and servers, is simply a weak solution) to a production-line model that ensures the right level of security for all clients and all servers.


Building synergy

Take, for example, the process of regularly updating system software on servers, which in 2018 is absolutely necessary if you want to talk about any reasonable level of security at all. Modern Linux distributions have automatic update mechanisms; however, they only update the software components that won’t disrupt running services. Everything else should be updated manually (i.e. through appropriate tools, but under a person’s supervision).
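As an illustration of such a supervised update step, here is a minimal TypeScript sketch (our example, not Server Farmer’s actual code) that uses apt-get’s simulation mode on a Debian/Ubuntu server to list pending upgrades for an engineer to review before anything is applied:

```typescript
import { execSync } from "child_process";

// Run apt-get in simulation mode (-s): nothing is installed,
// the output merely describes what an upgrade would do.
function pendingUpgrades(): string[] {
  const out = execSync("apt-get -s upgrade", { encoding: "utf8" });
  return out
    .split("\n")
    .filter((line) => line.startsWith("Inst ")) // "Inst <pkg> ..." lines mark planned upgrades
    .map((line) => line.split(" ")[1]);
}

const pkgs = pendingUpgrades();
console.log(`${pkgs.length} package(s) awaiting manual review:`);
for (const p of pkgs) console.log(` - ${p}`);
```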

And here is a problem repeated in many companies: using tools that are known and liked by employees, even if those tools don’t fit the company’s business model and don’t speed up this simple operation in any way.

Let’s imagine a company that supports IT infrastructure for, say, 50 clients, using 50 separate installations of Puppet, Chef, Ansible or, even worse, a combination of these tools. As a result, the same group of administrators is managed 50 times, the system architecture is planned 50 times, logs are analyzed 50 times, and so on. This is, of course, feasible, and in itself doesn’t lower the security level. In such a model, however, it’s impossible to use employees’ time effectively, because with 50 separate installations most of that time is consumed by simple, repetitive, easily automated activities, and by configuring the same elements, just in different places. It follows that any business conducted this way isn’t scalable and leads to gradual self-marginalization.

This mistake, however, isn’t due to poor awareness or bad intentions. Open-source tools of appropriate quality for managing heterogeneous infrastructure are relatively specialized software, so knowledge of them among potential employees is quite rare. What’s more, many companies decide to create dedicated instances of Puppet or Ansible for each client because, from the employee’s perspective, their undeniable advantage is the transferability of experience between successive employers, even if for the employer it means the process doesn’t scale.

From the point of view of building synergy, and with it a permanent business advantage, however, selecting tools only to satisfy current, short-term HR needs is a weak idea. A much better approach is to strike a compromise between “employability” and the scalability of employees’ work. This is why, in our dual-stack approach with Server Farmer, each employee responsible for infrastructure-level management can manage approx. 200 servers per daily-hour (an hour spent each working day).

That means one theoretical job position, understood as a full 8 hours worked each working day (i.e. a full 168 hours per month), can support approx. 1,600 servers across the entire process of maintaining a high level of security (including daily log review, rolling out software updates, user and permission management, and many other everyday activities). Of course, the real, effective workday of a typical IT employee is closer to five hours than eight; nevertheless, the theoretical eight hours is the basis for all comparisons. If you already use server management services, ask your current provider how many servers each employee is able to support without loss of quality…
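To make the capacity model concrete, here is the arithmetic as a small TypeScript sketch (the figures come from the text above; the helper function is hypothetical):

```typescript
// Figures quoted in the text: ~200 servers per daily-hour,
// and a theoretical full-time position of 8 such hours.
const SERVERS_PER_DAILY_HOUR = 200;
const HOURS_PER_FTE = 8;

const serversPerFte = SERVERS_PER_DAILY_HOUR * HOURS_PER_FTE; // 1600

// Hypothetical helper: staff needed for a given fleet size.
function staffNeeded(serverCount: number): number {
  return Math.ceil(serverCount / serversPerFte);
}

console.log(serversPerFte);     // 1600 servers per theoretical FTE
console.log(staffNeeded(5000)); // 4 engineers for a 5,000-server fleet
```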


Security isn’t everything

But, of course, security isn’t everything. After all, nobody pays for maintaining servers just so they are secure as such, but to make money on them. And money is earned at the application level, which is by nature quite different for each client. These differences make building synergy between individual clients’ activities so hard that in practice there’s no sense in doing it, since easy hiring is more important than micro-synergies. That’s why at Espeo we use Ansible at this level: it’s compatible with Server Farmer and, at the same time, widely known, which ensures an inflow of employees.

Of course, for such a dual-stack solution to work properly, it’s necessary to set clear limits of responsibility, so that specific application solutions can be built on the basis of these limits, as well as on SLA levels or other parameters purchased by individual customers, and so that the actions of individual employees don’t overlap. Only then is it possible to build effective capacity management processes (modeled after ITIL), providing each customer with high-quality services in a repeatable and predictable manner.


In part two we’ll describe, in more technical terms, what particular solutions we use, in what architecture, how these solutions map onto business processes, and what processes we use for PCI DSS clients.



How to Choose a Mobile Payment Provider

In today’s world, most people carry their cell phones wherever they go. No longer just phones, mobiles are multi-functional tools. Mobile devices are increasingly replacing the computer for internet browsing and gaming, and they’re replacing the photo camera, since there’s no need to carry one when a cell phone is always within reach. Now the cell phone is also trying to replace people’s wallets.

More and more users expect apps to let them pay for goods and services directly with a cell phone. It’s in users’ nature to seek the easiest way to buy something: one click to make a purchase in an app is enough. Statistics confirm that the less a user has to enter while purchasing, the higher the conversion rate. So, if users demand an easy way to pay, you should make it possible. In this article, I’ll focus on mobile payment providers.


What is Mobile Payment?

So, let’s discuss what a mobile payment is and how to make one. A mobile payment is any payment made with a mobile phone. OK, that sounds simple enough, but how do we achieve it? There are several ways to make a mobile payment:

  • NFC (Near-field Communication) — a contactless method. Every shop needs a special terminal to make it possible.
  • Mobile wallet — where the phone stores credit card information, replacing your credit card and making your payments much easier.
  • Carrier billing (Premium SMS messaging / OTP) — an operator-centric model. The operator bills you for the services.
  • Direct communication between a mobile phone operator and a bank payee.
  • Credit card — this is common but means we have to carry our card with us at all times.

What is a mobile payment provider?

A mobile payment provider simply provides payment services via a mobile phone under financial regulations. Providers secure the payment process and try to make it as easy to use as possible. Big IT companies such as Google and Apple offer such solutions, but financial companies and mobile phone operators also propose their own models. At first glance, it may seem that the big competitors focus on different market sectors, enabling integration only where it benefits the user.



No more gateway processors?

So, you may think: “Right, if my mobile payment provider handles all this stuff, then maybe it’s all I need.” But that’s not quite so. The payment provider (with some help from the card networks) handles card tokenization: it generates a unique ID that represents your card data without exposing it. The device stores this token so you can initiate payments without handing over card data. However, the payment provider does nothing to authorize and process your transaction.

This is where payment processors come in, acting as a payment gateway. Based on the information received, the payment processor can carry out the transaction. Transaction processing is secure, so you don’t have to worry about PCI compliance, which is handled by your gateway processor. Remember to choose a payment processor available in your country. And while many mobile payment providers charge no fee, payment processors have their own fees for processing transactions, so be aware of that.
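For illustration, here is a minimal server-side sketch in TypeScript with Stripe as the gateway processor (one possible choice; Braintree and others work similarly). It assumes the mobile wallet on the device has already produced a Stripe payment method ID; the handler name and values below are hypothetical:

```typescript
import Stripe from "stripe";

// The secret key stays on the server; the client only ever sees tokens.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Hypothetical handler: `paymentMethodId` is the tokenized card that the
// mobile wallet returned on the device (e.g. Google Pay routed via Stripe).
async function chargeCustomer(paymentMethodId: string, amountCents: number) {
  const intent = await stripe.paymentIntents.create({
    amount: amountCents,            // e.g. 1099 = $10.99
    currency: "usd",
    payment_method: paymentMethodId,
    payment_method_types: ["card"], // card-based wallets only, no redirects
    confirm: true,                  // authorize and capture in one call
  });
  return intent.status;             // "succeeded" when the charge clears
}

chargeCustomer("pm_example_from_wallet", 1099)
  .then((status) => console.log("Payment status:", status))
  .catch((err) => console.error("Payment failed:", err.message));
```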

Let’s take a look at what kind of mobile payment providers exist on the market:

  • GPay – This Google product acts as a mobile wallet and simply stores your card information. You can also upload different kinds of loyalty card information. Later on, users can pay in applications by pressing the “Buy with GPay” button, or in shops that support GPay through NFC. As a user, you can also exchange money between two mobile phones. Google’s service is free of charge for transactions made with GPay, and users have to unlock their phones to pay. GPay supports the following gateway processors: ACI, Adyen, Braintree, Checkout.com, Cybersource, CloudPayments, Datatrans, EBANX, First Data, Global Payments, IMSolutions, Paysafe, Payture, Przelewy24, RBK.money, Sberbank, Stripe, Vantiv, Worldpay, Yandex.Checkout.
  • Apple Pay – Apple’s version acts as a mobile wallet, storing your card information for use on any Apple device. Each transaction made with Apple Pay must be authorized with Face ID, Touch ID or a passcode. You can use it in stores that have equipment supporting it, and you can also send or receive money, just like in GPay.
  • Samsung Pay – Not to be outdone, Samsung’s mobile payment app works similarly, but it runs only on Samsung devices, making it more limited than GPay or Apple Pay. Samsung provides its own reward program for purchases. The service is free and uses something called Magnetic Secure Transmission (MST), which emulates the swipe of a traditional magnetic strip. This lets users pay at most typical payment terminals, not just the new ones that include NFC.


  • Microsoft Wallet – This is a wallet for Microsoft devices and Visa cards. ‘Tap to pay’ is currently available only in the US. You can store your loyalty cards by scanning their QR codes.
  • Visa Checkout – If you register your card through your account on Visa’s website, you can pay via your web browser simply by providing a username and a password. It works on any device with internet access. Visa Checkout accounts can be connected to other web wallets such as GPay or Samsung Pay.
  • PayPal – The electronic payment giant is also present on mobile phones and mobile devices; we can choose PayPal in GPay to pay for services. PayPal also provides mobile integration through an SDK; its mobile-focused service is called Braintree. But remember that PayPal is also a payment processor.

How to choose the right mobile payment provider for in-app purchases?

So, you may ask which payment provider you should choose while developing a mobile app. This is quite a simple question: when developing for iOS you should use Apple Pay, and when focusing on Android you should go for GPay. Samsung Pay can also be an option, but its support is limited to Samsung devices, so be careful.

Take your time to browse your payment processor’s SDKs and integration guides (Stripe, Braintree). Some of them have ready-made solutions (tips, recommendations) that you can integrate alongside the mobile payment provider’s solution as a payment method, while still supporting regular credit card payments. Offering alternatives beyond the mobile payment provider helps increase the conversion rate, since some users may not wish to pay with GPay. You should always monitor users’ behaviour on the payment screen before conducting a transaction, because if the payment method a client chooses is unsupported, the client won’t buy your products.

The Future Begins Now!

Remember that payments are not an easy topic: information about payment transactions is sensitive data. Users want to make a purchase with a single button, but they don’t want to give up on security. That’s why mobile payment services will only gain popularity over time. Payment integration is something we can help with. If you’re planning to develop a new mobile or web application, or add payments to an existing one, don’t hesitate to contact us at Espeo.
 



App Security: Why is it worth it to implement JWT based authentication in your app?

App speed and security are hugely important. Our main goal, aligned with our customers’ goals, is to deliver a satisfactory product as quickly as possible and within budget. There are many practices, recommendations, and even formal proposals for how to run projects, but implementation details are extremely important: small changes are what allow us to write competitive applications.

When creating a web service, one of the most important decisions is choosing the right authorization method. There are several choices: OpenID, SAML, Kerberos, and OAuth2, the most popular open standard authorization framework. In OAuth2, the client sends an authorization request with credentials to the authorization server, which issues a random token. This token is then sent to the resource server to verify whether the client is authorized to use the resource and perform specific operations.

The framework is supported by Microsoft (for several APIs and the Azure Active Directory service), Facebook (for the Graph API, the primary way for apps to read and write to the Facebook social graph) and Google (as the authorization mechanism for all of their APIs). The fact that OAuth2 is used by such important companies in the IT market vouches for the quality and benefits of this method. In most cases OAuth2 requires a username and token, and it additionally specifies the way in which tokens are to be sent… and this is where JWT (JSON Web Token) comes forward. JWT is not a protocol or an authorization framework; it defines the format of the authorization token, which, as it turns out, can have a measurable impact on the functionality of the entire mechanism.

Why is JWT gaining so many supporters?

JWT became an open standard in 2015, and in the same year an RFC was also created for the JSON Web Token Profile for OAuth 2.0 Client Authentication and Authorization Grants, describing the use of the OAuth2 protocol with JWT-formatted tokens. It has gained many fans because of its simplicity and ease of use. As the name suggests, the token format is based on JavaScript Object Notation (JSON), a very common data format used for communication between the browser and the server. This gives us a concise and clear way of transferring information between the two parties in a JSON object.

Further, thanks to the huge number of parsers available across programming languages, we can directly convert a received token into an object. It’s also worth comparing JWT to other popular formats such as SAML, SWT, or even a plain string UUID. Structurally it’s more concise: we don’t need to parse a huge token presented as XML, as in the case of SAML (obviously this problem concerns the front-end side). The biggest advantage, however, is the ability to carry a large amount of additional information in one token, in so-called claims. Support in the form of a huge number of token-signing and token-verification libraries for virtually any programming language makes JWT usable on a large scale while maintaining an adequate level of security.

What can we pass?

A big advantage of the JWT format is the ability to transfer claims in the token. Claims are statements about an entity (typically, the user) and additional data. This additional token content allows you to limit the number of database queries: only basic information about the user is collected, at the moment of logging in. The format also works well in the context of microservices, since data can be transferred between them without asking a central session microservice. If the signature is correct, the received data is also reliable and can be trusted.
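As a sketch of issuing such a token (assuming Node.js and the widely used jsonwebtoken package; the claim names and values are illustrative):

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET!; // shared secret for HMAC-SHA256

// Claims are gathered once, at login, so later requests need no extra
// database queries; the names below are illustrative.
const claims = { id: 42, username: "jsmith", roles: ["USER"] };

const token = jwt.sign(claims, SECRET, {
  algorithm: "HS256",
  expiresIn: "15m", // becomes the registered `exp` claim
  jwtid: "3f2a9c",  // becomes the registered `jti` claim
});

console.log(token); // header.payload.signature, each part Base64url-encoded
```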


Let’s analyze the content of the individual elements of a JWT token, in particular the Payload, which carries our additional information:

  • Header – describes the algorithm used to generate the token (the standard defines algorithms such as HMAC SHA256 and RSA).
  • Payload – the content looks like a typical JSON response for a specific resource in REST services. Creating claims can be compared to preparing a DTO (Data Transfer Object): the set of information to be returned within the token is defined in the server code.
  • Signature – the digital signature of our token, Base64url-encoded and calculated from the Header and Payload combined with the secret key; in our case the hash function is HMAC-SHA256.
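A representative decoded Payload carrying such claims might look like this (a hypothetical example with illustrative values):

```typescript
// A hypothetical decoded Payload; `exp` and `jti` are registered
// claims, the remaining fields are custom application claims.
const payload = {
  id: 42,            // user id in our system
  username: "jsmith",
  roles: ["USER"],   // user's roles
  hobby: "sailing",  // custom claims: hobby, residence, currency
  country: "PL",
  currency: "EUR",
  exp: 1735689600,   // expiration time (Unix timestamp)
  jti: "3f2a9c",     // unique token id, prevents token reuse
};
```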

As you can see in the example above, information about the user’s hobby, country of residence, and the currency in which he performs his transactions circulates in the token alongside the system’s standard user information: id, username, and roles. It’s no coincidence that I pointed out the types of claims… it’s worth stressing that you have to choose your own claim names carefully, because they may already be registered at iana.org and cause name conflicts. In the example, we’re using two reserved claims: exp (token expiration time) and jti (a token id that prevents token reuse).

Security

Security is often associated with hiding all data. The JWT standard does not encrypt data; it only encodes it in Base64url. The creators’ assumption is to be sure that the data sent to the authorization server was created by a reliable source, thus preventing unauthorized access. This is why it’s also worth encrypting the connection with TLS/SSL to prevent theft of the token. The most important thing is that we are sure of the sender’s identity and that the data received hasn’t been tampered with. Verification of the token is quite simple.


Servers hold secret keys and use them to generate JWT tokens; this is of great importance, because it’s what lets them verify a token’s correctness. If the credentials match, a JWT token is generated at the client’s request. Because the application server knows the secret key, it can recompute the signature for a JWT token received from the user and compare it with the signature inside the token. If everything matches, the user is authenticated and can access the data; otherwise it may indicate some kind of attack. Thanks to this, access to resources on the server is safer, particularly because authentication with user credentials is a one-time process.
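A minimal verification sketch, using the same assumed jsonwebtoken package as above:

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET!; // same secret used for signing

// Recompute the signature with the server's secret and compare it with
// the one inside the token; expired or tampered tokens throw an error.
function authenticate(token: string): object | string | null {
  try {
    return jwt.verify(token, SECRET, { algorithms: ["HS256"] });
  } catch {
    return null; // bad signature, expired `exp`, etc.: treat as unauthenticated
  }
}
```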

Microservices

It would be a sin not to mention microservices, especially since JWT does so well within this architecture. At this point, it’s worth distinguishing between two approaches: writing monolithic applications, and writing applications based on microservices that are independent of each other.

Protection for monolithic applications is quite simple. An application server manages user authorization and authentication up front, and we know that all the services we query run on one and the same server, so we don’t need to worry about each service authenticating the user separately. It is fully centralized.

Microservices are different. The main problems are the communication between them and the question: how can we authenticate the user and be sure that the other microservices know about it? As it turns out, JWT lets us pass the information that a given user has access to specific resources between all the microservices.

We use the OAuth2 client credentials grant, which allows clients to obtain access tokens by providing their client id and secret. As a result, each microservice receives its own client identity and credentials. This data is then sent along with the access token request to the authorization server.
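A sketch of that exchange (the token endpoint URL and service identity are hypothetical; the parameters follow the standard OAuth2 client credentials grant):

```typescript
// Each microservice exchanges its own client id and secret for an access
// token; the endpoint and service name below are hypothetical.
async function fetchAccessToken(): Promise<string> {
  const res = await fetch("https://auth.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "orders-service",
      client_secret: process.env.CLIENT_SECRET!,
    }),
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const data = await res.json();
  return data.access_token; // typically a JWT carrying this service's scopes
}
```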

The advantage of this solution is that you can withdraw a microservice’s access at any time, for example if we find that its credentials have been compromised. In addition, the scopes, roles, and credentials of microservices remain completely under control. Thanks to JWT claims (more precisely, the permissions contained in the token) we know exactly which resources the user has access to, which makes JWT a perfect fit for this architecture.


Summary

The JSON Web Token is an increasingly popular format for representing tokens; it is slowly becoming the standard token format, and the number of users grows every day. Thanks to its compact size, light weight, and self-containedness, it offers great customization possibilities. A token containing all the information needed for verification, without continuous database queries, is amazingly useful. Of course, we could implement a custom solution based on a traditional user session, but that’s unnecessary when such a popular solution is in place. Thanks to JWT, we can easily identify the user across particular sites in a way the classic session model couldn’t match.
