
Staff-efficient Large Scale Server Management using Open Source Tools (Part 1)

Server management on behalf of the client is a fairly common service today, complementing the portfolio of many software houses and outsourcing companies. There are still, however, two types of companies on the market: those that provide high-quality services (with real competence) at an attractive price (achieved through synergy, not through crude cost-cutting), and… others. If you’re a user of such services, have you ever wondered which type of supplier you currently use?

Question of quality

All of us are, and have been, customers many times, and we have a better or worse idea of what high quality is. Often we simply identify it with satisfaction with the service. The problem arises with more advanced services, or services we have been using only for a short time, or even for the first time, when we don’t know what we should really expect from a professional service and what is merely solid mediocrity.

Let’s think about the criteria of professionalism in server management services, in 2018, not in 2008 or 1998. The first answer that comes to mind is, of course, “customer satisfaction.” The thing is, satisfaction is subjective, relative, for example, to support reaction times or other variable parameters derived from the purchased hosting plan, or even to personal impressions from conversations with a support engineer (people either get along or they don’t).

In 2018, another, completely objective parameter is absolutely critical: security. In fact, this is precisely why server management is entrusted to specialists rather than, say, a full-time programmer who, after all, also knows how to install Linux: so that our clients’ data stays safe.


How to provide high-quality services

The question arises, however, on the supplier’s side: how do you provide high-quality services (and therefore with maximum emphasis on security) at competitive market prices, while paying relatively high wages to employees (in Poland today we have an employee’s market, especially in IT, which pushes rates up), and still make money on these services?

The answer is very simple and complicated at the same time. It’s simply choosing the right tools: ones that fit your real business model. This seemingly simple answer becomes complicated, however, when we begin to delve into the details.

The key to the correct selection of tools is understanding your own business model at the operational level, i.e. not at the level of contracts and money flow, or even at the level of marketing and sales strategy, but at the level of actual work hours and possible synergies between analogous activities for different clients. Server management doesn’t have the character of a production line on which all activities are fully reproducible; the trick is to find certain patterns and dependencies on the basis of which one can build synergy and choose the right tools.

Instead, unfortunately, most companies that provide good-quality server administration services go to one of two extremes that prevent them from building this synergy, which leads to high costs. As a result, these services aren’t perceived by their boards as having growth potential, and in time they become only a smaller or larger addition to the companies’ development services. In the long run, this leads to good-quality services pushing themselves out of the market in favor of services of dubious quality. But let’s go back to the extremes mentioned. It can go in one of the following directions:

  1. Proprietary, usually closed (or sometimes “shared source”) software for managing services. At the beginning it meets the company’s needs perfectly; over time, however, those needs change, because technology changes very quickly and the company itself evolves. As a result, after 3-4 years the company is left with a system that, firstly, isn’t attractive to potential employees (because the experience gained in such a company isn’t transferable to any other company) and, secondly, requires constant and growing expenditure on maintenance and “small development.”
  2. Widely-known software, often used and liked, or at least recognized by many IT people, only… it fits someone’s idea of their business model instead of the real one. Why? The reason is very simple: most popular tools are written either for large companies managing homogeneous IT infrastructure (meaning that many servers are used for common purposes, have common users, etc.), or for hosting companies (serving different clients but offering strictly defined services).

Open source tools

Interestingly, as of 2018, there are still no widely known open-source tools for heterogeneous infrastructure management, involving support for different owners, different configurations, installed services and applications, and, above all, completely different business goals and performance indicators. Presumably this is because the authors of such tools have no interest in publishing them as open source and decreasing their potential profit. All globally used tools (e.g. Puppet, Chef, Ansible, Salt and others) are designed to manage homogeneous infrastructure. Of course, you can run a separate instance of one of these tools for each client, but that won’t scale to many clients and won’t build any synergy or competitive advantage.

At this point, it’s worth mentioning how we dealt with this at Espeo. Espeo Software provides software development services to over 100 clients from around the world. For several dozen clients, these services are supplemented by management of both production and dev/test servers, and overall DevOps support. It’s a very specific business model, completely different from, e.g., a web hosting company or a company that manages servers for one big client — at least at the operational level.

Therefore, one should ask what the key factors for building synergy are in such a business model — and, above all, what this synergy should be built on, so that it isn’t synergy at the expense of professionalism. In the case of Espeo, we decided on a dual-stack model in which, in simplified terms, server support is divided into infrastructure and application levels. This division is conceptual rather than rigid, however, since the two levels overlap in many respects.

This division, however, provides the basis for building synergies at the infrastructure level, where, unlike at the application level, very different clients share the same core need: security. At the infrastructure level, we use the open-source micro-framework Server Farmer, which is actually a collection of over 80 separate solutions, closely related to the security of the Linux system and to various aspects of managing heterogeneous infrastructure based on it.

Server Farmer’s physical architecture is very similar to that of the Ansible framework, which we use at the application level. Thanks to the similar architecture of both tools, it’s possible, for example, to use the same network architecture on both levels and for all clients. Most of all, however, we’re able to build huge synergy in the area of security, by moving from the management of separate contracts (which, in the era of tens of thousands of machines scanning the whole Internet for vulnerabilities and automatically infecting computers and servers, is simply a weak solution) to a production-line model that ensures the right level of security for all clients and all servers.


Building synergy

An example can be taken from the process of regularly updating the system software on servers, which in 2018 is an absolutely necessary process if we want to talk about any reasonable level of security at all. Modern Linux distributions have automatic update mechanisms; however, they only update those software elements that won’t cause disruptions in the operation of services. Everything else should be updated manually (i.e. through appropriate tools, but under a person’s supervision).
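
As a rough illustration, here is a minimal sketch of the “supervised” half of that process: a script that asks each host what it would upgrade, so a human can review the list before anything is applied. It assumes Debian/Ubuntu hosts reachable over SSH with key-based authentication; the hostnames are made up.

```python
import subprocess

# Hypothetical host list; in practice this would come from an inventory.
HOSTS = ["web1.client-a.example", "db1.client-b.example"]

for host in HOSTS:
    # "-s" makes apt-get simulate the upgrade: nothing is actually changed.
    result = subprocess.run(
        ["ssh", host, "apt-get -s dist-upgrade"],
        capture_output=True, text=True, check=True,
    )
    # Simulated upgrade lines start with "Inst" (package that would be installed).
    pending = [l for l in result.stdout.splitlines() if l.startswith("Inst")]
    print(f"{host}: {len(pending)} packages pending manual review")
    for line in pending:
        print("  " + line)
```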

And here is a problem repeated in many companies: using tools that employees know and like, even if those tools don’t fit the company’s business model and don’t speed up this simple operation in any way.

Let’s imagine a company that supports IT infrastructure for, e.g., 50 clients, using 50 separate installations of Puppet, Chef or Ansible or, even worse, a combination of these tools. As a result, we manage the same group of administrators 50 times over, plan the system architecture 50 times, analyze logs in 50 places, etc. This is, of course, feasible and in itself doesn’t lower the security level. However, in such a model it’s impossible to use employees’ time effectively, because with 50 separate installations most of this time is consumed by simple, repetitive, easy-to-automate activities, and by configuring the same elements, just in different places. It follows that any business conducted this way isn’t scalable and leads to gradual self-marginalization.

This mistake, however, isn’t due to poor awareness or bad intentions on the part of these companies. It’s simply that open-source tools of appropriate quality for managing heterogeneous infrastructure are relatively specialized software, and as a result knowledge of such software among potential employees is quite rare. What’s more, many companies decide to create dedicated instances of Puppet or Ansible for each client, because their undeniable advantage from the employee’s perspective is the transferability of experience between successive employers – even if, for the employer, it means the process doesn’t scale.

From the point of view of building synergy, and as a result a durable business advantage, however, selecting tools only to satisfy current, short-term HR needs is a weak idea. A much better approach is to strike a compromise between “employability” and the scalability of employees’ work. This is why, in our dual-stack approach with Server Farmer, each employee responsible for infrastructure-level management can manage approx. 200 servers per daily hour (an hour spent each working day).

That means one theoretical job position, understood as a full 8 hours worked each working day (i.e. a full 168 hours per month), can support approx. 1,600 servers across the entire process of maintaining a high level of security (including daily review of logs, applying software updates, user and permission management, and many other everyday activities). Of course, the real, effective workday of a typical IT employee is closer to five than eight hours; nevertheless, the theoretical eight hours is the basis for all comparisons. If you already use server management services, ask your current provider how many servers each employee is able to support without loss of quality…


Security isn’t everything

But, of course, security isn’t everything. After all, nobody pays for maintaining servers just so they are secure as such, but to make money on them. And money is earned at the application level, which is by nature quite different for each client. These differences make building synergies between individual clients’ activities so hard that, in practice, there is no sense in doing it: easy hiring is more important than micro-synergies. That’s why at Espeo we use Ansible at this level: it’s compatible with Server Farmer and, at the same time, widely known, which ensures an inflow of employees.

Of course, for such a dual-stack solution to work properly, it’s necessary to set clear limits of responsibility, so that, on the basis of these limits, as well as the SLA levels or other parameters purchased by individual customers, it’s possible to build specific application solutions without the actions of individual employees overlapping. Only then will it be possible to build effective capacity management processes (modeled after ITIL), providing each customer with high-quality services in a repeatable and predictable manner.


In part two, we’ll describe in more technical detail what particular solutions we use, in what architecture, how these solutions map to business processes, and what processes we use for PCI DSS clients.



Blockchain oracles: Can blockchain talk to the world?

In this article, I’ll take you on a journey to find out what blockchain oracles are and what they’re for, and to see the technical aspects behind them. We’ll go through the base oracle flow, external data sources, authenticity proofs (proof that a party doesn’t tamper with data), as well as oracle verification. Fasten your seatbelts and let’s begin!

Update 23/1/2019: While we in the Espeo Blockchain development team have used some of the blockchain oracles already available on the market, we weren’t completely satisfied. So we decided to build a solution ourselves and to share it with you. We’ve just launched a free, open-source oracle called Gardener. Read more about the project here, or find it on GitHub. Enjoy!

What’s the issue with blockchain oracles?

Blockchains function in a closed, trustless environment and can’t get any information from outside the blockchain, for security reasons (so-called sandboxing). You can treat everything within the node network as a single source of truth, secured by the consensus protocol. Following the consensus, all nodes in the network agree to accept only one version of their managed state of the world. Think of it like blinders on a horse — useful, but not much perspective.

However, sometimes the information available in the network isn’t enough. Let’s say I need to know the price of gold in a blockchain-based derivatives trading app. Using only data from inside the blockchain, we have no way of knowing that. Because the smart contract lives in a sandboxed environment, it has no way to retrieve that data by itself; the only viable alternative is to request the data and wait for some external party we trust to send it back. That’s where blockchain oracles come in.

Two components

There are two components that any sensibly working blockchain oracle has to incorporate. One is a data source handling component that retrieves requested data from reliable data feeds. These data sources can be data stores or databases, APIs of various types, or even internal data stored in some ERP or CRM enterprise system. The second component is the off-chain monitoring mechanism. It watches for requests from the smart contract and retrieves the required data via the data source handling component. Then it feeds the data back to the smart contract, using some unique identification data to indicate which request the submitted data relates to.
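
To make the first component concrete, here is a minimal sketch of a data source handler in Python, assuming a hypothetical JSON price API (the URL and response shape are made up):

```python
import json
import urllib.request

def fetch_price(url: str = "https://api.example.com/v1/gold/usd") -> int:
    """Fetch the current gold price from a (hypothetical) JSON feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    # Smart contracts don't handle floats, so feeds usually return
    # fixed-point integers, e.g. the price in cents.
    return int(payload["price_usd"] * 100)
```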

When do we need oracles?

I’ve already discussed how data can be provided back to the requesting smart contract. We also need to consider the timing: when the data should be acquired from its source, and at what moment it should be sent back to the smart contract. Let’s first consider in what situations a smart contract may need access to blockchain oracles. There are endless cases and even more solutions, so let’s explore a handful:

  • Derivative platforms that need pricing feeds for the underlying assets, such as one we worked on called CloseCross,
  • Prediction markets, where the ultimate result of an event has to be established reliably,
  • Solutions which need provable randomness (Ethereum is a deterministic platform),
  • Information from other blockchains,
  • Heavy computations which don’t fit within block gas limits (or are extremely expensive even if they fit),
  • Complex mathematical equations (using e.g. WolframAlpha),
  • Retrieval of some data from IPFS or other data storage.

Implementing the concept

I already wrote what sort of data you can retrieve, and how and for what reasons. Now, let’s dive into more details on how you can implement the whole oracle concept in more practical terms. Because part of the blockchain oracles is an off-chain mechanism, it could be developed using any modern programming language.

Of course, it has to be a language with frameworks that allow it to communicate with the blockchain. The off-chain mechanism constantly searches (listens) for events emitted by the relevant smart contract and checks whether they’re requests for external data. If so, it fetches that data from some data source and processes it according to the specified rules. One option is to use an external data provider we trust: one that, due to external factors (such as agreements), we know would never cheat.
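
Here is a minimal sketch of such a listener, assuming web3.py (v6 naming), a local node, and a hypothetical oracle contract that emits a DataRequested(bytes32 requestId, string url) event and exposes a fulfill(bytes32, uint256) function; the address and ABI are placeholders:

```python
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

ORACLE_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ORACLE_ABI = [...]  # ABI of the hypothetical oracle contract

oracle = w3.eth.contract(address=ORACLE_ADDRESS, abi=ORACLE_ABI)

def listen() -> None:
    # Watch for new DataRequested events from the latest block onward.
    event_filter = oracle.events.DataRequested.create_filter(from_block="latest")
    while True:
        for request in event_filter.get_new_entries():
            req_id = request["args"]["requestId"]
            url = request["args"]["url"]
            value = fetch_price(url)  # the data source handler sketched earlier
            # Send the answer back on-chain, tagged with the request id.
            tx_hash = oracle.functions.fulfill(req_id, value).transact(
                {"from": w3.eth.accounts[0]}  # assumes an unlocked node account
            )
            w3.eth.wait_for_transaction_receipt(tx_hash)
        time.sleep(2)  # simple polling interval
```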

On the other hand, if we use a data provider that can’t be trusted, or even force smart contract clients to use our own internal data, we can cause a lot of disruption in the client’s operations: the data finally provided may be unreliable, as we can put literally any data we want there. We could cheat to force the contract to behave according to our expectations, even though the truth about the external world is different. To sum up, choosing the right data provider is half the battle.

An improvement could be to use a few separate sources of data, but then a problem with data accuracy becomes apparent. For example, when we want to get EUR/USD exchange rates from a few different agencies or exchanges, it’s almost guaranteed that the values will differ slightly between them. What on the surface appears to be the simple task of providing data back to a smart contract turns out, in general, to be quite a hard problem to solve correctly and reliably.
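
One common mitigation, sketched below under the same assumptions as the handler above (hypothetical feed URLs), is to query several independent feeds and take the median, so a single outlier or dishonest source can't move the reported value:

```python
from statistics import median

# Hypothetical, independent EUR/USD feeds.
FEEDS = [
    "https://api.feed-one.example/eurusd",
    "https://api.feed-two.example/eurusd",
    "https://api.feed-three.example/eurusd",
]

def aggregated_rate() -> int:
    # Reuses fetch_price() from the earlier sketch; quotes are fixed-point ints.
    quotes = [fetch_price(url) for url in FEEDS]
    # The median ignores a single wildly wrong (or malicious) quote.
    return round(median(quotes))
```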

Proofs

Once we have our data inside the oracle software, it would be good to prove that we didn’t manipulate it. The most basic uses don’t include any proof. Our users have to believe that we just pass on what we get. But there are stronger proofs. Oraclize.it, the leader in Ethereum blockchain oracles at the moment, uses the TLSNotary proof, Android proof and Ledger proof. Let’s briefly check how they differ.

TLSNotary proof

The first one – the TLSNotary proof – leverages features of the TLS protocol to split the TLS master key between three parties: the server, an auditee and an auditor. Oraclize is the auditee, while a special locked-down AWS instance acts as the auditor. The TLSNotary protocol is open source and you can read more about it on the project site: https://tlsnotary.org/.

In the protocol diagram, the standard TLS steps are extended with the steps TLSNotary adds. Essentially, what this achieves is that the auditor can get the original data and check that it hasn’t been tampered with. However, he doesn’t know the auditee’s credentials, so he can’t perform any action on the auditee’s behalf.

Android proof

The next one is the Android proof. It uses two technologies developed by Google: SafetyNet and Android Hardware Attestation. SafetyNet validates that the Android application is running on a safe, non-rooted physical device. It also checks that the application’s code hash hasn’t been tampered with. Because the application is open source, it can be easily audited, and any change to it would change the code hash.

On the other hand, Android Hardware Attestation checks if the device is running on the latest OS version to prevent any potential exploits. Both technologies together ensure that the device is a provably secure environment where we can make untampered HTTPS connections with a remote data source.

Ledger proof

The last one from Oraclize is the Ledger proof. It uses hardware wallets from the French company Ledger (mainly known for the Ledger Nano S and Ledger Blue). These devices contain an STMicroelectronics secure element, a controller, and an operating system called BOLOS. Via the BOLOS SDK, developers can write their own applications and install them on the hardware just like cryptocurrency wallets. BOLOS exposes a kernel-level API, and some of its operations concern cryptography and attestation.

The latter is especially useful here. Via the API, we can ask the kernel to produce a signed hash of the application binary. It is signed with a special attestation key, which is controlled by the kernel and out of reach of application developers. Thanks to this, we can perform code attestation as well as device attestation. Currently, the Ledger proof is used to provide untampered entropy to smart contracts.

TownCrier

Another solution – TownCrier – relies on Intel SGX, a capability of newer Intel CPUs. SGX stands for Software Guard Extensions, a set of architecture extensions designed to increase the security of application code and data. This is achieved by introducing enclaves: protected areas of execution in memory, where code is executed using special instructions and other processes or memory areas have no access to it.

Source: http://www.town-crier.org/get-started.html

The image above shows how it works. A user contract calls the TC contract, which emits an event caught by the TC server. The TC server then connects to the data store over TLS and feeds the data back to the contract. Because all of this happens inside the TC server’s enclave, even the operator of the server can’t peek into the enclave or modify its behaviour, while TLS prevents tampering or eavesdropping on the communication.

A word of caution

Keep in mind, however, that even though each of these solutions provides a way to prove data integrity, none offers a verifiable on-chain method. You can either trust a big company (like Intel) or perform a separate verification off-chain, but even then we notice tampering only after the first successful occurrence.

The last thing I haven’t mentioned yet is how the oracle contract verifies whom to accept responses from. The solution is rather simple (at least in most cases). Every account in Ethereum, as well as each off-chain server, has a private/public key pair that identifies it uniquely (as long as nobody steals the private key from the server). The oracle contract simply accepts only responses signed by, or sent from, a known oracle address.
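
As a rough sketch of the off-chain half, here is how an oracle server could sign its responses and how anyone could check which address they came from, using the eth_account library (the key, message format and helper names are assumptions; on-chain, the equivalent check would be done with ecrecover against a whitelisted address):

```python
from eth_account import Account
from eth_account.messages import encode_defunct

ORACLE_KEY = "0x" + "11" * 32  # placeholder private key, never hardcode a real one

def sign_response(request_id: str, value: int) -> bytes:
    # The oracle signs "requestId:value" so the pair can't be swapped or reused.
    msg = encode_defunct(text=f"{request_id}:{value}")
    return Account.sign_message(msg, private_key=ORACLE_KEY).signature

def is_from_oracle(request_id: str, value: int, sig: bytes, oracle_addr: str) -> bool:
    # Recover the signer's address and compare it to the known oracle address.
    msg = encode_defunct(text=f"{request_id}:{value}")
    return Account.recover_message(msg, signature=sig) == oracle_addr
```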

Conclusion

To sum up, that’s how the business and technical aspects of oracle construction work. I started with business needs and use cases, then moved on to a description of an oracle and examined exactly how it works. I talked about data sources, authenticity proofs, and server identification. All this knowledge should give you a general overview of blockchain oracles.

If you’d like to use one of the current solutions or feel that none of them meet your expectations for blockchain oracles, you can ask us for help 🙂 And by the way, here’s my previous article on Ethereum-related issues (gas costs).

Links:

http://docs.oraclize.it/#security-deep-dive
https://ethereum.stackexchange.com/questions/201/how-does-oraclize-handle-the-tlsnotary-secret
https://tlsnotary.org/
http://www.town-crier.org/


How to Choose a Mobile Payment Provider

In today’s world, most people carry their cell phone wherever they go. No longer just a phone, the mobile is a multi-functional tool. Mobile devices are increasingly replacing the computer for internet browsing and game playing. They are also replacing the photo camera, because there is no need to carry one when we always have our cell phones within reach. And now the cell phone is also trying to replace people’s wallets.

More and more users expect apps to let them pay for goods and services directly with a cell phone. It’s in users’ nature to seek the easiest way to buy something — one click to make a purchase in an app is enough. Statistics confirm that the less a user has to enter while purchasing, the higher the conversion rate. So, if users demand an easy way to pay, you should make it possible. In this article, I will focus on mobile payment providers.


What is Mobile Payment?

So, let’s discuss what a mobile payment is and how to make one. A mobile payment is any payment made with a mobile phone. Ok, that sounds simple enough, but how do we achieve it? There are several ways to make a mobile payment:

  • NFC (Near-field Communication) — a contactless method. Every shop needs a special terminal to support it.
  • Mobile wallet — the phone stores credit card information, replacing your credit card and making your payments much easier.
  • Carrier billing (premium SMS messaging / OTP) — an operator-centric model. The operator bills you for the services.
  • Direct communication between a mobile phone operator and a bank payee.
  • Credit card — this is common but means we have to carry our card with us at all times.

What is a Mobile Payment Provider?

A mobile payment provider simply provides payment services using a mobile phone, under financial regulations. Providers secure the payment process and try to make it as easy as possible to use. Big IT companies such as Google and Apple provide such solutions, but financial companies and mobile phone operators also offer models of their own. Although at first glance they may all seem like competitors, the big players usually focus on different market sectors, enabling integration only when it benefits the user.


No more gateway processors?

So, you may think: “Right, if my mobile payment provider handles all this stuff, then maybe it’s all I need.” But that’s not quite so. The payment provider (with some help from the card networks) handles card tokenization. It basically generates a unique ID which represents your card data without exposing it. The device stores this token so you can make payments without handing over card data. However, the payment provider does nothing to authorize and process your transaction.
 
This is where payment processors come in, acting as a payment gateway. Based on the received information, the payment processor can carry out the transaction. Transaction processing is secure, so you don’t have to worry about PCI compliance, which is handled by your gateway processor. Remember to choose a payment processor available in your country. And while many mobile payment providers charge no fees, payment processors have their own fees for processing transactions, so be aware of that.
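
To make the division of labor concrete, here is a hedged sketch of the processor side, assuming Stripe's Python SDK: the mobile payment provider has already tokenized the card, and your server only ever sees the token, never the raw card data (the key and token are placeholders):

```python
import stripe

stripe.api_key = "sk_test_..."  # your secret API key (placeholder)

def charge(payment_method_token: str, amount_cents: int, currency: str = "usd"):
    # The token (e.g. "pm_...") stands in for the card details. Stripe, as the
    # gateway processor, authorizes and captures the payment; raw card data
    # never touches your server, which keeps you out of most PCI scope.
    return stripe.PaymentIntent.create(
        amount=amount_cents,
        currency=currency,
        payment_method=payment_method_token,
        confirm=True,
    )
```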

Let’s take a look at what kind of mobile payment providers exist on the market:

  • GPay – This Google product acts as a mobile wallet and simply stores your card information. You can also upload various kinds of loyalty card information. Later on, users can pay in applications by pressing the “Buy with GPay” button. You can also use it in shops which support GPay through NFC. As a user, you can also exchange money between two mobile phones. Google’s service is free of charge for transactions made with GPay. Users have to unlock their phones to pay. GPay supports the following gateway processors: ACI, Adyen, Braintree, Checkout.com, Cybersource, CloudPayments, Datatrans, EBANX, First Data, Global Payments, IMSolutions, Paysafe, Payture, Przelewy24, RBK.money, Sberbank, Stripe, Vantiv, Worldpay, Yandex.Checkout.
  • Apple Pay – Apple’s version acts as a mobile wallet, storing your card information for use on any Apple device. Each transaction made with Apple Pay must be authorized with Face ID, Touch ID or a passcode. You can use it in stores that have equipment supporting it. You can also send or receive money, just like in GPay.
  • Samsung Pay – Not to be outdone, Samsung’s mobile payment app works similarly, but is more limited than GPay or Apple Pay. It’s compatible with all Samsung devices. Samsung provides its own reward program for purchases. The service is free and uses something called Magnetic Secure Transmission (MST), which emulates the swipe of a traditional magnetic strip. It allows users to make payments with most typical payment terminals, not just the new ones that include NFC.


  • Microsoft Wallet – This is a wallet for Microsoft devices and Visa cards. ‘Tap to pay’ is currently available only in the US. You can store your loyalty cards by scanning their QR codes.
  • Visa Checkout – If you register your card through your account on Visa’s website, you can pay via your web browser simply by providing a user name and a password. It works on any device with internet access. Visa Checkout accounts can be connected to other web wallets such as GPay or Samsung Pay.
  • PayPal – The electronic payment giant is also present on mobile phones and mobile devices; we can choose PayPal in GPay to pay for services. PayPal also provides mobile integration through an SDK, under the Braintree brand. But you should remember that PayPal is also a payment processor.

How to choose the right mobile payment provider for in-app purchases?

So, you may ask which payment provider you should choose when developing a mobile app. This is quite a simple question: when developing apps for iOS you should use Apple Pay, and when focusing on Android you should go for GPay. Samsung Pay can also be an option, but its support is limited to Samsung devices, so be careful.
 
Please take your time to browse your payment processor’s SDKs and integration guides (Stripe, Braintree). Some of them offer ready-made solutions (tips, recommendations) that let you integrate a mobile payment provider’s wallet as one payment method while also supporting regular credit card payments. Offering such an alternative alongside the mobile payment provider helps increase the conversion rate; naturally, some users may not wish to pay with GPay. You should always check what payment methods a user can actually use before presenting the payment screen, because if the payment method a client wants isn’t supported, the client won’t buy your products.

The Future Begins Now!

Remember that payments are not an easy topic. Information about payment transactions is sensitive data. Users want to make a purchase with a single button, but they don’t want to give up on security. That’s why mobile payment services will only gain popularity over time. Payment integration is something we can help with. If you are planning to develop a new mobile or web application, or to add payments to an existing one, don’t hesitate to contact us at Espeo.
 



Minimum Viable Product: From MVP to MMP

Minimum Viable Product: What should be the next step after creating an MVP?

Minimum Viable Product, abbreviated as MVP, is a common term in start-up vocabulary; unfortunately, it’s not always used wisely. When we want to create a new, successful service (whether technological or not), we want and need to be innovative! Indeed, to innovate is to imagine and create something brand new. So, you embark on a risky adventure, with very little information regarding the actual demand for your product or service.

What about MVP?

Innovation is often very expensive. What’s more, most of the time it’s impossible to predict any return on the investment, due to a lack of information, which makes any investment somewhat irrational. MVP answers the questions: “How can I gather the most information about market expectations regarding what I want to propose? How can I reduce the initial investment costs as much as possible and lower the risk as much as possible?” In the past, the old way was to release the first version of the product several months after the investment and to offer an almost finished, final product. This can be avoided by applying the concept of the Minimum Viable Product, so that a product that answers the customer’s main problem, stripped of accessory functions, can be proposed. MVP is a method that helps to create a final product with the greatest number of functions actually expected by the target audience, and to bring the product to market as fast as possible. We described the MVP development idea in detail in this article.


We already have an MVP – what’s next?

Now that the meaning of MVP has been explained, you’ll understand why companies work on the MMP. MMP stands for Minimum Marketable Product: a product with the minimum set of features that makes it presentable, usable, and ready to deliver. So, once we have our Minimum Viable Product, we can take the next step and try to introduce our product to the market. But to do that, we need to know something about the MMP.

MMP – what is it?

Minimum Marketable Product is a term which refers to a product that has the lowest number of relevant features but is already suitable for sale and marketing. Such an approach allows for faster deployment of the product into the production environment, which also means profits are generated quicker and earlier. The MMP allows you to focus on a limited number of functions instead of building a hidden product over a longer period of time. In addition, we not only minimise the risk but also save time and costs. Building the perfect application that we will simply release into the world one day is never the best solution!

Going from MVP to MMP – when and why should you make the shift?

One of the best-known examples of an MMP is the launch of the original iPhone in 2007. On the one hand, Apple’s teams put a lot of effort and work into building the device; on the other hand, the phone lacked a significant number of basic functions, e.g. copy and paste or sending messages to multiple users at the same time! After the initial success, Apple began to work on expanding and improving the system.

So, we could ask: how can we build an MMP? First, limit the product to a specific market segment (don’t try to please everyone at the same time!). Next, select only those functionalities that are necessary and essential for the given target group (it’s necessary to identify the features that will lead to success in the market). Remember: use the MVP to determine the right functionalities for your product.


How to bridge the gap between MVP and MMP?

Every decision you make depends on the result of MVP validation. If it goes well, you can start thinking about the MMP and launching the product on the market. If not, you may have a problem, because you won’t know what went wrong and where: was it the product or the idea itself? Theoretically, losing is easier, because you have the certainty that your idea and/or your product (or at least something) isn’t good and doesn’t work well. If the validation of the MVP is successful, you win, but it doesn’t mean you’ll keep winning.

So, in order to have a greater range of possibilities, you can validate (or invalidate) your MVP in one way: by shifting to an MMP. This method will also reveal all the features your product must have. The MVP market will also help you define your target audience. First, collect constructive feedback: try to find the people who will benefit most from your idea. Second, the greatest problem of start-up owners is their inactivity; once they confirm their idea as useful and beneficial, they stop. You have to keep looking for more data and more feedback. The next step after the MVP, and the role of the MMP in that step, depends on customers’ experience with your product. It can show you what to do next.

Can you skip MVP?

To put it simply, we can say that an MMP grows out of an MVP. So, normally, you can’t skip the MVP. The difference is that we treat the MVP as an experiment: we collect user feedback and improve the product based on it. Sometimes your MVP is a simple prototype, thanks to which you can test the soundness of the idea. This approach increases the chances of success of the final application or system. Remember to always have a clear vision of the product you’re building. When thinking about the MVP, keep the MMP in mind as well; otherwise, it may turn out that your product contains several dozen functions that are valuable separately but together don’t make much sense.


Launching MMP

If you’re sure that your MVP is valid, you can begin working on the full, final product. Since the MMP is just the set of features on which the product launch decision will be based, you have to constantly question those features, their functionality and their necessity. Once you’ve collected all the feedback about the MVP, you can start. First, prioritize your backlog: it should list all the features that are important for launching your product on the market. Try to obtain true data and facts about your product and consider why it could work well or fail. When creating the MMP, follow your customers’ advice and do what they want; their responses will help you improve the product and identify all the features necessary to launch. If you do this well, you’ll achieve success. When launching an MMP, the feedback from and analysis of the MVP prove to be very important.

If you want to win and launch your product on the market, like Dropbox or any other successful start-up, you must create the best Minimum Viable Product you can and analyse all the feedback to constantly improve your product. It’s important to bridge the gap and to observe the market. If you can predict expectations, you can react and introduce changes to give people what they want. In the end, this is your goal.