Categories
Blockchain Financial Services Technology

Blockchain testing MakerDao's stablecoin platform

As the cryptocurrency bear market continues, investors are looking for much-needed predictability. Stablecoins have risen in popularity as the market tumbles. Several have emerged, but few are as transparent as MakerDao. Maker had Espeo Blockchain testers check that everything worked properly before launch. Since the technology is so new, blockchain testing is vitally important for any project that wants to keep the trust of this fickle market.

Cryptocurrency is having a bit of an identity crisis lately. Some extol utopian notions of freeing consumers from banks, while others see a way to make money. It was this unregulated asset speculation that spurred investors to flock to cryptocurrency late last year. Unsurprisingly, speculation in such a constrained asset class caused wild volatility in the market.

Market volatility, of course, is not good if you’d like to actually use cryptocurrency as a currency. One solution to this is the stablecoin. Stablecoins are crypto tokens with fiat currency or other more stable assets backing them. Similar to pegging a weak currency to a stronger one, stablecoins ideally lend confidence to those who hedge with them.

The Stablecoin

Centralization and dubious fiat backing are some current criticisms of stablecoins on the market. Users have to trust a central authority that the company issuing the tokens actually has the funds. But just like pegged fiat currency, stablecoins introduce much-needed confidence into the token economy.

Espeo Blockchain helped MakerDao with blockchain testing before launch. Unlike other stablecoins, Maker issues its Dai tokens using smart contracts in exchange for Ethereum. Head of business development Gregory DiPrisco wrote in a company blog post earlier this year that Maker operates in a similar way to a bank, just without the middlemen. The platform aims to introduce stability into the cryptocurrency market with its Dai token. Ether collateral backs each token. MakerDao reduces volatility and allows users to maintain their purchasing power with crypto assets.

“Pretend you are at the bank asking for a home equity loan. You put up your house as collateral and they give you cash as a loan in return… just replace your house with ether, the bank with a smart contract, and the loan with Dai.”

Of course, just as the bank might take your house if you can’t repay the loan, Maker automatically resells Ether collateral if it drops below the value of the Dai loan. This mechanism maintains the integrity of the system and keeps the price stable. As prices drop, though, you might think that the system would unravel. Mike Porcaro, head of communications at Maker, remains positive, however. In an email interview, he said:

“Maker is unlocking the power of the blockchain for everyone by creating an inclusive platform for economic empowerment — allowing equal access to the global financial marketplace… The currency lives completely on the blockchain; its stability is unmediated by any locality, and its solvency does not rely on any trusted counterparties.”

Collateralized debt position

MakerDao’s CDP portal holds a surplus of collateral in publicly auditable Ethereum smart contracts. Users can spend Dai like any other crypto token: send it to others, pay for goods and services, or save it long-term. Creating a Collateralized Debt Position (CDP) allows users to hedge their ETH assets with Dai stablecoins. Porcaro went on to say:

“People or organizations create Dai by locking-up ETH in a CDP. As long as people can open CDPs there will be Dai in circulation. [Maker is] seeing increased volumes of Dai in circulation. In fact, about 1.5% of total ETH is locked up in Dai smart contracts.”
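
The safety rule behind a CDP can be sketched in a few lines. This is a deliberately simplified model, not MakerDao’s contract code; the 150% liquidation ratio matches single-collateral Dai at the time, but the prices below are invented for illustration:

```javascript
// Simplified model of the CDP mechanism described above: a
// position stays open only while the locked ETH is worth
// comfortably more than the Dai drawn against it.
const LIQUIDATION_RATIO = 1.5; // 150% over-collateralization

function isSafe(ethLocked, ethPriceUsd, daiDrawn) {
  const collateralValue = ethLocked * ethPriceUsd;
  return collateralValue >= daiDrawn * LIQUIDATION_RATIO;
}

// 10 ETH at $200 backing 1,000 Dai: $2,000 >= $1,500, so the CDP is safe.
console.log(isSafe(10, 200, 1000)); // true
// If ETH falls to $140: $1,400 < $1,500, and the collateral
// becomes eligible for automatic resale.
console.log(isSafe(10, 140, 1000)); // false
```

If the check fails, the system auctions off the collateral to cover the Dai debt, which is the “bank taking your house” scenario from the quote above.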

The hope is that the system will encourage more people to use cryptocurrency as a medium of exchange. Stability is essential to achieve this.

Blockchain testing

Espeo Blockchain helped MakerDao with blockchain testing before it went live. Checking the platform’s usability and the security of its smart contract code is critical in a blockchain project. This process ensures the system functions properly before people start putting their money in. Head of product Soren Nielsen hired Espeo when his team needed assistance with testing the MakerDao platform. Nielsen explained in an email interview:

“We needed immediate assistance with testing a product, [so] we decided to “test the waters” with Espeo… A challenge we currently have in general when engaging new suppliers is that there is a steep learning curve unless you’re already a user of our system… I felt that we had a professional client-supplier relationship. It certainly was good to have a dedicated project manager following this from Espeo.”

Finding bugs to fix

Project manager Natasza Stanicka and testers Bartosz Kuczyński and Patryk Jaruga got to work, trying out every aspect of the platform in search of bugs to fix. They had to methodically sift through every feature and behave just as a regular user would. Kuczyński recalled:

“First, we clicked through every clickable item and went through every user story and tried to find edge cases. I guess the challenge with that was the fact that it is a blockchain application and that has certain consequences in itself. We needed to make sure that it actually cooperates with the blockchain correctly and the results were accurate.”

Jaruga remembers finding a bug which interrupted Ethereum transactions between hardware wallets. He said that every device behaved a bit differently in transactions and that they tested the Maker platform with a range of hot and cold wallets. The Maker team fixed the bugs as soon as they knew about them, said Kuczyński. He added:

“Maker’s development team was really interested in bug reports, and they dealt with them immediately. In that respect, I believe this was really very ‘well-oiled.’ The bugs were corrected immediately so we would be able to move on to the next thing. The overall attitude of the company was certainly very positive.”

Testing such a complex application ensured a successful launch. Since MakerDao relies on consumer trust, getting the calculations right is vitally important. Blockchain testing covers many areas: tokenomics, UI/UX, and hardware compatibility are some of the aspects our testers analyzed. In order to maintain the integrity of the token system and preserve the public’s trust, making sure the app works as promised was essential.

Conclusion

Dai stablecoins and the MakerDao infrastructure go a long way toward stabilizing the cryptocurrency ecosystem. For more people to adopt crypto and start using it as a medium of exchange, price volatility and uncertainty have to ease. Dai’s stability helps users hedge their Ethereum assets and protect their investments from wild swings in the market. Blockchain testing ensured the platform worked properly before launch. More than anything else, confidence in blockchain technology will encourage wider adoption. Knowing that you won’t lose everything overnight will spur more people to start using cryptocurrency.

Categories
Software Technology

Front-end history, 2018 trends and Espeo choices

Front-end development has changed enormously over the last 10 to 15 years. JavaScript has evolved the most among existing programming languages. It moved away from simple website logic written in ugly, unstructured code and plugins, and evolved into building fully functional Single Page Applications.

Let’s go on a small journey back in time to see how it used to look in the past and what changes have happened over the last couple of years.

The good old days…

We are in 2006/2007, the time when many of today’s 30+ developers started their careers. JavaScript was weak and not taken seriously. Most developers chose back-end languages as their main technology to master. Web applications reloaded the page every time a user interacted with the UI. AJAX was still something new and was used only for updating small elements on pages. The key player was jQuery: it simplified DOM manipulation and added a nice wrapper for asynchronous code. Thousands of jQuery plugins were created to handle specific UI interactions such as sliders, carousels, tabs and many more.

The term front-end developer didn’t exist then. People involved in JavaScript were called web developers. Their responsibility was to turn designs into fully working web pages. They worked closely with designers, styled pages using pure CSS without any preprocessors, and added small portions of JS code to make websites more interactive. Various browsers with different capabilities posed a great challenge, as each implemented different standards. This is the era of many browser tricks and hours spent on Internet Explorer 7 support. Any automation, like concatenating and minifying scripts, was done by back-end technologies, and in most cases no one cared about it, ending with a long list of scripts and linked styles in the HTML head section.

JavaScript revolution

A few years later, between 2009 and 2010, a lot of things happened in the front-end world. Many new key players appeared on the scene. It’s worth mentioning a few frameworks which made a lot of noise at that time:

  • Backbone JS: the first attempt to build a 100% JavaScript application without any annoying reloads. The idea of Models and Collections behind Backbone made web application development more structured, with well-documented and clear boundaries understandable to many developers.
  • RequireJS: the first solution for module dependency management in JS. It allowed developers to split code into multiple files that browsers could download on demand, which was a great step into the future.
  • Sass and Less: CSS preprocessors became mature enough that writing pure CSS was no longer necessary. Since NodeJS was still a toddler, the compilers were built in other languages such as Ruby, Python or .NET.
  • NodeJS: it enabled JavaScript code to run outside the browser and revolutionised the front-end development workflow once and for all.
  • AngularJS: the first Angular version, which gained a lot of popularity; most developers quickly turned to the Angular world.
  • PhoneGap: it allowed developers to write HTML applications that ran inside an iOS/Android WebView. It was not perfect and frequently ran into performance issues because of WebView limitations, but it let people create their own mobile applications knowing only HTML and JS.

Naturally, there is much, much more. Javascript stopped being just an additional technology. Companies started to hire “JS Developers.”

It’s a little too fast…

Evolution sped up after 2010. In the following years, thousands of new libraries appeared. Finally, JS devs had package managers for dependencies. Most developers started with Bower and slowly moved to npm packages as NodeJS matured. No more keeping all dependencies in project repositories.

In the “good old days” any attempts to make some automation like concatenating files, minification or images optimization needed to be done outside of the JS world. But change was coming…

New task runners started to appear. Most developers began to use Grunt or Gulp for simple automation. Many tasks appeared for both tools, making it possible to check JS quality, concatenate files, or even watch source files for changes and refresh the browser or inject changes without a full reload. Both tools later started to lose popularity to npm scripts: small pieces of NodeJS code that could do anything NodeJS allowed.
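
As a rough illustration, the Grunt/Gulp tasks of that era boil down to a handful of npm scripts in package.json. The script names and tool choices here are just one plausible setup, not a prescription:

```json
{
  "scripts": {
    "lint": "eslint src",
    "build": "webpack --mode production",
    "watch": "webpack --mode development --watch",
    "test": "jest"
  }
}
```

Each entry is a plain shell command, so anything NodeJS (or the shell) can do is available without a dedicated task-runner plugin.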

The biggest changes appeared in javascript SPA applications. Developers were inspired by solutions put forward by Backbone and Angular teams and started to recreate and improve on similar solutions.

This is when Ember, Knockout, and React were gaining popularity. The last one made the biggest impact on the front-end world in a long time.

Another aspect of development which changed significantly during this time was the modular approach. In “the good old days”, modules in JavaScript did not exist. The only way to preserve privacy was to use design patterns such as the Module Pattern, built from an anonymous, immediately executed JavaScript function. It was the only way not to pollute the global scope with everything we implemented. RequireJS, mentioned in the previous chapter, made some progress here with its AMD approach. Another approach was Browserify, which used CommonJS modules, as in NodeJS, but inside the browser, compiling them into a single file to be injected into the HTML. Finally, Webpack appeared and became the standard for bundling and dealing with modules in client-side applications.
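
The Module Pattern mentioned above looks like this: an anonymous function is executed immediately, and only the returned object becomes visible to the outside world:

```javascript
// Module Pattern: an immediately-invoked function expression
// (IIFE) keeps `count` private while exposing a small API.
var counter = (function () {
  var count = 0; // invisible outside the closure

  return {
    increment: function () {
      count += 1;
      return count;
    },
    current: function () {
      return count;
    }
  };
})();

counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(typeof count); // "undefined": the global scope stays clean
```

AMD, CommonJS, and later ES modules all formalize the same goal this pattern achieves by hand: private internals with an explicit public surface.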

UI frameworks also had a big impact on the way we develop: Bootstrap, Foundation and, later on, Material-UI. Each one included a complete set of components, fonts, icons and utilities such as the frequently-used grid system. Each one sped up the development process and helped achieve a consistent UI without any additional styling.

Fast and furious….

A few years ago, it was easy to follow all the news in the front-end world. But the popularity of JavaScript kept increasing, and more and more developers started contributing to open source projects.
There are now millions of libraries/frameworks/scripts in JavaScript. Let’s take a simple test:

  • Think of the name of any item not connected with programming, e.g. focus on home-related ones such as: table, home, knife, doors, bed….
  • Try to google it with the “js” suffix.
  • Almost every name is the name of a library, npm script or GitHub repository… you should get a 100% match on the first 5 words that popped into your head.

Almost every aspect of front-end development has changed. In the case of styling, for a few years SASS was unquestionably the leader. We adopted nice architectural solutions for Sass such as BEM or SMACSS. Then came the idea that styles could be created directly in JS, tied to components with proper scopes. Currently, there are large numbers of similar CSS-in-JS solutions.
In the case of SPA frameworks the situation is clearer: there are 3 competitors, React, Angular 2+ and a newcomer, Vue; a detailed description and comparison of each is a topic for another blog post. Most developers choose one of them. Some people will agree with the image below:

The number of mobile application frameworks for JavaScript from which we can choose is also huge: React Native, Ionic, NativeScript, Cordova. React Native is gaining popularity and wider support. We need to choose the mobile app technology for our project carefully; maybe for the simpler ones a PWA would be enough?

The first Backbone JS applications were huge, and each month they grew bigger and bigger. When a bug appeared, it was hard to tell what state the application was actually in; had Redux or MobX existed, debugging would not have been so painful and time-consuming.

The world of JS is a huge one, and I have only mentioned a few core libraries. If we dig into a specific area like SVG graphics, WebGL, Web Components or server-side rendering, more and more good, well-known libraries appear, and each one deserves its own article. Server-side rendering is a “hot” concept nowadays, as we have entered the SPA era, where all content is rendered client-side. Still, there are crawlers and social tools which do not understand JavaScript.
For all the non-believers: jQuery is not dead! jQuery is still one of the most popular libraries on the internet. If you think jQuery is always ‘evil’, please read this.

State of Javascript in 2018

So, what is the current state of JavaScript? Its widespread use says it all.

You can check out what the current trends are. Take heed and read the conclusions in each section when choosing the technology for a new project. The newest “hot” technology may not always be the best choice because of the lack of documentation and solved issues. Mature frameworks have a lot of contributors; most errors have already been solved, and if new ones appear, there is always a person who will quickly address and solve the problem.

How do you become a good front-end developer in 2018? Naturally, we all need to follow the news frequently. So much is happening every day, and it is always helpful to go to news aggregators such as DailyJs. You will not become a technology specialist simply by reading articles; it just gives you an idea of what to learn next 🙂

A few years ago, front-end developers were able to do everything their employer needed. It was not too difficult to be knowledgeable about CSS/Sass, vanilla JS and the existing SPA frameworks.
Nowadays it’s a bit more complicated: it’s hard to be a specialist in everything, and it’s not possible to reach master level in everything related to front-end. We need to specialize and choose our path to become a React, Angular 2+, Ember, Ionic, NativeScript, styling, etc. specialist.

Espeo Choices

At Espeo we follow current trends. For our projects we use fresh, well-known and well-documented technologies.

When creating new SPA projects, React with Redux and Angular 2+ are the way to go, with React increasingly dominant. Developers use React Native or native applications for mobile solutions; however, React Native can’t solve all mobile problems. As far as styling is concerned, we use Sass and styled-components extensively in our projects. Webpack and npm scripts have become a standard for all the projects we develop.

We are open to new technologies and still explore new possibilities such as Vue, TypeScript, Reason… and many more. Possibly, we’ll consider a few of them for our future projects.
 

Categories
Blockchain Financial Services Technology

Blockchain art: Opening new avenues for artists and collectors alike

Blockchain art is a bit of a paradox. The highly technical and the highly creative generally don’t mix well. However, several recent projects are bringing the two worlds together in fascinating ways. Blockchain technology is not only a tool for artists (and collectors) to protect the value of their work. It’s also increasingly becoming a medium of expression. Digital artists especially can now control the authenticity and scarcity of their work with the help of distributed ledger technology. Ownership titles, provenance, and even art asset tokenization are a few of the compelling use cases emerging.

But more than simply controlling the flow of distribution, blockchain art also opens new avenues for artists to engage a broad community of connoisseurs and enthusiasts. Even the notoriously conservative art and collectibles market is slowly starting to adopt the technology to establish provenance and facilitate art sales. Several cutting-edge startups are working on blockchain applications in the art world. Blockchain art may fundamentally shift how artists create and how collectors collect.

Blockchain art market

Elena Zavelev, CEO of the New Art Academy believes digital artists will benefit the most from DLT. She wrote in Forbes, “For the first time ever, limited editions of digital art are possible thanks to blockchain technology. Digital artists can create limited editions of their works, providing a new way to grow their market. Previously, the bane of digital art has been the fact that it’s easy to copy and pirate.”

Projects such as CryptoKitties and CryptoPunks were among the first examples of editioned blockchain-based art. As much art projects as technical feats, the unique figures demonstrate blockchain technology’s usefulness in this sector in a way that everyday people (like me) can easily grasp.

Each CryptoKitty and CryptoPunk is unique and cryptographically secure. While these original works are cute, there are deeper implications for artists and other content creators. Once a collector buys a CryptoKitty, no one can forge the original thanks to the ERC-721 token standard. Think of them like baseball cards without the central authority. While blockchain technology guarantees uniqueness, it also limits distribution, making these digital assets more valuable for collectors.
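
The ERC-721 standard itself is a Solidity contract interface, but the core idea (one token ID, one owner, no duplicates) can be modelled in a few lines of plain JavaScript. This is a toy sketch; the token ID and addresses are invented:

```javascript
// Toy model of the ERC-721 ownership ledger: every token ID maps
// to exactly one owner, so a collectible can be transferred but
// never duplicated or forged.
class UniqueTokenLedger {
  constructor() {
    this.owners = new Map(); // tokenId -> owner address
  }
  mint(tokenId, owner) {
    if (this.owners.has(tokenId)) {
      throw new Error("token already exists: uniqueness is enforced");
    }
    this.owners.set(tokenId, owner);
  }
  transfer(tokenId, from, to) {
    if (this.owners.get(tokenId) !== from) {
      throw new Error("only the current owner can transfer");
    }
    this.owners.set(tokenId, to);
  }
  ownerOf(tokenId) {
    return this.owners.get(tokenId);
  }
}

const ledger = new UniqueTokenLedger();
ledger.mint("kitty-42", "0xAlice");
ledger.transfer("kitty-42", "0xAlice", "0xBob");
console.log(ledger.ownerOf("kitty-42")); // "0xBob"
```

On the real Ethereum network this ledger lives in the contract’s storage, so no single party can rewrite it.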

Radical Shift

Price speculation in CryptoKitties threatened to collapse the Ethereum network late last year as collectors flocked to buy and trade the exclusive cartoon cats. Some even sold for hundreds of thousands of dollars as interest peaked. Many criticized the craze as frivolous but failed to see the bigger picture. More than a way to speculate, this decentralized ownership model for an original digital artwork is a radical shift in the art world.

Immutability is useful for both artists and collectors. Unchangeable authorship information, for example, allows living artists to prove that their work is original. Once an artist sells a work, a record of its sale also goes onto the blockchain linking any future buyers to a clear history. Buyers benefit from investing in an immutable digital asset.

Blockchain as medium

Artists are also taking blockchain a step further and incorporating it into their work. Brooklyn-based platform Snark.art calls itself a blockchain art laboratory. The startup aims to change the relationship between collectors and artists. According to chief marketing officer Fanny Lakoubay, who spoke to me over Telegram, “Snark.art is producing conceptual art experiments on the blockchain with established artists who want to reach the crypto community and get traditional art collectors to discover complex and conceptual digital art projects.”

Their first collaborative experiment in blockchain for art is Eve Sussman’s 89 seconds atomized. The video installation is a reimagining of Sussman’s 2004 video installation 89 Seconds at Alcázar. In the original work, Sussman recreated on film the moments just before and after Velázquez’s masterwork Las Meninas. In 89 Seconds Atomized, however, the artist split the video installation into 2,304 atoms of 20 x 20 pixels each. Collectors can buy individual atoms, similar to asset tokenization, and decide whether or not to display their atom to help reconstruct the work.

Lakoubay explained that owning a piece is slightly different than fractional ownership. “[Collectors] own an atom entirely,” she said “and can decide what to do with it without having to have consensus from all other owners. It is a choice for you to take part in the buyer community to reconstruct the original work by lending your piece.”

Community experiment

89 Seconds Atomized is not only an art installation but also an experiment in digital ownership and community. Users can view their own atoms on the platform, or borrow or purchase atoms from other users. Snark.art also encourages the community to organize their own screenings. Granting access to individual atoms is a defining aspect of the concept. “89 seconds could not exist without the blockchain; it is part of the medium of the work,” Lakoubay said.

What’s so groundbreaking about the installation is that it’s a conceptual artwork first and an investment vehicle second. The community of atom owners is as much a part of the work as the film itself, though both have implications for the art world. Art has a unique advantage in teaching a broader public how blockchain technology works through accessible lessons. Ownership, permission, and peer-to-peer transactions are just some of the benefits blockchain technology lends to the art world. Of course, it’s also another way to speculate on price. As art becomes its own asset class, investors look to blockchain art as an asset that accrues value.

Not only digital art

Digital art is not the only medium decentralization is starting to change. Some startups and more established players are looking at ways blockchain can enhance the trade in physical art and other more tangible collectibles. Just as digital art benefits from an immutable record of ownership, the more conventional art market is also adopting blockchain innovations. Similar to digital blockchain art, authenticity and a clear provenance increases an artwork’s value and reduces friction in the traditional art market.

For art collectors, provenance is an essential consideration. Investing in a fake or stolen artwork is an expensive mistake few would like to make. However, the global trade in art and collectibles is a shadowy, unregulated market, one where you have to trust many different actors. Art fraud looms whenever a work comes up for auction. Auction houses hire teams of experts to verify the authenticity and clear provenance of any work before selling it. Unsurprisingly, this costs both time and money. Blockchain technology could help manage all the parties involved.

Art Fraud

A 2013 case involved a woman named Glafira Rosales, who claimed to be selling previously unknown 20th-century paintings from masters such as Mark Rothko and Willem de Kooning. The paintings were not genuine: a Chinese painter was mimicking the artists’ styles and “aging” the works with tea or dirt. She sold a total of $80 million worth of fake art before the fraud was uncovered.

While blockchain may not help directly in preventing theft, it can help establish ownership as well as a history of transactions. Krzysztof Wędrowicz, one of Espeo Blockchain’s developers, believes blockchain technology could help auction houses and buyers spot fakes. “Let’s say each artwork could be digitized into a cryptographic print,” he said. “This proves that even if someone would forge a piece of art — to create something which is really really similar, in terms of what your eye can see — you can’t see the difference. With a blockchain, you can prove that a work is not authentic.”

Currently, a work has to have an extensive review each time it comes up for auction. For an effective title registry to work, however, an expert or centralized institution will have to establish provenance, to begin with. Wędrowicz admits that “You need some authority to start with — so yes it’s some kind of risk.” However, several startups are developing novel ways to track provenance and authenticity using blockchain technology.

Decentralized title registry

One project addressing the provenance of art and collectibles is Artory. According to the company’s website, it’s a decentralized title registry for art and collectibles, a $2 trillion market. Currently, this asset class lacks a central registry. Artory stores vital transaction data and the history of ownership, easily accessible to all parties involved. Wide adoption, the company claims, will help reduce costs and give investors more confidence.

Adoption across the art market, of course, will depend on how stakeholders will perceive the change. Artory remains confident that large stakeholders will see the benefits. Previous efforts to keep a central registry have failed largely because collectors don’t trust central entities with their information and intermediaries would like to keep their jobs. However, due to the decentralized nature of Artory, more stakeholders are willing to accept it, they claim.

Last month, Christie’s concluded the sale of the Barney A. Ebsworth Collection of 20th-century art. The record-breaking auction brought in more than $320 million. The auction was not only hugely profitable, but the sales were also recorded on Artory’s decentralized, public blockchain.

Provenance tracking on a blockchain could add more transparency to the art market and facilitate valuations, provenance studies, insurance claims, and even asset-backed lending.

Art Asset Tokenization

Similar to 89 Seconds Atomized, investors can also own individual pieces of a physical artwork. Art asset tokenization enables many investors to own parts of an asset. Real estate is another industry where companies are using blockchain technology to tokenize assets.

Developers claim that blockchain art asset tokenization will add more liquidity to the market, encouraging more people to invest in smaller units. Fractional ownership gives investors the chance to own small parts of famous works of art.

Blockchain advisor Francois Devillez has written extensively on asset tokenization and believes it’s the future of blockchain.

“[Asset tokenization] works as follows,” Devillez told me. “Let’s say you have a house, you bring the house into a company and you tokenize the share. So you don’t tokenize the asset itself directly. With art, you associate a token with ownership. When you sell the art, you sell the token with it.” Unlike digital art, tokenizing tangible art is more about representing ownership.

Fractional Ownership

Art startup Maecenas is already tokenizing art assets. Maecenas is an unabashedly investment-focused platform looking at art assets as a way to make money. Art sold on the platform is stored in safe facilities far away from the eyes of the public. Nevertheless, for investors who view art as an asset class, this is perfectly natural. 

In a conversation over Telegram, Maecenas press spokesman Mayank Jain said, “We are pioneers in asset tokenization and we firmly believe in its financial potential to reinvent some industries such as the fine art market. But [asset tokenization] shouldn’t be an end in itself. The disruption lies in the value proposition behind the technological process.”

Greater Liquidity

Maecenas claims that splitting million-dollar paintings into smaller pieces will open the market to greater participation. Liquidity is the main argument for this kind of innovation. As in real estate asset tokenization, being able to quickly buy and sell smaller financial units of large assets will allow smaller investors to react quickly and enter the market.

Maecenas’ platform runs on a public, decentralized blockchain to keep all transactions transparent and to protect the company from insider trading, according to the company’s whitepaper. When an owner wants to sell a piece of art, he or she first has to put it up for sale on the platform. Investors bid with the company’s ART token using smart contracts. Once the auction finishes, the smart contract calculates the final share price and the number of shares each investor receives. The seller then receives either ART tokens, or an equivalent crypto or fiat currency.
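
The final allocation step can be sketched as a simple pro-rata split. This is my own illustration of the idea, not Maecenas’ actual contract logic, and the bidders and amounts are invented:

```javascript
// Pro-rata allocation: once bidding closes, the share price is
// the total ART committed divided by the shares on offer, and
// each bidder receives shares in proportion to their bid.
function allocateShares(bids, totalShares) {
  const totalBid = bids.reduce((sum, bid) => sum + bid.amount, 0);
  const sharePrice = totalBid / totalShares;
  return {
    sharePrice,
    allocations: bids.map((bid) => ({
      bidder: bid.bidder,
      shares: bid.amount / sharePrice,
    })),
  };
}

const result = allocateShares(
  [
    { bidder: "alice", amount: 600 }, // ART tokens bid
    { bidder: "bob", amount: 400 },
  ],
  100 // shares offered
);
console.log(result.sharePrice); // 10
console.log(result.allocations); // alice: 60 shares, bob: 40 shares
```

The on-chain version would also have to handle rounding and refunds, but the proportional split is the heart of the mechanism described above.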

“Maecenas’ proposal is not just a theory,” said Jain. “Our platform is a tested and proven fractional ownership model. In September, we successfully tokenized and auctioned a 31.5% stake in Andy Warhol’s 14 Small Electric Chairs (1980), collectively raising $1.7m, and recently we revealed Project Phoenix — our upcoming sale of a tokenized Picasso.”

“Project Phoenix will be the first ‘perpetual’ digitized and tokenized work of fine art,” Jain added. A single ERC-721 token and a fixed number of ERC-20 tokens will represent ownership of the physical asset. Similar to owning stock in a company, holding tokens gives investors voting power over key decisions about the painting.

Unfortunately, tokenizing the paintings makes them difficult to view in person. Each goes into a safe storage facility in a “freeport” such as Singapore’s Le Freeport. Keeping art stored in these facilities allows investors to avoid high taxes involved in selling luxury items.

Conclusion

Blockchain technology is revolutionizing the art world in a number of fascinating ways. From cutting-edge blockchain art, asset tokenization, and decentralized title registries, blockchain tech is driving innovation in the space. Blockchain not only controls the distribution of digital assets, but also records vital provenance information, and opens the market to investors. Many of these projects are still in their infancy but have also demonstrated successful use cases for the technology.

Images in order of appearance:
CryptoKitty
Las Meninas, Diego Velázquez, 1656–1657
Chop Suey, Edward Hopper, 1929
Big Electric Chair, Andy Warhol, 1967

Categories
Entrepreneurship Software Technology

7 Common Mistakes When Developing an MVP

While some startups become successful, the truth is that nine out of ten initiatives fail. Introducing a new product to the market is risky. To minimise that risk, founders build an MVP, but despite their efforts, some startup owners still fail at this stage.

So, what is an MVP?

Some people think that an MVP is the end product, while others think it is a beta version of the final product. In fact, a minimum viable product (MVP) is a product with just enough features to satisfy early adopters and to gather meaningful feedback to incorporate into the final product. It shows potential customers what your product is all about.

It is an opportunity to test the waters and gather customer feedback before the final product is launched. With that in mind, it is important to avoid these seven development pitfalls so that your MVP succeeds.

1. Trying to Create the Complete Product

An MVP is not supposed to be the final product; it achieves its goal through subtraction. Many business owners believe otherwise, however, and tend to include everything the final product should offer. In other words, they don’t just stick to the minimum number of features but include most or all of them, and they try to make the design top-notch, leaving no room for future changes and improvements.

This stems from the desire to impress the audience. That is why developers feel the need to polish the user experience and add extra features to show off the app’s multi-functionality.
Such an approach can turn out to be disastrous, especially if the audience rejects the concept despite the big budget allocated to it. Deciding which features are crucial while developing an MVP depends on two factors: your goals and your customers’ needs.

The feature selection process involves reviewing the MVP’s goals and objectives, as well as customer needs, in order to determine only the key features relevant to those goals. It also means outlining each proposed feature and pointing out its specific benefits in relation to the predetermined goals and users’ expectations. This lets you see clearly and avoid adding unnecessary features at the MVP stage.

2. Striving for Minimalism

But there are always two sides to a story. When you focus too much on minimising an MVP’s features, you may fall into the trap of excluding the very features that are key for customers. In addition, you might end up choosing minimal features that are in line neither with your customers’ desires nor with your own goals. As a result, the MVP ends up with minimal features that are useless to potential customers.

As a rule of thumb, remember that it should not just be a minimum but also a viable product.

3. Disregarding Market Research

Unfortunately, this is among the main reasons why start-ups fail. You need to know your market. Apart from knowing customers’ needs, desires and wants, you should find out whether your idea is innovative or similar to others that already exist.

Unfortunately, after carrying out market research, some business owners choose to ignore the results altogether. The common belief is that they either know it all or that their idea is so unique it will pass the market test anyway. Although there is nothing wrong with believing in your idea, disregarding market research might come at a cost.

Imagine building an MVP only to find out later it isn’t in line with the market’s needs or that it already exists, or that it isn’t unique at all and won’t make a difference in the market. Wouldn’t that be a waste of time, money and resources?

4. Skipping the Prototype Phase

Prototyping is a very important step in developing an MVP. A prototype is a visual representation of your idea; in other words, it brings the idea to life and makes the app development process easier. It is also essential because it dispels any doubts investors might have about the product.

This phase consists of three steps:

  1. Begin with interface architecture and build a basic structure of your product as well as general information related to it. Remember to include an interaction foundation for your application.
  2. Next, build a low-fidelity interactive prototype – a rough wireframe that maps your app’s information architecture and includes interactive elements.
  3. Finish with a high-fidelity interactive prototype which includes the graphics and interactive elements which allow navigation through the application.

5. Choosing the Wrong Team

Hiring an inexperienced or unprofessional team can be an MVP’s downfall. Building an MVP requires a team of designers, developers, QA engineers and project managers. If this team lacks top-notch skills and proficiency, the development stage will fail.
When you work with an unprofessional team, you are likely to run into two issues:

Missed deadlines. An MVP needs to be developed in a fast-paced environment, with constant testing and upgrading. An unprofessional team is likely to miss deadlines and, in the end, slow down the process or miss opportunities altogether.

Feedback analysis. Since timing and analysis are crucial for any MVP, its success depends on the entire team’s competence. An incompetent team may be unable to turn the first round of user feedback into a better version. The best development teams, by contrast, focus on the product so much that they can provide valuable feedback and insight of their own, acting as consultants on both the technology and the product.

6. Inappropriate Development Method

Developing an MVP successfully is like cooking. If you intend to marinate and roast a chicken and then change your mind halfway and decide to boil it instead, you will spoil it. This is why some MVP projects fall apart midway.
There are generally two approaches to building an MVP: agile and waterfall. Agile software development is more efficient in this scenario than the traditional waterfall method, thanks to its ability to deliver high-quality results week after week. In addition, the agile approach, done properly, helps avoid bugs and adapts to changing circumstances.

Payment structures are also worth noting: most companies that offer agile software development charge an hourly rate, while those that use waterfall charge per bid. With waterfall, you might end up with a product built exactly to the specs rather than the product that is actually needed, and each change outside the agreed scope may cost extra. This can strain the development stage if you are unable to pay on time, delaying the MVP or souring the relationship with the developers.

7. Ignoring analytics and user feedback

One of the main purposes of an MVP is to generate feedback that makes the final product better. Ignoring that feedback renders the whole process useless, as it deprives it of its purpose.
Why build an MVP only to ignore the feedback later?
Feedback is important because it helps you understand users better, adjust your product to meet customers’ needs and gauge the audience’s perception of your product. Ignoring analytics and user feedback is therefore business suicide.

Conclusion

Developing a successful MVP increases the chances of your product seeing the light of day in the market. It might even turn your idea into a profitable venture. However, not all MVPs succeed. Avoiding these seven mistakes will go a long way toward getting your MVP right at the development stage.

See also: