Espeo Software has become a trusted, vetted service provider within the Hyperledger community, connecting, contributing, and collaborating on enterprise-grade blockchain ecosystems. As a result, the company will support the development of open platforms for distributed ledgers alongside other forward-looking blockchain organisations.
Espeo Software was founded to guide organisations in moving towards a distributed future. Using the Linux Foundation's Hyperledger Fabric, the company builds practical solutions for common business cases such as invoice processing, commissions management, financial settlements, contract validation, and master data management.
As one of Europe’s leading blockchain consulting companies, receiving the prestigious Hyperledger Service Provider Certification was a natural step for Espeo Software.
The Hyperledger Fabric Certified Service Provider program ensures that enterprises obtain the assistance they need to deploy new applications faster and more efficiently than ever before. During this process, they can trust that a credible and vetted partner will support their production and operational needs.
Espeo Software's participation in the Hyperledger community will enable the company to work with other enterprises pioneering the use of distributed ledger technologies and contribute to the rapid, global adoption of blockchain applications.
“Joining the Hyperledger community is an exciting opportunity for us, and we look forward to partnering up with its members to accelerate blockchain’s adoption globally. We cannot wait to utilize our expertise on the matter with other organizations pioneering in distributed ledger technology.”
Dominik Zyskowski, Consulting Director at Espeo Software
Delivering good quality applications is crucial for any business. Unit and integration tests for important parts of the code have been standard practice in backend development for a long time, but on the front end, testing was not always considered truly necessary.
The front end used to be simple: user interfaces were not complicated, and applications remained usable even with some errors in the UI. However, with the rapidly growing number of SPA applications, more and more functionality pushed to the front end, and increasingly complex user interfaces, testing client-side code has started to be taken seriously.
Choosing which types of tests to write should always be a project-specific decision. In the following article I will go through the types of tests you can create for front end applications and highlight the differences between them. After reading it, you should be able to answer the question: “What tests should I write for my next project?”
Types of front end tests
The image below presents the test hierarchy in the form of a pyramid, along with additional test types that sit outside of it.
Unit tests
At the bottom of the pyramid you have unit tests. They usually test small parts of the functionality: a piece of business logic responsible for specific calculations, or the behaviour of a single component in SPA frameworks like React. The smaller the parts you test, the better. Unit tests should make up the majority of all your tests, forming a solid foundation for your application's test coverage.
Sometimes people create tests for container components which consist of multiple smaller components. In such cases you should no longer think of them as unit tests; they start to be integration tests. Unit tests work on mocked data prepared for each test case, focus on implementation details, and should be written by developers who fully understand the code under test.
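To make this concrete, here is a minimal sketch of a unit test written with Jest (assumed as the test runner); the calculateCommission function and its rules are invented for the example.

// commission.test.js – a unit test sketch; Jest is assumed as the test runner,
// and calculateCommission is a hypothetical piece of business logic.
const calculateCommission = (amount, rate) => {
  if (amount < 0) throw new Error('Amount must not be negative');
  return Math.round(amount * rate * 100) / 100;
};

describe('calculateCommission', () => {
  it('applies the rate to the invoice amount', () => {
    expect(calculateCommission(1000, 0.05)).toBe(50);
  });

  it('rejects negative amounts', () => {
    expect(() => calculateCommission(-10, 0.05)).toThrow('Amount must not be negative');
  });
});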
Integration tests
Another type of test is the integration test, which checks the interaction between different parts of the application. Usually you take a group of components working together and make sure the results of their interaction match your expectations. The border between unit and integration tests is very blurry. Like unit tests, integration tests work on mocked data, but they focus less on implementation details. They usually run in simulated or headless browsers.
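As a rough sketch (assuming a React stack with React Testing Library and Jest), an integration test could render a small group of components working together and assert on the outcome of their interaction; the SearchForm component below is hypothetical.

// search-form.test.js – an integration test sketch; SearchForm is a hypothetical component
// that wires an input, a submit button and a results list together.
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { SearchForm } from './SearchForm';

test('submitting the form shows results returned by the mocked search service', async () => {
  const search = jest.fn().mockResolvedValue([{ id: 1, name: 'First result' }]);
  render(<SearchForm onSearch={search} />);

  fireEvent.change(screen.getByRole('textbox'), { target: { value: 'invoices' } });
  fireEvent.click(screen.getByRole('button', { name: /search/i }));

  expect(search).toHaveBeenCalledWith('invoices');
  // findByText waits for the result item rendered by the child list component.
  expect(await screen.findByText('First result')).toBeTruthy();
});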
E2E tests
End to end tests exercise the application as a whole. They run against the application opened in a real browser and try to simulate real user behaviour. The application should run against a test server with data as close to production as possible. Since it is impossible to cover every user behaviour, the common approach is to cover the most popular “user journeys”. End to end tests don't care about implementation details; they can be created by a separate QA engineer or a developer who knows how the application should behave without knowing how it was implemented.
This is a great benefit, as such tests can be written even by someone outside the team who receives a full set of feature acceptance criteria. E2E tests can also be configured to run at desktop or mobile resolutions, making it possible to check whether the application's responsiveness was implemented correctly.
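For illustration, an end to end test written in Cypress (one popular option) might walk through a single user journey like this; the routes, selectors and texts are invented for the example.

// login-journey.cy.js – an E2E sketch with Cypress; URLs, selectors and copy are hypothetical.
describe('Login journey', () => {
  it('lets a registered user log in and reach the dashboard', () => {
    cy.visit('/login');
    cy.get('input[name="email"]').type('user@example.com');
    cy.get('input[name="password"]').type('not-a-real-password');
    cy.contains('button', 'Log in').click();

    // The user should land on the dashboard and see a greeting.
    cy.url().should('include', '/dashboard');
    cy.contains('Welcome back').should('be.visible');
  });
});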
Other tests
Unit, integration and e2e tests are the three main parts of our test pyramid. However, there are also other ways of testing front end applications:
Visual regression tests – the idea is to check how the application renders its content. Usually, when a feature or page is ready, you take a visual snapshot of it (a screenshot) and mark it as the pattern against which future runs will be compared. Then, as the application evolves, you run a snapshot-comparison tool from time to time to check whether all visual changes were expected. It's a great way to catch styling done in another component or module that affects things which already exist and should not change. Visual regression tests can also be attached to Storybook, which displays components in isolation, so the pattern does not have to change every time the content of a specific page changes (see the sketch after this list).
Performance tests – performance testing focuses on how fast an application loads and how responsive it feels to the user. There are various tools which can help you monitor application loading times and suggest possible improvements (see the pipeline configuration sketch after this list). Very often, performance problems first become visible when executing the E2E tests starts to take a lot of time.
Accessibility tests – accessibility tests check whether the application meets the WCAG A, AA or AAA conformance levels. There are multiple tools which will scan your app and suggest possible semantic or colour-contrast improvements (see the jest-axe sketch after this list).
Cross-browser testing – this kind of testing checks whether the application works consistently across different browsers and platforms. It used to be a tedious manual QA job to test against different devices; nowadays you can set up your E2E suite to run on multiple browsers, although in practice this is not done very often.
Manual testing – I mention it because this type of testing will almost always exist in any project. However, the more you cover with the test types mentioned above, the less manual work will be needed.
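A few sketches of how these checks can look in practice. First, visual regression with Playwright Test's snapshot matcher (one possible tool choice); the URL and snapshot name are assumptions.

// landing-visual.spec.js – a visual regression sketch with @playwright/test.
import { test, expect } from '@playwright/test';

test('landing page has not changed visually', async ({ page }) => {
  await page.goto('https://example.com/'); // hypothetical URL
  // The first run records the pattern screenshot; later runs are compared against it.
  expect(await page.screenshot({ fullPage: true })).toMatchSnapshot('landing.png');
});

For performance, one common approach is a Lighthouse CI step in the pipeline; the URL and threshold below are only examples.

// lighthouserc.js – a Lighthouse CI configuration sketch; values are examples, not recommendations.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the pipeline if the Lighthouse performance score drops below 0.9.
        'categories:performance': ['error', { minScore: 0.9 }],
      },
    },
  },
};

And an accessibility check can be added to component tests with jest-axe; the SignupPage component is hypothetical.

// signup-a11y.test.js – an accessibility test sketch with jest-axe and React Testing Library.
import React from 'react';
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { SignupPage } from './SignupPage'; // hypothetical component

expect.extend(toHaveNoViolations);

test('signup page has no detectable accessibility violations', async () => {
  const { container } = render(<SignupPage />);
  expect(await axe(container)).toHaveNoViolations();
});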
What tests should you write for your next project?
Choosing a testing approach for your next project is not an easy job. There are a lot of things that have to be considered. First of all, you should think of the type of project you are developing. Good communication with the client is needed to deeply understand their needs and choose the best “testing plan”.
POC or Prototype Application
If you are working on a simple Proof of Concept (POC) or prototype application, spending time on tests usually doesn't make sense. In such projects time is crucial, and you will never have a 100% finished application. Often such prototypes will later be recreated from scratch, or only parts of them will be reused, and their requirements may change frequently over time.
Suggested tests: none
Application done once without updates
If you are working on an application only up to version 1.0 and you know it will never get any updates, you should still think about a small set of tests. Start from the ground up and implement unit tests for the crucial components and business logic. You can also add some basic e2e tests covering the most popular paths through your application. As the application will be released just once, these tests will help you mainly during development.
Suggested tests: units, basic e2e tests
Application with frequent new versions or Product Application
For an application which will be supported for a long time, with new versions released often, you need to make sure it is of the highest quality. You should have unit tests covering most of the components and business logic, a good set of integration tests, and E2E tests covering all the standard user journeys.
Suggested tests: units, integration, E2E tests
Applications with specific requirements
Choosing between the other test types (visual regression, performance, accessibility or cross-browser) should always be based on the specifics of the project and the client's requirements.
Cross-browser tests
For most projects you will support multiple browsers, including mobile ones. Running E2E tests against every browser may be very time consuming and can make your pipeline run for hours. Probably the best approach is still to run the e2e suite on a single browser in combination with manual testing, or to run it on the two most frequently used browsers. I would not advise overloading your pipelines.
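If you do decide to run the same E2E suite on the two most used browsers, a Playwright configuration along these lines is one way to do it (project names and settings are assumptions):

// playwright.config.js – a cross-browser configuration sketch limited to two browsers.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});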
Performance
If loading speed is crucial so that you do not lose users, then you should invest in performance testing. Starting from an intermediate development phase, run regular performance checks after each sprint or every few sprints. This way you can make sure the application is heading in the right direction. Performance problems discovered when the application is almost ready may require a huge amount of refactoring.
Accessibility
Accessibility requirements should always come from the client. If the client expects the AA or AAA level, proper tools that scan your HTML structure will help you understand what is missing to reach the desired accessibility level.
Visual regression
Visual regression should be done once the UI is very stable. E2E tests will check whether a feature works, but they will not check whether it looks right. For pages without dynamic content, you can collect patterns and add visual checks to your pipeline. For most applications built from reusable components, it is best to gather patterns from Storybook, with components displayed standalone, and make sure that further code changes do not affect them. If a stable UI across all supported resolutions is also crucial, you can gather screenshots from the mobile and tablet versions as well and compare them inside your pipeline.
Conclusion
The test coverage suggestions above are just a reference. The final decision is always up to the client, but as developers we should always educate the client about the best possible approach for a specific application. Shipping a product used by a huge number of users without any tests is extremely unwise, but unfortunately it sometimes happens, mainly because of poor awareness of the consequences combined with very limited project resources.
I hope reading this article has extended your knowledge about the types of tests for front end applications and given you an idea of which tests to create for your next project.
In the next article I will focus on choosing the best tools and test frameworks for a modern front end technology stack in 2022, explain when you should execute specific tests, and shed some light on typical problems encountered when developing different kinds of tests.
This article was written by our Senior JS/Front end Developer, Łukasz Błaszyński. Łukasz has over 9 years of professional experience in the field. You can check out his Github profile here.
Over the last couple of years, BI solutions like Tableau and Power BI have become very desirable in projects, and it did not take long for the business to start asking for test automation of the created reports. Since reports can be published to a server such as Tableau Server, the obvious choice was Selenium. However, when creating automated tests we immediately run into a problem with validating reports: we can verify that a report has been generated, but we cannot tell whether it contains data or not. In this little tutorial I will show how to create a one-step test that retrieves data from a report using the Tableau JavaScript API.
Cucumber Test
Let's start with a simple Cucumber scenario. It contains two steps. The first one sets up a web driver and navigates the user to a public Tableau report. The second one validates the report: we simply verify that the report sheet provided as a parameter, Storm Map Sheet, contains the value ALEX.
@Test
Scenario: Report contains data
Given I navigate to "https://public.tableau.com/views/RegionalSampleWorkbook/Storms"
And I verify "Storm Map Sheet" report is generated with data "ALEX"
Test Steps
The next step is the standard definition of the test steps. Since step 1 is pretty obvious, we will concentrate on step 2.
@Given("I navigate to {string}")
public void iNavigateToUrl(String url) {
    urlAddress = url;
    tableauPage.navigateToReport(url);
}
@And("I verify {string} report is generated with data {string}")
public void iVerifyReportIsGenerated(String reportName, String expectedData) {
    tableauPage.updateJSScript(urlAddress, reportName);
    tableauPage.verifyReportIsGenerated(expectedData);
}
We see two method calls. The first one updates an existing JS script with the provided reportName and url values. The second one validates the Tableau report data.
Method implementations
The updateJSScript method does the following:
takes template.js and reads it into a string
replaces the $url and $reportName placeholders with the provided values
writes the result to script.js
public void updateJSScript(String url, String reportName) throws IOException {
    // Read the template, substitute the placeholders and overwrite script.js.
    String data = Files.readString(Paths.get("Path to template.js"));
    data = data.replace("$url", "\"" + url + "\"");
    data = data.replace("$reportName", "\"" + reportName + "\"");
    Files.write(
        Paths.get("Path to script.js"),
        data.getBytes(StandardCharsets.UTF_8),
        StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE);
}
Having updated script.js, we can now navigate to index.html. When the page is opened, the Tableau report data is retrieved through the Tableau JavaScript API and printed to the browser console. The method below then reads those console logs and compares them with expectedData.
public void verifyReportIsGenerated(String expectedData) {
    driver.get("file:///" + "Path to index.html");
    // Read the browser console logs and check the entry produced by script.js.
    LogEntries log = driver.manage().logs().get(LogType.BROWSER);
    List<LogEntry> logs = log.getAll();
    for (LogEntry e : logs) {
        if (e.toString().contains("script.js")) {
            Assert.assertTrue(e.toString().contains(expectedData));
        }
    }
}
Setting Chrome Driver properties
The important thing is to pass the appropriate options to ChromeDriver so that ALL logs are printed to the console. Obviously, we could specify exactly which logs should be visible in the console, but in our case it is safe to print everything: we then filter the logs and validate only what is useful for us.
Unfortunately, this works only for the Chrome browser.
public static WebDriver setupChromeDriver() {
    ChromeOptions options = new ChromeOptions();
    var logPrefs = new LoggingPreferences();
    logPrefs.enable(LogType.BROWSER, Level.ALL);
    options.setCapability(CapabilityType.LOGGING_PREFS, logPrefs);
    options.setCapability("goog:loggingPrefs", logPrefs);
    options.setCapability(CapabilityType.ACCEPT_INSECURE_CERTS, true);
    driver = new ChromeDriver(options);
    return driver;
}
template.js
Finally, template.js. The script contains the $reportName and $url parameters, which are replaced with the test scenario data. The initViz() function schedules the getData() function, which fetches the underlying data through the Tableau JavaScript API.
var viz, sheet, table;

function initViz() {
    var containerDiv = document.getElementById("vizContainer"),
        url = $url;
    viz = new tableau.Viz(containerDiv, url, "");
    // Give the viz a few seconds to load before reading its data.
    setTimeout(getData, 4000);
}

function getData() {
    sheet = viz.getWorkbook().getActiveSheet().getWorksheets().get($reportName);
    var options = {
        maxRows: 1,
        ignoreAliases: false,
        ignoreSelection: true,
        includeAllColumns: false
    };
    sheet.getUnderlyingDataAsync(options).then(function (t) {
        table = t;
        // The test reads this console entry through the browser logs.
        console.log(JSON.stringify(table.getData()));
        var tgt = document.getElementById("dataTarget");
        tgt.innerHTML = "Underlying Data: " + JSON.stringify(table.getData());
    });
}
index.html
And the last thing is to create a simple HTML file which loads the underlying data when it is opened.
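The original article does not show the file itself, but a minimal index.html along these lines should do the job: it loads the Tableau JavaScript API and the generated script.js, provides the vizContainer and dataTarget elements that the script expects, and calls initViz() on load. The API script URL and the markup are assumptions, not taken from the original.

<!-- index.html – a minimal sketch; the Tableau JS API URL and markup are assumptions. -->
<!DOCTYPE html>
<html>
  <head>
    <script src="https://public.tableau.com/javascripts/api/tableau-2.min.js"></script>
    <script src="script.js"></script>
  </head>
  <body onload="initViz();">
    <div id="vizContainer" style="width: 800px; height: 600px;"></div>
    <div id="dataTarget"></div>
  </body>
</html>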
As promised, a one-step test has been presented. As usual, it took me much more time to figure out how to retrieve data from a Tableau report than to implement the simple scenario itself. This is not a full UI test, but it is definitely better than a simple Selenium check of whether a report exists or not. Hopefully a similar solution will soon be available for other BI tools.
Although digital workshops and events existed before, it was the global pandemic that accelerated their popularity among companies and corporations. As more and more people started working from home, organizations had to find ways to make people collaborate effectively despite the distance between them.
Aside from virtual meetings and planning sessions, digital workshops are by far one of the most productive collaboration methods, used both in-house and while working with external clients.
As virtual workshops can bring significant value to an organization, we want to share our tips on how to run them successfully.
The term ‘digital workshop’ refers to an event that takes place virtually with a group of participants discussing specific topics and interacting with each other, the host, and program content. Compared with in-person conferences or trainings, online workshops usually focus on topics in greater depth. As a result, topics can be viewed more objectively, and the host and participants are encouraged to interact more.
Reasons for hosting a workshop online
There are many reasons for hosting digital workshops. Below we share some of them:
Online presentations, workshops, and training sessions are increasingly popular in a globalized world.
By arranging a workshop online, you can save a lot of money. The reason for this is that many of the costs (e.g. accommodation, treats, workshop materials) are not necessary.
Workshop members from different sites can work efficiently together in a distributed environment.
Workshops or meetings can proceed despite difficult circumstances, such as staff illness, travel bans, and environmental concerns.
What obstacles might you encounter while preparing a virtual workshop, and how can you overcome them?
Before organising a digital workshop, it is important to consider the following:
First of all, a digital workshop should be designed to engage participants and create a trusting environment.
To achieve this, you may want to establish some rules before the workshop (e.g. that each participant should have a camera on, say something about themselves beforehand etc.). Additionally, everyone should feel safe and comfortable to initiate a conversation. This is why asking short questions that are easy to answer and checking up on the participants is vital.
Secondly, you need to look for tools that fit the needs of the workshop.
Some of the most popular online meeting tools include Zoom, Google Hangouts, and Teams. A good idea is to mix different working methods during the workshop. Consider using sticky notes, dividing participants into smaller groups, ways to answer the potential questions that might arise etc. To facilitate a successful digital workshop, it is recommended that you use tools for:
Note-taking
Screen sharing
File transfers
Chatting
Voting
Something to highlight is that as a workshop facilitator, you should feel like you have control and understanding of the tools you are using.
What about the participants’ attention during digital workshops?
Generally speaking, it is good to prepare a script that identifies each step of the digital workshop (and the tool to use at each particular step). Then, it is crucial to make sure that everyone is aware of those steps.
Moreover, it is worth testing both the script and the tools before the workshop takes place. Well-chosen tools, as well as an engaging workshop script, should be able to hold people's focus and attention during the whole virtual workshop. A good idea is to have a plan that consists of short video calls and smaller interventions. By doing so, participants in the workshop will remain more focused.
Lastly, set clear expectations for the digital workshop you are facilitating.
Having clear goals in mind is a must when organising a successful digital workshop. This way, you and the rest of the workshop members are involved in the process and know exactly what you want to accomplish. This step is achievable with a proper structure and preparation.
Successful digital workshop with Laboratorium Marzeń
Recently, we have conducted a successful product design workshop with a Polish foundation called Laboratorium Marzeń. The foundation focuses on helping families with premature infants and children who require special attention.
They had an idea for a mobile app that would help parents navigate their kids' development and day-to-day lives.
We decided to utilize our experience and expertise in UX to help the foundation design the app in a way that focused on the actual needs of potential users and their struggles.
“We started working on our application with a vision of what needs it should respond to but we had no idea how to go about it. The workshop with Mr. Maciej from Espeo Software was an amazing journey that took us from the initial idea of the application to a real prototype, which exceeded our wildest expectations. Professionalism, creativity, commitment and a focus on the needs of potential users at every stage of the cooperation were the characteristics that we valued the most at Espeo.”
Jolanta Uchman, Co-Founder of Laboratorium Marzeń foundation
You can read more about the structure and the course of the workshop here.
Would you like to organize a product design workshop for your business? Use the contact form below and we will come back to you to discuss the details.