Categories
Design Software Technology

SwiftUI vs UIKit – why is declarative programming the future?

As an iOS developer, it’s important to know and understand SwiftUI and its predecessor UIKit – a framework used to build graphical, event-driven interfaces for mobile apps. When SwiftUI was announced at Apple’s WWDC in 2019, it immediately grabbed my attention. It was the biggest change in building graphical interfaces since UIKit, which is now over 10 years old. But what makes it so different from UIKit? Its use of the declarative paradigm instead of the imperative one. Now, in 2023, four years after that first announcement, I can tell you that SwiftUI is definitely the future of iOS programming!

Imperative programming

Imperative programming is a paradigm that focuses on describing the specific steps or procedures to achieve a desired outcome. In this approach, developers give explicit instructions to the computer on how to perform tasks and update the user interface. UIKit uses an imperative programming model.

Imperative programming requires developers to manage the state of the application and handle UI updates manually. For example, to display a button on the screen, developers must create the button object, specify its attributes, and explicitly add it to the view hierarchy. As the application evolves and state changes, imperative code becomes more complex and difficult to maintain.

UIKit vs SwiftUI

UIKit has been used as an imperative framework for app development for a long time. While it has served developers well, it has had its limitations and a legacy feel in some areas. As applications became more complex, UIKit codebases often became unwieldy and difficult to maintain.

Making changes or adapting to different screen sizes and orientations required considerable effort. SwiftUI addressed many of these challenges with its declarative approach. It introduced a more natural way of building user interfaces by allowing developers to declare the desired layout and appearance of UI elements.

The framework automatically handles the underlying complexity of managing the user interface. This makes the code more maintainable and scalable. SwiftUI also provides a live preview feature that allows developers to see changes to the UI in real-time as they write code. This significantly speeds up the development process and improves the overall developer experience.

Figure 1. A snapshot of SwiftUI code demonstrating the use of navigation bars and buttons, as compared to UIKit

Examples

Let’s examine a few examples to illustrate the differences between UIKit and SwiftUI in action.

Creating a button

In UIKit, creating a simple button involves several steps, including initialising a UIButton object, setting its properties (such as title, colour, and font), and adding it to the view hierarchy using addSubview(). Any updates or changes to the button’s appearance or behaviour require manual adjustments to its properties and event handling using addTarget().

SwiftUI, on the other hand, takes a much simpler approach. To create a button, you simply use the Button view and specify its label text. SwiftUI takes care of the rest, including handling the button’s appearance and actions. For example:

Figure 2. A side-by-side comparison of button configuration in UIKit and SwiftUI.
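To make the comparison concrete in text, here is a minimal sketch of both approaches. The titles, colours, and parent view are illustrative, not taken from the figure:

```swift
import SwiftUI
import UIKit

// UIKit: imperative – create the button, configure each property,
// and attach it to the view hierarchy by hand.
func makeUIKitButton(in parent: UIView) -> UIButton {
    let button = UIButton(type: .system)
    button.setTitle("Tap me", for: .normal)
    button.setTitleColor(.white, for: .normal)
    button.backgroundColor = .systemBlue
    parent.addSubview(button)
    return button
}

// SwiftUI: declarative – describe the button and its action;
// the framework renders and updates it.
struct TapButton: View {
    var body: some View {
        Button("Tap me") {
            print("Tapped")
        }
        .foregroundColor(.white)
        .background(Color.blue)
    }
}
```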

Conditional visibility

When dealing with conditional visibility of views, UIKit requires you to manually show/hide views based on conditions, often involving complex logic and maintaining multiple outlets or references to views.

SwiftUI simplifies this process with its declarative nature. To conditionally show a view, you can use the if statement directly in SwiftUI. Here’s an example where a view is only displayed when a certain condition is met:

Figure 3. A side-by-side comparison of conditional visibility in UIKit and SwiftUI.
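As a sketch of that difference (the flag and text below are illustrative): in UIKit you would flip a view’s isHidden property yourself, while in SwiftUI the condition lives directly in the view builder:

```swift
import SwiftUI

struct GreetingView: View {
    @State private var isLoggedIn = false

    var body: some View {
        VStack {
            // Rendered only while the condition holds; in UIKit this
            // would require toggling label.isHidden manually.
            if isLoggedIn {
                Text("Welcome back!")
            }
            Button("Toggle login") {
                isLoggedIn.toggle()
            }
        }
    }
}
```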

Handling lists

Working with lists in UIKit often involves implementing a UITableViewDataSource and a UITableViewDelegate to manage the data and appearance of cells. This requires managing data sources, registering cells, and handling updates explicitly.

SwiftUI simplifies the handling of lists with its own List view. To render a list of items, you simply pass an array of items and a closure that describes how to render each item:

Figure 4. A side-by-side comparison of list handling in UIKit and SwiftUI.
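A minimal sketch of such a list (the item array is illustrative):

```swift
import SwiftUI

struct FruitList: View {
    let fruits = ["Apple", "Banana", "Cherry"]

    var body: some View {
        // One row per element – no data source protocol or cell
        // registration, unlike UITableView.
        List(fruits, id: \.self) { fruit in
            Text(fruit)
        }
    }
}
```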

These examples highlight how SwiftUI’s declarative paradigm greatly simplifies the code and makes iOS development more intuitive and efficient compared to UIKit’s imperative approach.

Pros and cons of SwiftUI

Pros of SwiftUI

  • Declarative syntax: SwiftUI’s declarative syntax simplifies UI development, making it easier to learn and understand for both new and experienced developers.
  • Swift integration: SwiftUI is designed specifically for Swift, leveraging its powerful features and type safety, resulting in more robust code.
  • Live preview: The live preview feature allows developers to see immediate results as they make changes to the UI, resulting in faster development iterations.
  • Platform adaptability: SwiftUI code can target multiple Apple platforms (iOS, macOS, watchOS and tvOS) with minimal modification, encouraging code reuse.
  • Animation support: SwiftUI provides built-in support for animation, making it easy to create visually appealing user interfaces.

Cons of SwiftUI

  • iOS version limitation: SwiftUI is available for iOS 13 and above, which means it may not be suitable for projects that require support for older iOS versions.
  • Learning curve: While SwiftUI is relatively easy for developers familiar with Swift and declarative programming to grasp, it may take some time for developers accustomed to UIKit’s imperative style.
  • Limited support for third-party libraries: Because SwiftUI is a relatively new framework, it may have fewer third-party libraries and resources than UIKit.

Conclusion

SwiftUI’s adoption of the declarative programming paradigm has revolutionised iOS application development. Its simpler and more expressive syntax, with live preview, allows developers to create robust and visually stunning user interfaces more efficiently. While UIKit has served the iOS community well for many years, SwiftUI represents the future of iOS programming. As the framework continues to mature and gain popularity, developers can expect further enhancements and improvements that will solidify its position as the first choice for creating user interfaces in the Apple ecosystem.

Categories
Blockchain Entrepreneurship Finance Financial Services Other Supply Chain Technology

How to leverage distributed ledger technology in corporate platforms

In the modern-day digital landscape, consortia and corporate platforms face numerous challenges, including managing complexity, enhancing collaboration, and improving transparency. This article focuses on addressing them using distributed ledger technology (DLT). We will examine power relations, coopetition within consortia, and data security and privacy in decentralized architectures.

How to leverage distributed ledger technology in corporate platforms and consortia

Table of contents:

  1. Modern-day application of blockchain
  2. How blockchain can address challenges faced by consortia
  3. Interactions in corporate platforms – a four-phase trajectory
  4. Conditions for a successful distributed ledger technology project
  5. Things can go wrong with blockchain
  6. Our distributed ledger technology business case: HLB Global
  7. About this article
  8. About Agnieszka Hołownia-Niedzielska

Modern-day application of blockchain

Distributed ledger technology has a wide range of uses, and sectors like settlements and supply chain management continue to leverage it. Logistics companies use blockchain to track production and delivery processes, prevent food fraud, and verify product origin. It also helps verify transportation conditions, validate expiry dates, and confirm eco-certificates.

How blockchain can address challenges faced by consortia

Operating across companies requires consideration of cost division, reconciling competing needs, and managing relationships without a chain of command. Despite the many unknowns that characterize building and organizing consortia, the business case that stakeholders want to work on remains the same, and they will be looking for similar benefits. That’s why creating a list of benefits and presenting them to internal stakeholders is important. This shift could have significant implications for all workers: it can lead to increased automation, greater independence from a central unit, and higher accountability. With a more process-oriented approach, organizations can expect better-structured processes and better data quality, which could lead to less confusion and more organized cross-company systems.

Ecosystems based on DLT, blockchain, or other decentralized architectures are maturing. More companies now look beyond innovation and publicity. Instead, they focus on tangible business cases. Use cases of these consortia are likely to impact workers not directly involved in software development and tech, such as those in accounting and settlements. By automating repetitive tasks, these technologies allow specialists to focus on expert tasks, reducing mistakes and streamlining processes.

The power within blockchain consortia is distributed differently from that of large centralized platform providers. A blockchain implementation operates independently of the organizational or legal framework, and its decentralized nature means that each participant owns a full copy of the data, removing the need for a single central unit that has to be monitored and widely trusted.

Interactions in corporate platforms – a four-phase trajectory

The interaction between the various companies involved in such platform projects always has its own specific characteristics, but it can be generalized into a four-phase course.

The first phase involves innovators, often CIOs driven by personal interest, who identify a persistent problem. It could be resolved by standardizing the process across companies with blockchain technology.

Later, the initiating company experiencing the issue reaches out to interested parties in other business units. This usually happens after many companies have dealt with the same problem for years.

In the third phase of the process, business owners from one or more companies work together. Occasionally, the tech and legal departments also get involved. Usually, the ones in control of the project consult internally with multiple stakeholders. They then work on a solution acceptable to all businesses.

Finally, the proposed solution is built into a Minimum Viable Product (MVP) or Proof of Concept (PoC) version. It’s then beta-tested, feedback is collected, and the solution gains momentum. Some companies might wish to join only as participants, while the majority prefer to join as nodes and own their own copy of the data.

Conditions for a successful distributed ledger technology project

Maintaining relationships within DLT consortia requires some vital work – above all, managing tasks well, keeping things organized, and keeping people at different levels informed. Those managing consortia projects need to consider the differences between participants and the number of departments and workers engaged. It’s crucial to understand that processes and roles may differ, and as a consequence, the people engaged in the project may change. Depending on the consortium’s participants, the structure of the internal process can also vary.

Also, although blockchain communication is secure, establishing connections between nodes built in different organizations requires careful consideration of IT policies, some of which may need adjustments. The team may modify the project’s technical side according to needs. In an ideal scenario, all participants have their own node, transforming IT solution consumers into “vendors”.

Gathering feedback regularly is also a success factor. Feedback from early adopters – a group of pilot companies – plays a vital role, especially during usability tests. What’s important to note is that this work isn’t done when the first version of the solution is in operation. Collecting lessons learned after the MVP phase is a method to implement agile adjustments successfully. 

Interested in validating your blockchain project idea? Check out our article:
6 Easy Steps to Verify a Blockchain Project

Things can go wrong with blockchain

However, not all blockchain projects succeed. Some fail because they don’t grow big enough and never gain momentum. Some treat consortia projects as internal ones, resulting in a lack of external communication and adaptability and creating a restrictive ‘our way is the only way’ mentality. Moreover, implementing change always presents a challenge.

Despite some disappointments, distributed ledger technology consortia and structures keep offering benefits – whether savings or utility (paying more but gaining a single, independent source of truth). Developers build these structures in an agile manner, adding new participants and features and planning the next steps in the decentralization process. The key is to ensure that end-users actively use these projects, so they don’t remain just ideas hidden in a drawer.

Our distributed ledger technology business case: HLB Global

It’s time to analyze a real-world application: a case study of HLB Global. HLB is a global network of independent advisory and accounting firms, with independent branches across 157 countries and more than 38,000 professionals, combining local expertise and global capabilities.

Espeo Software created a decentralized system to let members interact with each other more easily. HLB wanted to standardize and clarify its referral processes, with everyone playing by the same rules enforced by the system. With the ability to add new referrals via the existing SharePoint interface, it’s now easier for everyone to manage their tasks. To track deal statuses, data is gathered from various sources such as project orders, invoices, and payments. We had to build trust into the solution.

We used Hyperledger Fabric, a private blockchain network, as the solution. Users within this network operate in a transparent and secure environment. The system tracks and permanently records every action, ensuring it’s tamper-proof. The permissioned nature of HLF allows only authorized participants to gain access, maintaining the integrity and confidentiality of the blockchain network.

The HLF blockchain network now stores the data, serving as the single source of truth with an immutable data history. We also automated status changes that follow the execution of chaincode tasks.

About this article

Espeo Software’s Solutions Consultant, Agnieszka Hołownia-Niedzielska, was invited by Prof. Dr. Ulrich Klüh to share her expertise on a Coopetition in Corporate Platforms project. The Darmstadt Business School conducts the research project, with funding from the Hans Böckler Foundation.

About Agnieszka Hołownia-Niedzielska:

Agnieszka Hołownia-Niedzielska is a Senior Solutions Consultant at Espeo Software with over a decade of experience in FinTech and RegTech product development and project management. Having been a business owner herself, she brings unique insights into business and technical analysis across projects of various sizes. She holds a Blockchain for Business Professional certification.

Categories
Software Technology

We’re organizing Coding.fest 2.0 in Poznań!

On Thursday, 22 June, we will meet at the Espeo Software office to talk about “AI tools and the developer’s work”. Attendees can expect substantive presentations prepared by experts in mobile technologies, backend development, and frontend development. One of the talks will be a surprise – the speaker will be selected through a competition, described in more detail later in this article.

It’s a great opportunity to network with other specialists, all in a relaxed, festival atmosphere with plenty of pizza, tasty snacks, and drinks.

Coding.fest 2.0 #Poznan

Topic: AI tools and the developer’s work

Date: 22 June (Thursday)

Time: 18:00

Venue: Espeo Software, ul. Baraniaka 6, 61-131 Poznań

Registration: meetup.com

Admission: free

Is this event for me?

Do you code and follow the latest solutions that can make your work easier? Want to learn more about AI tools? Or maybe you’d like to discuss them with other specialists? Perhaps you believe in the power of networking and want to meet other developers in Poznań to get to know each other and exchange experiences? Perfect! It looks like Coding.fest 2.0 is just for you!

Event programme

We start at 18:00. The substantive part of the event includes 3 presentations and is planned for about 90 minutes. You can also expect audience competitions during the talks! We count on your engagement in the discussion. Snacks and drinks will be available for guests throughout, and after the main part there will be time to mingle over fresh pizza. Remember to register for the event HERE.

Speaker competition

Do you have an idea for a topic related to AI and programming that you’d like to discuss? Great! We want to invite one more speaker to our line-up, selected through a competition.

The winner will be invited to the event as a speaker and will additionally receive a prize of 1,500 PLN for their talk.

What do you need to do?

Only adults may take part in the competition. Prepare a short presentation of what you would like to present at the event. It should take the form of a video statement lasting no more than 90 seconds (1.5 minutes).

Competition entries

Send your entry by email with the subject line “Konkurs Coding.fest 2.0” to: eb@espeo.eu. Entries can be submitted until the end of Sunday, 18 June. The entry should include your personal details: first name, surname, date of birth, e-mail address, and phone number. The attached video should be in MP4 format. Submitting an entry implies consent to the processing of personal data for the purpose of running and deciding the competition.

Competition results

A jury composed of the other speakers will select the winner on Monday, 19 June, and all participants will then be informed of the results. The winner will receive an invitation to the event and support in preparing their presentation (expected to last 20–30 minutes). After the talk, the winner will receive a prize of 1,500 PLN! Good luck!

Join Coding.fest 2.0 in Poznań – see you soon!

Categories
Software Technology

A guide to dependency injection in NestJS

Abstract

Dependency injection is one of the most popular patterns in software design, particularly among developers who adhere to clean code principles. Yet, surprisingly, the differences and relations between the dependency inversion principle, inversion of control, and dependency injection are usually not clearly defined, which results in a murky understanding of the subject.

This blog post aims to clarify the boundaries between these terms and to present a practical guide on how dependency injection can be implemented and used. By the end of this post, you will know:

  • What is the difference between the dependency inversion principle, inversion of control, and dependency injection?
  • Why is dependency injection useful?
  • What are meta-programming, decorators, and Reflection API?
  • How is dependency injection implemented in NestJS?

Introduction 

Software design is a constantly developing discipline. Despite its tremendous practical implications, there is still no consensus on what differentiates great application code from terrible code. That seems a good reason to discuss how to characterize great software code.

One of the most popular opinions claims that your code needs to be clean to be good.

This school of thought is particularly dominant among developers using the object-oriented programming paradigm. They propose decomposing the part of reality covered by the application into objects defined by classes, and then grouping those classes into layers. Traditionally, we have distinguished three layers:

  • User interface (UI);
  • Business logic;
  • Implementation layer (databases, networking, and so on).

Advocates of clean code believe that these layers are not equal and that there is a hierarchy among them: the implementation and UI layers should depend on the business logic. Dependency injection (DI) is a design pattern that helps to achieve this goal.

Before we move forward, we need to make some clarifications. Although we traditionally distinguish three application layers, their number can be greater. “Higher-level” layers are more abstract and more distant from input/output operations.

Let’s meet the dependency inversion principle (DIP)

Clean architecture advocates argue that good software architecture can be described by the independence of higher-level layers from lower-level modules. This is called the dependency inversion principle (DIP). The word “inversion” stems from inverting the traditional flow of an application, where the UI depended on the business logic, which in turn depended on the implementation layer (if three layers were defined). In an architecture that adheres to the DIP, the UI and implementation layers depend on the business logic.
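To see the inversion concretely, here is a small TypeScript sketch (the names are illustrative): the business-logic class depends only on an abstraction it defines, and the implementation layer depends “upward” by implementing that abstraction.

```typescript
// The business layer defines the abstraction it needs...
interface Logger {
  log(message: string): void;
}

// ...and depends only on that abstraction, not on any concrete implementation.
class OrderService {
  constructor(private readonly logger: Logger) {}

  placeOrder(item: string): string {
    this.logger.log(`Order placed: ${item}`);
    return `ok:${item}`;
  }
}

// The implementation layer depends "upward" on the business layer's interface.
class InMemoryLogger implements Logger {
  entries: string[] = [];
  log(message: string): void {
    this.entries.push(message);
  }
}

const logger = new InMemoryLogger();
const service = new OrderService(logger);
service.placeOrder("book");
```

Swapping InMemoryLogger for any other Logger implementation requires no change to OrderService.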

A bunch of programming techniques were invented to achieve the DIP. Together, they are referred to under the umbrella term inversion of control (IoC). The dependency injection pattern inverts the control of the program in object creation, but IoC is not limited to the DI pattern. Here are some other use cases of IoC:

  • Changing algorithms for a specific procedure on the fly (strategy pattern),
  • Updating the object during the runtime (decorator pattern),
  • Informing subscribed objects about a state change (observer pattern).

The goals and pros of dependency injection

Using techniques that invert the flow of the application comes with the burden of writing and using additional components. Unfortunately, this increases the cost of maintainability: by extending the code base in this manner, we also increase the system’s complexity, and the entry barrier for new developers becomes higher. That’s why it is worth discussing the potential benefits of IoC and when it makes sense to use it.

In the dependency injection pattern, we separate the initialization of the objects a class uses from the class itself. In other words, we decouple the class’s configuration from its concern. The class’s dependencies are usually called the services, while the class itself is called the client.

In this post, I assume the DI pattern is based on passing the initialized services as arguments to the client’s constructor. Other forms of the pattern exist – for example, setter or interface injection – but I’m describing only the most popular one: constructor injection.

Overall, DI enables applications to become:

  • More resilient to change. If the details of a service change, the client code stays the same, because the dependency is constructed and injected from outside the client.
  • More testable. Injected dependencies can be mocked, which decreases the cost of writing application tests.
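The testability point is easiest to see in code. In this illustrative TypeScript sketch, the client receives its service through the constructor, so a test can inject a fake implementation instead of a real external system:

```typescript
interface PaymentGateway {
  charge(amount: number): boolean;
}

class CheckoutClient {
  constructor(private readonly gateway: PaymentGateway) {}

  checkout(amount: number): string {
    return this.gateway.charge(amount) ? "paid" : "declined";
  }
}

// In a test, we inject fakes instead of a real payment provider.
const alwaysApproves: PaymentGateway = { charge: () => true };
const alwaysDeclines: PaymentGateway = { charge: () => false };

const result1 = new CheckoutClient(alwaysApproves).checkout(100); // "paid"
const result2 = new CheckoutClient(alwaysDeclines).checkout(100); // "declined"
```

Both success and failure paths can now be exercised without any network or external dependency.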

When are these benefits worth the developers’ effort? The answer is simple: when the system is supposed to be long-lived and encompasses a large domain, which results in a complex dependency graph.

Technical frames of implementing Dependency Injection

First, you should consider and understand the pros and cons of DI. Is your system large enough to benefit from this pattern? If so, it’s a good moment to think about implementation. This topic will embark us on a meta-programming journey: we will see how to create a program that scans the code and then acts on the result.

What do we need to create a framework performing DI? Let’s list some design points that form our specification:

  • First, we need a place to store the information about the dependencies. This place is usually called the container.
  • Then, our system needs an injector. So, we need to initialize the service and inject the dependency into the client.
  • Next, we need a scanner. The ability to go through all the system modules and put their dependencies into the container is essential. 
  • Finally, we will need a way to annotate classes with metadata. It allows the system to identify which classes should be injected and which are going to be the recipient of the injection.
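The container and injector from the design points above can be sketched in a few lines of TypeScript. This is a simplified illustration under the stated assumptions, not NestJS’s actual implementation; all names are made up for the example:

```typescript
type Token = string;
type Factory = (resolve: (t: Token) => unknown) => unknown;

// A minimal container: stores factories and caches singleton instances.
class Container {
  private factories = new Map<Token, Factory>();
  private instances = new Map<Token, unknown>();

  register(token: Token, factory: Factory): void {
    this.factories.set(token, factory);
  }

  // The "injector": builds an instance on first request,
  // resolving nested dependencies recursively.
  resolve<T>(token: Token): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`No provider for ${token}`);
      this.instances.set(token, factory((t) => this.resolve(t)));
    }
    return this.instances.get(token) as T;
  }
}

const container = new Container();
container.register("config", () => ({ appName: "demo" }));
container.register("service", (resolve) => {
  const config = resolve("config") as { appName: string };
  return { describe: () => `service of ${config.appName}` };
});

const svc = container.resolve<{ describe(): string }>("service");
// svc.describe() === "service of demo"
```

A real framework replaces the manual register calls with a scanner that reads decorator metadata, which is what the next section is about.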

TypeScript, decorators, and reflection API

Given the general points mentioned above, we should now look for data structures that can help us implement the specification.

Within the TypeScript realm, we could easily use:

  • Reflection API. This API is a collection of tools to handle metadata (e.g., add it, get it, remove it, and so on).
  • Decorators. These are functions that run when the decorated code is evaluated. Combined with the Reflection API, they can annotate classes with metadata.
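To make this concrete, the following TypeScript sketch simulates a metadata store and an @Injectable()-style class decorator. NestJS itself relies on Reflect.defineMetadata from the reflect-metadata package; the store and names here are illustrative stand-ins, and the decorator is applied as a plain function call so the sketch doesn’t depend on compiler settings:

```typescript
// A tiny stand-in for the reflect-metadata store NestJS relies on.
const metadata = new WeakMap<object, Map<string, unknown>>();

function defineMetadata(key: string, value: unknown, target: object): void {
  if (!metadata.has(target)) metadata.set(target, new Map());
  metadata.get(target)!.set(key, value);
}

function getMetadata(key: string, target: object): unknown {
  return metadata.get(target)?.get(key);
}

// A class decorator in the spirit of NestJS's @Injectable():
// it only tags the class with metadata for the DI system to read later.
function Injectable(): ClassDecorator {
  return (target) => {
    defineMetadata("injectable", true, target);
  };
}

class UserService {}
// Applying the decorator as a direct function call keeps the sketch
// independent of the experimentalDecorators compiler option.
Injectable()(UserService);

// getMetadata("injectable", UserService) === true
```

A scanner can later walk over all classes, read this metadata, and register the tagged ones in the container.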

Implementation of dependency injection in NestJS

That was a long journey! Finally, we have collected all the pieces to understand how dependency injection is implemented in one of the most popular Node.js frameworks – NestJS.

Let’s look at the source code of the framework, under nest/packages/common/decorators/core/. There, two decorator definitions interest us.

Both of these files export functions that add metadata to an object by calling the defineMetadata function from the Reflection API.

In the first case (the Injectable decorator), we tag classes as providers, so they can be injected by the NestJS dependency injection system. The latter case (the Inject decorator) is different: we inform the DI system that it needs to provide a particular constructor parameter of the class.

NestJS groups providers and other classes into higher architectural units called modules. As you have probably foreseen, this uses a decorator too, located in nest/packages/common/decorators/modules as module.decorator.ts.

Similarly to the decorators defined above, the meat and potatoes of this code is a function that runs the Reflection API and adds metadata marking the class as a module. Inside the module, we need to answer two essential questions:

  • Which classes are controllers?
  • Which are providers? 

Then the DI system knows how to instantiate the dependencies.

NestJS adds metadata that helps to organize the code into logical units:

  • Units of injectable providers.
  • Parameters of the constructors that need to be injected.
  • Modules structuring the dependency graph of the project.

Step-by-step implementation in NestJS

How does NestJS use this information? Every NestJS project starts with a more or less similar statement:

const app = await NestFactory.create(AppModule);

This line runs the create function from the NestFactoryStatic class, which runs the initialize function (among others).

What is the job of the initialize function? 

  • It creates the dependency scanner, which scans modules for dependencies.
  • During scanForModules, modules are added to the container, which is a Map between a string and a Module.
  • Then scanModulesForDependencies is run, which in essence runs 4 functions: reflectImports, reflectProviders, reflectControllers, and reflectExports.
  • These functions have a similar goal: get the metadata annotated by the decorators and perform specific actions.
  • Afterwards, the instanceLoader initializes the dependencies by running createInstancesOfDependencies, which creates and loads the proper objects.

The complete code of the system is more complex than this picture – it has to handle edge cases such as circular dependencies and exceptions – yet this conveys the gist of it.

Conclusion

To summarize, we have been on a long journey! First, we learned that classes are grouped into layers, and that the layers are not equal. To maintain the proper hierarchy among them, we must invert the control – in other words, reverse the traditional flow of the application.

In the case of object creation, we can invert the control by using a dependency injection system. A great example is NestJS, where the system works by utilizing decorators and the Reflection API – metaprogramming in TypeScript.

In brief, all this jazz is worth the effort for complex and long-lived applications.
