It's a great opportunity to network with other specialists, all in a relaxed, festival-like atmosphere with no shortage of pizza, tasty snacks, and drinks.
Coding.fest 2.0 #Poznan
Topic: AI tools and the programmer's work
Date: June 22 (Thursday)
Time: 6:00 PM
Venue: Espeo Software, ul. Baraniaka 6, 61-131 Poznań
Do you write code and follow the latest solutions that can make your work easier? Would you like to learn more about AI tools? Or maybe you want to discuss the topic with other specialists? Perhaps you believe in the power of networking and want to meet other developers in Poznań to get to know each other and exchange experiences? Perfect timing! It looks like Coding.fest 2.0 is just for you!
Event agenda
We start at 6:00 PM. The substantive part of the event consists of 3 presentations and is planned to last about 90 minutes. You can also expect contests for the audience during the presentations! We count on your engagement and discussion. Snacks and drinks will be available for guests throughout, and after the talks there will be time to mingle over fresh pizza. Remember to sign up for the event HERE.
Call for speakers
Do you have an idea for a topic related to AI and programming that you would like to discuss? Great! We want to invite one more speaker to our lineup, and they will be selected through a contest.
The winner will be invited to the event as a speaker and will additionally receive a prize of PLN 1,500 for their talk.
What do you need to do?
Only adults may take part in the contest. Prepare a short presentation of what you would like to talk about at the event. It should take the form of a video statement lasting no more than 90 seconds (1.5 minutes).
Contest submission
Send your submission by email with the subject "Konkurs Coding.fest 2.0" to: eb@espeo.eu. Submissions are accepted until the end of the day on Sunday, June 18. The submission should include your personal details: first name, last name, date of birth, email address, and phone number. The attached video should be in MP4 format. Sending a submission is equivalent to consenting to the processing of your personal data for the purpose of running and resolving the contest.
Contest results
A contest committee composed of the other speakers will choose the winner on Monday, June 19, and all participants will then be informed of the results. The winner will receive an invitation to the event and support in preparing their presentation (the expected length of the talk is 20-30 minutes). After the talk, the winner will receive the prize of PLN 1,500. Good luck!
Over the years, every Android developer has watched the Material Design language evolve. We started with the first Material Design guidelines in 2014. Then came a set of updates to the language with Material 2 in 2018. Finally, the newest version of Material Design, called Material You (Material 3), brought major changes in 2021. I will walk through these changes over the years from an Android developer's perspective and describe the most valuable changes and new features in Material 3. Let's begin!
Material Design is a design language (a scheme or style that guides product design). There have been 3 major versions of this language.
Material 1
Google introduced the first version of Material Design in 2014. The primary purpose of creating this language was to combine principles of good design with technical and scientific innovation. It allowed Google to unify the design of its applications. What's more, developers could use ready-made basic components through Google's Material Components API.
Material 2
The next step was Material 2 (Material Theme), introduced in 2018 as an update to Material 1. The language went through some cosmetic changes to highlight the components even more.
The main differences between Material 1 and Material 2 are:
new font — Google Sans,
more white spaces,
rounded corners,
colorful icons, and so on.
The video below is a great example showing the migration from Material 1 to Material 2.
Source: ‘Google Material Design 2.0’, youtube.com, uploaded by mobileCTRL, July 24, 2018
Material 3
Material 3, also known as Material You, is the newest version of Material Design. The two previous versions were strictly standardized. In contrast, Material 3 focuses on user-oriented personalization. The newest version of the language introduced many updates. Some of them are listed below:
dynamic colors based on the user’s wallpaper,
different shapes,
new typography,
changed components (for example, bigger buttons).
Source: ‘Android 12 Official Release – What We Waited For!’, youtube.com, uploaded by In Depth Tech Reviews, October 20, 2021
I believe that James Williams explains the changes between Material 2 and Material 3 very well in the article ‘Migrating to Material Design 3’ (source: Material Design Blog, published on October 27, 2021).
Let’s dig deeper into the major features of Material 3.
Characteristics of Material 3
Dynamic colors
Material You focuses on a variety of different colors and shapes. The colors are fully customizable. By contrast, with Material 2, most apps look similar due to its strict standards.
The dynamic colors allow the app to match the system color preferences: the colors in the app change every time the user changes the wallpaper.
Material You is also built around a variety of shapes. You can distinguish different shapes used on one screen, and there is even a 7-level shape scale based on the roundedness of the component corners.
Google has also updated the icons to Material Icons, which come in slightly different variants. The symbols are easily customizable, and the updated set is available on Google's icons page.
Material 3 typography went through a simplification. As a result, instead of the 6 Headline variations, we currently (Jan 2022) have fewer variants for each classification (Small, Medium, Large).
Action buttons in Android can be classified as Floating Action Buttons (FAB) or Extended Floating Action Buttons. A FAB is a circular button that triggers the primary action in the app's UI. The Extended Floating Action Button is a class from the Material Components library for Android. In Material 3, the button looks more rectangular than before.
The Top App Bar and the status bar (displaying the battery and the network icons) now have the same color. Also, there is less contrast between the app bar and the content below.
Users are accustomed to the Floating Action Button sitting at the edge of the Bottom App Bar. The new Material, however, expands the bar vertically and keeps the button inside it.
The new navigation bar makes the selected element more distinguishable by switching the icon to a filled version, bolding the label, and adding a differently colored shape around the icon.
Also, the time picker component is now similar to the date picker component. I recommend reading a more detailed article about implementing the time picker component.
And here you can find a full list of the Material 3 components.
Conclusion
Since its premiere in 2014, the Material Design language has gone through dozens of changes. After a while came the update to Material 2, which wasn't a big deal. The major changes, however, arrived with Material 3. The first Android version with Material 3 was released in late 2021.
Today is the beginning of 2023, and I think the next 2–3 years will be a time for validating Material 3 and rolling out improvements to it.
Consequently, the upcoming versions of the Android OS and the various Google UI libraries for developers will need some changes.
Let’s observe what Google will develop in the coming years!
Dependency injection is one of the most popular patterns in software design, particularly among developers who adhere to clean code principles. Yet, surprisingly, the differences and relations between the dependency inversion principle, inversion of control, and dependency injection are often not clearly defined, which results in a murky understanding of the subject.
This blog post aims to clarify the boundaries between these terms and to present a practical guide on how dependency injection can be implemented and used. By the end of this post, you will know:
What is the difference between the dependency inversion principle, inversion of control, and dependency injection?
Why is dependency injection useful?
What are meta-programming, decorators, and Reflection API?
How is dependency injection implemented in NestJS?
Introduction
Software design is a constantly developing discipline. Despite its tremendous practical implications, we still don't fully understand how to differentiate between great and terrible application code. That alone is a good reason to discuss what characterizes great software code.
One of the most popular opinions claims that your code needs to be clean to be good.
This school of thought is particularly dominant among developers using the object-oriented programming paradigm. They propose decomposing the part of reality covered by the application into objects defined by classes and then grouping those objects into layers. Traditionally, we have distinguished three layers:
User interface (UI);
Business logic;
Implementation layer (databases, networking, and so on).
Advocates of clean code believe that these layers are not equal and there is a hierarchy among them. Namely, the implementation and UI layers should depend on the business logic. Dependency injection (DI) is a design pattern that helps to achieve this goal.
Before we move forward, we need to make some clarifications. Although we traditionally distinguish three application layers, their number could be greater. “Higher level” layers are more abstract and distant from the input/output operations.
Let’s meet the dependency inversion principle (DIP)
Clean architecture advocates argue that good software architecture can be characterized by the independence of the higher-level layers from the lower-level modules. This is called the dependency inversion principle (DIP). The word "inversion" stems from inverting the traditional flow of an application, where the UI depended on the business logic and the business logic, in turn, depended on the implementation layer (assuming three layers are defined). In an architecture where software developers adhere to the DIP, the UI and implementation layers depend on the business logic.
A bunch of programming techniques were invented to achieve the DIP. Together, they are referred to under the umbrella term inversion of control (IoC). The dependency injection pattern helps to invert the control of the program over object creation, but IoC is not limited to the DI pattern. Please find below some other use cases of IoC:
Changing algorithms for a specific procedure on the fly (strategy pattern),
Updating the object during the runtime (decorator pattern),
Informing subscribed objects about a state change (observer pattern).
The goals and pros of dependency injection
Using techniques that invert the flow of the application comes with the burden of writing and using additional components. Unfortunately, it increases the cost of maintenance: by extending the code base in this manner, we also increase the system's complexity. As a result, the entry barrier for new developers is higher. That's why it is worth discussing the potential benefits of IoC and when it makes sense to use it.
In dependency injection, we separate the initialization of the objects a class uses from the class itself.
In other words, we decouple the class's configuration from its concern. The class dependencies are usually called the services, while the class itself is called the client.
In this post I assume the DI pattern is based on passing the initialized services as arguments to the client's constructor. Other forms of this pattern exist, for example setter or interface injection, but I'm describing only the most popular one: constructor injection.
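To make this concrete, here is a minimal, framework-free sketch of constructor injection in TypeScript; the class and interface names are illustrative, not taken from any specific library.

interface PaymentGateway {
  charge(amount: number): Promise<boolean>;
}

// A concrete service implementation; the real HTTP call is omitted.
class StripeGateway implements PaymentGateway {
  async charge(amount: number): Promise<boolean> {
    return amount > 0;
  }
}

// The client receives an already-initialized service through its constructor
// and never creates the dependency itself.
class CheckoutService {
  constructor(private readonly gateway: PaymentGateway) {}

  async checkout(amount: number): Promise<string> {
    const paid = await this.gateway.charge(amount);
    return paid ? 'paid' : 'failed';
  }
}

// Composition happens outside the client, e.g. in main() or in a DI container.
const checkout = new CheckoutService(new StripeGateway());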
Overall, DI enables applications to become:
More resilient to change. If the details of the service change, the client code stays the same, because the dependency is wired up by code outside the client.
More testable. Injected dependencies can be mocked, which decreases the cost of writing application tests (see the small sketch below).
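For instance, reusing the CheckoutService sketch from above, a unit test can swap the real gateway for a hand-written fake:

// A fake gateway used only in tests; it never performs any external calls.
class FakeGateway implements PaymentGateway {
  async charge(): Promise<boolean> {
    return true;
  }
}

const serviceUnderTest = new CheckoutService(new FakeGateway());
// serviceUnderTest.checkout(10) now resolves to 'paid' without touching a real payment API.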
When are these benefits worth the developers' effort? The answer is simple! When the system is supposed to be long-lived and encompasses a large domain, which results in a complex dependency graph.
Technical framework for implementing dependency injection
First, you should consider and understand the pros and cons of DI. Is your system large enough to benefit from this pattern? If so, it's a good moment to think about implementation. This topic takes us on a meta-programming journey: we will see how to create a program that scans the code and then acts on the result.
What do we need to create a framework that performs DI? Let's list some design points that make up our specification.
First, we need a place to store the information about the dependencies. This place is usually called the container.
Then, our system needs an injector, which initializes the service and injects the dependency into the client.
Next, we need a scanner. The ability to go through all the system modules and put their dependencies into the container is essential.
Finally, we will need a way to annotate classes with metadata. It allows the system to identify which classes should be injected and which are going to be the recipients of the injection. A simplified sketch combining these pieces follows below.
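Here is a deliberately minimal sketch of such a container and injector in TypeScript. The names Container, register, and resolve are illustrative rather than taken from any framework, and the scanner step is replaced by manual registration:

type Constructor<T = unknown> = new (...args: any[]) => T;

class Container {
  // The container: stores which dependencies each class needs.
  private registry = new Map<Constructor, Constructor[]>();
  private instances = new Map<Constructor, unknown>();

  // In a real framework a scanner would fill this map from metadata;
  // here we register the dependencies by hand.
  register(target: Constructor, deps: Constructor[] = []): void {
    this.registry.set(target, deps);
  }

  // The injector: recursively builds the dependencies and passes them
  // to the client's constructor, caching singletons along the way.
  resolve<T>(target: Constructor<T>): T {
    if (this.instances.has(target)) {
      return this.instances.get(target) as T;
    }
    const deps = this.registry.get(target) ?? [];
    const args = deps.map((dep) => this.resolve(dep));
    const instance = new target(...args);
    this.instances.set(target, instance);
    return instance;
  }
}

// Usage: the client never calls new ProductsRepository() itself.
class ProductsRepository {}
class ProductsService {
  constructor(public readonly repository: ProductsRepository) {}
}

const container = new Container();
container.register(ProductsRepository);
container.register(ProductsService, [ProductsRepository]);
const products = container.resolve(ProductsService);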
TypeScript, decorators, and reflection API
Given the general points mentioned above, let's look at the language features and data structures that can help us implement this specification.
Within the TypeScript realm, we could easily use:
Reflection API. This API is a collection of tools to handle metadata (e.g., add it, get it, remove it, and so on).
Decorators. These are functions that run when the annotated class definition is evaluated. They can call the Reflection API so that the classes get annotated with metadata.
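To see both mechanisms working together, here is a small sketch using the reflect-metadata package (with the experimentalDecorators compiler option enabled); the decorator name Tagged and the metadata key 'app:tag' are made up for this example:

import 'reflect-metadata';

// A class decorator: a plain function that receives the class constructor
// and attaches metadata to it via the Reflection API.
function Tagged(tag: string): ClassDecorator {
  return (target) => {
    Reflect.defineMetadata('app:tag', tag, target);
  };
}

@Tagged('provider')
class ProductsService {}

// Later, a scanner can read the metadata back from the class.
const tag = Reflect.getMetadata('app:tag', ProductsService);
console.log(tag); // "provider"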
Implementation of dependency injection in NestJS
That was a long journey! Finally, we collected all the pieces to understand how to implement the dependency injection in one of the most popular NodeJS frameworks – NestJS.
Let's look at the framework's source code, under nest/packages/common/decorators/core/. There are two decorator definitions there that interest us.
Both of these files export functions that add metadata to the object by calling the defineMetadata function from the Reflection API.
In the first case (the injectable decorator), we are tagging classes as providers, so they can be injected by the NestJS dependency injection system. The latter case (the inject decorator) is different: we are informing the DI system that it needs to provide a particular constructor parameter of the class.
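In application code this typically looks as follows. The 'PRODUCTS_REPOSITORY' token and the ProductsRepository interface are arbitrary examples rather than names defined by NestJS, and the token would still have to be registered as a provider in a module:

import { Inject, Injectable } from '@nestjs/common';

interface ProductsRepository {
  findAll(): Promise<string[]>;
}

// @Injectable() tags the class as a provider that the NestJS DI system may create and inject.
@Injectable()
export class ProductsService {
  constructor(
    // @Inject() tells the DI system which provider to supply for this constructor parameter.
    @Inject('PRODUCTS_REPOSITORY') private readonly repository: ProductsRepository,
  ) {}

  getAll(): Promise<string[]> {
    return this.repository.findAll();
  }
}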
Similarly to the decorators defined above, the meat and potatoes of the @Module() decorator is to export a function that runs the Reflection API and adds metadata marking the class as a module. Inside the module, we need to answer two essential questions:
Which classes are controllers?
Which are providers?
With that information, the DI system knows how to instantiate the dependencies.
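A module answering those two questions might look like this; it is a self-contained sketch with illustrative class names, not code taken from the framework:

import { Controller, Get, Injectable, Module } from '@nestjs/common';

@Injectable()
class InMemoryProductsRepository {
  findAll(): Promise<string[]> {
    return Promise.resolve(['laptop', 'phone']);
  }
}

@Injectable()
class ProductsService {
  constructor(private readonly repository: InMemoryProductsRepository) {}

  getAll(): Promise<string[]> {
    return this.repository.findAll();
  }
}

@Controller('products')
class ProductsController {
  constructor(private readonly service: ProductsService) {}

  @Get()
  getAll(): Promise<string[]> {
    return this.service.getAll();
  }
}

// The @Module() metadata tells the DI system which classes are controllers
// and which are providers, so it knows what to instantiate and how to wire it together.
@Module({
  controllers: [ProductsController],
  providers: [ProductsService, InMemoryProductsRepository],
})
export class ProductsModule {}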
NestJS thus adds metadata that helps organize the code into logical units:
– Units of injectable providers.
– Parameters of the constructors needed to inject.
– Modules structuring the dependency graph of the project.
Step-by-step implementation in NestJS
How is NestJS using this information? Every NestJS project starts with a more or less similar statement:
const app = await NestFactory.create()
This line runs the create function from the NestFactoryStatic class, which runs the initialize function (among others).
What is the job of the initialize function?
It creates the dependency scanner, which scans modules for dependencies.
During the scanForModules, we add modules to the container, which is a Map between a string and a Module.
Then, scanModulesForDependencies is run, which in essence is running 4 functions: reflectImports, reflectProviders, reflectControllers, and reflectExports.
These functions have a similar goal: get metadata annotated by the decorators and perform specific actions.
Afterwards, the instanceLoader initializes the dependencies by running the function createInstancesOfDependencies, which creates and loads the proper objects.
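Tying it together, a minimal bootstrap that triggers this whole chain could look like the sketch below, reusing the ProductsModule from the earlier example (the import path is hypothetical):

import { NestFactory } from '@nestjs/core';
import { ProductsModule } from './products.module';

async function bootstrap() {
  // create() runs the dependency scanner, fills the container,
  // and instantiates the providers and controllers.
  const app = await NestFactory.create(ProductsModule);
  await app.listen(3000);
}

bootstrap();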
The picture described here is less complex than the complete code of the system, which has to handle edge cases such as circular dependencies, exceptions, and so on, yet it conveys the gist of it.
Conclusion
To summarize, we have been on a long journey! First, we learned that classes are grouped into layers and that those layers are not equal. To maintain the proper hierarchy among them, we must invert the control; in other words, we need to invert the traditional flow of the application.
In the case of object creation, we can invert the control by using a dependency injection system. NestJS is a great example: its DI system works by utilizing decorators and the Reflection API, which brings metaprogramming to TypeScript.
In brief, all this jazz is worth the effort for complex and long-lived applications.
Some time ago, I attended the Devoxx Conference in Kraków, the biggest Java conference in Poland. I searched for inspiration and wanted to hear about the newest trends in the industry. Among many great lectures, one topic left me with a feeling that I should give it more attention – the Micronaut framework and GraalVM native image.
Conference specification
Conference speeches are time-limited, so they usually don't give you a ready solution which you can just grab and use in your projects. They show you only some aspects, highlighting the best features. A closer look and a real-life application are necessary if you want to actually use a technology. The most popular framework for creating Java applications is currently Spring with Spring Boot. It is very powerful and easy to start using. Many features work out of the box, and you can start local development by just adding a single dependency to the application.
However, Spring heavily depends on reflection and proxy objects. For this reason, Spring Boot applications need some time to start. On the other hand, Micronaut is designed to use AOT (Ahead Of Time) compilation and avoid reflection, which should make it easier to use GraalVM and native images. We may ask whether startup time really matters for Java applications. The answer is not obvious. If we have an application running all the time, the startup time is not crucial.
The situation changes in serverless environments, where applications are started to handle an incoming request and shut down afterwards. Our application may prepare a response within milliseconds, but if it needs a couple of seconds to start, that may become a serious problem if we are billed for each minute our application runs.
Applications used for testing
To make a comparison, I decided to write two simple CRUD applications (CRUD stands for Create, Retrieve, Update, Delete and represents the set of most common actions on various entities).
One application was created using Spring Boot 2.7.1, the second one using Micronaut Framework 3.5.3. Both used Java 17 and Gradle as a build tool. I wanted to compare the following aspects:
application startup time;
how easy it is to write the application;
how easy it is to test the application;
how easy it is to prepare the native image.
To look more like a real application, I defined three entities: Order, Buyer, and Product.
Example: 3 entities of model application
Each entity is persisted in a database, has its own business logic, and is exposed through a REST API. As a result, each application has 9 beans (3 REST controllers, 3 domain services, and 3 database repositories).
Example entity structure
Startup time and memory consumption
* all measurements done on MacBook Air M1
We can see that even without GraalVM, Micronaut achieves better startup times and lower memory consumption. Combining the applications with GraalVM gives a huge performance boost. In both scenarios, Micronaut gives us better numbers.
Micronaut from the Spring developer perspective
I work with Spring on a daily basis. When I thought about writing a new application, Spring Boot was my first choice. When I heard about Micronaut for the first time, I was afraid that I would need to learn everything from scratch and that writing applications wouldn't be as easy as with Spring. Now, I can confidently say that using Micronaut is just as easy as writing a Spring Boot application.
Production code
Here we have mainly similarities:
Both have a nice starting page (https://start.spring.io/ and https://micronaut.io/launch/) where you can choose the basic configuration (Java version, build tool, testing framework), add some dependencies, and generate a project. Both are well integrated with IntelliJ IDEA.
(Spring perspective vs. Micronaut perspective – side-by-side code comparison)
Database entities and domain classes are practically identical.
DB repositories also look identical – the only difference is in the imports.
Basically that’s it. There are no other differences in production code.
The advantage of Micronaut
However, Micronaut has one big advantage: it checks many things at compile time instead of at runtime. Let's look at the following repository method: findByName. Suppose we use the wrong return type and return String instead of Product. How will both frameworks behave?
Micronaut will fail during project compilation with the following failure:
error: Unable to implement Repository method: ProductRepository.findByName(String name). Query results in a type [eu.espeo.micronautdemo.db.Product] whilst method returns an incompatible type: java.lang.String
String findByName(String name);
Very clear. And how about Spring? It will compile successfully and everything will be fine until we try to call this method:
java.lang.ClassCastException: class eu.espeo.springdemo.db.Product cannot be cast to class java.lang.String (eu.espeo.springdemo.db.Product is in unnamed module of loader 'app'; java.lang.String is in module java.base of loader 'bootstrap')
Let’s hope everyone writes tests. Otherwise such an error will be thrown on production.
Spring Boot vs Micronaut 0:1
Test code for the REST controller
For Spring we can write:
An integration @SpringBootTest with TestRestTemplate injected. This test starts the Web server. We can easily send some requests to our application.
An integration @SpringBootTest using MockMvc. This test doesn’t start the Web server so it is faster, but sending requests and parsing responses are not as easy as when using TestRestTemplate.
For more specific scenarios you can mock some classes using @MockBean annotation.
For Micronaut you can write:
An integration @MicronautTest with HttpClient injected. This test starts the Netty server. As easy to use as SpringBoot test with TestRestTemplate.
For more specific scenarios you can mock some classes using @MockBean annotation.
Spring Boot (TestRestTemplate):

@Test
void shouldSaveAndRetrieveProduct() {
    // given
    var productBusinessId = UUID.randomUUID();
    var productName = "Apple MacBook";
    var price = BigDecimal.valueOf(11499.90);

    // when
    var createResponse = restTemplate.postForEntity(
            "http://localhost:" + port + "/products",
            new ProductDto(productBusinessId, productName, price),
            ProductDto.class);
    then(createResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
    var product = restTemplate.getForObject(
            "http://localhost:" + port + "/products/" + productBusinessId,
            ProductDto.class);

    // then
    then(product).isNotNull();
    then(product.businessId()).isEqualTo(productBusinessId);
    then(product.name()).isEqualTo(productName);
    then(product.price()).isEqualByComparingTo(price);
}
Micronaut (HttpClient):

@Test
void shouldSaveAndRetrieveProduct() {
    // given
    var productBusinessId = UUID.randomUUID();
    var productName = "Apple MacBook";
    var price = BigDecimal.valueOf(11499.90);

    // when
    var createResponse = client.toBlocking()
            .exchange(HttpRequest.POST("/products", new ProductDto(productBusinessId, productName, price)));
    then((CharSequence) createResponse.getStatus()).isEqualTo(HttpStatus.OK);
    var product = client.toBlocking()
            .retrieve(HttpRequest.GET("/products/" + productBusinessId), ProductDto.class);

    // then
    then(product).isNotNull();
    then(product.businessId()).isEqualTo(productBusinessId);
    then(product.name()).isEqualTo(productName);
    then(product.price()).isEqualByComparingTo(price);
}
Generally, there is no big difference. A good thing about Micronaut is that it provides a default database configuration using Testcontainers, so we can just start writing tests and everything should just work. On the other hand, Spring provides no such test configuration, so we need to remember to configure a database ourselves. This gives Micronaut a slight advantage over Spring Boot.
Spring Boot vs Micronaut 0:2
Native image (GraalVM)
Both frameworks support compilation to a native image. We can produce either a native application or a Docker image containing a native application. Micronaut already uses AOT compilation, so creating a native image should be easier. To do it, I needed only 3 steps:
install GraalVM (if not already installed) and native-image tool
sdk install java 22.1.0.r17-grl
gu install native-image
Add Gradle dependency
compileOnly("org.graalvm.nativeimage:svm")
Annotate DTO classes used by REST API with @Introspected to generate BeanIntrospection metadata at compilation time. This information can be used, for example, to render the POJO as JSON using Jackson without using reflection.
@Introspected
public record ProductDto(
        UUID businessId,
        String name,
        BigDecimal price
) {
    // (…some methods mapping DTO to domain model…)
}
That’s it. Now you can just execute ./gradlew nativeCompile (to build just an application) or ./gradlew dockerBuildNative (to build a Docker image – but currently it does not work on Macs with M1 chip), wait a couple of minutes (native compilation takes longer than standard build) and here you go.
I was really curious whether Spring Boot applications would be as easy to convert as Micronaut applications. There is a spring-native project, which currently has beta support (meaning that breaking changes may happen). The changes I needed to make were slightly different but still not complicated:
install GraalVM (if not already installed) and native-image tool
add a Spring AOT Gradle plugin
id 'org.springframework.experimental.aot' version '0.12.1'
add a spring-native dependency (but when using Gradle you can skip it because Spring AOT plugin will add this dependency automatically)
configure the build to use the release repository of spring-native (both for dependencies and plugins)
maven { url 'https://repo.spring.io/release' }
Now you can just execute ./gradlew nativeCompile (to build just an application) or ./gradlew bootBuildImage (to build a Docker image – but for some reason the process is stuck on my Mac with M1 chip), wait a couple of minutes (native compilation takes longer than standard build) and here you go.
As we can see, both frameworks compile well to a native image, and both have some problems with generating Docker images on M1 Macs 🙂 Neither of them is perfect yet.
Spring Boot vs Micronaut 1:3
Summary
It’s great to have a choice. My test confirmed that Micronaut can be a good alternative to Spring Boot. It has some advantages but Spring is still very strong. I will keep an eye on both.