It wasn't long ago that I decided to ditch my Ubuntu-based distros for openSUSE, finding LEAP 15 to be a steadier, more rock-solid flavor of Linux for my daily driver. The trouble is, I hadn't yet been introduced to Linux Mint Debian Edition (LMDE) — and that sound you hear is my heels clicking with joy over LMDE 6 with the Cinnamon desktop. Allow me to explain. While I've been a long-time fan of Ubuntu, in recent years the addition of snaps (rather than system packages) and other Ubuntu-only features started to wear on me. I wanted straightforward networking, support for older hardware, and a desktop that didn't get in the way of my work. For years, Ubuntu provided that, and I installed it on everything from old netbooks and laptops to towers and IoT devices. More recently, though, I decided to move to Debian, the upstream Linux distro on which Ubuntu (and derivatives like Linux Mint and others) is built. Unlike Ubuntu, Debian holds fast to a truly solid, stable, non-proprietary mindset — and I can still use the apt package manager I've grown accustomed to. That is, every bit of automation I use (Chef and Ansible mostly) works the same on Debian and Ubuntu. I spent some years switching back and forth between the standard Ubuntu long-term releases and Linux Mint, a truly great Ubuntu-derived desktop Linux. Of course, there are many Debian-based distributions, but I stumbled across LMDE version 6, based on Debian GNU/Linux 12 "Bookworm" and known as Faye, and knew I was onto something truly special. As with the Ubuntu-based version, LMDE comes with different desktop environments, including the robust Cinnamon, which provides a familiar environment for any Linux, Windows, or macOS user. It's intuitive, chock-full of great features (like a multi-function taskbar), and it supports a wide range of customizations. However, it includes no snaps or other Ubuntuisms, and it is amazingly stable. That is, I've not had a single freeze-up or odd glitch, even when pushing it hard with Kdenlive video editing, KVM virtual machines, and Docker containers. According to the folks at Linux Mint, "LMDE is also one of our development targets, as such it guarantees the software we develop is compatible outside of Ubuntu." That means if you're a traditional Linux Mint user, you'll find all the familiar capabilities and features in LMDE. After nearly six months of daily use, that's proven true. As someone who likes to hang on to old hardware, LMDE extended its value to me by supporting both 64- and 32-bit systems. I've since installed it on a 2008 MacBook (32-bit), old ThinkPads, old Dell netbooks, and even a Toshiba Chromebook. Though most of these boxes have less than 3 gigabytes of RAM, LMDE performs well. Cinnamon isn't the lightest desktop around, but it runs smoothly on everything I have. The running joke in the Linux world is that "next year" will be the year the Linux desktop becomes a true Windows and macOS replacement. With Debian Bookworm-powered LMDE, I humbly suggest next year is now. To be fair, on some of my oldest hardware, I've opted for BunsenLabs Linux. It, too, is a Debian derivative with 64- and 32-bit versions, and I'm using the Boron release, which uses the Openbox window manager and sips resources: about 400 megabytes of RAM and low CPU usage. With Debian at its core, it's stable and glitch-free. Since deploying LMDE, I've also begun to migrate my virtual machines and containers to Debian 12. Bookworm is amazingly robust and works well on IoT devices, LXCs, and more.
Since it, too, has long-term support, I feel confident about its stability — and security — over time. If you're a fan of Ubuntu and Linux Mint, you owe it to yourself to give LMDE a try. As a daily driver, it's truly hard to beat.
This tutorial illustrates B2B push-style application integration with APIs and internal integration with messages. We have the following use cases: Ad hoc requests for information (Sales, Accounting) that cannot be anticipated in advance. Two transaction sources: A) an internal Order Entry UI, and B) a B2B partner OrderB2B API. The Northwind API Logic Server provides APIs and logic for both transaction sources: Self-serve APIs to support ad hoc integration and UI development, providing security (e.g., customers see only their accounts). Order logic: enforcing database integrity and application integration (alert Shipping). A custom API to match an agreed-upon format for B2B partners. The Shipping API Logic Server listens to Kafka and processes the message.

Key Architectural Requirements: Self-Serve APIs and Shared Logic

This sample illustrates some key architectural considerations:

Requirement | Poor Practice | Good Practice | Best Practice | Ideal
Ad Hoc Integration | ETL | APIs | Self-Serve APIs | Automated Self-Serve APIs
Logic | Logic in UI | Reusable Logic | Declarative Rules | Declarative Rules, Extensible with Python
Messages | — | — | Kafka | Kafka Logic Integration

We'll further expand on these topics as we build the system, but we note some best practices: APIs should be self-serve, not requiring continuing server development. APIs avoid the nightly Extract, Transform, and Load (ETL) overhead. Logic should be reused across the UI and API transaction sources. Logic in UI controls is undesirable since it cannot be shared with APIs and messages.

Using This Guide

This guide was developed with API Logic Server, which is open-source and available here. The guide shows the highlights of creating the system. The complete Tutorial in the Appendix contains detailed instructions to create the entire running system. The information here is abbreviated for clarity.

Development Overview

This overview shows the key code and procedures to create the system above. We'll be using API Logic Server, which consists of a CLI plus a set of runtimes for automating APIs, logic, messaging, and an admin UI. It's an open-source Python project with a standard pip install.

1. ApiLogicServer Create: Instant Project

The CLI command below creates an ApiLogicProject by reading your schema. The database is Northwind (Customer, Orders, Items, and Product), as shown in the Appendix. Note: the db_url value is an abbreviation; you normally supply a SQLAlchemy URL. The sample NW SQLite database is included in ApiLogicServer for demonstration purposes.

$ ApiLogicServer create --project_name=ApiLogicProject --db_url=nw-

The created project is executable; it can be opened in an IDE and executed. One command has created meaningful elements of our system: an API for ad hoc integration and an Admin App. Let's examine these below.

API: Ad Hoc Integration

The system creates a JSON:API with endpoints for each table, providing filtering, sorting, pagination, optimistic locking, and related data access. JSON:APIs are self-serve: consumers can select their attributes and related data, eliminating reliance on custom API development (a sketch of such a request appears at the end of this section). In this sample, our self-serve API meets our ad hoc integration needs and unblocks custom UI development.

Admin App: Order Entry UI

The create command also creates an Admin App: multi-page, multi-table with automatic joins, ready for business-user agile collaboration and back-office data maintenance. This complements custom UIs you can create with the API. Multi-page navigation controls enable users to explore data and relationships.
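To make the self-serve API concrete, here is a hedged sketch of the kind of ad hoc request a consumer could issue. The endpoint shape, parameter names, and token handling are assumptions based on typical JSON:API servers, not the project's exact contract:

import requests  # third-party HTTP client

# Hypothetical ad hoc query: the consumer chooses the filter, related data,
# sort, and page size per call -- no server changes required.
resp = requests.get(
    "http://localhost:5656/api/Customer",
    params={
        "filter[Country]": "Germany",   # row filtering
        "include": "OrderList",         # related data in the same response
        "sort": "CompanyName",
        "page[limit]": 10,              # pagination
    },
    headers={"Authorization": "Bearer <token>"},  # if security is enabled
)
resp.raise_for_status()
for customer in resp.json()["data"]:
    print(customer["id"], customer["attributes"].get("CompanyName"))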
In the Admin App, for example, users might click the first Customer and see their Orders and Items: We created an executable project with one command that completes our ad hoc integration with a self-serve API.

2. Customize: In Your IDE

While API/UI automation is a great start, we now require custom APIs, logic, and security. Such customizations are added in your IDE, leveraging all its services for code completion, debugging, etc. Let's examine these.

Declare UI Customizations

The admin app is not built with complex HTML and JavaScript. Instead, it is configured with ui/admin/admin.yml, automatically created from your data model by the ApiLogicServer create command. You can customize this file in your IDE to control which fields are shown (including joins), hide/show conditions, help text, etc. This makes it convenient to use the Admin App to enter an Order and OrderDetails: Note the automation for automatic joins (Product Name, not ProductId) and lookups (select from a list of Products to obtain the foreign key). If we attempt to order too much Chai, the transaction properly fails due to the Check Credit logic described below.

Check Credit Logic: Multi-Table Derivation and Constraint Rules, 40X More Concise

Such logic (multi-table derivations and constraints) is a significant portion of a system, typically nearly half. API Logic Server provides spreadsheet-like rules that dramatically simplify and accelerate logic development. The five check credit rules below represent the same logic as 200 lines of traditional procedural code (a sketch of these rules appears at the end of this section). Rules are 40X more concise than traditional code, as shown here. Rules are declared in Python and simplified with IDE code completion. Rules can be debugged using standard logging and the debugger. Rules operate by handling SQLAlchemy events, so they apply to all ORM access, whether by the API engine or your custom code. Once declared, you don't need to remember to call them, which promotes quality. The rules above prevented the too-big order through multi-table logic: copying the Product Price, computing the Amount, rolling it up to the AmountTotal and Balance, and checking the credit. These five rules also govern changing orders, deleting them, picking different parts — about nine automated transactions in all. Implementing all this by hand would otherwise require about 200 lines of code. Rules are a unique and significant innovation, providing meaningful improvements over procedural logic:

Characteristic | Procedural | Declarative | Why It Matters
Reuse | Not Automatic | Automatic - all Use Cases | 40X Code Reduction
Invocation | Passive - only if called | Active - call not required | Quality
Ordering | Manual | Automatic | Agile Maintenance
Optimizations | Manual | Automatic | Agile Design

For more on the rules, click here.

Declare Security: Customers See Only Their Own Row

Declare row-level security using your IDE to edit logic/declare_security.py (see screenshot below). An automatically created admin app enables you to configure roles, users, and user roles. If users now log in as ALFKI (configured with the role customer), they see only their customer row. Observe that the console log at the bottom shows how the filter worked. Declarative row-level security ensures users see only the rows authorized for their roles.

3. Integrate: B2B and Shipping

We now have a running system: an API, logic, security, and a UI. Now, we must integrate with the following: B2B partners: We'll create a B2B Custom Resource. OrderShipping: We add logic to send an OrderShipping message.
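Before turning to integration, here is the promised sketch of what the five check-credit rules could look like. It follows the spirit of API Logic Server's declarative Rule API, but the import path and model attribute names are assumptions based on the Northwind schema — treat it as an illustration, not the project's actual declare_logic code:

from logic_bank.logic_bank import Rule   # assumed import path
import database.models as models         # generated project models (assumption)

# Constraint: reject any transaction that pushes Balance over CreditLimit.
Rule.constraint(validate=models.Customer,
                as_condition=lambda row: row.Balance <= row.CreditLimit,
                error_msg="balance ({row.Balance}) exceeds credit ({row.CreditLimit})")

# Derivations: Balance and AmountTotal are rollups, Amount is a formula,
# and UnitPrice is copied from the parent Product.
Rule.sum(derive=models.Customer.Balance,
         as_sum_of=models.Order.AmountTotal,
         where=lambda row: row.ShippedDate is None)    # unshipped orders only
Rule.sum(derive=models.Order.AmountTotal,
         as_sum_of=models.OrderDetail.Amount)
Rule.formula(derive=models.OrderDetail.Amount,
             as_expression=lambda row: row.UnitPrice * row.Quantity)
Rule.copy(derive=models.OrderDetail.UnitPrice,
          from_parent=models.Product.UnitPrice)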
B2B Custom Resource

The self-serve API does not conform to the format required for a B2B partnership. We need to create a custom resource. You can create custom resources by editing customize_api.py using standard Python, Flask, and SQLAlchemy. A custom OrderB2B endpoint is shown below. The main task here is to map a B2B payload onto our logic-enabled SQLAlchemy rows. API Logic Server provides a declarative RowDictMapper class you can use as follows: Declare the row/dict mapping; see the OrderB2B class in the lower pane. Note the support for lookups, so that partners can send ProductNames, not ProductIds. Create the custom API endpoint; see the upper pane: Add def OrderB2B to customize_api.py to create a new endpoint. Use the OrderB2B class to transform API request data to SQLAlchemy rows (dict_to_row). The automatic commit initiates the shared logic described above to check credit and reorder products. Our custom endpoint required under ten lines of code plus the mapper configuration.

Produce OrderShipping Message

Successful orders must be sent to Shipping in a predesignated format. We could certainly POST to an API, but messaging (here, Kafka) provides significant advantages: Async: Our system will not be impacted if the Shipping system is down. Kafka will save the message and deliver it when Shipping is back up. Multi-cast: We can send a message that multiple systems (e.g., Accounting) can consume. The content of the message is a JSON string, just like an API. Just as you can customize APIs, you can complement rule-based logic using Python events: Declare the mapping; see the OrderShipping class in the right pane. This formats our Kafka message content in the format agreed upon with Shipping. Define an after_flush event, which invokes send_order_to_shipping. This is called by the logic engine, which passes the SQLAlchemy models.Order row. send_order_to_shipping uses OrderShipping.row_to_dict to map our SQLAlchemy order row to a dict and uses the Kafka producer to publish the message. Rule-based logic is customizable with Python, here producing a Kafka message with 20 lines of code.

4. Consume Messages

The Shipping system illustrates how to consume messages. The sections below show how to create and start the Shipping server, and how to use our IDE to add the consuming logic.

Create/Start the Shipping Server

This shipping database was created with AI. To simplify matters, API Logic Server has installed the shipping database automatically. We can, therefore, create the project from this database and start it: 1. Create the Shipping project: ApiLogicServer create --project_name=shipping --db_url=shipping 2. Start your IDE (e.g., code shipping) and establish your venv. 3. Start the Shipping server: F5 (configured to use a different port). The core Shipping system was automated by ChatGPT and ApiLogicServer create. We add 15 lines of code to consume Kafka messages, as shown below.

Consuming Logic

To consume messages, we enable message consumption, configure a mapping, and provide a message handler as follows.

1. Enable Consumption Shipping is pre-configured to enable message consumption with a setting in config.py:

KAFKA_CONSUMER = '{"bootstrap.servers": "localhost:9092", "group.id": "als-default-group1", "auto.offset.reset":"smallest"}'

When the server is started, it invokes flask_consumer() (shown below). This calls the pre-supplied FlaskKafka, which handles the Kafka consumption (listening), thread management, and the handle annotation used below.
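Under the hood, this amounts to a standard Kafka consumer loop running on a background thread. For reference, here is a hedged sketch of the equivalent plumbing written directly against the confluent-kafka client; the topic name and handler wiring are illustrative assumptions — in the project, flask_consumer() and the handle annotation provide this for you:

from confluent_kafka import Consumer   # the kind of client FlaskKafka builds on (assumption)
import json

# Configuration mirrors the KAFKA_CONSUMER setting shown above.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "als-default-group1",
    "auto.offset.reset": "smallest",
})
consumer.subscribe(["order_shipping"])   # topic name is an assumption

def order_shipping(payload: dict):
    """Placeholder for the handler that maps the message onto Order rows."""
    print("received order:", payload)

try:
    while True:
        msg = consumer.poll(1.0)          # wait up to 1 second for a message
        if msg is None or msg.error():
            continue
        order_shipping(json.loads(msg.value().decode("utf-8")))
finally:
    consumer.close()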
This housekeeping task is pre-created automatically. FlaskKafka was inspired by the work of Nimrod (Kevin) Maina in this project. Many thanks!

2. Configure a Mapping As we did for our OrderB2B custom resource, we configure an OrderToShip mapping class to map the message onto our SQLAlchemy Order object.

3. Provide a Consumer Message Handler We provide the order_shipping handler in kafka_consumer.py: Annotate the topic handler method, providing the topic name. This is used by FlaskKafka to establish a Kafka listener. Provide the topic handler code, leveraging the mapper noted above. It is called by FlaskKafka per the method annotation.

Test It

You can use your IDE terminal window to simulate a business partner posting a B2BOrder. You can set breakpoints in the code described above to explore system operation.

ApiLogicServer curl "'POST' 'http://localhost:5656/api/ServicesEndPoint/OrderB2B'" --data ' {"meta": {"args": {"order": { "AccountId": "ALFKI", "Surname": "Buchanan", "Given": "Steven", "Items": [ { "ProductName": "Chai", "QuantityOrdered": 1 }, { "ProductName": "Chang", "QuantityOrdered": 2 } ] } }}'

Use Shipping's Admin App to verify the Order was processed.

Summary

These applications have demonstrated several types of application integration: Ad hoc integration via self-serve APIs. Custom integration via custom APIs to support business agreements with B2B partners. Message-based integration to decouple internal systems by removing the requirement that all systems be running at the same time. We have also illustrated several technologies noted in the Ideal column:

Requirement | Poor Practice | Good Practice | Best Practice | Ideal
Ad Hoc Integration | ETL | APIs | Self-Serve APIs | Automated Creation of Self-Serve APIs
Logic | Logic in UI | Reusable Logic | Declarative Rules | Declarative Rules, Extensible with Python
Messages | — | — | Kafka | Kafka Logic Integration

API Logic Server provides automation for the ideal practices noted above: 1. Creation: an instant ad hoc API (and Admin UI) with the ApiLogicServer create command. 2. Declarative Rules: Security and multi-table logic reduce the backend half of your system by 40X. 3. Kafka Logic Integration: Produce messages from logic events. Consume messages by extending kafka_consumer. Services, including: RowDictMapper to transform rows and dicts. FlaskKafka for Kafka consumption, threading, and annotation invocation. 4. Standards-Based Customization: Standard packages: Python, Flask, SQLAlchemy, Kafka... Using standard IDEs. Creation, logic, and integration automation have enabled us to build two non-trivial systems with a remarkably small amount of code:

Type | Code
Custom B2B API | 10 lines
Check Credit Logic | 5 rules
Row-Level Security | 1 security declaration
Send Order to Shipping | 20 lines
Process Order in Shipping | 30 lines
Mapping configurations to transform rows and dicts | 45 lines

Automation dramatically shortens time to market, with standards-based customization using your IDE, Python, Flask, SQLAlchemy, and Kafka. For more information on API Logic Server, click here.

Appendix

Full Tutorial To recreate this system and explore the running code, including Kafka, click here. It should take 30-60 minutes, depending on whether you already have Python and an IDE installed.

Sample Database The sample database is a SQLite version of Northwind (Customer, Order, OrderDetail, and Product). To see a database diagram, click here. This database is included when you pip install ApiLogicServer.
This article starts with an overview of what a typical computer vision application requires. Then, it introduces Pipeless, an open-source framework that offers a serverless development experience for embedded computer vision. Finally, you will find a detailed step-by-step guide on the creation and execution of a simple object detection app with just a couple of Python functions and a model.

Inside a Computer Vision Application

"The art of identifying visual events via a camera interface and reacting to them" — that is what I would answer if someone asked me to describe what computer vision is in one sentence. But it is probably not what you want to hear. So let's dive into how computer vision applications are typically structured and what is required in each subsystem.

- Really fast frame processing: To process a stream of 60 FPS in real time, you have only 16 ms to process each frame. This is achieved, in part, via multi-threading and multi-processing. In many cases, you want to start processing a frame even before the previous one has finished.
- An AI model to run inference on each frame and perform object detection, segmentation, pose estimation, etc.: Luckily, there are more and more open-source models that perform pretty well, so we don't have to create our own from scratch; you usually just fine-tune the parameters of a model to match your use case (we will not dive deep into this today).
- An inference runtime: The inference runtime takes care of loading the model and running it efficiently on the available devices (GPUs or CPUs).
- A GPU: To run inference with the model fast enough, we need a GPU. This is because GPUs can handle orders of magnitude more parallel operations than a CPU, and a model, at its lowest level, is just a huge bunch of mathematical operations. You also need to deal with the memory where the frames are located: they can live in GPU memory or in CPU memory (RAM), and copying frames between the two is a very heavy operation due to the frame sizes, which will make your processing slow.
- Multimedia pipelines: These are the pieces that allow you to take streams from sources, split them into frames, provide them as input to the models, and, sometimes, make modifications and rebuild the stream to forward it.
- Stream management: You may want to make the application resistant to interruptions in the stream, handle re-connections, add and remove streams dynamically, process several of them at the same time, etc.

All those subsystems need to be created or incorporated into your project, and thus they are code that you need to maintain. The problem is that you end up maintaining a huge amount of code that is not specific to your application, but rather subsystems around the actual case-specific code.

The Pipeless Framework

To avoid having to build all of the above from scratch, you can use Pipeless. It is an open-source framework for computer vision that allows you to provide a few functions specific to your case, and it takes care of everything else. Pipeless splits the application's logic into "stages," where a stage is like a micro app for a single model. A stage can include pre-processing, running inference with the pre-processed input, and post-processing the model output to take any action. Then, you can chain as many stages as you want to compose the full application, even with several models. To provide the logic of each stage, you simply add a code function that is very specific to your application, and Pipeless takes care of calling it when required.
This is why you can think about Pipeless as a framework that provides a serverless-like development experience for embedded computer vision. You provide a few functions, and you don't have to worry about all the surrounding systems that are required. Another great feature of Pipeless is that you can add, remove, and update streams dynamically via a CLI or a REST API to fully automate your workflows. You can even specify restart policies that indicate when the processing of a stream should be restarted, whether it should be restarted after an error, etc. Finally, to deploy Pipeless you just need to install it and run it along with your code functions on any device, whether in a cloud VM, in a container, or directly on an edge device like an NVIDIA Jetson, a Raspberry Pi, or others.

Creating an Object Detection Application

Let's dive deep into how to create a simple application for object detection using Pipeless. The first thing we have to do is install it. Thanks to the installation script, it is very simple:

curl https://raw.githubusercontent.com/pipeless-ai/pipeless/main/install.sh | bash

Now, we have to create a project. A Pipeless project is a directory that contains stages. Every stage is under a sub-directory, and inside each sub-directory, we create the files containing hooks (our specific code functions). The name that we give to each stage folder is the stage name that we have to indicate to Pipeless later when we want to run that stage for a stream.

pipeless init my-project --template empty
cd my-project

Here, the empty template tells the CLI to just create the directory; if you do not provide any template, the CLI will prompt you with several questions to create the stage interactively. As mentioned above, we now need to add a stage to our project. Let's download an example stage from GitHub with the following command:

wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/onnx-yolo"

That will create a stage directory, onnx-yolo, that contains our application functions. Let's check the content of each of the stage files; i.e., our application hooks. We have the pre-process.py file, which defines a function (hook) taking a frame and a context. The function performs some operations to prepare the input data from the received RGB frame in order to match the format that the model expects. That data is added to frame_data['inference_input'], which is what Pipeless will pass to the model.

def hook(frame_data, context):
    frame = frame_data["original"].view()
    yolo_input_shape = (640, 640, 3) # h,w,c
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame = resize_rgb_frame(frame, yolo_input_shape)
    frame = cv2.normalize(frame, None, 0.0, 1.0, cv2.NORM_MINMAX)
    frame = np.transpose(frame, axes=(2,0,1)) # Convert to c,h,w
    inference_inputs = frame.astype("float32")
    frame_data['inference_input'] = inference_inputs

... (some other auxiliary functions that we call from the hook function)

We also have the process.json file, which tells Pipeless which inference runtime to use (in this case, the ONNX Runtime), where to find the model that it should load, and some optional parameters for it, such as the execution_provider to use; i.e., CPU, CUDA, TensorRT, etc.

{
    "runtime": "onnx",
    "model_uri": "https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx",
    "inference_params": {
        "execution_provider": "tensorrt"
    }
}

Finally, the post-process.py file defines a function similar to the one at pre-process.py.
This time, it takes the inference output that Pipeless stored at frame_data["inference_output"] and performs the operations to parse that output into bounding boxes. Later, it draws the bounding boxes over the frame, to finally assign the modified frame to frame_data['modified']. With that, Pipeless will forward the stream that we provide, but with the modified frames including the bounding boxes.

def hook(frame_data, _):
    frame = frame_data['original']
    model_output = frame_data['inference_output']
    yolo_input_shape = (640, 640, 3) # h,w,c
    boxes, scores, class_ids = parse_yolo_output(model_output, frame.shape, yolo_input_shape)
    class_labels = [yolo_classes[id] for id in class_ids]
    for i in range(len(boxes)):
        draw_bbox(frame, boxes[i], class_labels[i], scores[i])
    frame_data['modified'] = frame

... (some other auxiliary functions that we call from the hook function)

The final step is to start Pipeless and provide a stream. To start Pipeless, simply run the following command from the my-project directory:

pipeless start --stages-dir .

Once running, let's provide a stream from the webcam (v4l2) and show the output directly on the screen. Note we have to provide the list of stages that the stream should execute in order; in our case, it is just the onnx-yolo stage:

pipeless add stream --input-uri "v4l2" --output-uri "screen" --frame-path "onnx-yolo"

And that's all!

Conclusion

We have described how creating a computer vision application is a complex task due to the many factors and subsystems that we have to implement around it. With a framework like Pipeless, getting up and running takes just a few minutes, and you can focus just on writing the code for your specific use case. Furthermore, Pipeless' stages are highly reusable and easy to maintain, so maintenance will be easy and you will be able to iterate very fast. If you want to get involved with Pipeless and contribute to its development, you can do so through its GitHub repository.
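As a closing note on the post-processing hook above: the drawing helper it calls ships with the onnx-yolo example stage, but a minimal version is easy to picture. The sketch below is a hypothetical draw_bbox implemented with OpenCV; the signature, colors, and font choices are assumptions, not the code from the example stage:

import cv2

# Hypothetical sketch of a draw_bbox helper like the one used in post-process.py.
def draw_bbox(frame, box, label, score, color=(0, 255, 0)):
    x1, y1, x2, y2 = (int(v) for v in box)
    cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)              # box outline
    cv2.putText(frame, f"{label} {score:.2f}", (x1, max(y1 - 8, 0)),  # label above the box
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)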
The domain of Angular state management has received a huge boost with the introduction of Signal Store, a lightweight and versatile solution introduced in NgRx 17. Signal Store stands out for its simplicity, performance optimization, and extensibility, making it a compelling choice for modern Angular applications. In the next steps, we'll harness the power of Signal Store to build a sleek Task Manager app. Let's embark on this journey to elevate your Angular application development. Ready to start building? Let's go! A Glimpse Into Signal Store’s Core Structure Signal Store revolves around four fundamental components that form the backbone of its state management capabilities: 1. State At the heart of Signal Store lies the concept of signals, which represent the application's state in real-time. Signals are observable values that automatically update whenever the underlying state changes. 2. Methods Signal Store provides methods that act upon the state, enabling you to manipulate and update it directly. These methods offer a convenient way to interact with the state and perform actions without relying on observable streams or external state managers. 3. Selectors Selectors are functions that derive calculated values from the state. They provide a concise and maintainable approach to accessing specific parts of the state without directly exposing it to components. Selectors help encapsulate complex state logic and improve the maintainability of applications. 4. Hooks Hooks are functions that are triggered at critical lifecycle events, such as component initialization and destruction. They allow you to perform actions based on these events, enabling data loading, state updates, and other relevant tasks during component transitions. Creating a Signal Store and Defining Its State To embark on your Signal Store journey, you'll need to install the @ngrx/signals package using npm: But first, you have to install the Angular CLI and create an Angular base app with: JavaScript npm install -g @angular/cli@latest JavaScript ng new <name of your project> JavaScript npm install @ngrx/signals Creating a state (distinct from a store) is the subsequent step: TypeScript import { signalState } from '@ngrx/signals'; const state = signalState({ /* State goes here */ }); Manipulating the state becomes an elegant affair using the patchState method: TypeScript updateStateMethod() { patchState(this.state, (state) => ({ someProp: state.someProp + 1 })); } The patchState method is a fundamental tool for updating the state. It allows you to modify the state in a shallow manner, ensuring that only the specified properties are updated. This approach enhances performance by minimizing the number of state changes. 
First Steps for the Task Manager App

First, create your interface for a Task and place it in a task.ts file:

TypeScript
export interface Task {
  id: string;
  value: string;
  completed: boolean;
}

The final structure of the app is: And our TaskService in taskService.ts looks like this:

TypeScript
@Injectable({ providedIn: 'root' })
export class TaskService {
  private taskList: Task[] = [
    { id: '1', value: 'Complete task A', completed: false },
    { id: '2', value: 'Read a book', completed: true },
    { id: '3', value: 'Learn Angular', completed: false },
  ];

  constructor() { }

  getTasks(): Observable<Task[]> {
    return of(this.taskList);
  }

  getTasksAsPromise() {
    return lastValueFrom(this.getTasks());
  }

  getTask(id: string): Observable<Task | undefined> {
    const task = this.taskList.find(t => t.id === id);
    return of(task);
  }

  addTask(value: string): Observable<Task> {
    const newTask: Task = {
      id: (this.taskList.length + 1).toString(), // Generating a simple incremental ID
      value,
      completed: false
    };
    this.taskList = [...this.taskList, newTask];
    return of(newTask);
  }

  updateTask(updatedTask: Task): Observable<Task> {
    const index = this.taskList.findIndex(task => task.id === updatedTask.id);
    if (index !== -1) {
      this.taskList[index] = updatedTask;
    }
    return of(updatedTask);
  }

  deleteTask(task: Task): Observable<Task> {
    this.taskList = this.taskList.filter(t => t.id !== task.id);
    return of(task);
  }
}

Crafting a Signal Store for the Task Manager App

The creation of a store is a breeze with the signalStore method. Create the signalStore and place it in the taskstate.ts file:

TypeScript
import { signalStore, withHooks, withState } from '@ngrx/signals';

export const TaskStore = signalStore(
  { providedIn: 'root' },
  withState({ /* state goes here */ }),
);

Taking store extensibility to new heights, developers can add methods directly to the store. Methods act upon the state, enabling you to manipulate and update it directly.

TypeScript
export interface TaskState {
  tasks: Task[];
  loading: boolean;
}

export const initialState: TaskState = {
  tasks: [],
  loading: false,
};

export const TaskStore = signalStore(
  { providedIn: 'root' },
  withState(initialState),
  withMethods((store, taskService = inject(TaskService)) => ({
    loadAllTasks() {
      // Use TaskService and then patchState(store, { tasks });
    },
  }))
);

This loadAllTasks method is now available directly through the store itself. So, in the component, we could call it in ngOnInit():

TypeScript
@Component({
  // ...
  providers: [TaskStore],
})
export class AppComponent implements OnInit {
  readonly store = inject(TaskStore);

  ngOnInit() {
    this.store.loadAllTasks();
  }
}

Harmony With Hooks

The Signal Store introduces its own hooks, simplifying component code. By passing implemented methods into the hooks, developers can call them effortlessly:

TypeScript
export const TaskStore = signalStore(
  { providedIn: 'root' },
  withState(initialState),
  withMethods(/* ... */),
  withHooks({
    onInit({ loadAllTasks }) {
      loadAllTasks();
    },
    onDestroy() {
      console.log('on destroy');
    },
  })
);

This results in cleaner components, exemplified in the following snippet:

TypeScript
@Component({
  providers: [TaskStore],
})
export class AppComponent implements OnInit {
  readonly store = inject(TaskStore);
  // ngOnInit is NOT needed to load the Tasks !!!!
}

RxJS and Promises in Methods

Flexibility takes center stage as @ngrx/signals seamlessly accommodates both RxJS and Promises:

TypeScript
import { rxMethod } from '@ngrx/signals/rxjs-interop';

export const TaskStore = signalStore(
  { providedIn: 'root' },
  withState({ /* state goes here */ }),
  withMethods((store, taskService = inject(TaskService)) => ({
    loadAllTasks: rxMethod<void>(
      pipe(
        switchMap(() => {
          patchState(store, { loading: true });
          return taskService.getTasks().pipe(
            tapResponse({
              next: (tasks) => patchState(store, { tasks }),
              error: console.error,
              finalize: () => patchState(store, { loading: false }),
            })
          );
        })
      )
    ),
  }))
);

This snippet showcases the library's flexibility in handling asynchronous operations with RxJS. What I find incredibly flexible is that you can use RxJS or Promises to fetch your data. In the above example, you can see that we are using RxJS in our methods. The tapResponse operator helps us use the response and manipulate the state with patchState again. But you can also use Promises. The caller of the method (the hooks in this case) does not care.

TypeScript
async loadAllTasksByPromise() {
  patchState(store, { loading: true });
  const tasks = await taskService.getTasksAsPromise();
  patchState(store, { tasks, loading: false });
},

Reading the Data With Finesse

The Signal Store introduces the withComputed() method. Similar to selectors, this method allows developers to compose and calculate values based on state properties:

TypeScript
export const TaskStore = signalStore(
  { providedIn: 'root' },
  withState(initialState),
  withComputed(({ tasks }) => ({
    completedCount: computed(() => tasks().filter((x) => x.completed).length),
    pendingCount: computed(() => tasks().filter((x) => !x.completed).length),
    percentageCompleted: computed(() => {
      const completed = tasks().filter((x) => x.completed).length;
      const total = tasks().length;
      if (total === 0) {
        return 0;
      }
      return (completed / total) * 100;
    }),
  })),
  withMethods(/* ... */),
  withHooks(/* ... */)
);

In the component, these selectors can be effortlessly used:

TypeScript
@Component({
  providers: [TaskStore],
  template: `
    <div>
      {{ store.completedCount() }} / {{ store.pendingCount() }}
      {{ store.percentageCompleted() }}
    </div>
  `
})
export class AppComponent implements OnInit {
  readonly store = inject(TaskStore);
}

Modularizing for Elegance

To elevate the elegance, selectors and methods can be neatly tucked into separate files. In these files, we use the signalStoreFeature method. With this, we can extract the methods and selectors to make the store even more beautiful. This method again has withComputed, withHooks, and withMethods for itself, so you can build your own features and plug them into the store.
// task.selectors.ts: TypeScript export function withTasksSelectors() { return signalStoreFeature( {state: type<TaskState>()}, withComputed(({tasks}) => ({ completedCount: computed(() => tasks().filter((x) => x.completed).length), pendingCount: computed(() => tasks().filter((x) => !x.completed).length), percentageCompleted: computed(() => { const completed = tasks().filter((x) => x.completed).length; const total = tasks().length; if (total === 0) { return 0; } return (completed / total) * 100; }), })), ); } // task.methods.ts: TypeScript export function withTasksMethods() { return signalStoreFeature( { state: type<TaskState>() }, withMethods((store, taskService = inject(TaskService)) => ({ loadAllTasks: rxMethod<void>( pipe( switchMap(() => { patchState(store, { loading: true }); return taskService.getTasks().pipe( tapResponse({ next: (tasks) => patchState(store, { tasks }), error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), async loadAllTasksByPromise() { patchState(store, { loading: true }); const tasks = await taskService.getTasksAsPromise(); patchState(store, { tasks, loading: false }); }, addTask: rxMethod<string>( pipe( switchMap((value) => { patchState(store, { loading: true }); return taskService.addTask(value).pipe( tapResponse({ next: (task) => patchState(store, { tasks: [...store.tasks(), task] }), error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), moveToCompleted: rxMethod<Task>( pipe( switchMap((task) => { patchState(store, { loading: true }); const toSend = { ...task, completed: !task.completed }; return taskService.updateTask(toSend).pipe( tapResponse({ next: (updatedTask) => { const allTasks = [...store.tasks()]; const index = allTasks.findIndex((x) => x.id === task.id); allTasks[index] = updatedTask; patchState(store, { tasks: allTasks, }); }, error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), deleteTask: rxMethod<Task>( pipe( switchMap((task) => { patchState(store, { loading: true }); return taskService.deleteTask(task).pipe( tapResponse({ next: () => { patchState(store, { tasks: [...store.tasks().filter((x) => x.id !== task.id)], }); }, error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), })) ); } This modular organization allows for a clean separation of concerns, making the store definition concise and easy to maintain. Streamlining the Store Definition With selectors and methods elegantly tucked away in their dedicated files, the store definition now takes on a streamlined form: // task.store.ts: TypeScript export const TaskStore = signalStore( { providedIn: 'root' }, withState(initialState), withTasksSelectors(), withTasksMethods(), withHooks({ onInit({ loadAllTasksByPromise: loadAllTasksByPromise }) { console.log('on init'); loadAllTasksByPromise(); }, onDestroy() { console.log('on destroy'); }, }) ); This modular approach not only enhances the readability of the store definition but also facilitates easy maintenance and future extensions. Our AppComponent then can get the Store injected and use the methods from the store, the selectors, and using the hooks indirectly. 
TypeScript @Component({ selector: 'app-root', standalone: true, imports: [CommonModule, RouterOutlet, ReactiveFormsModule], templateUrl: './app.component.html', styleUrl: './app.component.css', providers: [TaskStore], changeDetection: ChangeDetectionStrategy.OnPush, }) export class AppComponent { readonly store = inject(TaskStore); private readonly formbuilder = inject(FormBuilder); form = this.formbuilder.group({ taskValue: ['', Validators.required], completed: [false], }); addTask() { this.store.addTask(this.form.value.taskValue); this.form.reset(); } } The final app: In Closing In this deep dive into the @ngrx/signals library, we've unveiled a powerful tool for Angular state management. From its lightweight architecture to its seamless integration of RxJS and Promises, the library offers a delightful development experience. As you embark on your Angular projects, consider the elegance and simplicity that @ngrx/signals brings to the table. Whether you're starting a new endeavor or contemplating an upgrade, this library promises to be a valuable companion, offering a blend of simplicity, flexibility, and power in the dynamic world of Angular development. You can find the final code here. Happy coding!
Dragonfly is a drop-in Redis replacement designed to deliver far better performance with far fewer servers. A single node can handle millions of queries per second and up to 1TB of in-memory data. In this article, we will explore how to use Dragonfly with Laravel, one of the most widely used and well-known web frameworks. Dragonfly maintains full compatibility with the Redis interface, meaning Laravel developers can integrate it as a cache and queue driver without a single line of code change. This seamless integration can offer an effortless upgrade path with substantial benefits. So, whether you are a seasoned Laravel veteran or just starting out, join us as we step into the world of Dragonfly and Laravel.

Getting Started

Let's start by setting up a new Dragonfly instance. Visit our documentation here to download an image or the binary and have a Dragonfly instance up and running in no time. Once the Dragonfly instance is operational and reachable, integrating it with your Laravel project is a breeze. Luckily, Laravel already has full support for Redis, so all of its drivers can be reused. To use Dragonfly in your Laravel application, start by updating the .env file with the following configurations. For caching and session management:

CACHE_DRIVER=redis
SESSION_DRIVER=redis

To integrate Dragonfly as the queue driver as well:

QUEUE_CONNECTION=redis

Even though we are using redis as the driver value, Dragonfly is designed to be a direct replacement for Redis, so no additional driver installation is required. With the driver set, the next step is to ensure Laravel can communicate with the Dragonfly instance. This involves updating the .env file again with the correct connection details:

REDIS_HOST: The hostname or IP address of the Dragonfly server.
REDIS_PORT: The port on which the Dragonfly instance is running.
REDIS_PASSWORD: The password for the Dragonfly instance, if set.

Here's an example configuration:

REDIS_HOST=127.0.0.1 # Replace with Dragonfly host
REDIS_PORT=6379      # Replace with Dragonfly port
REDIS_PASSWORD=null  # Replace with Dragonfly password if applicable

After updating these settings, verify the connection by running a simple operation like INFO in Laravel. If you encounter any connectivity issues, double-check the host, port, and password values. Also, ensure that the Dragonfly server is running and accessible from your Laravel application's environment.

use Illuminate\Support\Facades\Redis;

// Run the INFO command and print the Dragonfly version.
Redis::info()["dragonfly_version"];

Higher Efficiency as a Cache

Caching commonly accessed values is one of the primary uses of in-memory databases like Dragonfly and Redis due to their fast response times. This is where Dragonfly shines, especially in scenarios involving a large number of keys and clients, typical as a central cache of multi-node systems or microservices. A standout feature of Dragonfly is the cache mode, designed specifically for scenarios where maintaining a lean memory footprint is as crucial as performance. In this mode, Dragonfly evicts the least recently accessed values when it detects low memory availability, ensuring efficient memory usage without sacrificing speed. You can read more about the eviction algorithm in the Dragonfly Cache Design blog post. Activating the cache mode is straightforward.
Here are the flags you would use to run Dragonfly in this mode, with a memory cap of 12GB:

./dragonfly --cache_mode --maxmemory=12G

Consider a scenario where your application needs to handle a high volume of requests with a vast dataset. In such cases, the Dragonfly cache mode can efficiently manage memory usage while providing rapid access to data, ensuring your application remains responsive and agile. API-wise, all functionality of the Laravel Cache facade should be supported. For example, to store a given key and value with a specific expiration time, the following snippet can be used:

use Illuminate\Support\Facades\Cache;

// Store a value with a 10 minute expiration time.
Cache::put("key", "value", 600);

Memory Usage

One of the benefits of using Dragonfly as a cache is its measurably lower memory usage for most use cases. Let's conduct a simple experiment and fill both Redis and Dragonfly with random strings, measuring their total memory usage after filling them with data.

Dataset | Dragonfly | Redis
3 Million Values of Length 1000 | 2.75GB | 3.17GB
15 Million Values of Length 200 | 3.8GB | 4.6GB

After conducting the experiment, we've observed that Dragonfly's memory usage is up to 20% lower compared to Redis under similar conditions. This allows you to store significantly more useful data with the same memory requirements, making the cache more efficient and achieving higher coverage. You can read more about Dragonfly throughput benchmarks and memory usage in the Redis vs. Dragonfly Scalability and Performance blog post.

Snapshotting

Beyond lower memory usage, Dragonfly also demonstrates stability during snapshotting processes. Snapshotting, particularly in busy instances, can be a challenge in terms of memory management. With Redis, capturing a snapshot on a highly active instance might lead to increased memory usage. This happens because Redis needs to copy memory pages, even those that have only been partially overwritten, resulting in a spike in memory usage. Dragonfly, in contrast, adjusts the order of snapshotting based on incoming requests, effectively preventing any unexpected surges in memory usage. This means that even during intensive operations like snapshotting, Dragonfly maintains a stable memory footprint, ensuring consistent performance without the risk of sudden memory spikes. You can read more about the Dragonfly snapshotting algorithm in the Balanced vs. Unbalanced blog post.

Key Stickiness

Dragonfly also introduces a new feature with its custom STICK command. This command is particularly useful in instances running in cache mode. It enables specific keys to be marked as non-evicting, irrespective of their access frequency. This functionality is especially handy for storing seldom-accessed yet important data. For example, you can reliably keep auxiliary information, like dynamic configuration values, directly on your Dragonfly instance. This eliminates the need for a separate datastore for infrequently used but crucial data, streamlining your data management process.

// Storing a value in the Dragonfly instance with stickiness.
Redis::transaction(function (Redis $redis) {
    $redis->set('server-dynamic-configuration-key', '...');
    $redis->command('STICK', 'server-dynamic-configuration-key');
});

// ...

// Will always return a value since the key cannot be evicted.
$redis->get('server-dynamic-configuration-key');

Enhanced Throughput in Queue Management

Dragonfly, much like Redis, is adept at managing queues and jobs.
As you might have already guessed, the transition to using Dragonfly for this purpose is seamless, requiring no code modifications. Consider the following example in Laravel, where a podcast processing job is dispatched: use App\Jobs\ProcessPodcast; $podcast = Podcast::create(/* ... */); ProcessPodcast::dispatchSync($podcast); Both Dragonfly and Redis are capable of handling tens of thousands of jobs per second with ease. For those aiming to maximize performance, it's important to note that using a single job queue won't yield significant performance gains. To truly leverage Dragonfly's capabilities, multiple queues should be utilized. This approach distributes the load across multiple Dragonfly threads, enhancing overall throughput. However, a common challenge arises when keys from the same queue end up on different threads, leading to increased latency. To counter this, Dragonfly offers the use of hashtags in queue names. These hashtags ensure that jobs in the same queue (which uses the same hashtag) are automatically assigned to specific threads, much like in a Redis Cluster environment, thereby reducing latency and optimizing performance. To learn more about hashtags, check out the Running BullMQ with Dragonfly blog post, which has a detailed explanation of hashtags and their benefits, while Dragonfly is used as a backing store for message queue systems. As a quick example, to optimize your queue management with Dragonfly, start by launching Dragonfly with specific flags that enable hashtag-based locking and emulated cluster mode: ./dragonfly --lock_on_hashtags --cluster_mode=emulated Once Dragonfly is running with these settings, incorporate hashtags into your queue names in Laravel. Here's an example: ProcessPodcast::dispatch($podcast)->onQueue('{podcast_queue}'); By using hashtags in queue names, you ensure that all messages belonging to the same queue are processed by the same thread in Dragonfly. This approach not only keeps related messages together, enhancing efficiency, but also allows Dragonfly to maximize throughput by distributing different queues across multiple threads. This method is particularly effective for systems that rely on Dragonfly as a message queue backing store, as it leverages Dragonfly's multi-threaded architecture to handle a higher volume of messages more efficiently. Conclusion Dragonfly's ability to handle massive workloads with lower memory usage and its multi-threaded architecture make it a compelling choice for modern web applications. Throughout this article, we've explored how Dragonfly seamlessly integrates with Laravel, requiring minimal to no code changes, whether it's for caching, session management, or queue management.
Jakarta EE is a set of specifications: an open-source platform that offers a collection of software components and APIs (Application Programming Interfaces) for the development of enterprise applications and services in Java. In recent years, Jakarta EE has become one of the preferred frameworks for professional enterprise software application and service development in Java. There are probably dozens of such open-source APIs nowadays, but what makes Jakarta EE unique is the fact that all these specifications are produced through a process originally called the JCP (Java Community Process) and currently called the EFSP (Eclipse Foundation Specification Process). These specifications, initially called JSRs (Java Specification Requests) and now called simply Eclipse specifications, come from a consortium bringing together the most important organizations in today's Java software development field, originally led by the JCP and now stewarded by the Eclipse Foundation. Consequently, as opposed to its competitors, whose APIs evolve according to unilateral decisions taken by their implementers, Jakarta EE is an expression of the consensus of companies, user groups, and communities.

From J2EE to Java EE

Jakarta EE is probably the best thing that has happened to Java since its birth more than 20 years ago. Created in the early 2000s, the J2EE (Java 2 Enterprise Edition) specifications were an extension of the Java programming language, also known as J2SE (Java 2 Standard Edition). J2EE was a set of specifications intended to facilitate the development of Java enterprise-grade applications. It was also intended to describe a unified and standard API, allowing developers to deal with complex functionalities like distributed processing, remote access, transactional management, security, and much more. The specifications were maintained by the JCP, as explained, and led by an executive committee in which Sun Microsystems, as the original developer of the Java programming language, had a central role. The beginning of the year 2000 saw the birth of J2EE 1.2. This was the first release of what was about to be later called "server-side" Java. That time was the epoch of the enterprise multi-tier applications that some people today describe as monoliths, having graphical user interfaces on their web tier; business delegate components like stateless EJBs (Enterprise Java Beans), MDBs (Message Driven Beans), and other remote services on the middle tier; and JPA (Java Persistence API) components on the data access tier. Clusters, load balancers, fail-over strategies, and sticky HTTP sessions were part of the de facto software architecture standard that every enterprise application had to meet. All these components were deployed on application servers like WebSphere, WebLogic, JBoss, Glassfish, and others. From 2006 onward, Sun Microsystems decided to simplify the naming convention of the J2EE specifications, which were in their 1.4 version at that time, and, starting with the 5th release, to rename them Java EE (Java Enterprise Edition). Similarly, the standard edition became Java SE (Java Standard Edition). A few years later, in 2010, Sun Microsystems was purchased by Oracle, which became the owner of both Java SE and Java EE. During this time, the JCP continued to produce hundreds of specifications in the form of JSRs covering all aspects of enterprise software development. The complete list of the JSRs may be found here.
Java EE was a huge success — a real revolution in Java software architecture and development. Its implementations, whether open-source or commercial products, were ubiquitous in the enterprise IT landscape. Oracle inherited two of them: Glassfish, which was the open-source reference implementation by Sun Microsystems, and WebLogic, a commercial platform obtained through the purchase of BEA. But Oracle was and remains an RDBMS software vendor and, despite being the new owner of Java SE/EE as well as of Glassfish and WebLogic, its relationship with the Java and Java EE community was quite strained. Consequently, Java SE became a commercial product available under a license and requiring a subscription, while Java EE was no longer maintained and ended up being donated to the Eclipse Foundation in 2017. Its new name was Jakarta EE.

From Java EE to Jakarta EE

With Jakarta EE, server-side Java started a new life. First came Jakarta EE 8, which kept the original javax.* namespace. Then came Jakarta EE 9, which was a hybrid release, as it used some original namespace prefixes together with the new jakarta.* ones. Finally, the current release, Jakarta EE 10, among many other novelties, provides a fully coherent new namespace. The new Jakarta EE 11 release is in progress and scheduled to be delivered in June 2024. The architecture of Java enterprise-grade services and applications continued to evolve under Oracle's stewardship, but the Java EE specifications were in a kind of status quo before becoming Eclipse Jakarta EE. The company didn't really manage to set up a dialogue with users, communities, work groups, and all those involved in the recognition and promotion of Java enterprise-grade services. Their evolution requests and expectations weren't being addressed by the vendor, who didn't seem interested in dealing with its new responsibility as the Java/Jakarta EE owner. Little by little, this led to a guarded reaction from software architects and developers, who began to prefer and adopt alternative technological solutions to application servers. Several kinds of solutions started to appear many years ago on the model of Spring, an open-source Java library claiming to be an alternative to Jakarta EE. In fact, Spring has never been a true alternative to Jakarta EE because, in all its versions and flavors (including but not limited to Spring Core, Spring Boot, and Spring Cloud), it is based on Jakarta EE and needs Jakarta EE in order to run. As a matter of fact, an enterprise-grade Java application needs implementations of specifications like Servlets, JAX-RS (Java API for RESTful Web Services), JAX-WS (Java API for XML Web Services), JMS (Java Message Service), MDB (Message Driven Beans), CDI (Contexts and Dependency Injection), JTA (Java Transaction API), JPA (Java Persistence API), and many others. Yet Spring doesn't implement any of these specifications itself. Spring only provides interfaces to these implementations, relies on them, and, as such, is only a Jakarta EE consumer or client. So, Spring is a Jakarta EE alternative as much as the remote control is an alternative to the television set. Nevertheless, the marketing is sometimes more impressive than the technology itself. This is what happened with Spring, especially since the emergence of Spring Boot.
The architecture of Java enterprise-grade services and applications continued to evolve under Oracle's stewardship, but the Java EE specifications remained in a kind of status quo before becoming Eclipse Jakarta EE. The company didn't really manage to set up a dialogue with users, communities, work groups, and all those involved in the recognition and promotion of Java enterprise-grade services. Their evolution requests and expectations weren't being addressed by the new owner, who didn't seem interested in taking on its new responsibility as the Java/Jakarta EE steward. Little by little, this led to a guarded reaction from software architects and developers, who began to prefer and adopt alternative technological solutions to application servers. Several kinds of solutions started to appear many years ago, modeled on Spring, an open-source Java framework claiming to be an alternative to Jakarta EE. In fact, Spring has never been a true alternative to Jakarta EE because, in all its versions and flavors (including but not limited to Spring Core, Spring Boot, and Spring Cloud), it is based on Jakarta EE and needs Jakarta EE in order to run. As a matter of fact, an enterprise-grade Java application needs implementations of specifications like Servlets, JAX-RS (Java API for RESTful Web Services), JAX-WS (Java API for XML Web Services), JMS (Java Messaging Service), MDB (Message Driven Bean), CDI (Context and Dependency Injection), JTA (Java Transaction API), JPA (Java Persistence API), and many others. Yet Spring doesn't implement any of these specifications itself. Spring only provides interfaces to these implementations, relies on them, and, as such, is only a Jakarta EE consumer or client. So, Spring is a Jakarta EE alternative as much as the remote control is an alternative to the television set. Nevertheless, the marketing is sometimes more impressive than the technology itself. This is what happened with Spring, especially since the emergence of Spring Boot. While trying to find alternative solutions to Jakarta EE and to remedy issues like the perceived heaviness and high prices of application servers, some software professionals adopted Spring Boot as a development platform. And since they needed Jakarta EE implementations for even basic web applications anyway, as shown above, they deployed these applications in open-source servlet engines like Tomcat, Jetty, or Undertow. For features more advanced than servlets, like JPA or JMS, Spring Boot provides integration with Hibernate or ActiveMQ. And should they need even more advanced features, like JTA, they had to go fishing on the internet for free third-party implementations like Atomikos. Additionally, in the absence of an official integration, they had to integrate these features into their servlet engine themselves, with all the risks that this entails. Other solutions, closer to real Jakarta EE alternatives, have emerged as well; among them, Netty, Quarkus, and Helidon are the best known and most popular. All these solutions are based on a handful of software design principles, like single concern, discrete boundaries, transportability across runtimes, and auto-discovery, which have been known since the dawn of time. But because the software industry continuously needs new names, the name found for these alternative solutions was microservices. More and more microservice architecture-based applications appeared during the next few years, to the point that the word "microservice" became one of the most common buzzwords in the software industry. And in order to optimize and standardize microservices technology, the Eclipse Foundation decided to apply to microservices the same process used to design the Jakarta EE specifications. Thus, Eclipse MicroProfile was born. Eclipse MicroProfile is, like Jakarta EE, a group of specifications that draws inspiration from several existing microservices frameworks, such as Spring Boot, Quarkus, Helidon, and others, and unifies their base principles into a consistent and standard API set. Again, like Jakarta EE, the Eclipse MicroProfile specifications have to be implemented by software vendors. While some of these implementations, like Open Liberty, Quarkus, and Helidon, cover only the Eclipse MicroProfile specifications, others, like WildFly, Red Hat EAP, Glassfish, and Payara, try to straddle both worlds and unify Jakarta EE and Eclipse MicroProfile into a single, consistent platform. Conclusion As a continuation of its previous releases, Jakarta EE is a revolution in Java enterprise-grade applications and services. It retains the open-source spirit and is guided by collaboration between companies, communities, and user groups rather than commercial goals alone.
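To give a concrete flavor of the MicroProfile specifications discussed above, here is a minimal, illustrative health check using the Eclipse MicroProfile Health API. The class name is hypothetical, and the snippet assumes a recent, jakarta-based MicroProfile version running on a compatible runtime.

Java
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Illustrative liveness probe: any MicroProfile Health implementation exposes it
// under the standard /health/live endpoint without further configuration.
@Liveness
@ApplicationScoped
public class ServiceLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("service-liveness");
    }
}

Because the API is a specification, the same class should run unchanged on Open Liberty, Quarkus, Helidon, or any other compliant implementation, which is exactly the vendor-neutrality argument made throughout this article.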
In the dynamic world of web development, Single Page Applications (SPAs) and frameworks like React, Angular, and Vue.js have emerged as the preferred approach for delivering seamless user experiences. With the evolution of the Kotlin language and its recent multiplatform capabilities, new options exist that are worthwhile to evaluate. In this article, we will explore Kotlin/JS for creating a web application that communicates with a Spring Boot backend which is also written in Kotlin. In order to keep it as simple as possible, we will not bring in any other framework. Advantages of Kotlin/JS for SPA Development As described in the official documentation, Kotlin/JS provides the ability to transpile Kotlin code, the Kotlin standard library, and any compatible dependencies to JavaScript (ES5). With Kotlin/JS we can manipulate the DOM and create dynamic HTML by taking advantage of Kotlin's conciseness and expressiveness, coupled with its compatibility with JavaScript. And of course, we do have the much needed type-safety, which reduces the likelihood of runtime errors. This enables developers to write client-side code with reduced boilerplate and fewer errors. Additionally, Kotlin/JS seamlessly integrates with popular JavaScript libraries (and frameworks), thus leveraging the extensive ecosystem of existing tools and resources. And, last but not least: this makes it easier for a backend developer to be involved with the frontend part as it looks more familiar. Moderate knowledge of "vanilla" JavaScript, the DOM, and HTML is of course needed; but especially when we are dealing with non-intensive apps (admin panels, back-office sites, etc.), one can get engaged rather smoothly. Sample Project The complete source code for this showcase is available on GitHub. The backend utilizes Spring Security for protecting a simple RESTful API with basic CRUD operations. We won't expand more on this since we want to keep the spotlight on the frontend part which demonstrates the following: Log in with username/password Cookie-based session Page layout with multiple tabs and top navigation bar (based on Bootstrap) Client-side routing (based on Navigo) Table with pagination, sorting, and filtering populated with data fetched from the backend (based on DataTables) Basic form with input fields including (dependent) drop-down lists (based on Bootstrap) Modals and loading masks (based on Bootstrap and spin.js) Usage of sessionStorage and localStorage Usage of Ktor HttpClient for making HTTP calls to the backend An architectural overview is provided in the diagram below: Starting Point The easiest way to start exploring is by creating a new Kotlin Multiplatform project from IntelliJ. The project's template must be "Full-Stack Web Application": This will create the following project structure: springMain: This is the module containing the server-side implementation. springTest: For the Spring Boot tests commonMain: This module contains "shared" code between the frontend and the backend; e.g., DTOs commonTest: For the unit tests of the "common" module jsMain: This is the frontend module responsible for our SPA. jsTest: For the Kotlin/JS tests The sample project on GitHub is based on this particular skeleton. Once you clone the project you may start the backend by executing: $ ./gradlew bootRun This will spin up the SpringBoot app, listening on port: 8090. 
In order to start the frontend, execute:

$ ./gradlew jsBrowserDevelopmentRun -t

This will open up a browser window, automatically navigating to http://localhost:8080 and presenting the user login page. For convenience, a couple of users are provisioned on the server (have a look at dev.kmandalas.demo.config.SecurityConfig for details). Once logged in, the user views a group of tabs, with the main tab presenting a table (data grid) with items fetched from the server. The user can interact with the table (paging, sorting, filtering, data export) and add a new item (product) by pressing the "Add product" button. In this case, a form is presented within a modal, with typical input fields and dependent drop-down lists whose data is fetched from the server. In fact, some caching is applied on this part in order to reduce network calls. Finally, from the top navigation bar, the user can toggle the theme (this setting is preserved in the browser's local storage) and perform logout. In the next section, we will explore some low-level details for selected parts of the frontend module. The jsMain Module Let's start by having a look at the structure of the module: The naming of the Kotlin files should give an idea about the responsibility of each class. The "entrypoint" is of course the Main.kt file:

Kotlin
import home.Layout
import kotlinx.browser.window
import kotlinx.coroutines.MainScope
import kotlinx.coroutines.launch

fun main() {
    MainScope().launch {
        window.onload = {
            Layout.init()
            val router = Router()
            router.start()
        }
    }
}

Once the "index.html" file is loaded, we initialize the Layout and our client-side Router. Now, the "index.html" imports the JavaScript source files of the things we use (Bootstrap, Navigo, DataTables, etc.) and their corresponding CSS files. And of course, it imports the "transpiled" JavaScript file of our Kotlin/JS application. Apart from this, the HTML body consists of some static parts, like the "Top Navbar," and most importantly, our root HTML div tag. Under this tag, we will perform the DOM manipulations needed for our simple SPA. By importing the kotlinx.browser package in our Kotlin classes and singletons, we have access to top-level objects such as the document and window. The standard library provides typesafe wrappers for the functionality exposed by these objects (wherever possible), as described in the Browser and DOM API documentation. This is what we do in most parts of the module: we write Kotlin rather than JavaScript (or jQuery), while at the same time having type-safety without using, e.g., TypeScript.
So, for example, we can create content like this:

Kotlin
private fun buildTable(products: List<Product>): HTMLTableElement {
    val table = document.createElement("table") as HTMLTableElement
    table.className = "table table-striped table-hover"

    // Header
    val thead = table.createTHead()
    val headerRow = thead.insertRow()
    headerRow.appendChild(document.createElement("th").apply { textContent = "ID" })
    headerRow.appendChild(document.createElement("th").apply { textContent = "Name" })
    headerRow.appendChild(document.createElement("th").apply { textContent = "Category" })
    headerRow.appendChild(document.createElement("th").apply { textContent = "Price" })

    // Body
    val tbody = table.createTBody()
    for (product in products) {
        val row = tbody.insertRow()
        row.appendChild(document.createElement("td").apply { textContent = product.id.toString() })
        row.appendChild(document.createElement("td").apply { textContent = product.name })
        row.appendChild(document.createElement("td").apply { textContent = product.category.name })
        row.appendChild(document.createElement("td").apply { textContent = product.price.toString() })
    }

    document.getElementById("root")?.appendChild(table)
    return table
}

Alternatively, we can use the typesafe HTML DSL of the kotlinx.html library, which looks pretty cool. Or we can load HTML content as "templates" and process it further. Many possibilities exist for this task. Moving on, we can attach event listeners, and thus dynamic behavior, to our UI elements like this:

Kotlin
categoryDropdown?.addEventListener("change", {
    val selectedCategory = categoryDropdown.value
    // Fetch sub-categories based on the selected category
    mainScope.launch {
        populateSubCategories(selectedCategory)
    }
})

Before talking about some "exceptions to the rule," it's worth mentioning that we use the Ktor HTTP client (see ProductApi) for making the REST calls to the backend. We could use the ported Fetch API for this task, but going with the Ktor client looks way better. Of course, we need to add the ktor-client as a dependency to the build.gradle.kts file:

Kotlin
val jsMain by getting {
    dependsOn(commonMain)
    dependencies {
        implementation("io.ktor:ktor-client-core:$ktorVersion")
        implementation("io.ktor:ktor-client-js:$ktorVersion")
        implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion")
        //...
    }
}

The client includes in its HTTP requests the JSESSIONID browser cookie received from the server upon successful authentication. If this is omitted, we will get back HTTP 401/403 errors from the server. These are also handled and displayed within Bootstrap modals. Also, a very convenient thing regarding the client-server communication is the sharing of common data classes (Product.kt and Category.kt, in our case) between the jsMain and springMain modules. Exception 1: Use Dependencies From npm For client-side routing, we selected the Navigo JavaScript library. This library is not part of Kotlin/JS, but we can import it in Gradle using the npm function:

Kotlin
val jsMain by getting {
    dependsOn(commonMain)
    dependencies {
        //...
        implementation(npm("navigo", "8.11.1"))
    }
}

However, because JavaScript modules are dynamically typed and Kotlin is statically typed, in order to manipulate Navigo from Kotlin we have to provide an "adapter."
This is what we do within the Router.kt class:

Kotlin
@JsModule("navigo")
@JsNonModule
external class Navigo(root: String, resolveOptions: ResolveOptions = definedExternally) {
    fun on(route: String, handler: () -> Unit)
    fun resolve()
    fun navigate(s: String)
}

With this in place, the Navigo JavaScript module can be used just like a regular Kotlin class. Exception 2: Use JavaScript Code From Kotlin It is possible to invoke JavaScript functions from Kotlin code using the js() function. Here are some examples from our sample project:

Kotlin
// From ProductTable.kt:
private fun initializeDataTable() {
    js("new DataTable('#$PRODUCTS_TABLE_ID', $DATATABLE_OPTIONS)")
}

// From ModalUtil.kt:
val modalElement = document.getElementById(modal.id) as? HTMLDivElement
modalElement?.let {
    js("new bootstrap.Modal(it).show()")
}

However, this should be used with caution, since this way we step outside Kotlin's type system. Takeaways In general, the best framework to choose depends on several factors, one of the most important being the one the development team is most familiar with. On the other hand, according to the Thoughtworks Technology Radar, the "SPA by default" approach is under question, meaning that we should not blindly accept the complexity of SPAs and their frameworks when the business needs don't justify it. In this article, we provided an introduction to Kotlin Multiplatform with Kotlin/JS, which brings new things to the table. Taking into consideration the latest additions to the ecosystem, namely Kotlin/Wasm and Compose Multiplatform, it becomes evident that these advancements offer not only a fresh perspective but also robust solutions for streamlined development.
Embark on a journey into the latest advancements in Spring Boot development with version 3.2.0 as we guide you through creating a fundamental "Hello World" application. In this tutorial, our focus extends beyond the customary introduction to Spring; we delve into the intricacies of constructing a REST API while seamlessly integrating it with a NoSQL database. Spring Boot 3.2.0, with its array of new features and optimizations, sets the stage for an engaging exploration of contemporary development practices. This guide is tailored for both novices and seasoned developers, promising hands-on experience in harnessing the potential of Spring for robust, modern applications. Let's commence this journey into Spring Boot 3.2.0, where simplicity meets innovation. What’s New in Spring Boot 3.2.0 Spring Boot 3.2.0 represents a significant leap forward in Java development, demanding a minimum Java 17 environment and extending support to the cutting-edge Java 21. This version introduces many features that redefine the landscape of Spring framework usage. One of its most impressive features is support for Java 21's virtual threads, which boost scalability and responsiveness by utilizing lightweight threads. Furthermore, Spring Boot 3.2.0 introduces initial support for Project CRaC (JVM Checkpoint Restore), which enables applications to recover their state after a JVM restart, thus enhancing reliability and resilience. Security takes center stage with SSL Bundle Reloading, enabling dynamic reloading of SSL bundles. This feature empowers developers to manage SSL certificates more dynamically, ensuring both agility and security in their applications. Observability improvements are woven throughout the release, providing developers with enhanced monitoring and tracing capabilities for a more transparent and manageable development experience. In line with modern development practices, Spring Boot 3.2.0 introduces dedicated clients for RESTful (RestClient) and JDBC (JdbcClient) operations. These additions streamline communication with external services, enhancing integration capabilities. Compatibility with Jetty 12 is another noteworthy inclusion, allowing developers to leverage the latest features of the Jetty web server. Spring Boot 3.2.0 expands its ecosystem compatibility with support for Apache Pulsar, broadening the messaging capabilities of Spring for building robust, event-driven applications. Acknowledging the prevalence of Kafka and RabbitMQ, Spring Boot 3.2.0 introduces SSL bundle support for these popular messaging systems, bolstering the security posture of applications relying on these message brokers. The release also addresses dependency management intricacies with a reworked approach to handling nested JARs, ensuring more reliable and predictable application deployments. Lastly, Docker image building sees improvements, streamlining the containerization process and enhancing Spring applications' portability and deployment efficiency. In conclusion, Spring Boot 3.2.0 aligns itself with the latest Java versions, introduces groundbreaking features, and refines existing capabilities. This release empowers developers to confidently build modern, resilient, and highly performant applications in the ever-evolving landscape of Java development.
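As a quick illustration of one of these additions, the snippet below sketches the new fluent RestClient introduced alongside Spring Boot 3.2. The base URL and path are placeholders chosen for this example, not endpoints from the tutorial that follows.

Java
import org.springframework.web.client.RestClient;

public class RestClientExample {

    public static void main(String[] args) {
        // Build a client once and reuse it; the base URL here is a placeholder.
        RestClient restClient = RestClient.builder()
                .baseUrl("https://api.example.com")
                .build();

        // Synchronous GET call with a fluent API and no RestTemplate boilerplate.
        String body = restClient.get()
                .uri("/status")
                .retrieve()
                .body(String.class);

        System.out.println(body);
    }
}

JdbcClient follows the same fluent philosophy for SQL access, and both clients are available out of the box in a Spring Boot 3.2.0 project.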
Show Me the Code In this session, we embark on an exciting journey to develop a Pokemon API, leveraging the power of Spring and integrating it seamlessly with HarperDB. Our focus will be on implementing fundamental CRUD (Create, Read, Update, Delete) operations, with a special emphasis on utilizing unique identifiers (IDs) for each Pokemon. By the end of this session, you’ll not only have a fully functional Spring application but also a Pokemon API at your disposal, ready to be extended and integrated into larger projects. Let’s dive into the world of Pokemon and Spring development, where simplicity meets innovation. Ensure your NoSQL database, HarperDB, is up and running using Docker. Open your terminal and execute the following command:

Shell
docker run -d -e HDB_ADMIN_USERNAME=root -e HDB_ADMIN_PASSWORD=password -e HTTP_THREADS=4 -p 9925:9925 -p 9926:9926 harperdb/harperdb

This command pulls the HarperDB Docker image and starts a container with the specified configuration. The -p option maps the container’s ports to your local machine, making the HarperDB interface accessible at http://localhost:9925. Head to the Spring Initializer to set up our Spring application. Follow these steps:
Select the desired project settings (e.g., Maven or Gradle, Java version).
Add dependencies: choose “Spring Web” from the dependencies list.
Click “Generate” to download the project as a ZIP file.
Extract the downloaded ZIP file and import the project into your preferred Integrated Development Environment (IDE), such as IntelliJ IDEA or Eclipse. Now that our Spring application is set up, the next crucial step is to integrate it with HarperDB. To achieve this, we must include the HarperDB dependency in our project. Add the following Maven dependency to your pom.xml file:

XML
<dependency>
    <groupId>io.harperdb</groupId>
    <artifactId>harpderdb-core</artifactId>
    <version>0.0.1</version>
</dependency>

With the dependency in place, let’s move on to the code. We’ll create a configuration class in Spring, HarperDB, to manage the connection and make it an integral part of the Spring Inversion of Control (IoC) container:

Java
@Configuration
public class HarperDB {

    @Bean
    public Template template() {
        Server server = ServerBuilder.of("http://localhost:9925")
                .withCredentials("root", "password");
        server.createDatabase("pokemons");
        server.createTable("pokemon").id("id").database("pokemons");
        return server.template("pokemons");
    }
}

This configuration class, annotated with @Configuration, creates a Spring bean named template. The Template object is a key component for interacting with HarperDB. We initialize it with the server connection details, including the server URL and login credentials. Additionally, we create a database named “pokemons” and a table named “pokemon” with an “id” column. It sets the stage for storing our Pokemon entities in HarperDB. To enhance the demo, we’ll first create an immutable entity using Java’s record feature:

Java
public record Pokemon(String id, String name, String location) {
}

This simple Pokemon record class encapsulates the basic attributes of a Pokemon (its ID, name, and location) in an immutable manner.
Next, let’s establish communication with the database by creating the PokemonService to serve as a bridge to HarperDB:

Java
@Service
public class PokemonService {

    private final Template template;

    public PokemonService(Template template) {
        this.template = template;
    }

    public Optional<Pokemon> findById(String id) {
        return template.findById(Pokemon.class, id);
    }

    public void save(Pokemon pokemon) {
        template.upsert(pokemon);
    }

    public void delete(String id) {
        template.delete(Pokemon.class, id);
    }
}

The PokemonService class is a Spring service that handles basic operations related to Pokemon entities. It utilizes the Template object we configured earlier to interact with HarperDB. The findById method retrieves a Pokemon by its ID, the save method adds or updates a Pokemon, and the delete method removes it from the database. Lastly, let’s create the PokemonController to expose these operations as REST endpoints:

Java
@RestController
public class PokemonController {

    private final PokemonService service;

    public PokemonController(PokemonService service) {
        this.service = service;
    }

    @GetMapping("/pokemons/{id}")
    Pokemon findById(@PathVariable String id) {
        return service.findById(id).orElseThrow(() -> new PokemonNotFoundException(id));
    }

    @PutMapping("/pokemons")
    Pokemon newEmployee(@RequestBody Pokemon pokemon) {
        service.save(pokemon);
        return pokemon;
    }

    @DeleteMapping("/pokemons/{id}")
    void deleteEmployee(@PathVariable String id) {
        service.delete(id);
    }
}

This PokemonController class is annotated with @RestController and defines three endpoints:
GET /pokemons/{id} retrieves a Pokemon by its ID.
PUT /pokemons creates a new Pokemon or updates an existing one.
DELETE /pokemons/{id} deletes a Pokemon by its ID.
The controller relies on the PokemonService to handle these operations, providing a clean separation of concerns in our Spring application. (The PokemonNotFoundException thrown by findById is not shown in the listing; a possible sketch is given after the test scenarios below.) With these components in place, our Pokemon API can perform basic CRUD operations using HarperDB. Feel free to test the endpoints and see the seamless integration of Spring with the NoSQL database in action! Your Spring application, integrated with HarperDB and equipped with a Pokemon API, is now ready for testing and execution. Let’s explore some common scenarios using curl commands. Before proceeding, make sure your Spring application is running. Create a Pokemon

Shell
curl -X PUT -H "Content-Type: application/json" -d '{"id": "1", "name": "Pikachu", "location": "Forest"}' http://localhost:8080/pokemons

This command creates a new Pokemon with ID 1, the name Pikachu, and the location Forest. Retrieve a Pokemon by ID

Shell
curl http://localhost:8080/pokemons/{id}

Replace {id} with the actual ID of the Pokemon you just created. Update a Pokemon

Shell
curl -X PUT -H "Content-Type: application/json" -d '{"id": "1", "name": "Raichu", "location": "Thunderstorm"}' http://localhost:8080/pokemons

This command updates the existing Pokemon with ID 1 to have the name Raichu and location Thunderstorm. Delete a Pokemon by ID

Shell
curl -X DELETE http://localhost:8080/pokemons/{id}

Replace {id} with the actual ID of the Pokemon you want to delete. These scenarios provide a comprehensive test of the basic CRUD operations in your Pokemon API, starting with creating a Pokemon. Adjust the commands as needed based on your specific use case and data. Happy testing!
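As noted above, the controller references a PokemonNotFoundException that the article does not show. One possible, hypothetical implementation simply maps the exception to an HTTP 404 response:

Java
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

// Hypothetical exception class: @ResponseStatus lets Spring translate it into
// a 404 Not Found response without any extra exception-handling code.
@ResponseStatus(HttpStatus.NOT_FOUND)
public class PokemonNotFoundException extends RuntimeException {

    public PokemonNotFoundException(String id) {
        super("Pokemon not found with id: " + id);
    }
}

With something like this in place, requesting a non-existent ID returns a 404 instead of a generic 500 error.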
Conclusion In this tutorial, we harnessed the capabilities of Spring Boot 3.2.0 to craft a streamlined Pokemon API integrated seamlessly with HarperDB. The latest Spring version introduced key features, enhancing our ability to create resilient and scalable applications. Utilizing immutable entities, Spring IoC, and HarperDB, we demonstrated the simplicity of modern Java development. The demo code, available here, is a foundation for your projects, ready for customization. For updates and in-depth insights, refer to the official Spring blog. May your Spring Boot and HarperDB journey be filled with innovation and coding joy! Happy coding!
This is an article from DZone's 2023 Enterprise Security Trend Report.For more: Read the Report In recent years, developments in artificial intelligence (AI) and automation technology have drastically reshaped application security. On one hand, the progress in AI and automation has strengthened security mechanisms, reduced reaction times, and reinforced system resilience. On the other hand, the challenges in AI and automation have created exploitable biases, overreliance on automation, and expanded attack surfaces for emerging threats. As we can see, there is immense value and growing potential for these technologies when redefining security scenarios, but we must not ignore that there are numerous challenges as well. Truthfully, every new technology brings a new opportunity for an exploit that, unless addressed, would compromise the very security it seeks to improve. Let's explore how AI and automation technology both help and hurt application security. Enhanced Threat Detection AI has evolved from basic anomaly detection to proactive threat response and continuous monitoring. With cybersecurity teams often required to do more with less, coupled with the need for greater resource efficiency, AI threat detection is crucial to addressing the increasingly complex and sophisticated cyber threats that organizations face. AI-powered tools offer real-time attack detection and maintain continuous observation, which is critical when threats can emerge suddenly and unexpectedly. Their adaptive learning allows AI technologies to identify patterns sooner and take proactive actions to avoid or deescalate potential threats and attacks. Additionally, learning from past incidents and adapting to new threats make systems more resilient against attacks, improving the detection of security breaches or vulnerabilities through advanced analytical capabilities. Similarly, the shift toward automated responses is also a response to the need for more efficient resource management. 
As seen in Table 1, we can observe key developments in AI threat detection and their results:

EVOLUTION OF THREAT DETECTION
Year | Key Developments in AI Threat Detection | Key Challenges and Advancements
1950s | Early conceptualization of AI | Threat detection applications were limited; AI primarily focused on symbolic reasoning and basic problem-solving
1980s | Rule-based systems and basic expert systems were introduced for specific threat types | Limited by the complexity of rule creation and the inability to adapt to evolving threats
1990s | Machine learning (ML) algorithms gained popularity and were applied to signature-based threat detection | SVMs, decision trees, and early neural networks were used for signature matching; limited effectiveness against new, unknown threats
2000s | Introduction of behavior-based detection using anomaly detection algorithms | Improved detection of previously unknown threats based on deviations from normal behavior; challenges in distinguishing between legitimate anomalies and actual threats
2010s | Rise of deep learning, particularly convolutional neural networks for image-based threat detection; improved use of ML for behavioral analysis | Enhanced accuracy in image-based threat detection; increased adoption of supervised learning for malware classification
2020s | Continued advancements in deep learning, reinforcement learning, and natural language processing; integration of AI in next-gen antivirus solutions; increased use of threat intelligence and collaborative AI systems | Growing focus on explainable AI, adversarial ML to address security vulnerabilities, and the use of AI in orchestrating threat responses
Table 1

Improvements to Efficiency and Accuracy Automation presents a critical change in how security teams approach and manage cyber threats, moving away from traditional passive anomaly detection to modern active automated responses. Automation for incident response has impacted how threats are managed. It not only accelerates the response process but also ensures a consistent and thorough approach to threat management. A notable advancement in this area is the ability of AI systems to perform automated actions, such as isolating compromised devices to prevent the spread of threats and executing complex, AI-driven responses tailored to specific types of attacks. It also enables security teams to allocate their resources more strategically, focusing on higher-level tasks and strategies rather than routine threat monitoring and response. By moving from passive detection to active, automated actions, AI is empowering security teams to respond to threats more swiftly and effectively, ensuring that cybersecurity efforts are as efficient and impactful as possible. Reducing Human Error The use of AI is a major step forward in reducing human error and enhancing effective security overall. AI's capabilities, which include minimizing false positives, prioritizing alarms, strengthening access control, and mitigating insider threats, collectively create a more reliable and efficient security framework.
Figure 1: Human error resolutions with AI
Over-Reliance on Automation The incorporation of AI and automation into various business processes eases security workloads while simultaneously broadening the potential attack surface, which is a critical concern. This situation demands the development of robust security protocols tailored specifically for AI to prevent it from becoming a weak link in the security framework.
As AI becomes more prevalent, cyber attackers are adapting and gaining a deeper understanding of AI systems. This expertise allows them to exploit weaknesses in AI algorithms and models. Consequently, cybersecurity strategies must evolve to defend against both traditional threats and sophisticated threats targeting AI vulnerabilities. Every AI system, interface, and data point represents a possible target, requiring a robust cybersecurity approach that covers all aspects of AI and automation within an organization. This evolving landscape requires continuous identification and mitigation of emergent risks, signifying a dynamic process where security strategies must be regularly assessed and adapted to address new vulnerabilities as they surface. This evolving cybersecurity challenge underscores the importance of ongoing vigilance and adaptability in protecting against AI- and automation-related threats. Exploitable AI Biases Ensuring the integrity and effectiveness of AI systems involves addressing biases that are present in their training data and algorithms, which can lead to skewed results and potentially compromise security measures. Efforts to refine these algorithms are ongoing, focusing on using diverse datasets and implementing checks to ensure fair and unbiased AI decisions. As seen in Table 2, balancing AI security features with the need for ethical and privacy-conscious use is a significant and ongoing challenge. It demands a comprehensive approach that encompasses the technical, legal, and ethical aspects of AI implementation.

AI BIASES AND SOLUTIONS
Common Biases in AI | Strategies to Mitigate Bias
Training data bias | Use diverse and representative datasets; implement checks to ensure fairness
Algorithmic bias | Regularly refine algorithms to reduce bias; conduct audits and reviews of AI systems
Privacy concerns | Employ encryption and strict access controls; regularly audit AI systems for privacy compliance
Ethical considerations | Develop and follow ethical guidelines in AI operations; ensure respect for privacy, non-discrimination, and transparency
Overall mitigation approach | Adopt a comprehensive approach that covers technical, legal, and ethical aspects; balance AI functionality with privacy preservation
Table 2

Potential for Malicious Use AI and automation present not only advancements but also significant challenges, particularly in how they can be exploited by malicious actors. The automation and learning capabilities of AI can be used to develop more adaptive and resilient malware, presenting a challenge to traditional cybersecurity defenses.
Figure 2: Malicious uses for AI and automation and various challenges
While AI aims to enhance efficiency, it raises questions about the reliability and potential unintended consequences of such automated actions, underscoring the need for careful integration of AI in cybersecurity strategies. Negligence and Security Oversight The emergence of AI and automation has not only transformed security but also altered regulation. The year 2023 was a turning point in the regulation of AI technologies, due in large part to their growing sophistication and ubiquity. The overall sentiment leans toward more stringent regulatory measures to ensure the responsible, ethical, and secure use of AI, especially where cybersecurity is concerned. Regulatory initiatives like the NIST AI Risk Management Framework and the AI Accountability Act are at the center of this security challenge.
These are designed to set guidelines and standards for AI development, deployment, and management. The NIST framework provides a structured approach for assessing and mitigating AI-related risks, while the AI Accountability Act emphasizes transparency and accountability in AI operations. However, the adoption of AI and automation presents significant cybersecurity difficulties. The technical, social, and organizational challenges of implementing AI applications pose even greater hurdles, compounded by the growing costs of integrating robust AI algorithms into current cybersecurity designs. These considerations present organizations operating in uncertain regulatory environments with the daunting task of maintaining a delicate balance between the practical implementation of leading-edge security safeguards and compliance. Ultimately, this balance is crucial for ensuring that the benefits of AI and automation are used effectively while adhering to regulatory standards and maintaining ethical and secure AI practices. Conclusion The dual nature of AI and automation technology shows that they provide great returns but must be approached with caution in order to understand and minimize the associated risks. It is apparent that while the use of AI and automation strengthens application security with enhanced detection capabilities, improved efficiency, and adaptive learning, they also introduce exploitable biases, potential overreliance on automated systems, and an expanded attack surface for adversaries. As these technologies evolve, it will be important for us to adopt a forward-looking framework that assumes a proactive and balanced approach to security. This entails not just leveraging the strengths of AI and automation for improved application security but also continuously identifying, assessing, and mitigating the emergent risks they pose. Ultimately, we must remain continuously vigilant because as these technologies evolve, so does the obligation to adapt to new risks. Resources:
"Thune, Klobuchar release bipartisan AI bill," Rebecca Klar, The Hill
Artificial Intelligence 2023 Legislation, NCSL
Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST
Spring, a widely embraced Java framework, empowers developers with a versatile toolkit to build robust and scalable applications. Among its many features, custom annotations are a powerful mechanism for enhancing code readability, reducing boilerplate, and encapsulating complex configurations. This article will explore the art of composing custom annotations in Spring, unraveling their potential through practical examples. The Essence of Custom Annotations Annotations in Java serve as a form of metadata, offering a way to add supplementary information to code elements. While Spring provides an array of built-in annotations, creating custom annotations allows developers to tailor their applications precisely to their needs. Custom annotations in Spring find applications in various scenarios:
Configuration Simplification: Abstracting common configurations into custom annotations reduces the clutter in code and configuration files, leading to a more maintainable codebase.
Readability and Organization: Annotations offer a concise and expressive means to convey the intent and purpose of classes, methods, or fields, enhancing overall code organization and readability.
Behavioral Constraints: Custom annotations can be employed to enforce constraints on the usage of components, ensuring adherence to specific patterns or sequences.
Set-Up All examples are built around Swagger and its annotations, but everything below can be adapted to your own case. Set up Swagger:

XML
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>3.0.0</version>
</dependency>

Anatomy of a Custom Annotation Creating a custom annotation in Spring involves defining a new annotation type with @interface. Let's embark on the journey of crafting a straightforward custom annotation called @AuthExample:

Java
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@ApiResponses(value = {
        @ApiResponse(responseCode = "401", description = "Unauthorized", content = @Content),
        @ApiResponse(responseCode = "403", description = "Forbidden", content = @Content),
        @ApiResponse(responseCode = "200")})
public @interface AuthExample {
}

After applying @AuthExample, all the annotations in the composition are applied as well. In this example, @Target specifies that the annotation can be applied to methods and types, and @Retention ensures that the annotation information is available at runtime.
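To illustrate the effect, here is a hypothetical Spring MVC controller method carrying the composed annotation. The controller class and endpoint exist only for demonstration, but the Swagger responses declared in @AuthExample are picked up as if they had been placed on the method directly.

Java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller used only to demonstrate the composed annotation.
@RestController
public class AccountController {

    @AuthExample
    @GetMapping("/accounts/me")
    public String currentAccount() {
        // Swagger now documents the 200, 401, and 403 responses for this endpoint.
        return "authenticated account details";
    }
}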
How To Pass Attributes Into Custom Annotations One powerful feature that Spring provides is the ability to create custom composite annotations using @AliasFor. The @AliasFor annotation is part of the Spring Framework and is used to declare an alias for an attribute, either within the same annotation type or in a meta-annotation. This allows developers to create composite annotations by reusing attributes from other annotations, promoting code reuse and enhancing clarity. In the following example, the @AuthExample annotation is composed using @AliasFor to alias its myDescription attribute to the description attribute of @Operation:

Java
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@ApiResponses(value = {
        @ApiResponse(responseCode = "401", description = "Unauthorized", content = @Content),
        @ApiResponse(responseCode = "403", description = "Forbidden", content = @Content),
        @ApiResponse(responseCode = "200")})
@Operation
public @interface AuthExample {

    @AliasFor(annotation = Operation.class, attribute = "description")
    String myDescription() default "";
}

Now, applying @AuthExample(myDescription = "Very smart solution") is the same as applying @Operation(description = "Very smart solution"), with all the other composed annotations included. It is also possible to chain @AliasFor across multiple levels:

Java
@Target({ElementType.METHOD, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Operation
public @interface Level1 {

    @AliasFor(annotation = Operation.class, attribute = "description")
    String description();
}

@Target({ElementType.METHOD, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Level1(description = "That description is ignored")
@interface Level2 {

    @AliasFor(annotation = Level1.class, attribute = "description")
    String description() default "Level2 default description";
}

@Target({ElementType.METHOD, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Level2
@interface Level3 {

    @AliasFor(annotation = Level2.class, attribute = "description")
    String description() default "Level3 default description";
}

In the example above, the default value that takes effect is the one declared on the annotation actually applied in the code. Resolving Attribute Values With @AliasFor When using @AliasFor, it's important to understand how attribute values are resolved. The value specified on the annotated element takes precedence, and if it is not provided, the value from the aliased attribute is used. This ensures flexibility and allows developers to customize behavior as needed. Conclusion Custom annotations in Spring elevate the expressiveness and flexibility of your codebase. By crafting annotations like @AuthExample, developers can encapsulate repetitive configuration, such as shared API response declarations, and reduce the complexity of error handling. Also, composing annotations with @AliasFor provides a powerful way to simplify configuration and promote code reuse. As you delve deeper into Spring's annotation-driven world, custom annotations will become indispensable tools in your toolkit, offering a scalable and elegant approach to building resilient and well-organized applications.