Lately I’ve been exploring what all the talk around "microservices architecture" is really about. From it popping up in every other social media debate to its increasing appearance as a must-have skill on job listings, what has caused this strong divide between proponents of the traditional monolithic approach and those who have embraced the microservices paradigm? In this article, I break it down for you: I outline the benefits, some common challenges, and offer insights from microservices experts for those considering this approach.

Monolith vs. Microservices in a Nutshell

If you are not already familiar with monolithic vs. microservices architecture, imagine your software application as a structure made of Lego bricks. With monolithic architecture, you have one large Lego brick encompassing your entire application and all of its functionality. Microservices architecture, on the other hand, is comparable to a collection of smaller, specialized Lego bricks, each serving as an individual component with a specific task.

Image 1: Monolith vs. microservices architecture

More technically, microservices architecture is an approach to building software that breaks applications down into small, independent services. Each service focuses on a specific, explicit task and interacts with other services through well-defined interfaces. In fact, many of the key concepts of microservices have a lot in common with the Unix philosophy, which Mike Gancarz sums up as:

- Small is beautiful
- Make each program do one thing well
- Build a prototype as soon as possible
- Share or communicate data easily
- Use software leverage to your advantage
- Make every program a filter*

In a nutshell, microservices architecture encapsulates the Unix philosophy of “Do one thing and do it well,” with some key characteristics being:

- Services are small, decentralized, and independently deployable
- Services are independent of each other and interact through well-defined interfaces, allowing them to be developed in different languages
- Services are organized around business capabilities

Image 2: Visual representation of microservices

Benefits of Microservices Architecture

1. Scalability

Because there are clear boundaries between microservices in terms of their code base and functionality, adapting your system to meet evolving demands means scaling up or down by adding or removing microservices (Lego bricks) without affecting the rest of the application. This contrasts with monolithic applications, where modifying or removing functionality can be cumbersome. Moreover, the scalability of microservices architecture lends itself to cloud deployment, as it allows cloud resources to scale at the same rate as the application.

2. Maintainability and Resilience

Teams can develop new features, bug fixes, and improvements for individual microservices without affecting the rest of the application. Because microservices are independent of each other, there is also greater application resilience: a failure in one microservice does not lead to a complete system shutdown.

3. Developer Scalability and Team Productivity

At an organizational level, it is often difficult to scale the number of developers working on a project at the same rate the project itself is scaling; microservices structured by functionality can help tackle this challenge.
For instance, even with just a single developer, having microservices separated by functionality keeps each segment logically arranged from a technical point of view, for the reasons we just explored. With larger development teams, there is often a lack of awareness between different IT segments about each other’s projects, which can lead to complexity and confusion, as well as overlap or tasks going unassigned. A microservices architecture segmented by functionality, with clear boundaries, allows the structure of your microservices to largely reflect your organizational chart. Teams can work on their tasks independently and at their own pace, and by reducing the need for extensive coordination, this translates into increased productivity and improved output quality.

Challenges of Microservices Architecture

Despite the apparent advantages, there are various challenges that I think are important to highlight. Worth noting is that they are all avoidable when considered and planned for upfront.

A common reason why teams stick with a traditional monolithic approach is that microservices bring increased complexity. This complexity comes in the form of teams needing to understand how to design, build, and manage distributed systems. More specifically, not knowing how to implement a reliable communication protocol between microservices is a recurring pain point that leads to decreased system performance and, in turn, has teams switching back to their monolithic system. Another challenge arising from the increased number of interactions is system testing and debugging. Aside from these difficulties, another major concern is security: implementing robust authentication, authorization, and encryption across each and every service is crucial. As valid and real as these everyday challenges are, working with microservices does not have to be so confusing, and all of them are avoidable when considered upfront.

Microservices Tips and Tricks

If you are considering making the monolith-to-microservices switch, one top recommendation from microservices experts is to make sure that your microservices are independently deployable. More specifically, it is key that a microservice remains simple in terms of its functionality. It should “Do one thing and do it well” and should not depend on other services for its task. Below we can see how this approach affects the release process: in the case of failure, with microservices, only one microservice needs to be retested and redeployed.

Image 3: Comparing the release process for monolithic vs. microservices architecture

While there are a few design approaches to building microservices, one that is recommended is Event-Driven Architecture (EDA). This design pattern supports the loosely coupled, asynchronous communication and decentralized control that microservices architecture requires. Briefly, this is because microservices can communicate indirectly through events rather than, for example, through direct API calls. For more details on developing with Event-Driven Architecture, see here.
Moreover, if your application has stringent latency requirements and you have performance concerns about communication between microservices, this article delves into some things to consider when building low-latency systems with a microservices architecture.

Conclusion

While microservices may be trendy, the benefits of scalability, resilience, and productivity are anything but temporary. Despite the challenges, software frameworks and mindful architecture design can mitigate complexity. Ultimately, the decision to switch to a microservices approach depends on specific business needs, but if flexibility and resilience are priorities, embracing the distributed future of software development is worth considering.

*A filter is a program that gets most of its data from its standard input (the main input stream) and writes its main results to its standard output (the main output stream).
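To make the footnote concrete, here is a minimal sketch of a Unix-style filter written in Java: it reads lines from standard input, transforms them, and writes the result to standard output, so it can be composed with other programs in a pipeline. The uppercase transformation is an arbitrary illustration, not part of the original article.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// A minimal Unix-style filter: reads stdin, writes stdout.
// Compose it in a pipeline, e.g.: cat names.txt | java UppercaseFilter
public class UppercaseFilter {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            // The transformation here is arbitrary; a real filter
            // would do one thing well, e.g., grep-like matching.
            System.out.println(line.toUpperCase());
        }
    }
}
```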
Customers today seek agile, nimble, flexible, and composable services. Services that are unhindered and unencumbered. Services that are easy to access and even easier to experience. Services that are quick and precise. These factors affect the collective CSAT and NPS of a modern-day enterprise. Enterprises acknowledge this, and hence around 85% of medium to large-sized enterprises are already using microservices architecture.

The distributed architecture of microservices applications renders the components of an application independent and decentralized, so that they are failure resistant and can be maintained and upgraded in isolation, fueling self-sufficiency, scalability, system reliability, and simplified service offerings. However, while microservices architecture readies the application for agile servicing, true customer experience arises not solely from the decoupled application components but from the way every step in a customer success workflow automatically triggers a logical subsequent step to ensure customer delight. As the business process extends and more components get added, “cohesion chaos” can become a reality. The absence of proper orchestration of process steps in a logical flow, keeping the customer's end goal in mind, can quickly render the supposed benefits of the microservices landscape futile.

Therefore, microservices applications can be clustered, and the sequence of steps in each process flow can be orchestrated via an event streaming platform like Kafka while being managed and governed by a BPM or integration engine (say RHPAM, Camunda, or even MuleSoft) that promises seamless co-existence of API-led architecture and event-based architecture. Such an architecture encapsulates various microservices in an event stream, with each service listening intently for the action taken by a user via the topic published to the event stream and, based on that action, triggering a corresponding service as per the defined logical process flow. Each service is thus self-responsible and acts or reacts based on its trigger point, in the true spirit of event-based orchestration.

In my conversations with enterprises across various geographies and domains, customers usually test the waters of this model either by servicing through an event streaming platform like Kafka or by centrally orchestrating the service through a BPMN engine like RHPAM. However, both options have their own pros and cons. The hybrid model, which uses a BPMN engine for centralized process orchestration while coordinating with the worker services via an event stream, is gaining very good traction, and the rise of enterprise integration behemoths such as MuleSoft, which claim to support event-driven architecture alongside the more familiar API-led integration, is making the solution options very interesting for customers.

Let’s evaluate these scenarios one by one by taking the use case of Bob, who wants to book a Ridola from Amsterdam to Den Haag, and see which services need to interact with each other to make the experience pleasant for Bob, and how some of these tools make the experience seamless.

The Mechanism at Play in Bob’s Ordering of a Ridola

In ordering a Ridola, assuming he is signing in for the very first time, Bob opens the application and journeys through the following microservices: Customer Profile Service, Location Service, Cab and Driver Management, Trip Management, and Payments.
The value chain, in layman's terms, flows with Bob opening the app, registering himself by providing his profile details, and then choosing the locations to and from which he needs to travel, upon which Ridola searches for and recommends the available cabs and drivers in his vicinity along with the associated tariff. Once Bob decides on a cab, the trip management service manages the trip by guiding the driver and getting Bob his chosen cab, initiating the trip from Amsterdam towards Den Haag. Upon completion of the trip, Bob is requested to make the payment, and upon payment a bill is sent to his email address.

Options Available for Ridola To Provide a Well-Orchestrated Service to Bob

BPMN-Driven Centralized Orchestration

In this approach, Ridola would ingrain the business workflow logic in a centralized BPM engine (say RHPAM, Camunda, etc.). These BPM technologies follow Business Process Model and Notation (BPMN), a standard for modeling business processes. The integration between the passenger UI and the application would be through REST. The moment Bob logs into the application, the centralized engine (the brain of the business workflow) triggers a command to the worker services and awaits their response. Such commands are issued by a Java delegate or something like an AWS Lambda. The overall "cab ordering service" is built as a Spring Boot microservice. This means the first command from the centralized engine, upon Bob's log-in, is issued to the Customer Profile Service, which pops up and requests Bob to sign in. Upon the completion of that step, the centralized engine commands the location service to kick in, which enquires about Bob's current location and his origin and destination stations. Thereafter, the cab and driver management service gets triggered centrally, and so on and so forth.

BPMN-based orchestration architecture

The moot point to note in this approach is the central orchestration engine triggering the actions on the worker services. The worker services do not pass commands to each other. This is all centrally managed using BPMN standards, enabling easy maintenance, support, and upgrades. The BPM engine can also become the single repository providing transaction updates, service state, etc., and therefore a source to drive observability. On the flip side, such one-on-one integration between the centralized orchestration engine and each of the worker services can render the landscape "tightly coupled" with "point-to-point integration," precisely what an enterprise wants to avoid by embracing a microservices architecture. Therefore, while this approach is fair when the number of transactions is low, for a large enterprise like Ridola with a massive number of transactions, such an approach can very quickly overheat the orchestration engine and spoil the experience that Ridola wants to provide the end customer.

Event-Driven Orchestration

Vis-à-vis the centralized approach explored in the above section, many customers seem to be choosing an event streaming platform-led orchestration. This could entail the usage of technologies such as Apache Kafka, IBM Data Streams, Amazon Kinesis, Confluent, etc. This is a decentralized approach where the business logic is imbued across all the microservices that Bob encounters while getting his cab service from Ridola.
Each of these services, be it the customer profile service, location service, cab and driver management service, trip management, or payment service, is integrated with a central event stream (say a Kafka or a Confluent) and listens to the topic that pertains to that service. The topic published to the event stream is the result of an action taken by Bob (signing into the app) or of an action taken by the preceding service (say, the customer profile service). This topic is also a trigger, or cue, for the next service (say, location) to kick in by asking Bob about his current location and his from and to stations. Likewise, each service becomes aware of its turn and responsibility through the topic published on the event stream, and the cab ordering process gets streamlined service after service in a true event-driven manner. (A minimal sketch of such a topic listener appears at the end of this article.)

Event streaming-based orchestration

While this approach brings in the "loose coupling" that the first approach lacked, the maintenance and upkeep of services becomes tedious when the overall business process undergoes a change, thereby affecting the sub-services within that value chain. Likewise, there is no centralized observability of performance, and each service needs to be consulted for its logs and traces. This means that in case of any error or troubleshooting, we would need to check each service one by one, which takes time. However, if a process value chain is fairly established and the services comprising the process are quite stable, such an approach can work. The business owner needs to evaluate the scenarios and take a call.

Hybrid Approach: BPMN-Led Event Orchestration

Many customers understand the potent combination of the earlier two approaches and choose to undertake a proof of value and, thereafter, a full-blown implementation of such a hybrid solution. In this approach, while the centralized BPM engine (RHPAM or Camunda) houses the business logic, the communication with the worker services downstream does not take place in a point-to-point manner. It is established via the event broker, so loose coupling is ensured between the services.

BPMN-led event orchestration

As seen above, the moment Bob logs into the application, the centralized engine triggers a command to the worker services via the event streams and awaits their response. Such commands are issued by a Java delegate or something like an AWS Lambda. Through this approach, the enterprise stands to gain centralized governance and observability benefits while not making the ecosystem tightly coupled and difficult to maintain. This is a very good model for large enterprises and is seeing wide adoption.

MuleSoft + Event-Driven Orchestration

Enterprise integration is a given in any large enterprise, and most enterprises today, including Ridola, leverage API-led integration. However, even in an API-led architectural setup (which is synchronous in nature), there are scenarios where asynchronous communication becomes very important from a business standpoint. This is where event-driven architecture becomes an able foil for API-led architecture, and the two can complement and co-exist beautifully to ensure that a customer like Bob is not hampered by internal architectural limitations.
Some scenarios where such a marriage of synchronous (API-led) and asynchronous (event-led) architectures is plausible are:

Asynchronous backend updates: MuleSoft follows a three-layered architecture, with Experience APIs servicing customers across multiple channels at the top; Process APIs, which are the pipelines that process the actual task at hand and pass the outcome on to the Experience APIs; and System APIs at the bottom, which are the repository of enterprise data tapped by the Process APIs to make the solution contextual and tailored. Sometimes an avalanche of customer requests arrives, and the Process APIs may get overwhelmed by the repeated need to fetch data from the System APIs. Such to-and-fro can add to the latency in servicing the needs of the Experience APIs.

An event stream layer between System and Process APIs helps faster processing

It is in such a scenario (as shown above) that an event broker can act as a storehouse of the most requested information (say, customer information) and serve as a one-stop-shop source of this information for the Process APIs, asynchronously updating the appropriate systems with the needed information and thus preventing unnecessary, repeated to-and-fro calls to the CRM system for every Process API request. MuleSoft possesses connectors to various systems, which can help in capturing data changes and publishing them as simple events to the event broker. The upstream applications can then act as event consumers, subscribing to these events and updating the end systems.

Delayed processing owing to system overload, with acknowledgment of customer requests: Sometimes, when the system layer fails after reaching its peak capacity, the to-and-fro between Experience APIs, Process APIs, and System APIs continues repeatedly in a futile manner, further overloading the system. It could also happen that the system is down for maintenance over the weekend while requests are still being received during that window. In such a scenario, MuleSoft-driven applications can generate simple events for every request at the Experience or Process layers, and those events can be stored in an event broker until the end system is ready to process the request. The requesting system acknowledges the customer request, and those requests that can be addressed using the available Process or System APIs still get processed in a timely manner. For the others, for which sufficient information is not available, a notification stating a possible delay can be sent to avoid any customer dissonance.

These are emerging as key themes with modern-day customers who want to use the best of API-led and event-led architectures to ensure seamless customer service without an avoidable burden on the systems.

Conclusion

Enterprises are wading through the "experience economy," and the only way to win market share is by winning the confidence of customers. This is where BPMN-led event orchestration strategically strikes a balance between API-led and event-led architectures, ensuring system resilience, process pragmatism, and a delightful customer experience, all in a continuum. This is the right time for enterprises to explore use cases contextual to their respective domains and then evaluate how a combination of the above approaches can help them in their business pursuit.
All these approaches have their pros and cons and depend on several factors, such as the process value chain maturity of an enterprise, the frequency and intensity of changes, the scale of those changes, the number of transactions, the breadth of the customer base, and so on. Making the right decision and choosing the right option for the right use case can be a challenging process that may require careful due diligence, and therefore several enterprises worldwide are partnering with the leading system integrators in the ecosystem. Thus, if you are thinking about embarking on event orchestration, you have many ready partners to guide and walk you through the journey. Get started now!
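As promised above, here is a minimal sketch of how one worker service in the event-driven option might listen for its cue on the event stream, using the plain Apache Kafka client library. The topic name customer-signed-in and the LocationServiceListener wiring are hypothetical illustrations for the Ridola scenario, not part of any of the products discussed.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical worker service: the location service listening for its cue.
public class LocationServiceListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "location-service");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "customer-signed-in" is a hypothetical topic published by the
            // preceding service (the customer profile service).
            consumer.subscribe(List.of("customer-signed-in"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // React to the cue: ask Bob for his origin and destination,
                    // then publish the next event for the cab and driver service.
                    System.out.printf("Handling sign-in event for customer %s%n", record.key());
                }
            }
        }
    }
}
```

Each service in the chain would follow the same shape: subscribe to the topic that is its cue, do its one job, and publish the event that cues the next service.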
Implementing a microservices architecture in Java is a strategic decision that can have significant benefits for your application, such as improved scalability, flexibility, and maintainability. Here's a guide to help you embark on this journey.

Understand the Basics

Before diving into the implementation, it's crucial to understand what microservices are. Microservices architecture is a method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations. These modules, or microservices, are independently deployable and scalable.

Design Your Microservices

Identify Business Capabilities

Break down your application based on business functionalities. Each microservice should represent a single business capability.

Define Service Boundaries

Ensure that each microservice is loosely coupled and highly cohesive. Avoid too many dependencies between services.

Choose the Right Tools and Technologies

Java Frameworks

- Spring Boot: Popular for building stand-alone, production-grade Spring-based applications.
- Dropwizard: Useful for rapid development of RESTful web services.
- Micronaut: Great for building modular, easily testable microservices.

Containerization

- Docker: Essential for creating, deploying, and running microservices in isolated environments.
- Kubernetes: A powerful system for automating deployment, scaling, and management of containerized applications.

Database

Use a database-per-service pattern. Each microservice should have its own private database to ensure loose coupling.

Develop Your Microservices

Implement RESTful Services

Use Spring Boot to create RESTful services due to its simplicity and power. Ensure API versioning to manage changes without breaking clients (a minimal sketch appears at the end of this article).

Asynchronous Communication

Implement asynchronous communication, especially for long-running or resource-intensive tasks. Use message queues like RabbitMQ or Kafka for reliable, scalable, and asynchronous communication between microservices.

Build and Deployment

Automate build and deployment processes using CI/CD tools like Jenkins or GitLab CI. Implement blue-green deployments or canary releases to reduce downtime and risk.

Service Discovery and Configuration

Service Discovery

Use tools like Netflix Eureka for managing and discovering microservices in a distributed system.

Configuration Management

Centralize configuration management using tools like Spring Cloud Config. Store configuration in a version-controlled repository for auditability and rollback purposes.

Monitoring and Logging

Implement centralized logging using the ELK Stack (Elasticsearch, Logstash, Kibana) for easier debugging and monitoring. Use Prometheus and Grafana for monitoring metrics and setting up alerts.

Security

Implement API gateways like Zuul or Spring Cloud Gateway for security, monitoring, and resilience. Use OAuth2 and JWT for secure, stateless authentication and authorization.

Testing

Write unit and integration tests for each microservice. Implement contract testing to ensure APIs meet the contract expected by clients.

Documentation

Document your APIs using tools like Swagger or OpenAPI. This helps in maintaining clarity about service endpoints and their purposes.

Conclusion

Implementing a Java microservices architecture can significantly enhance your application's scalability, flexibility, and maintainability. However, the complexity and technical expertise required can be considerable.
Hiring Java developers or engaging Java development services can be pivotal in navigating this transition successfully. They bring the necessary expertise in Java frameworks and microservices best practices to ensure your project's success. Ready to transform your application architecture? Reach out to professional Java development services from top Java companies today and take the first step towards a robust, scalable microservices architecture.
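To illustrate the RESTful services recommendation above, here is a minimal sketch of a Spring Boot controller with the API version carried in the URI path. The /api/v1 prefix, the GreetingController name, and the greeting payload are illustrative assumptions, not a prescribed convention.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class GreetingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(GreetingServiceApplication.class, args);
    }
}

// Versioning the path lets /api/v2 evolve later without breaking v1 clients.
@RestController
@RequestMapping("/api/v1/greetings")
class GreetingController {
    @GetMapping
    public String greet() {
        return "Hello from the greeting microservice";
    }
}
```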
In the ever-evolving landscape of microservices development, Helidon has emerged as a beacon of innovation. The release of Helidon 4 brings forth a wave of enhancements and features that promise to redefine the way developers approach microservices architecture. In this article, we embark on a detailed journey, unraveling the intricacies of Helidon 4's new features through insightful examples. From MicroProfile 6.0 compatibility to enhanced support for reactive programming, simplified configuration management, and seamless integration with Oracle Cloud Infrastructure (OCI), Helidon 4 positions itself at the forefront of modern microservices frameworks.

The Shift From Netty: Why Simplicity Matters

Netty, known for its efficiency and scalability, played a crucial role in powering Helidon's HTTP server in earlier versions. However, as Helidon evolved, the framework's maintainers recognized the need for a simpler and more approachable architecture. This led to the decision to move away from Netty, making room for a more straightforward and user-friendly experience in Helidon 4.

In previous versions, setting up a Helidon web server with Netty involved configuring various Netty-specific parameters. With Helidon 4, the process is more straightforward.

```java
public class SimpleWebServer {
    public static void main(String[] args) {
        WebServer.create(Routing.builder()
                .get("/", (req, res) -> res.send("Hello, Helidon 4!"))
                .build())
            .start()
            .await();
    }
}
```

In this example, the simplicity is evident as the developer creates a web server with just a few lines of code, without the need for intricate Netty configurations.

Routing, a fundamental aspect of microservices development, becomes more intuitive.

```java
public class SimpleRouting {
    public static void main(String[] args) {
        WebServer.create((req, res) -> {
            if (req.path().equals("/hello")) {
                res.send("Hello, Helidon 4!");
            } else {
                res.send("Welcome to Helidon 4!");
            }
        }).start().await();
    }
}
```

This example showcases the streamlined routing capabilities of Helidon 4, emphasizing a more natural and less verbose approach.

MicroProfile 6.0: A Synergistic Approach

Helidon 4's support for MicroProfile 6.0 signifies a crucial alignment with the latest standards in the microservices landscape. Developers can now leverage the enhancements introduced in MicroProfile 6.0 seamlessly within their Helidon applications, ensuring compatibility and interoperability with other MicroProfile-compliant services.

MicroProfile Config simplifies the configuration of microservices, allowing developers to externalize configuration parameters easily. In Helidon 4, MicroProfile Config is seamlessly integrated, enabling developers to harness its power effortlessly.

```java
public static void main(String[] args) {
    String appName = ConfigProvider.getConfig().getValue("app.name", String.class);
    System.out.println("Application Name: " + appName);
}
```

In this example, the MicroProfile Config API is used to retrieve the value of the "app.name" configuration property, showcasing how Helidon 4 integrates with MicroProfile Config for streamlined configuration management.

MicroProfile Fault Tolerance introduces resilience patterns to microservices, enhancing their fault tolerance. Helidon 4 seamlessly incorporates these patterns into its microservices development model.
```java
public class FaultToleranceExample {
    @CircuitBreaker(requestVolumeThreshold = 4)
    public void performOperation() {
        // Perform microservice operation
    }
}
```

In this example, the @CircuitBreaker annotation from MicroProfile Fault Tolerance defines a circuit breaker for a specific microservice operation, showcasing Helidon 4's support for fault tolerance.

Enhanced Support for Reactive Programming

Helidon 4 places a strong emphasis on reactive programming, offering developers the tools to build responsive and scalable microservices.

```java
// Reactive programming with Helidon 4
WebServer.create(Routing.builder()
        .get("/reactive", (req, res) -> res.send("Hello, Reactive World!"))
        .build())
    .start()
    .await(10, SECONDS);
```

In this example, the reactive endpoint is defined using Helidon's routing. This allows developers to handle asynchronous operations more efficiently, which is crucial for building responsive microservices.

Improved Configuration Management

Helidon 4 introduces enhancements in configuration management, simplifying the process of externalizing configuration.

```yaml
# application.yaml for Helidon 4
server:
  port: 8080
```

Helidon 4 allows developers to configure their microservices using YAML files, environment variables, or external configuration services. The application.yaml file above demonstrates a straightforward configuration for the server port.

Integrated Health Checks and Metrics

Helidon 4's integration of health checks and metrics offers a comprehensive solution, providing developers with real-time insights into application health, proactive issue identification, and data-driven decision-making for optimal performance.

Defining custom health checks allows developers to assess specific aspects of their microservices. In the following example, a custom health check is created to verify the responsiveness of an external service:

```java
HealthSupport.builder()
    .addLiveness(() -> {
        // Custom health check logic
        boolean externalServiceReachable = checkExternalService();
        return HealthCheckResponse.named("external-service-check")
            .state(externalServiceReachable)
            .build();
    })
    .build();
```

Here, the addLiveness method is used to incorporate a custom health check that evaluates the reachability of an external service. Developers can define various checks tailored to their application's requirements.

Metrics can be enabled for key components, such as the web server:

```java
MetricsSupport.builder()
    .config(webServerConfig)
    .build();
```

In this snippet, metrics support is configured for the web server, providing granular insights into its performance metrics. Developers can extend this approach to other components critical to their microservices architecture.

Metrics endpoints can be exposed, facilitating easy consumption by external monitoring tools:

```java
PrometheusSupport.create()
    .register(webServer);
```

Here, Prometheus support is created, allowing developers to register the web server for metrics exposure. This integration streamlines the process of collecting and visualizing metrics data.

Simplified Security Configuration

Security is paramount in microservices, and Helidon 4 streamlines the configuration of security features.

```java
// Security configuration in Helidon 4
Security security = Security.builder()
    .addProvider(JwtProvider.create())           // Add JWT authentication provider
    .addProvider(HttpBasicAuthProvider.create()) // Add HTTP Basic authentication provider
    .build();
```

In this example, Helidon's Security module is configured to use JWT authentication and HTTP Basic authentication.
This simplifies the implementation of security measures in microservices.

Expanded MicroProfile Rest Client Support

Microservices often communicate with each other, and Helidon 4 expands its support for MicroProfile Rest Client.

```java
// MicroProfile Rest Client in Helidon 4
@RegisterRestClient
public interface GreetService {
    @GET
    @Path("/greet")
    @Produces(MediaType.TEXT_PLAIN)
    String greet();
}
```

Here, a MicroProfile Rest Client interface is defined to interact with a /greet endpoint. Helidon 4 simplifies the creation of type-safe REST clients.

Oracle Cloud Infrastructure (OCI) Integration

The integration of Helidon 4 with Oracle Cloud Infrastructure represents a pivotal shift in microservices development. OCI, renowned for its scalability, security, and performance, becomes the natural habitat for Helidon 4, empowering developers to harness the full potential of cloud-native development.

Configuring OCI properties in Helidon 4:

```java
import io.helidon.config.Config;
import io.helidon.config.ConfigSources;

public class OCIConfigExample {
    public static void main(String[] args) {
        Config config = Config.builder()
            .sources(ConfigSources.classpath("application.yaml"))
            .addSource(ConfigSources.create(OCIConfigSource.class.getName()))
            .build();
        String ociPropertyValue = config.get("oci.property", String.class).orElse("default-value");
        System.out.println("OCI Property Value: " + ociPropertyValue);
    }
}
```

In this example, the OCIConfigSource integrates OCI-specific configuration into the Helidon configuration, allowing developers to access OCI properties seamlessly.

Leveraging OCI Identity and Access Management (IAM): OCI IAM plays a crucial role in managing access and permissions, and Helidon 4 allows developers to leverage IAM for secure microservices deployment effortlessly.

```java
public class HelidonOCIIntegration {
    public static void main(String[] args) {
        Security security = Security.builder()
            .addProvider(OidcProvider.builder()
                .identityServerUrl("https://identity.oraclecloud.com/")
                .clientId("your-client-id")
                .clientSecret("your-client-secret")
                .build())
            .build();
        WebSecurity.create(security, webServer -> {
            // Configure security for web server
        });
    }
}
```

In this example, the Helidon application integrates with OCI's Identity and Access Management through the OIDC provider, allowing developers to enforce secure authentication and authorization in their microservices.

Deploying Helidon microservices on OCI:

```java
public static void main(String[] args) {
    Server.builder()
        .port(8080)
        .start();
}
```

Streamlined Project Templates

Getting started with microservices development is made easier with Helidon 4's streamlined project templates.

```shell
# Create a new Helidon project with the Maven archetype
mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-mp \
    -DarchetypeVersion=2.0.0 \
    -DgroupId=com.example \
    -DartifactId=myproject \
    -Dpackage=com.example.myproject
```

The Maven archetype simplifies the creation of a new Helidon project, providing a well-defined structure to kickstart development.

Conclusion

Helidon 4's new features, as demonstrated through real-world examples, showcase the framework's commitment to providing a powerful and developer-friendly environment for microservices development. From MicroProfile compatibility to enhanced support for reactive programming, improved configuration management, and streamlined security configurations, Helidon 4 empowers developers to build scalable and resilient microservices with ease.
As the landscape of microservices continues to evolve, Helidon 4 stands out as a versatile and robust framework, ready to meet the challenges of modern application development.
Does the time your CI/CD pipeline takes to deploy hold you back during development testing? This article demonstrates a faster way to develop Spring Boot microservices using a bare-metal Kubernetes cluster that runs on your own development machine.

Recipe for Success

This is the fourth article in a series on Ansible and Kubernetes. In the first post, I explained how to get Ansible up and running on a Linux virtual machine inside Windows. Subsequent posts demonstrated how to use Ansible to get a local Kubernetes cluster going on Ubuntu 20.04. It was tested on both native Linux- and Windows-based virtual machines running Linux. The last-mentioned approach works best when your devbox has a separate network adaptor that can be dedicated for use by the virtual machines. This article follows up on concepts used in the previous article and was tested on a cluster consisting of one control plane and one worker. As such, a fronting proxy running HAProxy was not required and is commented out in the inventory. The code is available on GitHub.

When to Docker and When Not to Docker

The secret to faster deployments to local infrastructure is to cut out what is not needed. For instance, does one really need to have Docker fully installed to bake images? Should one push the image produced by each build to a formal Docker repository? Is a CI/CD platform even needed?

Let us answer the last question first. Maven started life with both continuous integration and continuous deployment envisaged and should be able to replace a CI/CD platform such as Jenkins for local deployments. Now, it is widely known that all Maven problems can be resolved either by changing dependencies or by adding a plugin. We are not in jar-hell, so the answer must be a plugin. The Jib build plugin does just this for the sample Spring Boot microservice we will be deploying:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>3.1.4</version>
      <configuration>
        <from>
          <image>openjdk:11-jdk-slim</image>
        </from>
        <to>
          <image>docker_repo:5000/rbuhrmann/hello-svc</image>
          <tags>
            <tag>latest10</tag>
          </tags>
        </to>
        <allowInsecureRegistries>false</allowInsecureRegistries>
      </configuration>
    </plugin>
  </plugins>
</build>
```

Here we see how the Jib Maven plugin is configured to bake and push the image to a private Docker repo. However, the plugin can be steered from the command line as well. This Ansible shell task loops over one or more Spring Boot microservices and does just that:

```yaml
- name: Git checkouts
  ansible.builtin.git:
    repo: "{{ item.git_url }}"
    dest: "~/{{ item.name }}"
    version: "{{ item.git_branch }}"
  loop: "{{ apps }}"

- name: Run JIB builds
  ansible.builtin.command: "mvn clean compile jib:buildTar -Dimage={{ item.name }}:{{ item.namespace }}"
  args:
    chdir: "~/{{ item.name }}/{{ item.jib_dir }}"
  loop: "{{ apps }}"
```

The first task clones, while the second bakes the Docker image. However, it does not push the image to a Docker repo. Instead, it dumps it as a tar ball. We are therefore halfway towards removing the Docker repo from the loop. Since our Kubernetes cluster uses Containerd, a spinout from Docker, as its container daemon, all we need is something to load the tar ball directly into Containerd. It turns out such an application exists.
It is called ctr and can be steered from Ansible:

```yaml
- name: Load images into containerd
  ansible.builtin.command: ctr -n=k8s.io images import jib-image.tar
  args:
    chdir: "/home/ansible/{{ item.name }}/{{ item.jib_dir }}/target"
  register: ctr_out
  become: true
  loop: "{{ apps }}"
```

Up to this point, task execution has been on the worker node. It might seem stupid to build the image on the worker node, but keep in mind that:

- It concerns local testing, and there will seldom be a need for more than one K8s worker; the build will not happen on more than one machine.
- The base image Jib builds from is smaller than the produced image that would normally be pulled from a Docker repo. This results in a faster download and a negligible upload time, since the image is loaded directly into the container daemon of the worker node.
- The time spent downloading Git and Maven is amortized over all deployments and therefore makes up a smaller and smaller percentage of time as usage increases.
- Bypassing a CI/CD platform such as Jenkins, or Git runners shared with other applications, can save significantly on build and deployment time.

You Are Deployment, I Declare

Up to this point, I have only shown the Ansible tasks, but the variable declarations that are ingested have not been shown. It is now an opportune time to list part of the input:

```yaml
apps:
  - name: hello1
    git_url: https://github.com/jrb-s2c-github/spinnaker_tryout.git
    jib_dir: hello_svc
    image: s2c/hello_svc
    namespace: env1
    git_branch: kustomize
    application_properties:
      application.properties: |
        my_name: LocalKubeletEnv1

  - name: hello2
    git_url: https://github.com/jrb-s2c-github/spinnaker_tryout.git
    jib_dir: hello_svc
    image: s2c/hello_svc
    namespace: env2
    config_map_path:
    git_branch: kustomize
    application_properties:
      application.properties: |
        my_name: LocalKubeletEnv2
```

It concerns the DevOps characteristics of a list of Spring Boot microservices that steer Ansible to clone, integrate, deploy, and orchestrate. We already saw how Ansible handles the first three.
All that remains are the Ansible tasks that create Kubernetes deployments, services, and application.properties ConfigMaps:

```yaml
- name: Create k8s namespaces
  remote_user: ansible
  kubernetes.core.k8s:
    kubeconfig: /home/ansible/.kube/config
    name: "{{ item.namespace }}"
    api_version: v1
    kind: Namespace
    state: present
  loop: "{{ apps }}"

- name: Create application.property configmaps
  kubernetes.core.k8s:
    kubeconfig: /home/ansible/.kube/config
    namespace: "{{ item.namespace }}"
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "{{ item.name }}-cm"
      data: "{{ item.application_properties }}"
  loop: "{{ apps }}"

- name: Create deployments
  kubernetes.core.k8s:
    kubeconfig: /home/ansible/.kube/config
    namespace: "{{ item.namespace }}"
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        creationTimestamp: null
        labels:
          app: "{{ item.name }}"
        name: "{{ item.name }}"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: "{{ item.name }}"
        strategy: { }
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: "{{ item.name }}"
          spec:
            containers:
              - image: "{{ item.name }}:{{ item.namespace }}"
                name: "{{ item.name }}"
                resources: { }
                imagePullPolicy: IfNotPresent
                volumeMounts:
                  - mountPath: /config
                    name: config
            volumes:
              - configMap:
                  items:
                    - key: application.properties
                      path: application.properties
                  name: "{{ item.name }}-cm"
                name: config
      status: { }
  loop: "{{ apps }}"

- name: Create services
  kubernetes.core.k8s:
    kubeconfig: /home/ansible/.kube/config
    namespace: "{{ item.namespace }}"
    state: present
    definition:
      apiVersion: v1
      kind: List
      items:
        - apiVersion: v1
          kind: Service
          metadata:
            creationTimestamp: null
            labels:
              app: "{{ item.name }}"
            name: "{{ item.name }}"
          spec:
            ports:
              - port: 80
                protocol: TCP
                targetPort: 8080
            selector:
              app: "{{ item.name }}"
            type: ClusterIP
          status:
            loadBalancer: {}
  loop: "{{ apps }}"
```

These tasks run on the control plane and configure the orchestration of two microservices using the kubernetes.core.k8s Ansible task. To illustrate how different feature branches of the same application can be deployed simultaneously to different namespaces, the same image is used. However, each is deployed with different content in its application.properties. Different Git branches can also be specified. It should be noted that nothing prevents us from deploying two or more microservices into a single namespace to provide the backend services for a modern JavaScript frontend.

The imagePullPolicy is set to "IfNotPresent". Since ctr already loaded the image directly into the container runtime, there is no need to pull the image from a Docker repo.

Ingress Routing

Ingress instances are used to expose microservices from multiple namespaces to clients outside of the cluster. The declaration of the Ingress and its routing rules sits lower down in the input declaration partially listed above:

```yaml
ingress:
  host: www.demo.io
  rules:
    - service: hello1
      namespace: env1
      ingress_path: /env1/hello
      service_path: /
    - service: hello2
      namespace: env2
      ingress_path: /env2/hello
      service_path: /
```

Note that the DNS name should be under your control or should not be entered as a DNS entry on a DNS server anywhere in the world. Should that be the case, the traffic might be sent out of the cluster to that IP address. The service variable should match the name of the relevant microservice in the top half of the input declaration. The ingress path is what clients should use to access the service, and the service path is the endpoint of the Spring controller that should be routed to.
The Ansible tasks that interpret and enforce the above declarations are:

```yaml
- name: Create ingress master
  kubernetes.core.k8s:
    kubeconfig: /home/ansible/.kube/config
    namespace: default
    state: present
    definition:
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: ingress-master
        annotations:
          nginx.org/mergeable-ingress-type: "master"
      spec:
        ingressClassName: nginx
        rules:
          - host: "{{ ingress.host }}"

- name: Create ingress minions
  kubernetes.core.k8s:
    kubeconfig: /home/ansible/.kube/config
    namespace: "{{ item.namespace }}"
    state: present
    definition:
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        annotations:
          nginx.ingress.kubernetes.io/rewrite-target: "{{ item.service_path }}"
          nginx.org/mergeable-ingress-type: "minion"
        name: "ingress-{{ item.namespace }}"
      spec:
        ingressClassName: nginx
        rules:
          - host: "{{ ingress.host }}"
            http:
              paths:
                - path: "{{ item.ingress_path }}"
                  pathType: Prefix
                  backend:
                    service:
                      name: "{{ item.service }}"
                      port:
                        number: 80
  loop: "{{ ingress.rules }}"
```

We continue where we left off in my previous post and use the Nginx Ingress Controller and MetalLB to establish Ingress routing. Once again, use is made of the Ansible loop construct to cater to multiple routing rules. In this case, routing will proceed from the /env1/hello route to the Hello K8s Service in the env1 namespace and from the /env2/hello route to the Hello K8s Service in the env2 namespace.

Routing into different namespaces is achieved using Nginx mergeable ingress types. More can be read here, but basically, one annotates Ingresses as being the master or one of the minions. Multiple instances thus combine to allow for complex routing, as can be seen above.

The Ingress route can, and probably will, differ from the endpoint of the Spring controller(s). This certainly is the case here, and a second annotation was required to rewrite from the Ingress route to the endpoint the controller listens on:

```yaml
nginx.ingress.kubernetes.io/rewrite-target: "{{ item.service_path }}"
```

This is the sample controller:

```java
@RestController
public class HelloController {

    @RequestMapping("/")
    public String index() {
        return "Greetings from " + name;
    }

    @Value(value = "${my_name}")
    private String name;
}
```

Since the value of the my_name field is populated from what is defined in application.properties, and each instance of the microservice has a different value for it, we would expect a different welcome message from each of the K8s Services/Deployments. Hitting the different Ingress routes, we see this is indeed the case.

On Secrets and Such

It can happen that your Git repository requires token authentication. For such cases, one should add the entire Git URL to the Ansible vault:

```yaml
apps:
  - name: mystery
    git_url: "{{ vault_git_url }}"
    jib_dir: harvester
    image: s2c/harvester
    namespace: env1
    git_branch: main
    application_properties:
      application.properties: |
        my_name: LocalKubeletEnv1
```

The content of the variable vault_git_url is encrypted in all/vault.yaml and can be edited with:

```shell
ansible-vault edit jetpack/group_vars/all/vault.yaml
```

Enter the password of the vault and add/edit the URL to contain your authentication token:

```yaml
vault_git_url: https://AUTH TOKEN@github.com/jrb-s2c-github/demo.git
```

Enough happens behind the scenes here to warrant an entire post. However, in short, group_vars are defined for inventory groups, with the vars and vaults for each inventory group in its own sub-directory of the same name as the group. The "all" sub-folder acts as the catch-all for all other managed servers that fall outside this construct.
Consequently, only the "all" sub-directory is required for the master and worker groups of our inventory to use the same vault. It follows that the same approach can be followed to encrypt any secrets that should be added to the application.properties of Spring Boot.

Conclusion

We have seen how to make deployments of Spring Boot microservices to local infrastructure faster by bypassing certain steps and technologies used during CI/CD to higher environments. Multiple namespaces can be employed to allow the deployment of different versions of a microservices architecture. Some thought will have to be given to cases where secrets for different environments are in play, though. The focus of this article is on a local environment, and a description of how to use group vars to maintain different secrets for different environments is out of scope. It might be the topic of a future article.

Please feel free to DM me on LinkedIn should you require assistance getting the rig up and running. Thank you for reading!
In the ever-evolving landscape of software architecture, the integration of artificial intelligence (AI) into microservices architecture is becoming increasingly pivotal. This approach offers modularity, scalability, and flexibility, which are crucial for the dynamic nature of AI applications. In this article, we'll explore ten key microservice design patterns that are essential for AI development, delving into how they facilitate efficient, robust, and scalable AI solutions.

1. Model as a Service (MaaS)

MaaS treats each AI model as an autonomous service. By exposing AI functionalities through REST or gRPC APIs, MaaS allows for independent scaling and updating of models. This pattern is particularly advantageous in managing multiple AI models, enabling continuous integration and deployment without disrupting the entire system.

2. Data Lake Pattern

AI thrives on data. The Data Lake Pattern centralizes raw data storage from various sources, mitigating the risks of data silos. It ensures that microservices can access a unified data source for AI model training and inference, which is crucial for maintaining data consistency and quality.

3. Training-Inference Separation

AI models require regular training with large datasets, consuming significant resources. The Training-Inference Separation pattern separates these concerns, dedicating services for training and inference. This separation allows training operations to be scaled according to demand while keeping inference services lean and efficient.

4. Pipeline Pattern

The Pipeline Pattern involves a sequence of microservices where the output of one service feeds into the next. This approach is ideal for sequential data processing tasks like data preprocessing, feature extraction, and model inference. It promotes reusability and modularity, essential for agile AI development.

5. Batch Serving and Stream Processing

AI applications vary in their latency requirements. Batch Serving is suited for non-real-time tasks (e.g., data analysis), while Stream Processing caters to real-time applications like fraud detection. These patterns help in choosing the right processing approach based on the application's time sensitivity.

6. Sidecar Pattern

The Sidecar Pattern is about deploying AI functionalities as an adjacent container to the main application. This pattern is useful for integrating AI features into existing systems without major rewrites, ensuring that AI components are maintained independently.

7. Gateway Aggregation Pattern

AI systems often comprise multiple microservices. The Gateway Aggregation Pattern uses an API Gateway to provide a unified interface to these services, simplifying client interactions and reducing complexity.

8. Asynchronous Messaging

AI operations can be time-consuming. The Asynchronous Messaging Pattern uses message queues to decouple services, ensuring that long-running AI tasks do not impede overall system performance.

9. Model Versioning

AI models are continually refined. Model Versioning keeps track of different model iterations, enabling A/B testing, phased rollouts, and quick rollbacks if needed, thus ensuring system stability and performance.

10. Circuit Breaker Pattern

The Circuit Breaker Pattern prevents failures in one service from cascading to others. This is particularly important in AI systems, where individual components may have varying stability (a minimal sketch follows below).
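To make the last pattern concrete, here is a minimal, hand-rolled circuit breaker sketch in Java. It is an illustration only; production systems would typically use a library such as Resilience4j, and the failure threshold and reset timeout values are arbitrary assumptions.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal circuit breaker: trips OPEN after too many failures,
// then allows a retry once the reset timeout has elapsed.
public class CircuitBreaker {
    private enum State { CLOSED, OPEN }

    private final int failureThreshold;  // e.g., 3 consecutive failures
    private final Duration resetTimeout; // e.g., 10 seconds
    private int failureCount = 0;
    private State state = State.CLOSED;
    private Instant openedAt = Instant.MIN;

    public CircuitBreaker(int failureThreshold, Duration resetTimeout) {
        this.failureThreshold = failureThreshold;
        this.resetTimeout = resetTimeout;
    }

    public synchronized <T> T call(Supplier<T> operation) {
        if (state == State.OPEN) {
            if (Instant.now().isBefore(openedAt.plus(resetTimeout))) {
                throw new IllegalStateException("Circuit is open; failing fast");
            }
            state = State.CLOSED; // half-open: let one trial call through
        }
        try {
            T result = operation.get();
            failureCount = 0; // success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++failureCount >= failureThreshold) {
                state = State.OPEN;
                openedAt = Instant.now();
            }
            throw e;
        }
    }
}
```

A caller might wrap a flaky model-inference call as breaker.call(() -> inferenceClient.predict(input)) (a hypothetical client), so that repeated failures fail fast instead of cascading through the system.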
Conclusion

Integrating AI into a microservices architecture is not without challenges, but the rewards in terms of scalability, flexibility, and maintainability are immense. The design patterns discussed provide a roadmap for building robust AI systems that can evolve with technological advancements and market demands. As AI continues to be a significant driver of innovation, these microservice patterns will play a critical role in shaping the future of AI development.
The Publish/Subscribe (Pub/Sub) pattern is a widely used software architecture paradigm, particularly relevant in the design of distributed, messaging-driven systems. The communication framework is decoupled, scalable, and dynamic, making it useful for addressing complex software requirements in modern application development.

At its core, the Pub/Sub pattern is about decoupling the message producer (publisher) from the message consumer (subscriber). In this framework, publishers broadcast messages without knowledge of the subscribers, and subscribers receive messages based on their interests without knowing about the publishers. This decoupling is facilitated through a central component known as the message broker or event bus, which manages the delivery of messages.

Key Components

- Publisher: Responsible for producing and sending messages to the message broker.
- Subscriber: Receives messages from the message broker based on subscribed topics or patterns.
- Message Broker/Event Bus: Mediates communication between publishers and subscribers. It filters messages and routes them from publishers to the appropriate subscribers.
- Topic: Messages in Pub/Sub systems typically have a topic or subject and a payload. The topic categorizes the message, aiding the broker in message filtering and routing.

(A minimal in-memory sketch of these components appears at the end of this article.)

A simple representation could look like this, leading to the following sequence schema:

Asynchronous process

In comparison, a more traditional approach that links each step together would lead to the following representation:

Synchronous process

The asynchronous process makes it easier to introduce concepts such as parallelization, scalability, and resilience. However, it is important to note that this comes at a cost.

Benefits

- Decoupling: Publishers and subscribers operate independently. This separation enhances system maintainability and scalability.
- Flexibility: New subscribers or publishers can be added without disrupting the existing system.
- Scalability: The pattern supports horizontal scaling, allowing systems to handle a high volume of messages and numerous subscribers.
- Resilience and Fault Tolerance: The system can tolerate and recover from component failures, as the components are loosely coupled.
- Asynchronous Communication: Enhances system responsiveness and efficiency.

Trade-Offs

- Complexity in Message Management: Ensuring message consistency and handling duplicate messages can be challenging. Teams new to event architectures may find compensation strategies challenging.
- Dependency on the Broker: System performance and reliability depend heavily on the broker’s capabilities. This is a counterbalance to system resilience: you rely heavily on your broker, including in a vendor lock-in sense.
- Message Serialization and Deserialization: May require additional processing power and handling logic.

Use Cases

Event-Driven Architectures

The Publish/Subscribe pattern is exceptionally well-suited for systems built around event-driven architectures. In such architectures, components react to various events or state changes. By implementing this pattern, these systems can efficiently manage and respond to a high volume of events in real time, ensuring that each component receives only relevant information without being overwhelmed by unnecessary data.

Microservices Communication

In a microservices architecture, where each service functions independently, effective communication is key.
Microservices Communication

In a microservices architecture, where each service functions independently, effective communication is key. The Pub/Sub pattern plays a pivotal role here by facilitating seamless and efficient communication between microservices. It allows individual services to publish or subscribe to specific messages, enabling them to interact and exchange data without creating direct dependencies or tight coupling, thus maintaining the autonomy and scalability of each microservice.

Real-Time Data Distribution

The pattern finds extensive use in scenarios demanding real-time data distribution. This is particularly relevant in domains like financial markets for stock tickers, where immediate dissemination of stock price changes is crucial, or in Internet of Things (IoT) environments, where sensor data needs to be efficiently routed to various processing and monitoring systems. By employing the Pub/Sub pattern, systems can ensure that time-sensitive data is broadcast promptly and consumed by the relevant parties in real time, enhancing overall responsiveness and efficiency.

Conclusion

The Publish/Subscribe pattern stands as a fundamental element in designing modern distributed systems that are responsive, resilient, and scalable. Its ability to effectively decouple message producers (publishers) from message consumers (subscribers) revolutionizes the way communication is handled in complex architectures. This pattern enables the creation of flexible and dynamic communication structures, which are essential in accommodating the ever-evolving needs of modern software systems. However, while the benefits of the Pub/Sub pattern are substantial, architects and developers must carefully navigate its complexities. These include managing message consistency, handling the added latency introduced by the broker, and the system's reliance on the broker's capabilities. Understanding and addressing these trade-offs is crucial for leveraging the full potential of the Publish/Subscribe pattern in various software architecture designs, ensuring systems are not only efficient in their message handling but also robust and adaptable to changing requirements.
Over the past four years, developers have harnessed the power of Quarkus, experiencing its transformative capabilities in evolving Java microservices from local development to cloud deployments. As we stand on the brink of a new era, Quarkus 3 beckons with a promise of even more enhanced features, elevating developer experience, performance, scalability, and seamless cloud integration. In this enlightening journey, let's delve into the heart of Quarkus 3's integration with virtual threads (Project Loom). You will learn how Quarkus enables you to simplify the creation of asynchronous concurrent applications, leveraging virtual threads for unparalleled scalability while ensuring efficient memory usage and peak performance.

Journey of Java Threads

You might have some experience with various types of Java threads if you have implemented Java applications for years. Let me quickly recap how Java threads have evolved over the last few decades. Java threads have undergone significant advancements since their introduction in Java 1.0. The initial focus was on establishing fundamental concurrency mechanisms, including thread management, thread priorities, thread synchronization, and thread communication. As Java matured, it introduced atomic classes, concurrent collections, the ExecutorService framework, and the Lock and Condition interfaces, providing more sophisticated and efficient concurrency tools. Java 8 marked a turning point with the introduction of functional interfaces, lambda expressions, and the CompletableFuture API, enabling a more concise and expressive approach to asynchronous programming. Additionally, the Flow API standardized Reactive Streams-style asynchronous stream processing, and Project Loom introduced virtual threads, offering lightweight threads and improved concurrency support. Java 19 further enhanced concurrency with a preview of structured concurrency constructs, such as StructuredTaskScope, providing more structured and composable concurrency patterns. These advancements have significantly strengthened Java's concurrency capabilities, making it easier to develop scalable and performant concurrent applications. Java threads continue to evolve, with ongoing research and development focused on improving performance, scalability, and developer productivity in concurrent programming.

Virtual threads, generally available (GA) in Java 21, are a revolutionary concurrency feature that addresses the limitations of traditional operating system (OS) threads. OS threads are heavyweight, limited in scalability, and complex to manage, posing challenges for developing scalable and performant concurrent applications. Virtual threads, by contrast, offer several benefits: they are a lightweight and efficient alternative that consumes less memory, reduces context-switching overhead, and supports far more concurrent tasks. They simplify thread management, improve performance, and enhance scalability, paving the way for new concurrency paradigms and enabling more efficient serverless computing and microservices architectures. Virtual threads represent a significant advancement in Java concurrency, poised to shape the future of concurrent programming.

Getting Started With Virtual Threads

In general, you can create a virtual thread using Thread.Builder directly in a Java project on JDK 21. For example, the following code snippet shows how to create a new virtual thread and print a message to the console from that thread.
The Thread.ofVirtual() method creates a new virtual thread builder, and the name() method sets the name of the virtual thread to "my-vt". The start() method then starts the virtual thread and executes the provided Runnable lambda expression, which prints a message to the console. Lastly, the join() method waits for the virtual thread to finish executing before continuing. The System.out.println() statement in the main thread prints a message to the console after the virtual thread has finished executing.

Java
public class MyVirtualThread {

    public static void main(String[] args) throws InterruptedException {
        // Create and start a new virtual thread using Thread.Builder
        Thread thread = Thread.ofVirtual()
                .name("my-vt")
                .start(() -> System.out.println("Hello from virtual thread!"));

        // Wait for the virtual thread to finish executing
        thread.join();
        System.out.println("Main thread completed.");
    }
}

Alternatively, you can implement the ThreadFactory interface to start new virtual threads in your JDK 21 project. The following code snippet defines a VirtualThreadFactory class that implements the ThreadFactory interface. Its newThread() method creates a new, unstarted virtual thread using Thread.ofVirtual(); the name() method of the Builder sets the thread name, and unstarted() creates the thread from the supplied Runnable.

Java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicLong;

// Implement a ThreadFactory that produces named virtual threads
public class VirtualThreadFactory implements ThreadFactory {

    private final String namePrefix;
    private final AtomicLong counter = new AtomicLong();

    public VirtualThreadFactory(String namePrefix) {
        this.namePrefix = namePrefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        return Thread.ofVirtual()
                .name(namePrefix + "-" + counter.getAndIncrement())
                .unstarted(r);
    }
}

Note that Thread.ofVirtual().name(namePrefix + "-", 0).factory() returns an equivalent factory directly, without a custom class. You might feel it will get more complex when you try to run your actual methods or classes on top of virtual threads. Luckily, Quarkus lets you skip the learning curve and run existing blocking services on virtual threads quickly and efficiently. Let's dive into it.

Quarkus Way to the Virtual Thread

You just need to keep two things in mind to run an application on virtual threads:

Implement blocking services rather than reactive (or non-blocking) services, based on JDK 21.
Use the @RunOnVirtualThread annotation on the method or class that you want to run on a virtual thread.

Here is a code snippet showing how Quarkus lets you run the hello() method on a virtual thread.

Java
import io.quarkus.logging.Log;
import io.smallrye.common.annotation.RunOnVirtualThread;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @RunOnVirtualThread
    public String hello() {
        Log.info(Thread.currentThread());
        return "Quarkus 3: The Future of Java Microservices with Virtual Threads and Beyond";
    }
}

You can start Quarkus dev mode (live coding) to verify the sample application. Then, invoke the REST endpoint using the curl command.

Shell
$ curl http://localhost:8080/hello

The output should look like this.

Shell
Quarkus 3: The Future of Java Microservices with Virtual Threads and Beyond

When you take a look at the terminal where Quarkus dev mode is running, you can see that a virtual thread was created to serve the request.

Shell
(quarkus-virtual-thread-0) VirtualThread[#123,quarkus-virtual-thread-0]/runnable@ForkJoinPool-1-worker-1

Invoke the endpoint a few more times and watch the terminal: each request should be served by its own virtual thread.
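For comparison outside Quarkus, plain JDK 21 can also run large numbers of blocking tasks on virtual threads through a virtual-thread-per-task executor. Here is a minimal sketch; the task body is illustrative, standing in for real blocking I/O:

Java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadExecutorDemo {

    public static void main(String[] args) {
        // Each submitted task runs on its own virtual thread, so blocking is cheap.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // simulated blocking I/O
                        return i;
                    }));
        } // try-with-resources closes the executor and waits for all tasks to finish
    }
}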
You have learned how Quarkus integrates virtual threads, letting Java developers run blocking applications with a single @RunOnVirtualThread annotation. Be aware that this annotation is not a silver bullet for all use cases. In the next article, I'll cover pitfalls, limitations, and performance test results compared with reactive applications.
The pursuit of speed and agility in software development has given rise to methodologies and practices that transcend traditional boundaries. Continuous testing, a cornerstone of modern DevOps practices, has evolved to meet the demands of accelerated software delivery. In this article, we'll explore the latest advancements in continuous testing, focusing on how it intersects with microservices and serverless architectures.

I. The Foundation of Continuous Testing

Continuous testing is a practice that emphasizes the need for testing at every stage of the software development lifecycle. From unit tests to integration tests and beyond, this approach aims to detect and rectify defects as early as possible, ensuring a high level of software quality. It extends beyond mere bug detection, encapsulating a holistic approach: while unit tests scrutinize individual components, integration tests evaluate the collaboration between diverse modules. The practice not only minimizes defects but also strengthens the robustness of the entire system. Its significance lies in fostering a continuous loop of refinement, where feedback from tests informs and enhances subsequent development cycles, creating a culture of continual improvement.

II. Microservices: Decoding the Complexity

Microservices architecture has become a dominant force in modern application development, breaking down monolithic applications into smaller, independent services. This signifies a departure from monolithic applications, introducing a paradigm shift in how software is developed and deployed. While this architecture offers scalability and flexibility, it comes with the challenge of managing and testing a multitude of distributed services. This complexity demands a nuanced testing strategy that acknowledges the services' independent functionalities and interconnected nature.

Decomposed Testing Strategies

Decomposed testing strategies are key to effective microservices testing. This approach advocates examining each microservice in isolation: rigorously testing individual services to ensure their functionality meets specifications, followed by comprehensive integration testing. This methodical approach not only identifies defects at an early stage but also guarantees seamless communication between services, aligning with the modular nature of microservices. It fosters a testing ecosystem where each microservice is treated as an independent unit contributing to the overall reliability of the system. Testing strategies that fall into this category include, but are not limited to, the following (a small test sketch appears after the list):

Unit Testing for Microservices

Unit testing may be used to verify the correctness of individual microservices. If you have a microservice responsible for user authentication, for example, unit tests would check whether the authentication logic works correctly, handles different inputs, and responds appropriately to valid and invalid authentication attempts.

Component Testing for Microservices

Component testing may be used to test the functionality of a group of related microservices or components. In an e-commerce system, for example, you might have microservices for product cataloging, inventory management, and order processing. Component testing would involve verifying that these microservices work together seamlessly to enable processes like placing an order, checking inventory availability, and updating the product catalog.
Contract Testing

Contract testing is used to ensure that the contracts between microservices are honored. If microservice A relies on data from microservice B, contract tests verify that microservice A can correctly consume the data provided by microservice B. This helps ensure that changes to microservice B don't inadvertently break the expectations of microservice A.

Performance Testing for Microservices

Performance tests on a microservice could involve evaluating its response time, scalability, and resource utilization under various loads. This helps identify potential performance bottlenecks early in the development process.

Security Testing for Microservices

Security testing for a microservice might involve checking for vulnerabilities, ensuring proper authentication and authorization mechanisms are in place, and verifying that sensitive data is handled securely.

Fault Injection Testing

Fault injection testing assesses the resilience of each microservice to failures. You could intentionally inject faults, such as network latency or service unavailability, into a microservice and observe how it responds. This helps ensure that microservices can gracefully handle unexpected failures.

Isolation Testing

Isolation testing verifies that a microservice operates independently of others. Isolation tests may involve testing a microservice with its dependencies mocked or stubbed. This ensures that the microservice can function in isolation and doesn't have hidden dependencies that could cause issues in a real-world environment.

Service Virtualization

Service virtualization is indispensable to microservices. It addresses the challenge of isolating and testing microservices by allowing teams to simulate their behavior in controlled environments. Service virtualization empowers development and testing teams to create replicas of microservices, facilitating isolated testing without dependencies on the entire system. This approach not only accelerates testing cycles but also enhances the accuracy of results by replicating real-world scenarios. It can become an enabler, ensuring thorough testing without compromising the agility required in the microservices ecosystem.

API Testing

Microservices rely heavily on APIs for seamless communication, so robust API testing becomes paramount in validating the reliability and functionality of these crucial interfaces. One approach to API testing involves scrutinizing each API endpoint's response to various inputs and edge cases. This examination helps ensure that microservices can effectively communicate and exchange data as intended. API testing is not merely a validation of endpoints; it is a verification of the entire communication framework, forming a foundational layer of confidence in the microservices architecture.
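Here is the small test sketch promised above, illustrating the unit-testing strategy with JUnit 5. The AuthenticationService class is hypothetical, standing in for whatever authentication logic your microservice actually contains:

Java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical authentication logic, used here only for illustration.
class AuthenticationService {
    boolean authenticate(String user, String password) {
        return "alice".equals(user) && "s3cret".equals(password);
    }
}

class AuthenticationServiceTest {

    private final AuthenticationService auth = new AuthenticationService();

    @Test
    void validCredentialsAreAccepted() {
        assertTrue(auth.authenticate("alice", "s3cret"));
    }

    @Test
    void invalidCredentialsAreRejected() {
        assertFalse(auth.authenticate("alice", "wrong"));
        assertFalse(auth.authenticate("", ""));
    }
}

The same structure extends naturally to component and contract tests, where the collaborators are neighboring services (or, for contract tests, recorded expectations about them) rather than a single class.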
III. Serverless Computing: Revolutionizing Deployment

Serverless computing takes the abstraction of infrastructure to unprecedented levels, allowing developers to focus solely on code without managing underlying servers. While promising unparalleled scalability and cost efficiency, it introduces a paradigm shift in testing methodologies that demands a new approach to ensuring the reliability of serverless applications.

Event-Driven Testing

Serverless architectures are often event-driven, responding to triggers and stimuli. Event-driven testing therefore becomes a cornerstone in validating the flawless execution of functions triggered by events (a minimal sketch appears at the end of this section). One approach involves not only scrutinizing the function's response to specific events but also assessing its adaptability to dynamic and unforeseen triggers. Event-driven testing ensures that serverless applications respond accurately and reliably to diverse events, fortifying the application against potential discrepancies. This approach can be pivotal in maintaining the responsiveness and integrity of serverless functions in an event-centric environment.

Cold Start Challenges

Testing the performance of serverless functions, especially during cold starts, is a critical consideration in serverless computing. One approach to addressing cold start challenges is continuous performance testing, which helps ensure that serverless functions perform acceptably even when initiated from a dormant state, identifying and addressing latency issues promptly. By proactively tackling cold start challenges, development teams can provide a seamless user experience regardless of the serverless function's initialization state.

Third-Party Services Integration

Serverless applications often rely on seamless integration with third-party services. Ensuring compatibility and robustness in these integrations becomes a crucial aspect of continuous testing for serverless architectures. One approach involves rigorous testing of the interactions between serverless functions and third-party services, verifying that data exchanges occur flawlessly. By addressing potential compatibility issues and ensuring the resilience of these integrations, development teams can fortify the serverless application's reliability and stability.
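A simple way to start with event-driven testing is to invoke a function handler directly with a synthetic event. The sketch below does this for a hypothetical AWS Lambda handler; the event shape, handler, and assertions are all invented for illustration:

Java
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;
import org.junit.jupiter.api.Test;

// Hypothetical handler for an "order created" event, used only for illustration.
class OrderCreatedHandler implements RequestHandler<Map<String, String>, String> {
    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        return "processed:" + event.getOrDefault("orderId", "unknown");
    }
}

class OrderCreatedHandlerTest {

    @Test
    void handlerProcessesSyntheticEvent() {
        OrderCreatedHandler handler = new OrderCreatedHandler();
        // Synthetic event payload; this handler never touches the Lambda Context, so null suffices here.
        String result = handler.handleRequest(Map.of("orderId", "42"), null);
        assertEquals("processed:42", result);
    }
}

More elaborate event-driven tests would replay recorded production events, or inject malformed ones to probe the adaptability discussed above.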
IV. Tools and Technologies

The evolution of continuous testing is complemented by a suite of tools and technologies designed to streamline testing processes in microservices and serverless architectures. These tools not only facilitate testing but also enhance the overall efficiency and effectiveness of the testing lifecycle.

Testing Frameworks for Microservices

Tools like JUnit, TestNG, Spock, Pytest, and Behave are a sample of the frameworks useful in the comprehensive testing of microservices. These frameworks support unit tests, integration tests, and end-to-end tests. Contract tests may further validate that each microservice adheres to specified interfaces and communication protocols.

Serverless Testing Tools

Frameworks such as AWS SAM (Serverless Application Model), Serverless Framework, AWS Lambda Test, Azure Functions Core Tools, and Serverless Offline all help you develop, test, and deploy serverless applications, but they have different features and purposes.

AWS SAM is a tool that makes it easier to develop and deploy serverless applications on AWS. It provides a YAML-based syntax for defining your serverless applications, and it integrates with AWS CloudFormation to deploy them. Additionally, AWS SAM provides a local development environment that lets you test your applications before deploying them to AWS.

Serverless Framework is a tool that supports serverless deployments on multiple cloud providers, including AWS, Azure, and Google Cloud Platform (GCP). It provides a CLI for creating, updating, and deploying serverless applications. Additionally, Serverless Framework provides a plugin system that lets you extend its functionality with third-party extensions.

AWS Lambda Test is a tool that lets you test your AWS Lambda functions locally. It provides a simulated AWS Lambda environment that you can use to run your functions and debug errors. Additionally, AWS Lambda Test can generate test cases for your Lambda functions, which can help you improve your code coverage.

Azure Functions Core Tools is a tool that lets you develop and test Azure Functions locally. It provides a CLI for creating, updating, and running Azure Functions. Additionally, Azure Functions Core Tools can generate test cases for your Azure Functions, which can help you improve your code coverage.

Serverless Offline is a tool that lets you test serverless applications locally, regardless of the cloud provider you are using. It provides a simulated cloud environment that you can use to run your serverless applications and debug errors. Additionally, Serverless Offline can generate test cases for your serverless applications, which can help you improve your code coverage.

Here is a table that summarizes the key differences between the five tools:

Feature | AWS SAM | Serverless Framework | AWS Lambda Test | Azure Functions Core Tools | Serverless Offline
Cloud provider support | AWS | AWS, Azure, GCP | AWS | Azure | Multi-cloud
Deployment | YAML-based syntax, integrates with AWS CloudFormation | CLI interface | Not supported | CLI interface | Not supported
Local development environment | Yes | Yes | Yes | Yes | Yes
Plugin system | No | Yes | No | No | No
Test case generation | Yes | No | Yes | Yes | Yes

CI/CD Integration

Continuous testing integrates seamlessly with CI/CD pipelines, forming a robust and automated testing process. Tools such as Jenkins, GitLab CI, and Travis CI orchestrate the entire testing workflow, ensuring that each code change undergoes rigorous testing before deployment. The integration of continuous testing with CI/CD pipelines provides a mechanism for maintaining software quality while achieving the speed demanded by today's digital economy.

V. Wrapping Up

Continuous testing is a central element in the process of delivering software quickly and reliably. It is the part that holds everything together, since it involves consistently checking the software for issues and bugs throughout its development. As microservices and serverless architectures continue to reshape the software landscape, the role of continuous testing becomes even more pronounced. Embracing the challenges posed by these innovative architectures and leveraging the latest tools and methodologies can empower development teams to deliver high-quality software at the pace modern delivery demands.
When building a large, production-ready, stateless microservices architecture, we always come across a common challenge: preserving request context across services and threads, including propagating the context to child threads.

What Is Context Propagation?

Context propagation means passing contextual information or state across different components or services in a distributed system, where applications are often composed of multiple services running on different machines or containers. These services need to communicate and collaborate to fulfill a user request or perform a business process. Context propagation becomes crucial in such distributed systems to ensure that relevant information about a particular transaction or operation is carried along as it traverses different services. This context may include data such as:

User authentication details
Request identifiers
Distributed tracing information
Other metadata (that helps in understanding the state and origin of a request)

Key aspects of context propagation include:

Request Context: When a user initiates a request, it often triggers a chain of interactions across multiple services. The context of the initial request, including relevant information like user identity, request timestamp, and unique identifiers, needs to be propagated to ensure consistent behavior and tracking.
Distributed Tracing and Logging: Context propagation is closely tied to distributed tracing and logging mechanisms. By propagating context information, it becomes easier to trace the flow of a request through various services, aiding in debugging, performance analysis, and monitoring.
Consistency: Maintaining a consistent context across services is essential for ensuring that each service involved in handling a request has the necessary information to perform its tasks correctly. This helps avoid inconsistencies and ensures coherent behavior across the distributed system.
Middleware and Framework Support: Many middleware products and frameworks provide built-in support for context propagation. For example, in microservices architectures, frameworks like Spring Cloud, Istio, or Zipkin offer tools for managing and propagating context seamlessly.
Statelessness: Context propagation is especially important in stateless architectures, where each service should operate independently without relying on shared state. The context provides the information a service needs to process a request without storing persistent state.

Effective context propagation contributes to the overall reliability, observability, and maintainability of distributed systems by providing a unified view of the state of a transaction as it moves through different services. It also helps reduce boilerplate code.

The Use Case

Let's say you are building Spring Boot WebFlux-based microservices, and you need to ensure that the state of the user (session identifier, request identifier, logged-in status, etc.) and of the client (device type, client IP, etc.) passed in the originating request is propagated between the services.

The Challenges

Service-to-service calls: For internal service-to-service calls, context propagation does not happen automatically.
Propagating context within classes: To refer to the context within service and/or helper classes, you would otherwise need to pass it explicitly via method arguments. This can be handled by creating a class with a static method that stores the context in a ThreadLocal object.
Java stream operations: Since parallel Java stream operations run in separate executor threads, context propagation via ThreadLocal to those child threads needs to be done explicitly.
WebFlux: Similar to Java streams, context propagation in WebFlux needs to be handled explicitly, via Reactor hooks.

The idea here is to ensure that context propagation happens automatically to child threads and to internally called services via a reactive WebClient. A similar pattern can be implemented for non-reactive code as well.

Solution

Core Java provides two classes, ThreadLocal and InheritableThreadLocal, to store thread-scoped values. ThreadLocal allows the creation of variables that are local to a thread, ensuring each thread has its own copy of the variable. A limitation of ThreadLocal is that if a new thread is spawned within the scope of another thread, the child thread does not inherit the values of the parent's ThreadLocal variables.

Java
public class ExampleThreadLocal {

    private static ThreadLocal<String> threadLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        threadLocal.set("Main Thread Value");

        new Thread(() -> {
            System.out.println("Child Thread: " + threadLocal.get()); // Outputs: Child Thread: null
        }).start();

        System.out.println("Main Thread: " + threadLocal.get()); // Outputs: Main Thread: Main Thread Value
    }
}

On the other hand, InheritableThreadLocal extends ThreadLocal and gives child threads the ability to inherit values from their parent threads.

Java
public class ExampleInheritableThreadLocal {

    private static InheritableThreadLocal<String> inheritableThreadLocal = new InheritableThreadLocal<>();

    public static void main(String[] args) {
        inheritableThreadLocal.set("Main Thread Value");

        new Thread(() -> {
            System.out.println("Child Thread: " + inheritableThreadLocal.get()); // Outputs: Child Thread: Main Thread Value
        }).start();

        System.out.println("Main Thread: " + inheritableThreadLocal.get()); // Outputs: Main Thread: Main Thread Value
    }
}

Hence, in scenarios where context must be propagated between parent and child threads, we can use application-scoped static InheritableThreadLocal variables to hold the context and fetch it wherever needed.

Java
import lombok.Builder;
import lombok.Getter;
import lombok.ToString;

@Getter
@ToString
@Builder
public class RequestContext {
    private String sessionId;
    private String correlationId;
    private String userStatus;
    private String channel;
}

Java
public class ContextAdapter {

    final ThreadLocal<RequestContext> threadLocal = new InheritableThreadLocal<>();

    public RequestContext getCurrentContext() {
        return threadLocal.get();
    }

    public void setContext(RequestContext requestContext) {
        threadLocal.set(requestContext);
    }

    public void clear() {
        threadLocal.remove();
    }
}

Java
public final class Context {

    static ContextAdapter contextAdapter;

    private Context() {}

    static {
        contextAdapter = new ContextAdapter();
    }

    public static void clear() {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        contextAdapter.clear();
    }

    public static RequestContext getContext() {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        return contextAdapter.getCurrentContext();
    }

    public static void setContext(RequestContext requestContext) {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        contextAdapter.setContext(requestContext);
    }

    public static ContextAdapter getContextAdapter() {
        return contextAdapter;
    }
}

We can then refer to the context by calling the static method wherever required in the code.
Java
Context.getContext()

This solves:

Propagating context within classes
Java stream operations
WebFlux

To ensure that context is automatically propagated to external calls made via WebClient, we can create a custom ExchangeFilterFunction that reads the context from Context.getContext() and adds it to headers or query parameters as required.

Java
import org.springframework.web.reactive.function.client.ClientRequest;
import org.springframework.web.reactive.function.client.ClientResponse;
import org.springframework.web.reactive.function.client.ExchangeFilterFunction;
import org.springframework.web.reactive.function.client.ExchangeFunction;
import reactor.core.publisher.Mono;

public class HeaderExchange implements ExchangeFilterFunction {

    @Override
    public Mono<ClientResponse> filter(ClientRequest clientRequest, ExchangeFunction exchangeFunction) {
        return Mono.deferContextual(Mono::just)
                .flatMap(context -> {
                    RequestContext currentContext = Context.getContext();
                    // Copy the current context into outbound headers
                    ClientRequest newRequest = ClientRequest.from(clientRequest)
                            .headers(httpHeaders -> {
                                httpHeaders.add("context-session-id", currentContext.getSessionId());
                                httpHeaders.add("context-correlation-id", currentContext.getCorrelationId());
                            })
                            .build();
                    return exchangeFunction.exchange(newRequest);
                });
    }
}

The context is initialized as part of a WebFilter:

Java
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

@Slf4j
@Component
public class RequestContextFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        String sessionId = exchange.getRequest().getHeaders().getFirst("context-session-id");
        String correlationId = exchange.getRequest().getHeaders().getFirst("context-correlation-id");

        // Rebuild the context from inbound headers and store it for this request
        RequestContext requestContext = RequestContext.builder()
                .sessionId(sessionId)
                .correlationId(correlationId)
                .build();
        Context.setContext(requestContext);

        return chain.filter(exchange);
    }
}
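To tie it together, the HeaderExchange filter must be registered on the WebClient used for service-to-service calls. Here is a minimal sketch of such wiring; the configuration class and base URL are illustrative:

Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    // Every outbound call through this client carries the current RequestContext headers.
    @Bean
    public WebClient contextAwareWebClient() {
        return WebClient.builder()
                .baseUrl("http://downstream-service") // illustrative URL
                .filter(new HeaderExchange())
                .build();
    }
}

With this in place, any service-to-service call made through this WebClient automatically forwards the session and correlation identifiers, with no per-call code required.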